title
string
paper_decision
string
review_1
string
rebuttals_1
string
review_2
string
rebuttals_2
string
review_3
string
rebuttals_3
string
review_4
string
rebuttals_4
string
global_rebuttals
string
dataset_source
string
conference_year
int64
review_5
string
rebuttals_5
string
review_6
string
rebuttals_6
string
review_7
string
rebuttals_7
string
review_8
string
rebuttals_8
string
On scalable oversight with weak LLMs judging strong LLMs
Accept (poster)
Summary: The paper provides a comprehensive study of scalable oversight across 3 access: (1) task, (2) scalable oversight protocol and (3) judge capacity / strength. The authors focus on inference-time scalable oversight, i.e. the debater models are not trained to do debate with a given judge. The authors consider several new tasks in the context of scalable oversight, such as multimodal and closed tasks (as opposed to extractive). They also consider novel protocols: open consultancy and open debates. The results are overall quite mixed, with debate typically doing better than consultancy, but often not substantially outperforming direct QA outside of extractive tasks. Strengths: 1. The study is carefully designed: the authors carefully vary the judge strength, tasks and scalable oversight protocols, and study the effect of each part. 2. The study is quite comprehensive, covering many judge models and tasks. 3. The presentation is balanced: the authors are not over-selling the results. Most of the observations are treated as weak evidence towards a certain hypothesis. The authors also clearly discuss limitations of the study. 4. The results on the debate outperforming consultancy are interesting, and provide some hope for debate as a scalable oversight protocol. 5. Using weak model judges as opposed to information asymmetry is in my opinion a very reasonable idea. Weaknesses: 1. The debaters and the judges are all prompted models. These models are not trained to be particularly convincing to the judge, and the judge is not trained to be an accurate judge. The authors mention training the models as an interesting direction of future work. 2. The results are overall pretty mixed. For example, in Figure 1 on closed and multimodal tasks it appears that almost aways QA is better than both debate and consultancy. In other words, the judge can do better without any scalable oversight. Is that correct? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I wonder if the reason for consultancy working worse than debate is sycophancy of the judge: when it gets only an argument for one side, it is inclined to follow that argument, because it's an LLM trained with RLHF. I wonder if this is not indicative of what a human judge would do. 2. From manual inspection, do you think the reason for poor results in closed and multimodal tasks is (a) poor debate / consultancy arguments or (2) bad decisions conditioned on those arguments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review, and interesting questions and comments. We are glad you found the study to be “carefully designed” and “comprehensive”, and appreciated the use of weak models as a complement to information asymmetry. Weaknesses: 1. **Training vs prompting:** we agree training would be great, the main reason we didn’t focus here is the extra cost involved. We instead prioritised a range of tasks which we think is the best decision as it showed task type has a big effect on performance. We are planning followup work on training. 2. **Direct QA vs debate:** Please see the top-level comment for our response. Questions 1. **Sycophancy:** This is an interesting hypothesis which could potentially be tested by conducting a human study (something we reference in future work), or by somehow trying to remove the judge errors due to sycophancy, perhaps by using base models for judges instead of RLHF’ed models (though this may induce other issues). Khan et al. have results for consultancy with human judges (on the QuALITY task) which show scores increase from a maximum of 60% with LLM to 80% with humans suggesting LLM judges are significantly worse at this task. We suspect this is not necessarily just sycophancy though, it’s more likely that human judges ask better probing questions to the consultant than LLM judges do. We would be happy to add discussion of this to the revision. 2. **Poor debate vs. poor judging:** We think it is more likely bad judgments as the debate arguments seemed plausible/good, though this is from manual inspection (we ran a small human judging session as researchers). We think judges could do a lot better at deliberating based on the (conflicting) evidence in front of them. We think this could be improved via limited judge fine-tuning on the task of judging debates (something they haven’t been exposed to much in existing fine-tuning).
Summary: This paper focuses on scalable oversight protocols using debates between AI agents to align superhuman AI with human supervision. By studying debates judged by less capable LLMs across various tasks, the research finds that Debates, especially without artificial limitations on judges, more effectively bridge capability gaps compared to Consultancy methods. Stronger debaters also lead to higher judge accuracy, demonstrating the utility of the debate format for scalable oversight. Strengths: The study demonstrates the generalization of the debate protocol, which shows its applicability not only in extractive QA but also its superiority in other tasks compared to consultancy. It was found that debate can reduce the magnification of errors. Weaknesses: While I understand that this paper seeks to examine the effectiveness of Debate and Consultancy across various tasks, there are still some concerns: 1. This paper claims “Debate is likely more promising as a scalable oversight protocol than Consultancy”. However, some experiments indicate that while Debate yields better results than Consultancy, both perform worse compared to directly answering the question (as shown in QA without article in Figure 1). 2. Many results are without corresponding discussions or explanations. i) Why LLMs using Debate and Consultancy would be worse in closed QA and multimodal tasks (Figure 1)? ii) In Section 4.2, when using a weak judge with Open Debate protocol, this paper claims “weaker judges can struggle to discern that this is correct”. When judges get stronger, the judge accuracy improves. Does this imply that Debate may not be effective with weak judges? Additionally, as judges strengthen, how can we determine whether the improvement in accuracy is due to the Debate protocol or simply the enhanced capabilities of the judge? iii) In Section 4.2, when the judge is weak with the Open Consultancy protocol, there is a similar phenomenon with ii), why? Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to Weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations and potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and agree that further discussions and explanations will help the reader better understand our results. We’d be happy to include these in a revision. Please see below for a more detailed response to your questions: i) **QA without article vs debate:** Please see the top-level comment for our response. This similarly applies to e.g. consultancy vs QA without article. ii) a) **Is debate ineffective with weak judges?** We think yes, that there is some evidence of a certain capability threshold below which judges are unable to effectively judge a debate. Evidence: Fig 1, extractive: the debate accuracy for Gemma7B is significantly lower than other judge models. Fig 1, closed: the debate accuracy for Gemma7B is significantly lower than other judge models, and additionally is significantly worse than QA without article. Fig 2: in open debate, accuracy of Gemma7B is significantly lower. In the current plot this is most evident in extractive tasks, but we have an updated version of Figure 2 (see top-level rebuttal for figure pdf) that also shows this for closed tasks (combined results rather than splitting by protagonist correct/incorrect). We will use this updated figure 2 and provide this discussion in the revision. b) **As judges' capabilities strengthen, how can we determine whether their improvement is due to the debate vs their overall capability strengthening?** We can look at the accuracy increase as the judge model strength increases for different protocols. For GPT3.5 increase over Gemma7B, debate the accuracy increases are: (extractive: 0.14, closed: 0.09) which are notably larger than the score increases for QA without article (extractive: 0.04, closed: 0.01) and QA with article (extractive: 0.02). We will add discussion of these score differences (and other accuracy increases between models) to the revision. iii) For **open consultancy**: a) **Is open consultancy ineffective with weak judges?** We see less evidence of a threshold here, we think because the weak judge can do quite well by deferring to the open consultant and this deference is not too difficult for Gemma7B. b) **As judges' capabilities strengthen, how can we determine whether their improvement is due to the open consultancy protocol vs their overall capability strengthening?** Similarly to ii), we calculate the open consultancy score increases for GPT3.5-Gemma7B as (extractive: 0.08, closed: 0.13). We will add discussion of these score differences (and other accuracy increases between models) to the revision. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. It has addressed some of my concerns, I will raise the score to reflect this. However, I am still concerned that the performance and the corresponding analysis are limited in this paper. Besides, after reading the comments from other reviewers, it seems the novelty of this paper needs to be more clearly demonstrated. I will reduce my confidence as well.
Summary: This paper is concerned with the study of scalable oversight methods, i.e, how can one devise methods that will allow humans to supervise and align superintelligent models (ASI) whose capacities (which include reasoning, strategic thinking, and deception) vastly exceed the ones of humans. Inspired both by recent works studying debate as a method for aligning strongly capable AI and by works modelling scalable oversight with smaller LLMs tasked to align stronger LLMs, the current work studies debate between more capable LLMs, as judged by a weaker LLM, both as a proxy for scalable oversight of ASI by humans and as a proxy for the richness of the signal the judge could provide to the strong LLMs during alignment training. The authors back the task on extractive and closed QA tasks with 2 possible answers, where each of the 2 debating models are given a side and must persuade the judge to agree with them. This debate task is contrasted with the consultancy task, where a single model is given an arbitrary side and must persuade the judge to agree with it (the arguments for the opposing side are not visible). The study is large-scale, involving 9 tasks totaling 128 questions. A set of LLMs of varying sizes are used as judge to assess the effects of the gap between judge and debater capabilities. Open variants of debate and consultancy, where the judge/consultants are allowed to choose the answer they will argue for, are also investigated. Among the important results of the paper, the authors find that under debate, for all judge sizes, the judges achieve better accuracy (predicting the correct answer) than under consultancy, highlighting the promise of debate as a promising alternative to RLHF as a basis for scalable oversight. Then, then show that in their setting, judges are convinced equally by consultants that have chosen the right versus the wrong answer, whereas in the debate case models that have chosen the right answer are believed more often, providing additional evidence for debate vs consultancy. Then, they show that judge accuracy increases as the capabilities of debaters increases (measured by their Elo in debates against other models), showing that debate scales with the capabilities of the LLMs to align. Their result also extends previous work on the debate task, with judges of the same strength as debaters, that was performed in a single task. Strengths: * The subject matter is important and of overarching importance to the Neurips community and beyond; the findings will be of particular interest to anyone concerned with AI safety. * The paper is excellently written. The subject is not completely trivial and there are many setups and extensive experiments to present, but nevertheless the authors do an excellent job of explaining everything and putting it all together, highlighting the main results and the lessons learned as they go along as well as in the introduction and conclusion. The paper is very well contextualized in the related work and relations to prior art are precisely explained and motivate the current approach. I am not an expert in scalable oversight but I feel I have a much stronger grasp on the domain after reading the paper; * The paper proposes to extend the study for a candidate for scalable oversight which follows naturally from previous work by casting it in a setting that is partially representative of the challenges ASI alignment poses. The authors study this as a scientific question, precisely reporting their findings and not overstating the extent to which debate is a definitive solution to scalable oversight. Extensive experiments support all of their claims and conclusions, and overall the paper (and its appendix) are information-rich. * The authors highlight where their results agree or disagree with previous results in the literature. * While debate as an alignment method is not novel it has not been studied with weaker judges judging stronger models, nor with such task variability (including knowledge-intensive tasks and reasoning-intensive tasks). * All results come with clearly marked 95% CIs. Weaknesses: * One could have hoped that debate would perform better, compared to consultancy, but the gap between methods is still small (while consistent). How could one improve on debate to create a stronger signal for alignment? (this is hardly a weakness of the paper, however, but potential solutions to this could be discussed in the paper) * Reproducibility is not perfect, since some results make use of chatgpt; Technical Quality: 4 Clarity: 4 Questions for Authors: * I like the result on chain of thought, it is counterintuitive and the explanation is plausible. Any idea on how to test this? (Maybe looking at attention matrices, or token influence?) * line 275 I’m confused as to how models can both exhibit systematic positional bias and judge accuracy by unaffected by evaluation in both orders. How can this happen? * line 344 “we don’t see such a clear trend of this advantage with increasing Elo” why do you think this is the case? * (very minor) Summary sentences have too many commas, feels not that fluid (l353-357); Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The main limitations of the work have been addressed by the authors at length in their paper, as far as I can tell. The expected societal impact of the work is likely to be overwhelmingly positive, as is usually the case with safety research. Maybe one note is that all alignment research is potentially misalignment research, in the wrong hands -- but this is hardly specific to this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and are heartened to see that the reviewer appreciates the importance of the setting, likes the clarity of our exposition, and notes that our paper is a scientific study of protocols rather than promoting debate in particular. Weaknesses: * **Small gap between consultancy and debate:** we suspect the gap to increase when used as a training signal rather than inference protocol. We mention this in future work, but we’d be happy to directly reference the small gap as well and add more detail and potential solutions. * **Reproducibility:** While reproducibility is desirable, given that many of the most powerful models are closed, it would be a significant limitation if we were to avoid all closed models. To enhance reproducibility as much as possible, we evaluated on one strong open source model (Gemma). Questions: * **CoT:** one could look for substring match/rouge scores between judge CoT and debate reasoning. Token influence/attention could also be indicative, though perhaps more involved (and depending on the judge model may/may not be accessible). In principle, one could also ablate influence by inserting artificial reasoning errors into the debater arguments on some simple synthetic task, then comparing the influence on judge performance with and without CoT. * **Positional bias:** we have since found that positional bias reported in Khan et al (private communication) was due to them always setting the first answer as the correct answer. In our experiments we randomised the position of the correct answer. This then leads to the same mean positional bias as running both-orders (though both-orders has lower variance). We will update the text description to reflect this. * **Advantage with Elo:** we suspect the lack of trend is somewhat similar to Khan et al (on strong models, claude 2.1, gpt4t) which also didn’t see a strong trend (both on advantage and accuracy). Perhaps judge deliberation here is the bottleneck (and held constant) rather than debater skill (increasing). Through studying a more comprehensive range of tasks we were able to identify cases where previously reported trends for debate do and do not hold. * **Copy editing:** We’ll fix the commas in our revision. --- Rebuttal Comment 1.1: Comment: Thank you for answering all of my questions! I am looking forward to follow-up work along the lines you mentioned. In light of the other reviews, and considering alignment is not my area of expertise I will lower my confidence score. I still think all my points stand and my grade is justified, and I would be very happy to see the paper accepted; however I acknowledge I am not familiar with all of the related work and thus encourage the AC to weigh other reviews more strongly than mine.
Summary: This paper primarily investigates scalable oversight by analyzing whether a weaker LLM can supervise a stronger LLM through various prompting pipelines. Specifically, the paper compares the accuracy of responses from a weaker judge model under different interacting protocals with a stronger model, such as debate, consultancy, and direct question answering. The main finding is that having a strong model in debate, compared to consultancy, enables the weaker judge model to achieve better performance. The paper also provides a detailed analysis of different tasks, oversight protocols, and the capabilities of the judge models. Strengths: 1. Compared to previous studies on debate and the judging/critique capabilities of models, this paper conducts more comprehensive experiments and ablations on the judge, primarily comparing the effects of different oversight protocols. 2. The presentation of the paper is clear and relatively easy to understand. Weaknesses: 1. Lack of novelty: The comparison between consultancy and debate has already been explored in previous works [1, 2]. This paper essentially extends these comparisons to more tasks and analyses. 2. Lack of practical significance: Despite the extensive comparative analysis of judge protocols/models, there is no evident improvement brought by the weaker model to the stronger model. For instance, in the debate advocated by the paper, Figure 2 shows that even when the weaker judge model uses the strong model's debate as input, its performance does not surpass that of the strong model. Compared to previous work, I do not see how the paper's analysis provides substantial help in achieving effective scalable oversight. 3. The paper omits some highly relevant works analyzing the capabilities of LLM judges/critics, such as [3] [4]. --- [1] Debating with More Persuasive LLMs Leads to More Truthful Answers https://arxiv.org/pdf/2402.06782 [2] Debate Helps Supervise Unreliable Experts, https://arxiv.org/pdf/2311.08702 [3] Critique Ability of Large Language Models, https://arxiv.org/abs/2310.04815 [4] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning, https://arxiv.org/abs/2402.14809 Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their assessment. We emphasise that the key contribution of our paper is a rigorous, carefully-controlled scientific study of various scalable oversight protocols, rather than showing a particular protocol is better than others, see our top-level comment. We encourage the reviewer to consider our results in the context of the scalable oversight (see below for references on this area) setting with a weak judge overseeing a strong learner, in which the aim is to **improve the accuracy of the weak judge**. In particular, it is **not** our goal to improve the performance of the strong model. ## Weaknesses: 1. **Novelty**: Running similar experiments to [1,2] on a range of tasks has revealed some important differences to naively extrapolating the results on the single extractive task of [1,2]. The consultancy vs debate results of works [1,2] do in fact generalise to other tasks (see Introduction, bullets 1 and 5); however, QA without article vs. debate had different results on closed tasks, compared to what is found in the single extractive task of [1,2]. Our paper also has other novel findings (see Introduction, bullets 2, 3, 4) including previously untested open protocols. (Further, since submission, we have refined our analysis of section 4.3 on persuasiveness in debate, to split Elo calculations by task type, which shows the results of [1,2] on Elo scores also do not generalise to the closed tasks. We will update Figure 3 and its discussion accordingly, see top-level rebuttal for figure pdf. Our rebuttal does not rest on this amendment.) Overall, we think it is novel to extend analysis to new tasks/models/protocols, especially when doing so allowed us to show that the earlier work’s results do not generalise. [1] is an ICML 2024 best paper award winner and we think it is important that the community is aware of when its results do and do not apply. 2. **Practical significance**: - Improvement by the weaker model to the stronger model: this is not what we intend to show, instead we wanted to investigate what affects the weak judge’s accuracy when given access to protocols involving stronger models. - Weaker model does not surpass strong model performance: we do not aim for this, which would be a surprising finding (and outside of the scalable oversight setting). Figure 2 does show though, when the protagonist/consultant is incorrect (light cross) debate with weak judge has higher accuracy than in open consultancy, in which the weak judge often follows the strong consultant’s (incorrect) answer. The significance of this is that, for harder tasks, where the consultant would be incorrect more often, debate will highlight this to the weak judge more effectively. We have an updated version of Figure 2 (see top-level rebuttal for figure pdf) enabling easier comparison of accuracy when splitting by protagonist correct/incorrect. We will use this updated Figure 2 and provide this discussion in the revision. - It helps advance scalable oversight by providing evidence of the efficacy of proposed protocols. This does not rest on improving weaker models beyond the strong model’s performance (that would be an extreme standard to hold scalable oversight to, and is perhaps missing the point of scalable oversight research). The way we advance scalable oversight research is providing empirical evidence about the performance of the protocols, both positive and negative results are useful here. 3. **Citations**: We’d be happy to cite these additional works. How we differ: [3,4] don’t consider the setting of a weak judge overseeing a strong learner (which is our primary focus). ## References that study the same scalable oversight setting (weak supervisor, strong learner) as us: Theoretical - G. Irving, P. Christiano, and D. Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018. - P. Christiano, B. Shlegeris, and D. Amodei. Supervising strong learners by amplifying weak experts. arXiv preprint arXiv:1810.08575, 2018. - J. Leike, D. Krueger, T. Everitt, M. Martic, V. Maini, and S. Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018. - J. Brown-Cohen, G. Irving, and G. Piliouras. Scalable AI safety via doubly-efficient debate. ICML 2024 (Oral). https://openreview.net/forum?id=6jmdOTRMIO Empirical scalable oversight/debate - B. Barnes and P. Christiano. Writeup: Progress on AI Safety via Debate, 2020. URL https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1 - A. Parrish, H. Trivedi, N. Nangia, V. Padmakumar, J. Phang, A. S. Saimbhi, and S. R. Bowman. Two-turn debate doesn’t help humans answer hard reading comprehension questions. arXiv preprint arXiv:2210.10860, 2022. - J. Michael, S. Mahdi, D. Rein, J. Petty, J. Dirani, V. Padmakumar, and S. R. Bowman. Debate helps supervise unreliable experts. arXiv preprint arXiv:2311.08702, 2023. - A. Khan, J. Hughes, D. Valentine, L. Ruis, K. Sachan, A. Radhakrishnan, E. Grefenstette, S. R. Bowman, T. Rocktäschel, and E. Perez. Debating with more persuasive LLMs leads to more truthful answers. ICML, 2024 (Oral). https://openreview.net/forum?id=iLCZtl7FTa - A. Radhakrishnan. Anthropic fall 2023 debate progress update, 2023. Weak to strong generalisation - C. Burns, P. Izmailov, J. H. Kirchner, B. Baker, L. Gao, L. Aschenbrenner, Y. Chen, A. Ecoffet, M. Joglekar, J. Leike, et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390, 2023. Sandwiching/scalable oversight setup - A. Cotra. The case for aligning narrowly superhuman models. In AI Alignment Forum, 2021. - S. R. Bowman, J. Hyun, E. Perez, E. Chen, C. Pettit, S. Heiner, K. Lukoši ̄ut ̇e, A. Askell, A. Jones, A. Chen, et al. Measuring progress on scalable oversight for large language models. arXiv preprint arXiv:2211.03540, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, which has addressed some of my concerns. The authors clarified that the main contribution of the paper is "providing evidence of the efficacy of protocols," by `testing more tasks/models/protocols based on [1][2]`. Although the paper offers some mixed conclusions across different types of tasks (I acknowledge that these additional results may be valuable to researchers in specific sub-areas), it does not provide significantly more insights compared to previous work and is more like a replication report of [1][2]. Therefore, I will maintain my initial socre. Considering that the authors are more familiar with the scalable oversight setting, I have lowered my confidence and hope the Area Chair will consider the opinions of other reviewers more. However, I still believe this paper does not meet the NeurIPS standard and recommend submitting it to the *CL series instead. --- [1] Debating with More Persuasive LLMs Leads to More Truthful Answers https://arxiv.org/pdf/2402.06782 [2] Debate Helps Supervise Unreliable Experts, https://arxiv.org/pdf/2311.08702
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments. We are glad to hear that reviewers identified our comprehensive experimentation and ablation, the importance of using weaker judge models to supervise stronger debaters, and the clarity of our writing and presentation as strengths of this work. ## QA without article vs debate We address the following comment that came up in multiple reviews: Reviews P56G, MBPi: **In the closed setting (where debaters don't have privileged information), debate often does not outperform QA without article.** We would like to emphasise two points: 1. Our paper is a study comparing scalable oversight protocols, rather than a paper that introduces a particular method (eg debate) and shows it is better than all others (indeed reviewer HiQA notes the relative performance of debate is hardly a weakness of our paper). We are happy to add further emphasis on this in a revision. 2. In light of the prior point, we view this key finding as an interesting and perhaps surprising result of our paper. This is not what one would have expected from naively extrapolating the result of Khan et al. 2024, which only studied a single extractive task (QuALITY). This demonstrates an important contribution: evaluating on a wider range of tasks reveals different performances of the scalable oversight protocols. As for the interpretation of this result: One hypothesis is that because our current judges are instruction-tuned models, trained for general purposes with supervised finetuning and RLHF, these models favor QA without article over debate: QA is typically the format of evaluation benchmarks which are used to select finetuning approaches, and which may be more common in the fine-tuning data (e.g. users typically ask questions and expect an answer). We suspect that judging a debate, in a discerning manner, is more out-of-distribution. Our results provide directions for future research: perhaps debaters which are fine-tuned to convince these judges will be able to learn to produce more in-distribution arguments for them, or perhaps judges can be fine-tuned to make more efficient use of the information they're presented with. It could also be interesting to test this by comparing to human debaters, who may have a more balanced distribution. We will add this discussion to the paper. ## Updates to figures 2 and 3 We attach (see pdf) updates to figures 2 and 3. Figure 2 now shows (top) combined results to better compare open debate vs open consultancy, and (bottom) separately shows accuracy split by correct/incorrect. Figure 3 now shows persuasiveness results split by task type (previously across all tasks) highlighting that the results of Khan et al., 2024 do not generalise to closed tasks. We reference these in rebuttals below. Pdf: /pdf/1ccb00ab54653c7aac6f87710f41ac93cefb69a2.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration
Accept (poster)
Summary: This paper proposes a multi-agent architecture for mobile device operation, Mobile-Agent-v2. Mobile-Agent-v2 includes three agents : planning agent, decision agent and reflection agent. To retain focus content, this paper designs a memory unit to record task-related focus content. The planning agent generates task progress based on the history operations, the reflection agent corrects erroneous operations and the decision agent outputs operations. Extensive experiments conducted on various operating systems, language environments, and applications shows that MobileAgent-v2 achieves significant performance improvements compared to single-agent architecture of Mobile-Agent. Strengths: 1. This paper introduces a multi-agent architecture Mobile-Agent-v2, which can alleviate various navigating difficulties inherent in the single-agent framework for mobile device operation tasks. 2. This paper designs a memory unit and reflection agent, which can avoid the loss of focus content navigating and reflection capability. 3. Experimental results demonstrate that Mobile-Agent-v2 achieves significant performance improvements. Weaknesses: 1. Insufficient baselines. More powerful baselines should be considered, such as AppAgent [1] and CogAgent [2]. 2. This paper lack description of how the decision agent retrieves the memory unit. In addition, the content or format of the memory unit storage is not included. [1] AppAgent: Multimodal Agents as Smartphone Users. 2023. [2] CogAgent: A Visual Language Model for GUI Agents. 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the Mobile-Agent-v2 be generalized to IOS systems? 2. How operations knowledge is injected into the model? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors describe limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Question 1: Can Mobile-Agent-v2 be transferred to iOS or other platform?** ## **Response:** * Mobile-Agent adheres to a purely visual solution, making it universally applicable across platforms since Mobile-Agent-v1. Mobile-Agent-v2 continues this approach, allowing it to be transferred to any GUI-based system, such as tablets, PCs, TVs, and automotive systems. * Specifically, we have implemented it on PCs, facilitating operations such as "downloading papers in the browser" and "modifying font and formatting in Word." The relevant code will also be open-sourced on GitHub recently. ## **Question 2: How to implement knowledge injection.** ## **Response:** Operation knowledge is often a tutorial for a specific app. It is input into the decision model as part of the prompt: ```python if add_info != "": prompt += f''' ### Hint ### There are hints to help you complete the user's instructions. The hints are as follows: {add_info} ''' ``` where "add_info" is the tutorial. For example, on TikTok, sharing a video requires clicking the fourth icon on the right. However, the LLM may not have this knowledge. In this case, knowledge can be injected: "The share icon is a white arrow pointing to the right on the right side of the screen." When Mobile-Agent-v2 reaches the step where it needs to click the share icon, it will complete the operation based on the injected knowledge. Since the use of knowledge is determined by the decision agent, other steps are not affected by the presence of the knowledge. ## **Question 3: Lack of other baselines such as CogAgent and AppAgent.** ## **Response:** **CogAgent** CogAgent is a QA model that does not possess the capability to perform concrete operations on real devices through tool invocation. Therefore, our dynamic evaluation framework is not applicable to such QA models. **AppAgent** First, AppAgent requires an additional exploration phase for each app, whereas Mobile-Agent-v2 can perform app operations without the exploration phase. Secondly, AppAgent relies on XML, restricting it to the Android platform. In contrast, Mobile-Agent-v2 uses a purely visual solution, making it platform-independent. Thus, comparing Mobile-Agent-v2 with AppAgent is not fair. Nevertheless, we manually evaluated the AppAgent and the results are shown in the below table. With the same base model, Mobile-Agent-v2 outperforms AppAgent. |Model|Success Rate|Completion Rate| |-|-|-| |AppAgent|66.7%|74.3%| |Mobile-Agent-v2|77.8%|82.1%| ## **Question 4: Memory unit storage format and retrieval method** ## **Response:** * Memory is stored as natural language descriptions of task-related content from historical screenshots. For example, in the task "Check the weather and write a dressing guide in notes," after opening the weather app, the agent needs to remember the weather information from the screenshot for subsequent use. Before exiting the weather app, the agent will add the weather information to the memory unit in plain text. * The stored memory is shared among all agents. When the agent needs to input the dressing guide, the decision agent will automatically retrieve the relevant information from the memory during inference. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 4qiL Comment: Thank you for taking the time and effort to address all questions. There are a few follow-up questions. a) Could you provide more details about the experiments comparing the performance of Mobile-Agent-v2 and AppAgent, such as the environment, setting, number of evaluation tasks, etc. b) How is the storage and retrieval costs of memory cells? --- Reply to Comment 1.1.1: Title: Response to the Official Comment by Reviewer 4qiL Comment: Thank you for your response. Below is our response to the above two questions. ### **Question 1: Could you provide more details about the experiments?** ### **Response:** * Environment: We keep the evaluation environment consistent with the our paper, namely multi-platform (Android OS and Harmony OS), multi-app type (system app and third-party app), and multi-language (English and non-English). All evaluation tasks start from the mobile phone's desktop, and the app used in the instructions is ensured to exist on the desktop. * Setting: We used AppAgent's official open source project for evaluation. During the evaluation, we only conducted an exploration phase, that is, running the official code “learn.py”. All other settings, including hyper-parameters and MLLM selection, were defaulted using the official code. * Number: We hope that AppAgent can be consistent with the paper in terms of apps and evaluation tasks. However, since AppAgent relies on XML, some pages involved in instructions cannot obtain XML files. We found that this may be because the app page has permissions or the page is dynamic (such as video), which makes the XML unable to be obtained through ADB. We also removed these tasks from the evaluation results of AppAgent and Mobile-Agent-v2. In the end, a total of 63 instructions were used for evaluation. ### **Question2: How is the storage and retrieval costs of memory cells?** ### **Response:** * The storage of memory unit depends on the maximum input length of MLLMs. For example, the most advanced MLLM GPT-4o can support an input length of 128K. This means that the memory unit can definitely hold the entire app's tutorials. These tutorials can come from the official instructions of the app, manually written operating experience, or from key information recorded during the historical tasks of Mobile-Agent-v2. * Since memory retrieval is done at the decision stage by decision agent, we cannot directly quantify the retrieval efficiency. However, we have experimentally found that, while keeping the output length unchanged, every 1k tokens added to the memory unit will increase the time it takes the decision agent to make a decision by 200ms to 1s. This means that even if the memory length reaches 3k tokens (the length of a general app tutorial), the decision stage will only increase the time by 8% (the average decision time increased from 21s to 22.8s.).
Summary: This paper proposes mobile-agent-v2, a multi-agent framework for mobile device operation. The framework includes a planning agent, decision agent, reflection agent and memory unit. The multi-agent framework exhibits advantages in reducing errors in long-horizon tasks and achieves better results than single-agent framework on the evaluations designed in the aper. Strengths: 1. Multi-agent framework for mobile device operation is novel in the literature, improving the overall performance of single agent framework, with the effectiveness of each component verified. Weaknesses: 1 The paper lacks a comparison of the evaluation benchmark with previous works such as AppAgent and works mentioned in Sec. 2.2. This makes it difficult to evaluate the overall effectiveness of the framework in comparison with previous works. 2. The evaluation benchmark is relatively small and simple, and how they are evaluated is not very clear. Does the evaluation relies on human evaluator to check the success of each component in the trajectory? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. See weakness, the evaluation details needs further clarification. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. This paper relies on commercial models such as GPT-4 and GPT-4V to complete the tasks. The cost of completing a task is uncaculated or discussed in the paper, but is expected to be more cost than the single agent framework. 2. The efficiency and accuracy trade-off. After introducing mutli-agent framework, the inference time should be much slower. -------- After the rebuttal, the reviewer think the heavy reliance on commercial models and the time and token cost should be discussed in the revision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Question 1: Lack of other baselines such as CogAgent and AppAgent.** ## **Response:** **CogAgent** CogAgent is a QA model that does not possess the capability to perform concrete operations on real devices through tool invocation. Therefore, our dynamic evaluation framework is not applicable to such QA models. **AppAgent** First, AppAgent requires an additional exploration phase for each app, whereas Mobile-Agent-v2 can perform app operations without the exploration phase. Secondly, AppAgent relies on XML, restricting it to the Android platform. In contrast, Mobile-Agent-v2 uses a purely visual solution, making it platform-independent. Thus, comparing Mobile-Agent-v2 with AppAgent is not fair. Nevertheless, we manually evaluated the AppAgent and the results are shown in the below table. With the same base model, Mobile-Agent-v2 outperforms AppAgent. |Model|Success Rate|Completion Rate| |-|-|-| |AppAgent|66.7%|74.3%| |Mobile-Agent-v2|77.8%|82.1%| ## **Question 2: Small and simple evaluation set with unclear evaluation method.** ## **Response:** * Mobile-Agent-v2 uses dynamic evaluation, requiring the agent to directly invoke mobile operation tools and connect to a real device for evaluation. This method is more complex and challenging compared to static app screenshots used in existing work, which only evaluate single-step operations. * Mobile-Agent-v2's dynamic evaluation involves 20 apps, including both system and third-party apps, across various operating systems and languages. Each app has 4 tasks, with an average of 7 steps per task. Each step undergoes three evaluations: planning, decision, and reflection, totaling 1500+ evaluations. This evaluation scale is the largest among works using dynamic evaluation, covering the most apps and having the most comprehensive task coverage. * We also received endorsements from Reviewer **c2St** and **Xdcr** for our evaluation method: **"The paper includes a detailed evaluation across different operating systems, language environments, and applications, providing robust evidence of the system's effectiveness."** and **"The experimental results on real-world mobile apps look promising."** We also recognize the community's lack of a general benchmark based on dynamic evaluation. We are currently working on creating this benchmark and providing a rich set of operation tools to encourage more models to participate in dynamic evaluation. ## **Question 3: Token cost of usage.** ## **Response:** Assuming an average of 7 steps per task, Mobile-Agent-v2, using a multi-agent architecture, consumes a relatively fixed number of tokens per step. In contrast, a single-agent architecture requires inputting a long sequence of operation history and screenshots with each call, leading to increased token consumption as the number of task steps rises. |Architecture|Token per Step|Total Token| |-|-|-| |Mobile-Agent (single-agent)|(1.3k) * steps + 0.4k|43.4k| |Mobile-Agent-v2 (multi-agent)|4.4k|30.8k| ## **Question 4: Efficiency reduction in multi-agent systems.** ## **Response:** * For time overhead, although multiple agents work serialize, there are still phases to parallel. The tables below compare the operation times for the Mobile-Agent single-agent framework and the Mobile-Agent-v2 multi-agent framework. "Screenshot" represents obtaining a screenshot and corresponding image processing from the device, while "Tool Call" represents the use of visual perception tools. It is evident that the increased operation time of the multi-agent framework is acceptable under parallel design. * For memory overhead, Mobile-Agent-v2 calls the large model via API method, so there is no additional memory overhead. **Mobile-Agent (Single-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Screenshot|-|Decision|Tool Call, Grounding|-|| |Time|2s||20s|12s|-|**34s**|| **Mobile-Agent-v2 (Multi-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Parallel with Planning|Screenshot, Tool Call, Planning, Reflection|Decision, Memory|-| Parallel with Planning|| |Time|-|18s|21s| -| -|**39s**|| --- Rebuttal Comment 1.1: Title: A few further questions Comment: Thank the authors for the detailed response. However, the reviewer is still confused about whether the successful judgment is done by humans or GPT-4V. If it is by GPT-4V, to what degree is the assessment reliable? As mentioned in lines 215-216 of the main paper, there are 88 instructions tested in total, with only 8 instructions for multi-app operations, what is the relationship between the so-called 1500+ evaluation with the 88 instructions? Mobile Agent-v2 seems to increase the overall time to finish the task in comparison with Mobile agent-v1, not to mention it is impractical to use GPT-4V for mobile devices. Further, the time calculation does not consider the information transmission time through the web or cable in real-world applications when using GPT-4V. --- Reply to Comment 1.1.1: Title: Response to the Official Comment by Reviewer cAKE Comment: ### **Question 1: How to determine whether the operation is successful or not?** ### **Response:** Due to the lack of dynamic evaluation benchmarks in the mobile field, all our success or failure judgments are made through manual evaluation. ### **Question 2: What is the relationship between the 1500+ evaluation with the 88 instructions?** ### **Response:** The instructions used in the Mobile-Agent-v2 evaluation require multiple steps to complete, with an average of 7 steps per instruction. For each operation, we manually evaluated the accuracy of planning, decision-making, and reflection. Therefore, there are about 88 x 7 x 3 = 1848 evaluations in total. ### **Question 3: Mobile Agent-v2 seems to increase the overall time to finish the task in comparison with Mobile agent-v1.** ### **Response:** * First although Mobile-Agent-v2 consumes about 15% more time, it achieves an operation success rate of more than 30%. Both are based on the same MLLM, and Mobile-Agent-v2 can significantly improve its operational capabilities by virtue of its architectural advantages. * The main reason that currently limits the operation time of Mobile-Agent-v2 is the inference speed of GPT-4. This is not a flaw in the framework itself. As the inference speed of MLLMs continues to accelerate and more advanced inference acceleration methods are used, or the use of local MLLMs, the inference speed will be further shortened. This will also significantly reduce the time-consuming gap between single-agent architecture and multi-agent architecture, while allowing the operation speed to be close to that of humans. * It is worth noting that although the operation speed of Mobile-Agent is not as fast as that of humans, there are still many practical application scenarios where such operation delays can be accepted. In addition, thanks to the parallel logic between agents, Mobile-Agent is also the fastest architecture that can currently achieve real machine operation. We are currently working on using local models to replace GPT-4 and have achieved initial results. Currently, the operation time per operation can be as low as 10 seconds. ### **Question 4: The time calculation.** ### **Response:** The time we calculate is the time of the complete operation, which includes the network delay and the communication consumption on the pipeline. If Mobile-Agent-v2 is used in a real device or app, it will not increase the operation delay.
Summary: This paper introduces an agentic framework, Mobile-Agent-v2, designed to address the challenges of planning and sequential function/tool invocation for Large Language Models (LLMs) in mobile operation scenarios. Mobile-Agent-v2 comprises three agents: a planner, an actioner, and a reflector, which jointly enhance the performance of LLMs in mobile operation tasks. Experimental results demonstrate the advantage of Mobile-Agent-v2 over the single-agent framework: Model-Agent, across various real-world applications. Strengths: - It is well-written and the structure is clear. - Its studied problem of how to design agentic workflow for LLM to improve planning and sequential tool calling is beneficial to the community. - The experimental results on real-world mobile apps look promising. Weaknesses: - The technical contribution seems limited. Its proposed framework of using planner, actioner and reflector for agentic workflow is a common practice. - The evaluation misses some important ablation studies. Technical Quality: 2 Clarity: 3 Questions for Authors: This paper proposes an agentic workflow, comprising a planner, actioner, and reflector, to enhance the planning and sequential tool-calling capabilities of Large Language Models (LLMs) in mobile operations. The motivation behind this approach is sound, and the experimental results appear promising. However, I have several concerns primarily regarding the technical novelty and the insights derived from the experiments. - The method of decomposing the responsibilities of a single agent into multiple components—planner, actioner, and reflector—is not novel. This approach has been previously proposed by [1] and is widely used in other contexts. Therefore, it is essential to clarify the key challenges and new insights associated with applying this framework to mobile operation scenarios. Without such clarification, it appears that the study merely leverages an existing method and applies it to a specific scenario without significant innovation. - The experimental results across various mobile applications are promising. However, the sequence length and type are critical factors that warrant detailed discussion. While some analysis is provided, the paper lacks comprehensive results regarding task completion accuracy when sequence length and type vary. For instance, it would be beneficial to understand, across different apps, which types of operations are more successfully completed with different tool sequences. These ablation studies are necessary to offer deeper insights. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Question 1: Similar multi-agent architectures have been applied in other scenarios.** ## **Response:** Mobile-Agent-v2 is the first work to use a multi-agent architecture in the mobile domain. The required capabilities of each agent, the tasks they perform, the interaction between agents, and their independent units are significantly different from existing work. Below are the challenges faced by Multi-modal Large Language Models (MLLMs) in the mobile domain and the solutions provided by our framework: 1. **Input type and sequence length:** In the mobile domain, MLLMs' operation decisions face the problem of multi-image long sequences and interleaved text and images input format, which limits MLLMs' decision accuracy. We propose a planning agent to maintain the stability of input sequence length and format in the single image. This input can improve the context learning of MLLMs. 2. **Operation grounding:** In the mobile domain, MLLMs need to generate decisions and their locations. Currently, mainstream MLLMs (even GPT-4) lack grounding capabilities. To address this, we included a visual perception tool to assist decision-making. This purely visual solution overcomes the limitations of the operating platform and offers greater versatility. 3. **Error gandling in long sequences:** In the mobile domain, long sequence operation tasks inevitably lead to errors during intermediate operations. When errors occur, MLLMs often struggle to reflect effectively and correct mistakes. We introduced a reflection agent that uses the MLLMs' multi-image understanding capabilities and contextual learning to judge the accuracy of operations and correct errors. 4. **Following historical screen information:** In the mobile domain, following and navigating history screen information is difficult due to input length limitations and interleaved text and images. We proposed using an independent memory unit to store this information, significantly enhancing the agent's ability to navigate key information. Regarding the design of the Mobile-Agent-v2 framework, we note positive feedback from other reviewers, such as **c2St** and **cAKE**, who endorsed our approach: **"The introduction of a multi-agent system to handle different aspects of mobile device operation is a novel approach"**; **"Multi-agent framework for mobile device operation is novel in the literature"**. This confirms the novelty of applying a multi-agent framework in the mobile domain. Moreover, Mobile-Agent-v2 is a practical framework. Our code, released in the open-source community, has received widespread acclaim, garnered thousands of stars, and has been applied in various real-world scenarios. This practical application stands out among the many works on multi-agent frameworks. ## **Question 2: The relationship between sequence length and operation type requires further study.** ## **Response:** We have compiled statistics on the relationship between operation sequence length and operation accuracy in the table below. |Sequence Length|Open|Click|Swipe|Type|Back|Home|Stop| |-|-|-|-|-|-|-|-| |[1, 4)|86.4%|91.5%|100%|75.0%|-|100%|100%| |[4, 7)|100%|81.3%|100%|60.0%|100%|100%|88.4%| |[7, )|-|80.8%|-|75.0%|-|-| 86.2%| Here are the conclusions: 1. **Minimal impact of sequence length in simple operations:** Operations that do not require precise coordinates or parameters, such as "Swipe", "Back", and "Home", are minimally affected by sequence length. 2. **Success rate variation in complex operations:** For click and stop operations, the success rate is higher at the beginning of the task than in the middle or later stages. This is because early-stage operations usually involve primary pages with better guidance, while advanced pages require the agent to have more robust operational knowledge. However, with the multi-agent architecture, there is no significant decline in the middle and later stages despite the sequence length. --- Rebuttal Comment 1.1: Title: Response to the Official Comment by Reviewer Xdcr Comment: Thank you very much for your feedback and the time you dedicated to reviewing our paper. We greatly appreciate your insights and are pleased to hear that the clarifications we provided were helpful in addressing your concerns. If you have any further questions or suggestions, please do not hesitate to reach out. We would be happy to discuss them with you. --- Rebuttal 2: Comment: Thank the authors for their rebuttals. I will raise my score to 5. However, the results on the multi-modal scenarios suggest that the performance of the proposed agentic framework can be satisfactory only when it has been applied on GPT-4V. This may limit the generalization of the proposed agent frameworks. In the next version of this paper, I would like to see whether the gap between applying Mobile-agent-V2 on GPT-4V and on other open-source models, especially on small models (less than 7B) can be narrowed down. This is because small models are more practical choice when deploying LLMs on the mobile device.
Summary: In this paper, the authors present Moble-Agent-v2, a multi-agent architectruure deisgned to assist mobile device operation. Mobile-Agent-v2 comprises three agents: planning agent, decision agent, and reflection agent. Frist, planning agent summarizes the task progress based on the operation hsistories. Then, the decision agent generates next operation to be executed. Finally, the reflextion agent classifies whether the excuted operation as correct, errorneous, or ineffective. For evaluation, the authors conduct experiments on real mobile devices in various mobile applications. Strengths: - The application of multi-modal LLMs in mobile operation is intriguing - The framework that uses multiple agents demnostrates superior perfornamce compared to the single-agent framework - The paper is easy to follow and evaluation is conducted on the real mobile devices Weaknesses: - The primary concern is the novelty of the paper. While using multiple agents for mobile operations appears novel, the complexity of tasks used in the evaluation suggests that the implementation of each component within Mobile-Agent-V2 is relatively simple. It would be benifical if the authors evalute the proposed framework in diverse user scenarios (refer to the limitation section). - It appears that the authors assume that LLMs possess the knowledge about mobile applications. For example, if the user instruction is "turn on dark mode," how does the LLM determine which setting (general setting, display setting) the agent should execute? Because, sometimes even for the human need the trial and error to execute certain task. Without grounding the available execution in the given context, the proposed method might require multiple trial and error until it successfully completes the task. - Regarding to the upper question, it would be better if the authors provide how many interactions are required to complete a task compared to the optimal execution. - It seems to be neither erroneous nor ineffective operations are recorded in the operation history. If this is the case, isn’t there a potentional risk that the decision agent might repeat the same operation? Woudn’t it be better to use these operation histories to prevent the LLM from repeating erroneous or ineffective executions? - There seems some potential risks associated with inaccuracies in every agents within the proposed framework, especially since it utilizes LLMs. For instance, the planning agent might inaccurately summarize operation histories, the decision agent could misunderstand visual inputs, or it might generate incorrect executions. Can Mobile-Agent-v2 automatically identify which component is responsible for an error? or could the authors provide more statistical information about this? Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors provide more information about knowledge injection? It would be helpful if the authors included examples to illustrate this process. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Including some discussion on the scope limitations of Mobile-Agent-v2 would be beneficial. For instance, what happens if the required application is missing—can Mobile-Agent-v2 suggest an alternative method? Additionally, for ambiguous instructions, is Mobile-Agent-v2 capable of resolving these by seeking clarification from the user? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Question 1: Lack of evaluation in special scenarios, such as unachievable or ambiguous tasks.** ## **Response:** * For tasks that cannot be directly completed through the given instructions, Mobile-Agent-v2 can still attempt to complete them. Due to the high flexibility of Mobile-Agent-v2's operation space, it can simulate almost any operation on a mobile device. Therefore, even if the device does not have the conditions to complete the task, Mobile-Agent-v2 can try to create the conditions through mobile operations. For example, if the task is to open "Facebook" but the app is not installed on the device, Mobile-Agent-v2 will open the app store, search for "Facebook," and click to install it. Then it will return to the home screen and open the app. * For ambiguous instructions that require explicit user input, Mobile-Agent-v2 can resolve them by expanding the operation space. Mobile-Agent-v2 supports custom operations within the operation space. For example, an operation "seek help" can be added with the description: "Use this operation when you have multiple operation paths to get clearer instructions from the user." When multiple choices can fulfill the instruction's requirement, Mobile-Agent-v2 can determine that the current state is ambiguous and proactively invoke the corresponding operation to obtain user guidance. * Due to inherent biases in Multi-modal Large Language Models (MLLMs), some operations with strict triggering conditions may be difficult for the MLLMs to use. We are also working on improving the MLLMs‘ contextual learning and operation invocation abilities through alignment training and reinforcement learning. ## **Question 2: How will the agent operate if the agent lacks operation knowledge?** ## **Response:** * Mobile-Agent-v2 can leverage the operation knowledge within the MLLMs and the multi-agent cooperation to complete complex tasks. Existing mainstream MLLMs, such as GPT-4, Gemini, and Qwen-VL, have operational experience with many apps. Even if the MLLMs lacks knowledge of certain apps, the decision agent can infer possible operations based on page content and the output of the planning agent. Even if errors occur, they can be corrected by the reflection agent. Additionally, operational capabilities can be acquired through training, such as Apple's Ferret-UI. * We propose knowledge injection to explore whether external operation knowledge can compensate for the agent's operational deficiencies. If a task is too difficult for the MLLMs' internal knowledge, knowledge injection can generate tutorials. Details on knowledge injection will be addressed in the next question. * If knowledge injection is not used, the agent can acquire operation knowledge through self-exploration. The agent will perform possible operations and use the reflection agent to observe the results, determining whether the operation was correct. The knowledge gained from exploration is then added to an external knowledge base as input for the decision agent. This way, the agent will not need to explore when facing the same task again. We selected samples from the test set with exploration processes and recorded the steps required to complete the task in the table below, where "KI" represents knowledge injection. We can see that the efficiency of the additional exploration process is acceptable. |w/o KI|w/ KI| |-|-| |8.6|6.4| ## **Question 3: How to implement knowledge injection?** ## **Response:** Operation knowledge is often a tutorial for a specific app. It is input into the decision model as part of the prompt: ```python if add_info != "": prompt += f''' ### Hint ### There are hints to help you complete the user's instructions. The hints are as follows: {add_info} ''' ``` where "add_info" is the tutorial. For example, on TikTok, sharing a video requires clicking the fourth icon on the right. However, the LLM may not have this knowledge. In this case, knowledge can be injected: "The share icon is a white arrow pointing to the right on the right side of the screen." When Mobile-Agent-v2 reaches the step where it needs to click the share icon, it will complete the operation based on the injected knowledge. Since the use of knowledge is determined by the decision agent, other steps are not affected by the presence of the knowledge. ## **Question 4: No record of incorrect operations in operation history.** ## **Response:** To simplify the structure of the prompt, we have omitted the situation when an incorrect operation occurs. We save the incorrect operations in another part of the decision agent's inputs: ```python if error_flag: prompt += f''' ### Last operation ### You previously wanted to perform the operation \"{last_summary}\" on this page and executed the Action \"{last_action}\". But you find that this operation does not meet your expectation. You need to reflect and revise your operation this time.''' ``` ## **Question 5: Can the framework automatically attribute errors to specific modules or provide error statistics?** ## **Response:** * Thank you for your question. This is an important capability for UI interaction agents. However, since the modules within the framework are interdependent, it is challenging to attribute errors to a specific module solely through global reflection after each operation. * We manually analyzed the causes of errors in Mobile-Agent-v2 on the evaluation set. The results show that errors can occur at any stage. |Planning Agent|Visual Tool Results|Decision Agent| |-|-|-| |12|10|13| Therefore, to achieve automated error detection, independent reflections need to be designed for each module. For improvement, we will add reflection on the previous module's results while generating results for each module, preventing errors from propagating between modules. This will not increase latency due to the parallel design. Moreover, errors can be intercepted at the point of occurrence. This will be a key direction for our future work. --- Rebuttal 2: Comment: I thank authors for their detailed responses to my questions. While each element of Mobile-Agent-v2 may not be novel in itself, its significance lies in configuring the agentic workflow through multiple agents within the mobile application domain. Therefore, I believe that this paper should be evaluated from an end-user perspective. From this viewpoint, it is essential for the framework to automatically identify and resolve the errors, as highlighted in question 5. Additionally, minimizing the exploration, as discussed in question 2, is crucial for enhancing the user experience. Moreover, regarding question 1, the proposed framework might handle ambiguous tasks, but there is a lack of experimental proof. As these details crucial to enhancing the user experience are missing in Mobile-Agent-v2, I will maintain my current score. I hope these aspects can be addressed in future version of the paper. --- Rebuttal Comment 2.1: Title: Response to the Official Comment by Reviewer kbfL Comment: Thank you for your continued evaluation and thoughtful feedback. We appreciate your recognition of the significance of configuring an agentic workflow within the mobile application domain. We understand your concern regarding the practical utility of Mobile-Agent-v2 compared to existing methods. It is worth emphasizing that our approach has demonstrated a substantial improvement in task completion rates and a significant reduction in errors during navigation tasks. These results directly translate to an enhanced user experience, particularly in complex mobile operation scenarios. The ability of Mobile-Agent-v2 to efficiently manage long sequences and interleaved data in real-world applications is a notable advantage over other related methods. The framework's design is specifically optimized to minimize user intervention and exploration during mobile operations, which is critical for maintaining a smooth and intuitive user experience. * **Automatically attribute errors.** We agree that automatic error resolution and handling ambiguous tasks are critical aspects. This study is focused on establishing the foundational architecture of Mobile-Agent-v2, specifically addressing the challenges of task progress navigation and content focus in mobile operations. * **end-user evaluation.** The area of mobile GUI agent is still at very early stage. The current time and resource constraints make end-user evaluations challenging. Our primary focus at this stage has been on establishing the core architecture and demonstrating its effectiveness through technical benchmarks, which we believe are essential first steps before moving on to broader end-user evaluations. We believe that Mobile-Agent-v2 is well-positioned to significantly advance mobile operation tasks, and we are committed to improving the framework based on your valuable suggestions.
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers for their valuable and constructive feedback, which will be pivotal in enhancing the quality of our work. We are encouraged by the following reviewers' perceptions: * Innovative and interesting multi-agent architecture in the mobile domain (c2St, kbfL, cAKE). * Comprehensive experiments are promising (c2St, Xdcr). * Significant performance improvements (c2St, kbfL, cAKE, 4qiL). * Well-written and easy to follow (kbfL, Xdcr). We appreciate the valuable suggestions and questions raised by the reviewers regarding Mobile-Agent-v2. These insights are significantly important for refining our work and guiding future research. We have diligently addressed all concerns and questions from the reviewers in individual responses. Below, we highlight some of the key issues or frequently asked questions raised by the reviewers. ## **Question 1: Overhead of multi-agent.** ## **Response:** * For time overhead, although multiple agents work serialize, there are still phases to parallel. The tables below compare the operation times for the Mobile-Agent single-agent framework and the Mobile-Agent-v2 multi-agent framework. "Screenshot" represents obtaining a screenshot and corresponding image processing from the device, while "Tool Call" represents the use of visual perception tools. It is evident that the increased operation time of the multi-agent framework is acceptable under parallel design. * For memory overhead, Mobile-Agent-v2 calls the large model via API method, so there is no additional memory overhead. **Mobile-Agent (Single-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Screenshot|-|Decision|Tool Call, Grounding|-|| |Time|2s||20s|12s|-|**34s**|| **Mobile-Agent-v2 (Multi-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Parallel with Planning|Screenshot, Tool Call, Planning, Reflection|Decision, Memory|-| Parallel with Planning|| |Time|-|18s|21s| -| -|**39s**|| ## **Question 2: How to implement knowledge injection?** ## **Response:** Operation knowledge is often a tutorial for a specific app. It is input into the decision model as part of the prompt: ```python if add_info != "": prompt += f''' ### Hint ### There are hints to help you complete the user's instructions. The hints are as follows: {add_info} ''' ``` where "add_info" is the tutorial. For example, on TikTok, sharing a video requires clicking the fourth icon on the right. However, the LLM may not have this knowledge. In this case, knowledge can be injected: "The share icon is a white arrow pointing to the right on the right side of the screen." When Mobile-Agent-v2 reaches the step where it needs to click the share icon, it will complete the operation based on the injected knowledge. Since the use of knowledge is determined by the decision agent, other steps are not affected by the presence of the knowledge. ## **Question 3: Similar multi-agent architectures have been applied in other scenarios.** ## **Response:** Mobile-Agent-v2 is the first work to use a multi-agent architecture in the mobile domain. The required capabilities of each agent, the tasks they perform, the interaction between agents, and their independent units are significantly different from existing work. Below are the challenges faced by Multi-modal Large Language Models (MLLMs) in the mobile domain and the solutions provided by our framework: 1. **Input type and sequence length:** In the mobile domain, MLLMs' operation decisions face the problem of multi-image long sequences and interleaved text and images input format, which limits MLLMs' decision accuracy. We propose a planning agent to maintain the stability of input sequence length and format in the single image. This input can improve the context learning of MLLMs. 2. **Operation grounding:** In the mobile domain, MLLMs need to generate decisions and their locations. Currently, mainstream MLLMs (even GPT-4) lack grounding capabilities. To address this, we included a visual perception tool to assist decision-making. This purely visual solution overcomes the limitations of the operating platform and offers greater versatility. 3. **Error gandling in long sequences:** In the mobile domain, long sequence operation tasks inevitably lead to errors during intermediate operations. When errors occur, MLLMs often struggle to reflect effectively and correct mistakes. We introduced a reflection agent that uses the MLLMs' multi-image understanding capabilities and contextual learning to judge the accuracy of operations and correct errors. 4. **Following historical screen information:** In the mobile domain, following and navigating history screen information is difficult due to input length limitations and interleaved text and images. We proposed using an independent memory unit to store this information, significantly enhancing the agent's ability to navigate key information. ## **Question 4: Lack of other baselines such as CogAgent and AppAgent.** ## **Response:** **CogAgent** CogAgent is a QA model that does not possess the capability to perform concrete operations on real devices through tool invocation. Therefore, our dynamic evaluation framework is not applicable to such QA models. **AppAgent** First, AppAgent requires an additional exploration phase for each app, whereas Mobile-Agent-v2 can perform app operations without the exploration phase. Secondly, AppAgent relies on XML, restricting it to the Android platform. In contrast, Mobile-Agent-v2 uses a purely visual solution, making it platform-independent. Thus, comparing Mobile-Agent-v2 with AppAgent is not fair. Nevertheless, we manually evaluated the AppAgent and the results are shown in the below table. With the same base model, Mobile-Agent-v2 outperforms AppAgent. |Model|Success Rate|Completion Rate| |-|-|-| |AppAgent|66.7%|74.3%| |Mobile-Agent-v2|77.8%|82.1%|
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper titled "Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration" presents a multi-agent architecture designed to address the challenges of mobile device operation tasks, specifically focusing on task progress navigation and focus content navigation. The proposed system includes three agents: a planning agent, a decision agent, and a reflection agent. The architecture aims to improve task completion rates and operational efficiency compared to existing single-agent architectures. Strengths: **Innovative Multi-Agent Architecture:** The introduction of a multi-agent system to handle different aspects of mobile device operation is a novel approach. It effectively distributes the workload among specialized agents, which likely contributes to improved performance. **Comprehensive Experimental Evaluation:** The paper includes a detailed evaluation across different operating systems, language environments, and applications, providing robust evidence of the system's effectiveness. **Significant Performance Improvements:** Experimental results show that Mobile-Agent-v2 achieves over a 30% improvement in task completion compared to a single-agent architecture, demonstrating the practical benefits of the proposed system. Weaknesses: **Limited Discussion on Scalability:** There is insufficient discussion on the scalability of the proposed system. The paper does not address potential challenges when scaling the architecture to more complex tasks or larger datasets. **Potential Overhead of Multi-Agent Coordination:** While the multi-agent approach shows improved performance, the paper does not discuss the potential computational overhead and complexity introduced by coordinating multiple agents, which could be a significant drawback in resource-constrained environments. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the proposed system handle the increased computational overhead associated with running multiple agents concurrently on resource-constrained mobile devices? - Can the system be extended to support more complex multi-app operations that require intricate coordination between different agents? - How does the reflection agent determine the appropriate corrective measures for erroneous operations, and what is the success rate of these corrections in practice? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Question 1: Overhead of multi-agent.** ## **Response:** * For time overhead, although multiple agents work serialize, there are still phases to parallel. The tables below compare the operation times for the Mobile-Agent single-agent framework and the Mobile-Agent-v2 multi-agent framework. "Screenshot" represents obtaining a screenshot and corresponding image processing from the device, while "Tool Call" represents the use of visual perception tools. It is evident that the increased operation time of the multi-agent framework is acceptable under parallel design. * For memory overhead, Mobile-Agent-v2 calls the large model via API method, so there is no additional memory overhead. **Mobile-Agent (Single-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Screenshot|-|Decision|Tool Call, Grounding|-|| |Time|2s||20s|12s|-|**34s**|| **Mobile-Agent-v2 (Multi-Agent Framework)** |Phase|Preparation|Planning|Decision|Operation|Reflection|Total| |-|-|-|-|-|-|-| |Task|Parallel with Planning|Screenshot, Tool Call, Planning, Reflection|Decision, Memory|-| Parallel with Planning|| |Time|-|18s|21s| -| -|**39s**|| ## **Question 2: Can Mobile-Agent-v2 be extended to more complex multi-app scenarios?** ## **Response:** * Yes, Mobile-Agent-v2 can be extended to more complex multi-app scenarios. Mobile-Agent-v2 is a purely visual general framework that is not restricted by app type or operating system, allowing it to be freely extended to any scenario. Mobile-Agent-v2 inherently possesses the capability to plan and make decisions in complex scenarios. Benefiting from the planning agent not being affected by long sequence and image-text interleaved inputs, the decision agent can ensure decision accuracy. Additionally, the memory unit can store key information from previously opened apps' screenshot for use in subsequent operations. * For extremely complex multi-app scenarios, Mobile-Agent-v2 also supports custom extensions to the operation space. By simply adding the functional description of the operation to the operation space, Mobile-Agent-v2 can perform the operation when needed. For example, if the user needs to use the Mobile-Agent-v2 to crawl the screenshot of the mobile app in batches, the user can add the "screenshot" operation, and request the Mobile-Agent-v2 in the instruction to use the operation to intercept the screen after completing the specified operation task. This extension can effectively improve accuracy without affecting operation efficiency. ## **Question 3: How does the reflection agent determine if an operation is correct and the success rate of error correction?** ## **Response:** * The Multi-modal Large Language Model (MLLM) itself has multi-image understanding capabilities, which supports multi-image input reflection. While the decision agent outputs an operation, it also outputs the operation intent. The internal contextual reasoning ability of the MLLM can be used to determine whether the operation result meets expectations. Additionally, the MLLM has some operation knowledge, so even if the decision agent's operation intent is inaccurate, it can still make a judgment based on the user's task. * We have compiled statistics on the success rate and average steps required for the reflection agent's error correction in the table below, where "Reflection SR," "Correction SR," and "Average Steps" respectively represent the success rate of reflection, the success rate of error correction, and the average steps for error correction. The results show that over **60%** of operational errors can be successfully corrected, with the cost being less than **2** steps. |Reflection SR|Correction SR|Average Steps| |-|-|-| |94%|62%|1.63|
null
null
null
null
null
null
How Diffusion Models Learn to Factorize and Compose
Accept (poster)
Summary: This paper investigates the capabilities of diffusion models, particularly Denoising Diffusion Probabilistic Models (DDPMs), in learning factorized representations and achieving compositional generalization. The authors aim to quantify this by analyzing mechanisms to train the model, supporting the hypothesis that the architecture of diffusion models has an inductive bias towards such factorized representations. Their results on a toy dataset suggest that, to achieve out-of-distribution compositional generalization, the training set must: i) contain at least a few compositional examples of the factors, and ii) present the factors independently of each others across the full range of their variability. If either of these conditions i) or ii) is not met, the model will fail to generalize out of distribution. Strengths: - The manuscript is particularly well-written and presented, with clear and thorough explanations. The methodology and experiments are relevant and consistent, and they are both well-explained and well-illustrated. - The originality of this work lies in explaining the factorization capability emerging in the manifold from the perspective of percolation theory. This supports the hypothesis that the training set needs a certain level of correlation among the components of independent features to achieve a faithful representation, which in turn leads to compositional generalize. Weaknesses: The only weakness I can think of in this work is that it is conducted in simplistic toy settings. However, I don't believe this discredits the work and interesting experiments presented in the paper. Additionally, the authors acknowledge this issue and suggest exploring more naturalistic and relevant experiments in future studies. Some typos: - Line 7: first time using the abbreviation ‘DDPMs’ in the abstract, maybe precise what it is (Denoising Diffusion Probabilistic Models) - Line 198: at the end of the sentence, there is an extra ‘not’ Technical Quality: 4 Clarity: 4 Questions for Authors: N/A Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: See “Weaknesses” Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide us with constructive feedback. Please find our responses to the specific concerns and questions below. Weaknesses 1. We thank the reviewer for the very positive feedback. As noted in the global response section “Regarding the toy setting,” using a simple toy setting allows us to carefully examine various effects. Future investigations on more realistic datasets are necessary but could introduce many competing effects. Despite the simplicity, our study provides valuable insights into compositionality and generalization in Diffusion models. Future research should explore why Diffusion models cannot encode continuous latent features continuously and how percolation theory of manifold formation can be applied to natural image data. 2. We thank the reviewer for pointing out our typos. We will fix them in the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed feedback. I remain fully convinced of the contribution of this article, despite the simplicity of the toy dataset used. A controlled and simplified setting is essential for investigating hypotheses, and while extending the work to more naturalistic data would be valuable, a clear starting point is necessary.
Summary: This paper investigates how and when diffusion models learn factorized representations of composable features. To this end, the authors construct controllable synthetic datasets by compositionally combining 1D and 2D Gaussian data and examine the factorized representation and the compositional generalization capability of the diffusion model. Systematic analysis on the controllable dataset indicates that the diffusion model learns orthogonal but not necessarily parallel representations and is capable of compositionally generalization to OOD samples if a few compositional examples are provided. Additionally, the authors draw a connection to percolation theory, suggesting that a certain amount of correlated data is required to learn factorized representations. Strengths: - The paper is well-written and easy to follow. - The simple yet well-controlled experiments and comprehensive analysis provide a clear understanding of factorization and compositionality in diffusion models. - Connecting the empirical findings to percolation theory improves the understanding on the emergence of factorization. Weaknesses: - The experiments are conducted solely on a simple dataset. The 2D Gaussian Addition dataset features only basic additive compositions of 1D Gaussian sprites. It remains unclear whether the paper's claims extend to more complex compositions, such as multiplicative compositions involving scale, color, etc. - Some of the conclusions, such as those regarding the properties and requirements for compositional generalization, have already been addressed in previous work [1]. [1] provides theoretical analyses and conditions for sufficient compositional support. Therefore, the experiments and conclusions from section 3.2 provide limited additional insights. [1] Wiedemer et al., “Compositonal Generalization from First Principles”, in NeurIPS 23. Technical Quality: 3 Clarity: 3 Questions for Authors: - Are the conclusions and implications specific to the diffusion model? Although the paper claims to investigate the compositionality of diffusion models, the experiments and analysis do not seem diffusion-specific. Other general generative models might exhibit similar behavior. - In the Gaussian Bump + 1D Gaussian Stripes experiment (Figure 4(g)), what is the accuracy when only 2D Gaussian Bump data is used? Comparing this value would help identify the performance boost from compositional generalization. - How did you format the inputs when training the diffusion model on 1D Gaussian stripe data? Did you input a null value (e.g., $(\mu_x, \phi)$)? - How does the model generate OOD samples in 2D Gaussian Bump Data? From the experiments on the 2D Gaussian Addition dataset, it is concluded that the model cannot interpolate well on unseen examples, as implied by the low accuracy in the intersection areas. By adding 1D Gaussian Addition data, the model learns from single 1D stripe patterns and composes that information to generate OOD samples in the test regions. However, in 2D Gaussian Bump Data + 1D Gaussian Addition data, the model never observed the 2D Gaussian Bump samples in the test regions. How could adding 1D Gaussian Addition data lead to improvement? - In Figures 4(c), 4(d), and 4(e), interestingly, the model trained only on 1D Gaussian data already predicts reasonably on $y$ values (but very low for $x$ values). Is there any particular reason for this result? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The conclusion section describes the limitation regarding the restricted scope and simplicity of the toy dataset. While the study provides valuable insights, the findings are based on synthetic datasets with simple structures, which may not fully capture the complexities of real-world data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide us with constructive feedback. Please find our responses to the specific concerns and questions below. Weaknesses 1. We appreciate the reviewer’s feedback. In our paper, we explored both additive and multiplicative composition with the 2D Gaussian Addition (addition of two 1D Gaussian Stripes) and the 2D Gaussian Bump (multiplication of two 1D Gaussian Stripes) datasets. Fig. 4(g) shows that the model can generalize to new 2D Gaussian Bumps given all 1D Stripes and a few 2D Bumps. This multiplicative composition is similar to transformations like scale, color, and style, as it involves "masking" one 1D Stripe with another. 2. We thank the reviewer for bringing Ref. [1] to our attention. While it similarly concludes that gaps in the support of the training dataset hinder learning, it lacks insights from a manifold formation perspective. Their experiments use sprites with a mix of (semi-)continuous and categorical latent variables, showing general results across all types. Missing support in categorical variables makes sense, as a model cannot generate images of monkeys after only seeing cats and dogs. However, with continuous latent features like $x$- and $y$-positions, one would expect interpolation to be possible (e.g., generating a sprite at $x=16$ after seeing $x=15$ and $x=17$). Our experiments focus on the model’s ability to represent continuous latent features. In Sec. 3.1, we found that the model represents continuous features similarly to categorical variables, with some overlap. Consequently, as shown in Sec. 3.2, the model struggles with interpolation. The main takeaway is that the model excels at composition but fails at interpolation. Contrary to popular belief, Diffusion models can perform well at compositional generalization but struggle with continuous latent features, hindering interpolation and robust generalization. Questions 1. While we believe some observations may apply to other generative models, we refrain from general claims, focusing our study on Diffusion models and their factorization and compositionality. Similar studies on vision- or language-based generative models are beyond our current paper's scope. 2. We thank the reviewer for the constructive suggestion. We refer the reviewer to the global response section on “Notes on additional figures - Additional Figure 2: Data Efficiency Scaling”. 3. When generating 1D Gaussian Stripes, we embed the data image (32 x 32) into a larger canvas (44 x 44), centering it and creating a 6-pixel border. We generate 2D Gaussian Additions centered in this extended border region, most of which (except the corner ones) will have one Gaussian Stripe partially visible in the 32 x 32 space. We then crop the central 32 x 32 pixels, keeping the label as the center of the 2D Gaussian Addition in the extended space. Thus, 1D Gaussian Stripes have one coordinate outside the 32 x 32 image space, preserving the 2D structure for both 1D and 2D data points. Details of the data generation process are in Sec. C.1 of the Appendix. 4. The experimental setup is detailed in Sec. C.1 Fig. 8 of the Appendix. The model effectively learned spatial information from the 1D Gaussian Stripes. Given a few examples of 2D Gaussian Bumps, it learned to multiplicatively compose the 1D Gaussian Stripes into Bumps. Combined with the results in response to Question 2, this demonstrates the model's ability to transfer knowledge across different forms of compositionality. 5. We appreciate the reviewer's detailed observation. The 1D models in Fig. 4(c)-(e) are trained on equal numbers of vertical and horizontal 1D Gaussian Stripes, providing the same training examples for learning $x$ and $y$. While the exact reason for the higher accuracy in generating $\mu_y$ is unclear, we observed that the model often defaults to generating 1D Stripes when it fails to generate the intersection of two Stripes, possibly skewing the accuracy distribution between $\mu_x$ and $\mu_y$. --- Rebuttal 2: Title: Response to the Rebuttal Comment: Thank you to the authors for the detailed reponse. I believe most of my concerns have been addressed. Although the authors employed controlled and simple toy datasets, I find their experimental setting and comprehensive analysis sufficient to investigate their hypotheses and provide insights regarding the compositionality of diffusion model. Therefore, I raise my score to weak accept.
Summary: This paper investigates, on a very simple toy dataset, how conditional diffusion models learn factorized representations of the data, and the extent to which they can compositionally generalize out of distribution. Additionally, the authors make a connection to percolation theory in physics. Strengths: The motivation and research questions are very interesting and relevant for the community. They are outlined in a compelling way in the introduction. The experiments are carefully designed and rather interesting. Weaknesses: Main high-level weaknesses: - This paper takes a promising approach to a crucial research question, but to me it does not deliver. Although using toy data allows for broader experimentation, the evaluation is overall quite limited. For this reason, though the authors make an effort to clarify the relevance of their results for realistic settings (Sec. 4.1), this still sounds unconvincing. - Clarity, especially in the presentation and discussion of the results. I found the results and conclusions significantly more difficult to parse than would be expected from toy experiments. --- Expanding on the above: - In general, the design and motivation for the experimental study is a bit lacking. I appreciate toy experiments, but they should reflect more realistic cases as much as possible, and in this case we are assuming that the model observes basically all information necessary to reproduce the data (as opposed to typical cases where the conditioning signal has significantly less information than the data itself). The authors should provide a solid justification for this choice, and more in general argue how such a toy scenario may be informative for more complex settings. The most natural next step would be to include a dataset that comes significantly closer to realistic settings, although of course some investigations will probably be impossible in that case. The trade-off between relevance and controllability is a hard one, and the current paper seems to be heavily on the latter side. - Even sticking to the current toy data, a broader evaluation would be possible and useful. There are several degrees of freedom that can be explored further, e.g., the UNet input noise level at evaluation time, the mutual information between condition and data (which could range from perfect - the current unrealistic case - to zero - the unconditional case), different compositionality patterns (as done in some references in the paper), the layer of the UNet at which representations are extracted. - The abstract states "paving the way for future research aimed at enhancing factorization and compositional generalization". This may be an overstatement, given my point above. What are concretely actionable insights from these experiments? - The experiments here investigate the representations in the last layer of a UNet. Why this choice? Representations in diffusion models could also be considered to be the activations at different layers, especially the bottleneck, which has been investigated in the literature. Another representation can be the latent variable deterministically corresponding to the data using the probability flow ODE. - The bottleneck idea is mentioned only in Appendix A.1, and in a negative way. This reinforces my belief that this toy scenario is too far from realistic settings, where the bottleneck is widely used as representation in diffusion models. However, let me still point out that I find the toy scenario a very interesting and promising direction. - As far as I can tell, there is no mention of the noise schedule for training DDPM. - When evaluating the representations in the UNet, what is the noise level in the input? I would expect this to significantly affect the representation (especially since you're using the last UNet layer and the UNet is trained to predict noise -- but this is just a hunch). - The models here are basically trained to convert the $(\mu_x,\mu_y)$ conditioning pair to an image that is deterministically determined from such a pair. I would be a lot more interested in investigating the representations of an *unsupervised* model trained on such data, where the labels are inaccessible but can be used for evaluation -- similarly to how e.g. disentanglement is evaluated. - Alternatively, to mimic real-world conditional (e.g. text-to-image) generation, there could be some stochasticity involved, such that the observed data is not trivially obtained from the conditioning signal. - To make the experiments and results fully understable and accessible to a wider range of researchers, I would strongly recommend including a quick introduction to the relevant concepts from geometry/topology. - In addition, the individual results subsections seem to lack a clear structure, which makes them not particularly easy to follow. Some more intuitive explanation, as well as highlighting the main takeaways from each experiment, might help. A few minor or more detailed comments: - In the contributions: "differing values of the same feature are also treated similarly" - what does this mean? - In Section 3.1 there's mention of $x$ and $y$: are these actually $\mu_x$ and $\mu_y$, since these are the ground-truth generative factors of the data? - At the beginning of the results section, the dataset is modified to have a torus topology. Why not define the dataset like this in the first place? - Line 124. "If the model were to parameterize x and y independently with two ring manifolds, we expect to see a Clifford torus in the neural activations, rather than a different geometry such as the 3D torus." What does it mean exactly that the model *parameterizes* x and y? My interpretation is that, when generating data conditional on $x$ and $y$ (which I take to be the means), we can observe how the activations in the pre-determined UNet layer change, as we change $x$ and $y$. - Line 127 and following: "we first confirm that the model indeed learns a torus representation of the dataset". What would the model alternatively learn? Since we're considering such simple toy datasets, I think it's expected that we can get a full intuition of what is going on. In my opinion, this is not the case. The following lines also involve technical terms from topology that are not properly introduced. - It's unclear what exactly effective dimensionality is, why it is important here, and what we can learn from it. - The conclusions drawn in the last paragraph of page 4 are not very clear. For example: "These results suggest that x and y are independently encoded in pairwise orthogonal subspaces, but different values of x’s and values of y’s respectively are not encoded in the same way, i.e. in parallel subspaces". - line 170: "compositionally generalizing out of the training distribution as well as its ability to spatially interpolate in a single variable alone". What is the precise difference between these two scenarios? In general, interpolation and composition don't seem to be accurately defined (although they can be implicitly inferred e.g. by lines 172-179). Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations are mentioned in the discussion section. Unfortunately, I believe some of them may be too large to ignore. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide us with constructive feedback. Please find our responses to the specific concerns and questions below. Weaknesses Expanding on high-level 1. We kindly refer the reviewer to the first two sections of the global response. 2. We appreciate the reviewer's suggestions. To summarize your suggestions: a) Input noise level for UNet, b) Imperfect conditioning label, c) Different compositionality patterns, d) Different layers of UNet. We have explored a), c), and d). For a), we have studied the bottleneck and layer 4 at various noise levels (diffusion timesteps) and found minimal differences between representations at different timesteps, with the bottleneck having diminishing signals. Thus, we used the final timestep output from layer 4 for our analysis. For c), as shown in Fig. 4, the model learned both multiplicative and additive compositionality with the full range of 1D Gaussian Strips and a few compositional examples. For d), we investigated all UNet layers and found layer 4 better reflects the latent structure of the dataset than other layers, so we used it for our analysis. We have not considered b). While interesting, this doesn't allow precise prompting of the model to output a specific Gaussian at a desired location, complicating evaluation. As a reminder, our focus is on whether the model can even, given perfect label information, organize learned data into an efficient and meaningful representation for generalization. The pursuit of b) falls outside of our research scope. 3. We thank the reviewer for this feedback. We will modify this sentence in the abstract. 4. Although Diffusion model bottlenecks have been shown to encode semantic information, in our case, layer 4 acts as the bottleneck due to the skip connection. This does not affect our main point, as we are focused on the factorization of the model's learned representation. 5. We kindly refer the reviewer to the global response section on “Notes on additional figures - Additional Figure 1: Impact of Annealed Noise Schedule on Learning Rates”. 6. See response to “Expanding on high-level” 2. 7. Investigating representations learned by unsupervised models is intriguing. Previous work has focused on disentanglement in unconditional Diffusion models, but evaluating compositional generalization without explicit conditional input is challenging. In terms of stochasticity, our conditional training uses classifier-free guidance, dropping labels 10% of the time to co-train conditional and unconditional models. Moreover, we know that the model isn't solely relying on conditional input, as it sometimes fails to learn even with perfect conditions (e.g., Fig. 4(c)-(e)). Future work should nevertheless evaluate unconditional models or conditional models with imperfect input for their factorization and compositionality. 8-9. We will refine our manuscript to reflect the suggestions. Minor 1. By the original sentence, we meant that different values of the same feature (e.g., $x = 13, 14, 15$) are encoded similarly to how different features ($x = 13$, $y = 13$) are encoded, treating $x$ and $y$ more like categorical than continuous variables. We will clarify this in the original sentence. 2. We refer to $x$ and $y$ as features and $\mu_x$ and $\mu_y$ as the center locations of the Gaussian bumps in a data image. The latter is intrinsic to the dataset, while the prior is learned by the model. 3. Although periodic boundary conditions could be enforced in all datasets, the resulting torus manifold is nonlinear and less straightforward to analyze using linear methods. Otherwise, there's no fundamental difference between datasets with and without these conditions. 4. By “parameterize $x$ and $y$,” we mean how the model represents the two features. Jointly, the model may learn a 2D look-up table (3D torus), while independently, it may learn $x$ and $y$ as 1D rings (4D Clifford torus). Our geometry/topology test distinguishes between these scenarios. 5. Our tests detect whether the learned representation is a 3D or Clifford torus. This requires the object to be a torus, which isn't always obvious during training. The persistent homology results in Fig. 2(b) help clarify this. We will refine the manuscript to be more precise. 6. Effective dimension, computed by the participation ratio (lines 134-135), measures the intrinsic dimensionality of learned representations. Plotting it over training epochs helps us understand if the model learns a Clifford torus (dimensionality of 4) and whether it does so from first learning a 3D torus (dimensionality of 3). Fig. 2(c) shows higher than 4-dimensional effective dimensionality, indicating independent learning of $x$ and $y$. Fig. 2(d) eigenspectra and Fig. 2(g) PCA projections suggest the learned representation isn't a perfect Clifford torus but higher-dimensional and cone-shaped. We will add more details on this in the manuscript. 7. The model treats $x$ and $y$ as categorical variables with non-zero overlaps. This means $x=16$ is a separate category from $x=17$, not neighboring values of a continuous variable. Technically, if $x$ and $y$ were continuous, different values should be encoded in parallel subspaces, like a Clifford torus. The model learns a hyper-factorized version, representing $x$ and $y$ as categorical. We will clarify this in the manuscript. 8. "Compositional generalization" means combining components in ways not seen in the training set (e.g., given ${(a, b), (c, d)}$, output ${(a, d), (c, b)}$). Interpolation refers to combining values (e.g., output $x=16$ given $x=15$ and $x=17$). These are distinct forms of generalization, which we probe separately in Sec. 3.2. We will define these clearly at the beginning of Sec. 3.2. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough and thoughtful reply. Based on your clarifications, I am raising my score to borderline accept, as I would no longer strongly oppose acceptance of this paper. However, I still have reservations about the motivation and the experimental setting. While toy experiments can be valuable, they need to offer insights that are likely to extend to more complex and realistic scenarios. For example, synthetic images can serve as proxies for real images, and small real images can approximate larger ones. In this case, it remains unclear what real-world scenarios these extremely simplified experiments are meant to approximate. When I asked in my initial review, "What are concretely actionable insights from these experiments?" it was a genuine concern. While toning down the language and claims in the paper would be a positive step, I believe it is essential to clarify this point in the final version, should the paper be accepted. Otherwise, I would encourage a deeper reconsideration of these issues, as this work has the potential to be a strong contribution. Additionally, I believe the paper would greatly benefit from being more accessible and clear, which would also enhance its impact across various subfields. As I mentioned, providing some background on relevant concepts from geometry and topology, or at least offering more intuitive explanations, would be very helpful. For example, intrinsic dimensionality is a concept that many in the machine learning community likely understand intuitively, but the paper would be improved by including both an intuitive explanation and a precise mathematical definition (perhaps in the appendix). Similar clarity should be provided for other concepts that may not be familiar to researchers outside of geometry, such as persistent homology, persistence diagrams and how to interpret them, the role of orthogonal/parallel subspaces in this context, etc. This would be fine at a geometry-centered workshop, but not at the main conference, unfortunately. At the very least, I would strongly recommend that the authors incorporate the updates they promised in the rebuttal to me and the other reviewers, and to assume less prior knowledge of topology, given that these fields are not highlighted in the title or keywords. Just a minor additional point about related work (this is of course just a suggestion): you could consider the empirical result in [Träuble et al., 2020](https://arxiv.org/abs/2006.07886) (Sec. 4.3), and in object-centric learning, the theoretical results in [Wiedemer et al., 2023](https://arxiv.org/abs/2310.05327) and empirical in [Dittadi et al., 2021](https://arxiv.org/abs/2107.00637). Thanks again for the discussion!
Summary: In this work, the authors investigate how diffusion models achieve compositional generalization. Through controlled experiments on conditional DDPMs with 2D Gaussian data, the authors find that these models learn semantically meaningful, factorized manifold representations of composable features. These representations are orthogonal for independent feature variations but not aligned for different values of the same feature, resulting in superior compositionality but limited interpolation over unseen feature values. The study reveals that a small amount of compositional examples can enhance this capability and links the formation of these representations to percolation theory in physics. This work provides insights into the mechanism of compositionality in diffusion models, guiding future research to improve their factorization and generalization for real-world applications. Strengths: 1. The work investigate the factorization and compositionality of diffusion model, which is an interesting and valuable problem. Weaknesses: 1. The major concern is that the work only considers a highly reduced setting, which makes it hard to validate wether the conclusions generalize to real world applications. Since the setting is oversimplified, it's also unclear how the conclusions can be extended to popular applications like conditional text2image generation. 2. The performance of diffusion is mostly evaluated with customized metrics without comparison to other models. It's hard to get a sense of how well/bad the model performs in term of the metrics. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Given the dataset contains only ~100K synthetic images, the model can simply memorize the data. This makes it hard to argue the generalization behavior of the model can be extended to real image diffusion models. 2. Why the output of layer 4 is used to investigate the internal representations? Do the authors try representations from other layers? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The work sufficiently addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide us with constructive feedback. Please find our responses to the specific concerns and questions below. Weaknesses 1. We thank the reviewer for the feedback, we kindly refer the reviewer to the global response Section “Regarding the toy setting” addressed to all reviewers. 2. Due to the simplicity of our toy setting, we design the custom metrics such that it is tailored to our objective of investigating the factorization and compositionality in Diffusion models. Towards this goal, we have specifically designed the task and the metrics accordingly such that they target the model’s ability in recovering the spatial location of the Gaussian bumps. Questions 1. Naively, when a model resorts to memorization, it typically suffers from poor performance in the out-of-distribution task sets. We have carefully selected our task and model such that the task is not too trivial for the model and that the model can just memorize all the training data. This is indeed shown through the out-of-distribution model performance evaluation in, for example, Fig. 5. We note that even given a subset of the original dataset, the model has similar in-distribution and out-of-distribution performance, which means that the model is not simply memorizing all in-distribution data but rather trying to learn the correct representation. 2. The reason why we have chosen layer 4 of the UNet as our internal representation is because it better reflects the latent structure of the dataset than other layers. Specifically, we have consistently noticed that the bottleneck layer of the UNet gives diminishing signal due to the utilization of the skip connection. As a result, we chose the layer immediately following the bottleneck layer as our internal representation. Moreover, we have investigated all the layers of the UNet and found that layer 4 gives the strongest semantic signal. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank authors for answering my questions. Though as also pointed by other reviewers, the paper only considers toy setting and lacks experiment on larger and more practical settings. After reading the discussions between authors and reviewers (including me), I believe the paper does provide some interesting observations of the underlying representation structures learned by diffusion models. And it could potentially guide the development of more efficient and powerful diffusion models. Therefore I have raised my score to 5 meaning I won't go against accepting the paper.
Rebuttal 1: Rebuttal: We thank all the reviewers for the constructive feedback and suggestions. Below are some of the recurring concerns/questions that we would like to address to all reviewers. Regarding the toy setting It is a tradeoff between a simple setting, which allows for controlled studies to isolate all possible effects, and a more complex setting that mimics popular application regimes. We agree with the reviewers that in larger, more realistic settings, richer dynamics absent from the toy setting would emerge. However, this does not undermine the relevance of our observations or the importance of starting with simple, controlled scenarios, especially given the complexity of neural networks. In fact, We expect many insights from our toy experiments to hold for larger models. For example, our experiments showed that the model's inability to interpolate well is due to the categorical-like encoding of continuous latent features, highlighting the limited ability of Diffusion models to encode continuous quantities key to image generation according to real-world physics. Our observation of the connection between percolation theory and manifold formation should also be relevant to more realistic datasets, where the measure of "overlap" is more abstract (e.g., cosine similarity between images). Using Gaussian bumps, a well-studied system in percolation theory with computable thresholds, validated our observations. Extending this overlap measure to natural image data and studying manifold formation in more realistic settings is a promising future direction. Finally, interpretability studies of Diffusion models on natural/synthetic datasets show varying degrees of success, especially in compositionality and factorization. Even slightly more realistic synthetic datasets than Gaussian datasets (e.g., dSprites, CLEVR) have led to conflicting conclusions, as noted in the Related Work section. This underscores the need for careful and controlled experiments, motivating our study of the Gaussian dataset, which has a low-dimensional, intuitive latent representation. This allowed us to analyze its geometry and topology to study its factorization explicitly. More importantly, our case study of the Gaussian dataset revealed phenomena of greater scientific interest than we initially expected. These observations, impossible without the toy setting, provide a foundation for understanding larger, real-world models. Future work should aim to validate and extend these observations to larger models with more realistic datasets. Regarding the specific experimental setup The task of image reconstruction of Gaussians given perfect conditional information (center coordinates) is inspired by traditional cognitive psychological studies. In these studies, subjects perform simple tasks while their behavior and brain activities are recorded. For tasks involving continuous latent factors (e.g., angles, color hues), humans and animals can represent these data using continuous attractors (e.g., rings or lines), which are efficient and robust. Our study aims to determine if Diffusion models can similarly learn these continuous attractors in a factorized and generalizable manner, akin to biological brains. Thus, we designed our task to mimic cognitive experiments, analyzing the "brain activity" (Sec. 3.1) and "behavior" (Sec. 3.2) of the Diffusion model during the task. Our findings reveal that, unlike biological brains, Diffusion models do not learn continuous manifolds representing continuous data variation, making them less efficient and robust. Notes on additional figures We have included two additional figures in the new supplementary material attached below. Additional Figure 1: Impact of Annealed Noise Schedule on Learning Rates We included a figure showing how an annealed noise schedule affects the learning rates of different concepts. Using a noise schedule similar to the original DDPM paper, we analyzed the relationship between learning rates and signal-to-noise ratios. We hypothesize that high-frequency details (e.g., moles) are drowned out more quickly with an aggressive noise schedule, while low-frequency features (e.g., hair color) persist longer and are easier to learn. This aligns with existing literature observations that low-frequency features are learned before high-frequency ones. To verify, we trained 1D conditional Diffusion models on sinusoidal data with high- and low-frequency components. Figure 1(b) shows the model learns the low-frequency component faster and more accurately due to the annealed noise schedule. Figure 1(c) illustrates that more noising steps result in diminishing signal-to-noise ratios in most noised image samples, making high-frequency details harder to learn. We have previously omitted this from the main paper as it’s not central to our main message. Additional Figure 2: Data Efficiency Scaling We included a figure on the data efficiency scaling of Diffusion models across various datasets. Our original paper (Sec. 3.2) shows that combining 2D Gaussian data with 1D Gaussian data reduces the number of compositional examples needed for compositional generalization. Figure 2 in the supplementary material compares data efficiency between models trained on i) 2D Gaussian Bumps, ii) 2D Gaussian Bumps + 1D Gaussian Stripes, and iii) 2D Gaussian Bumps + 2D Gaussian Additions + 1D Gaussian Stripes. Figure 2(a)-(c) show that models trained on ii and iii achieve higher accuracy with fewer compositional examples. We also show in Figure 2(d) that for dataset size $N$, models trained with 1D Gaussian Stripes are more data-efficient, with data needed to reach an accuracy threshold growing linearly rather than quadratically. This signifies that not only can the model learn multiple forms of compositionality, but do so in a more data efficient manner when the dataset is mixed in with the set of 1D components, which potentially suggests a more data-efficient training approach for Diffusion models. Pdf: /pdf/c3f2b6ddc9f14e3f2fb1c2bbbd1bb0836d05517a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction
Accept (poster)
Summary: This paper presents a framework denoted UrbanKGent, for finetuning an LLM to assist the construction of knowledge graph construction (specifically triplet extraction relation prediction). The gist of the study is to construct an ad-hoc corpus for finetuning. In this process, a method of trajectory refinement is proposed to enhance both the effectiveness and explainability of the framework. Strengths: 1) An interesting and important application domain of LLM 2) A generally well-developed workflow for tackling the target problem 3) The evaluation is more or less adequate for showing the superiority Weaknesses: 1) It seems that the generation of the corpus, which is a key ingredient of the framework, lacks a systematic and scientific methodology for its generation. 2) The transferability is questionable – is this method useful in cities that are not finetuned? 3) The description of the roles of GPT4 and the smaller LLM that is finetuned is vague. Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > [W1] It seems that the generation of the corpus, which is a key ingredient of the framework, lacks a systematic and scientific methodology for its generation. Sorry for the confusion. As mentioned by the reviewer, the corpus is very important for the UrbanKGent framework. Therefore, we construct the corpus by uniform sampling (Line 308-310) from multi-source urban data (i.e., AOI, road network, POI, review, and web page), which are widely used for urban knowledge distillation. By finetuning with the generated corpus, we demonstrate the performance of LLMs on UrbanKGC tasks could be largely improved. We will provide a more clear description of the corpus generation in our final paper version to avoid potential confusion. > [W2] The transferability is questionable – is this method useful in cities that are not finetuned? Thanks to the reviewer's question to help us clarify potential confusion. The proposed UrbanKGent framework could enhance UrbanKGC performance in both fine-tuning and non-fine-tuning scenarios. Specifically, as demonstrated in Table 3 of our paper, the performance of various LLM backbones consistently improves only using the UrbanKGent Inference pipeline without fine-tuning, compared with zero-shot reasoning and In-context-learning paradigms. Therefore, in cities where the LLM has not been fine-tuned, our method retains practical usage value. > [W3] The description of the roles of GPT4 and the smaller LLM that is finetuned is vague. Sorry for the confusion. As mentioned in Section 4.3 (Line 286-288) in our paper, we derive GPT-4 for trajectory generation consisting of two steps (i.e., the instruction generation and iterative trajectory refinement shown in Figure 4). The obtained trajectory will be further used to fine-tune the small LLMs. The fine-tuned smaller LLMs then will be used for UrbanKGC tasks with faster inference speed and lower cost. --- Rebuttal Comment 1.1: Title: Thanks Comment: I would like to thank the authors for the response. I would like to keep my rating as I find the argument about transferability is not that convincing. --- Reply to Comment 1.1.1: Title: Response to e3YZ Comment: Dear Reviewer e3YZ: We are pleased that the previous rebuttal clarified most of the confusion. However, we apologize for the remaining confusion regarding "the transferability of the proposed framework in cities that are not fine-tuned." Please check the following response for a more detailed explanation. We agree with the reviewer that it is crucial to assess whether the proposed UrbanKGent framework can enhance UrbanKG construction in cities where the data was not used for fine-tuning. In fact, we have discussed this scenario in our main results. Specifically, we **directly apply the proposed UrbanKGent framework (i.e., UrbanKGent Inference in Table 3)** to different LLMs without fine-tuning, and observe the performance changes compared to prompting LLMs with zero-shot reasoning or in-context learning paradgim. As can be seen in Table 3, the proposed UrbanKGent framework was applied to the NYC and CHI datasets **without any fine-tuning of the LLMs for these specific cities.** Despite the absence of fine-tuning, the UrbanKGent framework could significantly improve the performance of various LLMs (including open-source models like Llama and Mistral, as well as API-based models like GPT-3.5 and GPT-4) on UrbanKGC tasks. For example, after applying the UrbanKGent framework, Llama-2-13B's RTE performance improved by 84.21%, rising from 0.19 to 0.35 under human evaluation, compared to the zero-shot reasoning paradigm. Similarly, Mistral-2-7B and GPT-3.5 showed improvements of 64.71% (0.17→0.28) and 38.71% (0.31→0.43), respectively. The improvements in KGC tasks are also substantial. These results demonstrate that our **proposed UrbanKGC framework can significantly enhance LLM performance on UrbanKGC tasks, even without city-specific data for fine-tuning**. We will include a more detailed description of the experimental settings and results analysis in the final version of our paper to avoid such confusion. We sincerely appreciate your insightful question and the opportunity to clarify this aspect of our work. Best, NeurIPS 2024 Conference Submission 16941 Authors
Summary: The paper introduces UrbanKGent, a comprehensive framework utilizing a large language model (LLM) for constructing urban knowledge graphs (UrbanKGs). The key components of UrbanKGent involve creating an instruction set for tasks like relational triplet extraction and knowledge graph completion, which are tailored to urban data's heterogeneity and spatial characteristics. An iterative refinement module is employed to improve and adjust trajectories derived from the GPT-4 model, enhancing the quality of the knowledge graph. The framework is then fine-tuned using these enhanced trajectories on the Llama 2 and Llama 3 models, resulting in UrbanKGC agents, available in 7/8/13B versions. Strengths: 1. Intensive efforts have been dedicated including crawling and preprocessing raw data, empirically validating the insufficiency of GPT-4, and instruction fine-tuning the UrbanKGC agent. 2. The paper focuses on real-world scenarios with domain knowledge infused in UrbanKGC agent construction and carries out comprehensive evaluations on real-world datasets using human and GPT-4 self-assessment. Weaknesses: 1. The computational efficiency of the proposed algorithms and their scalability with the size of the dataset is not thoroughly discussed. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. As mentioned in Appendix Section D.1, human annotators are employed to fill out an evaluation form. Is it possible to provide more detailed information on human annotators, such as an exact number of human annotators? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: As acknowledged in appendix section F, the paper has limitations on the further application demonstration of construction UrbanKGs and the evaluation method in this work is cost-intensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > [W1] The computational efficiency of the proposed algorithms and their scalability with the size of the dataset is not thoroughly discussed. We sincerely appreciate the reviewer's valuable comment, which helps us improve the quality of our paper. As suggested, we provide detailed inference latency for the UrbanKGent family across different dataset scales in the following table. As shown in Table 1, by applying VLLM [1] techniques to accelerate our current framework, UrbanKGent-13B can automatically construct an UrbanKG with approximately one million triples (905,442 extracted from the NYC-Large dataset) in about 62 minutes. We also report the average inference time of the UrbanKGent family. For every 1,000 data records processed, the UrbanKGent takes 0.57 minutes on the 7B version, 0.76 minutes on the 8B version, and 1.53 minutes on the 13B version, respectively. Table 1: The inference latency comparison of UrbanKGC using the UrbanKGent family before and after using VLLM acceleration. We use two middle-size datasets (i.e., NYC and CHI) and two large-scale datasets (i.e., NYC-Large and CHI-Large) for UrbanKG construction. The accelerated inference latency is **bolded.** Dataset|Latency (minutes)|||Data Volume -|-|-|-|- || UrbanKGent-7B | UrbanKGent-8B |UrbanKGent-13B| NYC|6.83/**1.19**|7.62/**1.58**|16.48/**3.19**|2,089 CHI|3.60/**0.62**|4.02/**1.13**|8.69/**1.68**|1,102 NYC-Large|132.33/**23.07**|147.75/**30.76**|319.38/**61.93**|40,480 CHI-Large|94.39/**16.55**|105.36/**21.93**|227.76/**44.47**|28,868 -|3.27/**0.57**|3.65/**0.76**|7.89/**1.53**|Per 1,000 records Currently, the maximum dataset scale we used is about forty thousand. We are working on processing larger-scale data and analyzing potential scalability issues. **Reference**: [1] Kwon, Woosuk, et al. "Efficient memory management for large language model serving with pagedattention." *Proceedings of the 29th Symposium on Operating Systems Principles*. 2023. > [Q2] As mentioned in Appendix Section D.1, human annotators are employed to fill out an evaluation form. Is it possible to provide more detailed information on human annotators, such as an exact number of human annotators? Thanks to the reviewer's valuable suggestion. We invite 3 AI experts to fill out the evaluation form of the prediction results from various LLM variants. All of them possess work experience as algorithm engineers at Internet or AI companies. To avoid potential performance bias (as a priori, it is believed that larger-size LLMs are better), we do not reveal which results come from which LLMs. This rigorous process resulted in the annotation of 200 random samples. We will provide a more detailed description in Appendix Section D.1 in our final paper version. We sincerely appreciate the valuable suggestions provided by the reviewer. --- Rebuttal Comment 1.1: Comment: Thank you for addressing the concerns raised in the review. Your additional details regarding the computational efficiency and scalability of the UrbanKGent family are appreciated. The reported latencies and the application demonstrate a significant improvement in performance, particularly when dealing with larger datasets like NYC-Large and CHI-Large. Thank you again for your efforts to improve the paper. These additional clarifications will undoubtedly enhance the quality of the manuscript.
Summary: The paper presents UrbanKGent, a unified large language model agent framework for urban knowledge graph construction. The framework consists of knowledgeable instruction generation, tool-augmented iterative trajectory refinement, and hybrid instruction fine-tuning. The authors evaluate the framework on two real-world datasets using both human and GPT-4 self-evaluation. The experimental results show that UrbanKGent outperforms 31 baselines in urban knowledge graph construction tasks and achieves state-of-the-art performance. Strengths: 1. The paper proposes a unified framework for urban knowledge graph construction, addressing the challenges of heterogeneous relationship understanding and geospatial computing. 2. The knowledgeable instruction generation module and tool-augmented iterative trajectory refinement module are innovative and practical methodologies. 3. The experimental evaluation is comprehensive, including both human evaluation and GPT-4 self-evaluation. 4. The results show that UrbanKGent outperforms 31 baselines in urban knowledge graph construction tasks and achieves state-of-the-art performance. Weaknesses: 1. This paper focuses on urban knowledge graph construction tasks. However, the details of the urban knowledge graph are not clearly defined. For example, the ontology of urban KGs, entities types, and relations types are not discussed. This limits the understanding of the proposed framework for readers who are not familiar with urban knowledge graphs. 2. The KGC tasks only focus on the geospatial relations. Is there any specific reason to select these relations? Can we predict other relations using the same framework? 3. The motivation of the KGC task is not clearly explained. If the geospatial information is given, can we directly use the tool to get their relations? The paper should provide more insights into the motivation behind KGC task and the necessity of using LLMs for it. 4. The ablation study is not clearly described. The paper should provide more details on the datasets, settings, and the performance of the final model used in Table 10. Otherwise, it is hard to evaluate the effectiveness of each module. 5. The iterative trajectory self-refinement module is not well explained. Based on what criteria is the trajectory refined? How many iterations are performed? How does the performance improvement as the number of iterations increases? Moreover, if LLM lacks the knowledge about urban KGs as the motivation of this paper, how can it be a good judger? Is there any analysis about the correctness of the comments and refined trajectory? 6. The cost for fine-tuning data construction is not discussed. How much data is required for fine-tuning the model? What is the cost of collecting and labeling the data for fine-tuning? 7. In experiments, the paper should compare UrbanKGent with existing UrbanKG construction methods rather than purely focusing on the LLMs which are not tailored for this task. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What is the ontology of urban KGs? 2. What is the motivation for selecting geospatial relations for KGC tasks and using LLMs for this? Can we predict other relations using the same framework? Can we directly use the tool to get relations if geospatial information is given? 3. Please provide more analysis of the iterative trajectory self-refinement module. 4. Settings of the ablation studies. 5. Cost for fine-tuning data construction. 6. Can we compare UrbanKGent with existing UrbanKG construction methods? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1&Q1 Due to the tables related to weakness 1 and question 1 are included in the Supplementary PDF of the **Author Rebuttal**, we have moved the corresponding responses to the **Author Rebuttal** for better understanding. We apologize for any inconvenience this may cause. > W2,W3&Q2 We choose geospatial relations for KGC tasks for two reasons. First, spatial relations are the majority in UrbanKG (about 60% in both datasets, as reported in Table 15 of PDF). Second, as validated by recent works [1-2], spatial relation understanding is one of the most challenging tasks for LLMs. Therefore, we use geospatial relations to demonstrate the effectiveness of the UrbanKGent in KGC. Other relations, including temporal and functional, can be predicted using this framework by extending corresponding instructions. For example, by injecting the time information (e.g., the date of building was built) of urban entities into the instruction, we can complete their missing temporal relation (e.g., built earlier than). The capability of UrbanKGent to identify other relation types is also echoed by its success in RTE tasks. Regarding external tool use, it is possible to invoke tools to calculate certain geospatial relations, but not all, with sufficient geospatial information. For example, given the Region Connection Calculus (RCC) of two entities, we can derive up to 5 defined geospatial relations using the GIS system. However, some spatial relations may not be able to be extracted using tools, especially when entity information is incomplete, or the corresponding relation type and tool are unknown. Overall, we acknowledge the merit of using external tools to derive urban relations. In fact, we have devised tool use as an indispensable block in the tool-augmented trajectory refinement block, and are working to incorporate more spatiotemporal tools to improve the effectiveness of our framework. Reference: [1] GeoLM. EMNLP. 2023. [2] Are Large Language Models Geospatially Knowledgeable? 2023. > W4&Q4 We use UrbanKGent-7B as the final model, with the same setting in the overall experiment (Section 5.2) on the NYC dataset. For the ablation study (Line 720 - 724), we remove the knowledgeable instruction template ($UrbanKGent-7B^{\spadesuit }$), multi-view design ($UrbanKGent-7B^{\star }$), external geospatial tool invocation ($UrbanKGent-7B^{\ddagger}$), and iterative trajectory self-refinement ($UrbanKGent-7B^{\dagger}$) respectively from UrbanKGent-7B to validate their effectiveness. The results show that removing any of these modules will lead to performance degeneration, demonstrating their effectiveness. We will add more details in E.2 and provide more descriptions in the caption of Table 10. > W5&Q3 We prompt the backbone LLM to automatically judge if a trajectory is faithful and provide refinement feedback for the unfaithful trajectory, more details are explained in **Lines 272-274**. Trajectory Updater will then follow the feedback to refine the current trajectory. For the maximum iterations, as explained in **Lines 280-282**, we set it to 3 to avoid excessive cost. To further understand the role of iteration numbers, we set maximum iterations from 0 to 10, and report the model's performance as well as the average stopping epochs in which the predefined stopping condition is satisfied (i.e., all trajectories are faithful). |The GPT-evaluation results on NYC dataset using UrbanKGent-7B|||| -|-|-|- maximum Iteration|stopping epochs|RTE (acc)|KGC (acc) 0|-|0.44|0.47 3|2.46|0.46|0.49 5|2.84|0.47|0.48 10|3.25|0.48|0.49 As can be seen, the model performance drops without iterations. However, larger than 3 iterations only lead to marginal improvements. This suggests that 3 iterations are cost-effective for the model to achieve the predefined stopping condition. Moreover, we agree with the reviewer that LLM cannot work as a good judger if it lacks urban knowledge. As shown in Table 10, the $UrbanKGent-7B^{\spadesuit }$, whose knowledgeable instruction template (Line 720-721) is removed, performs poorly, although it incorporates the iterative trajectory self-refinement method. Regarding the correctness of the refined trajectory, the performance improvement reported in the above table indicates that the refined trajectory is more accurate. The reported case study in Figure 8 in our paper also echos its effectiveness, e.g., the model can identify some missing triplets after self-refinement. > W6&Q5 The constructed instruction dataset for fine-tuning can be found in **Supplementary Material**. As suggested, we provide statistics and the cost of prompting GPT-4 for instruction generation in the following Table: Task|Number of raw records|Number of instructions generated by UrbanKGent pipeline|Cost (dollar) -|-|-|- RTE|354|4,246|12.51 KGC|354|2,011|18.07 In total|708|6,257|30.58 As can be seen, 6,257 instructions are used and the cost of calling GPT-4 is 30.58 dollars. > W7&Q6 Existing UrbanKG construction methods rely heavily on manual extraction of urban entities and relations, as detailed in **Lines 394-400**. These manual methods, while effective, are not directly comparable to our automated approach without corresponding entities and relations annotated in the training data. However, we agree that a comparison can still provide valuable insights. To this end, we performed a quantitative comparison between previous manual methods and our proposed automatic paradigm using the latest UrbanKG benchmark [1]. As shown in **Table 4** in our paper, UrbanKGent can construct UrbanKGs with the same scale of triplets and entities in the benchmark by using only one-fifth of the data. Furthermore, our method expands the relationship types by a hundred times, demonstrating improvements in efficiency and comprehensiveness. This comparison underscores UrbanKGent's potential for automatic UrbanKG construction, making it a competitive alternative against manual methods. Reference: [1] UUKG. NeurIPS 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the responses, which partially addressed my concerns except for the necessity of using LLM for geospatial relation prediction. Thus, I raised my score to Borderline acceptance. --- Reply to Comment 1.1.1: Title: Response to Reviewer GYCX Comment: Dear Reviewer GYCX: We are pleased that most of the confusion has been clarified by the previous Rebuttal. We appreciate your expertise and insights in helping us improve the quality of our paper. But we apologize for the difficulty the reviewer still experienced in understanding the necessity of using LLMs for geospatial relation prediction. Please check the following response for a more detailed explanation. We propose using LLMs for geospatial relation prediction for two primary reasons. First, existing GIS tools may struggle to extract certain spatial relations when geospatial information is incomplete. For instance, if the polygon data (i.e., the latitude and longitude boundaries) of two urban entities, such as Queens and Staten Island, is incomplete, traditional methods may fail to predict missing geospatial relations (e.g., whether they are disconnected). In contrast, LLMs can leverage both geospatial and semantic information to infer these relations. For example, an LLM might successfully infer that "Queens and Staten Island are geospatially disconnected" by directly using their semantics. Second, predicting new spatial relations often requires using multiple GIS tools or even developing new ones, which can be labor-intensive. LLMs, however, can efficiently manage this process by automatically routing tasks to existing tools or implicitly building a neural inference function for geospatial relation prediction. In our framework, we have designed the LLM to invoke various external tools to derive urban relations, and we are working to integrate more spatiotemporal tools to enhance the framework's effectiveness. Overall, the usage of LLMs for geospatial relation prediction has practical potential and has been explored in prior research. For example, recent studies [1-3] have quantitatively evaluated LLMs' ability to predict spatial relationships and perform certain spatial calculations. Furthermore, works like CityGPT [4], CityBench [5], and BB-GeoGPT [6] demonstrate the potential of LLMs in automating complex geospatial reasoning tasks, including geospatial relation prediction. We believe these efforts are crucial for the future deployment of LLM-based applications in urban and GIS contexts. In such a scenario, we present the first attempt to use LLMs for urban geospatial relation completion within the UrbanKG construction process. We believe our approach can serve as a valuable reference for researchers in this field. We will include the discussion on the necessity of using LLMs for geospatial relation completion in the final version of our paper. We sincerely appreciate your insightful question and the opportunity to clarify this aspect of our work. Best, NeurIPS 2024 Conference Submission 16941 Authors Reference: [1] Li, et al. “GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding.” EMNLP. 2023. [2] Bhandari, et al. “Are Large Language Models Geospatially Knowledgeable?” ICAGIS. 2023. [3] Mooney, et al. “Towards Understanding the Geospatial Skills of ChatGPT: Taking a Geographic Information Systems (GIS) Exam.” SIGSPATIAL. 2023. [4] Feng, Jie, et al. "CityGPT: Empowering Urban Spatial Cognition of Large Language Models." *arXiv preprint arXiv:2406.13948* (2024). [5] Feng, Jie, et al. "CityBench: Evaluating the Capabilities of Large Language Model as World Model." *arXiv preprint arXiv:2406.13945* (2024). [6] Zhang, Yifan, et al. "BB-GeoGPT: A framework for learning a large language model for geographic information science." *Information Processing & Management* 61.5 (2024): 103808.
Summary: This paper proposes a unified large-scale language model agent framework called UrbanKGent, specifically for the Construction of Urban Knowledge Graph Construction (UrbanKGC). UrbanKGent utilizes instruction generation of heterogeneous and geospatial information, as well as an iterative trajectory optimization module based on GPT-4, to further enhance the ability to extract critical knowledge from multi-source city data and effectively reduce the significant manual labor of traditional methods. By fine-tuning the hybrid instructions on the Llama 2 and Llama 3 series models, the researchers developed the UrbanKGC agent, including the UrbanKGent-7/8/13B version. Experiments on two real-world datasets show that the UrbanKGent family not only significantly outperforms 31 benchmark models on the UrbanKGC task, but is also more than 20 times more cost-effective than GPT-4, while being able to build a knowledge map of cities with hundreds of times richer relationships with less data. Strengths: 1. UrbanKGent proposed in this paper provides a unified solution for building urban knowledge graphs, which automates the process of extracting key information from multi-source urban data and reduces the need for human intervention. 2. The authors propose instruction generation methods of "heterogeneity awareness" and "geospatial information fusion", which can better capture the characteristics of urban knowledge graph construction tasks and make up for the shortcomings of ordinary LLM in understanding complex heterogeneous relationships and geospatial computing capabilities. 3. Experimental results on two real-world data sets show that UrbanKGent not only significantly outperformed 31 benchmarks on the UrbanKGC task but also outperformed the state-of-the-art LLM GPT-4 by more than 10% at about 20 times lower cost, which has great potential for efficiency and economy in practical applications. Weaknesses: 1. As the size of the dataset grows, the algorithmic efficiency and scalability of UrbanKGent may become an issue. Although not mentioned in the paper, in practical applications, the processing of large-scale data sets may require more computational resources and time. Although the paper notes limitations, the specifics may require more elaboration. For example, under what conditions may the model perform poorly and how these limitations affect the quality and reliability of the final knowledge graph? 2. UrbanKGent may face privacy and fairness challenges, especially when handling personal or sensitive information. I wonder what the author thinks about privacy and fairness. Although these issues are not discussed in detail in the paper, in actual deployment, if not properly addressed, they can lead to violations of personal privacy or unfairly affect specific groups. 3. The UrbanKGent framework is mainly aimed at urban knowledge graph construction tasks. I wonder if the author has considered the applicability of other types of knowledge graph construction tasks. Can the UrbanKGent framework proposed in this paper be further extended to the knowledge graph construction in other fields? Technical Quality: 3 Clarity: 2 Questions for Authors: NA Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > [W1] As the size of the dataset grows, the algorithmic efficiency and scalability of UrbanKGent may become an issue. Although not mentioned in the paper, in practical applications, the processing of large-scale data sets may require more computational resources and time. Although the paper notes limitations, the specifics may require more elaboration. For example, under what conditions may the model perform poorly and how these limitations affect the quality and reliability of the final knowledge graph? Thanks for the reviewer's insightful question, which helps us improve the quality of our paper. We agree with the reviewer that efficiency is important in practical agent deployment. As suggested, we provide a more detailed efficiency analysis of UrbanKGent in the following table, and we also use LLM acceleration techniques VLLM [1] to improve the scalability of UrbanKGent. As shown in Table 1, by applying VLLM techniques to accelerate our current framework, UrbanKGent-13B can automatically construct an UrbanKG with approximately one million triples (905,442 extracted from the NYC-Large dataset) in about 62 minutes. We also report the average inference time of the UrbanKGent family. As can be seen in Table 1, For every 1,000 data records processed, UrbanKGent takes 0.57, 0.76, and 1.53 minutes on 7B, 8B, and 13B versions, respectively. Table 1: Comparison of UrbanKGC inference latency (minutes) using the UrbanKGent family before and after using VLLM acceleration. We use two middle-size datasets (i.e., NYC and CHI) and two large-scale datasets (i.e., NYC-Large and CHI-Large) for UrbanKG construction. The accelerated inference latency is **bolded.** Dataset|Latency (minutes)|||Data Volume -|-|-|-|- || UrbanKGent-7B | UrbanKGent-8B |UrbanKGent-13B| NYC|6.83/**1.19**|7.62/**1.58**|16.48/**3.19**|2,089 CHI|3.60/**0.62**|4.02/**1.13**|8.69/**1.68**|1,102 NYC-Large|132.33/**23.07**|147.75/**30.76**|319.38/**61.93**|40,480 CHI-Large|94.39/**16.55**|105.36/**21.93**|227.76/**44.47**|28,868 -|3.27/**0.57**|3.65/**0.76**|7.89/**1.53**|Per 1,000 records Currently, the maximum dataset scale we used is about forty thousand. We are working on processing larger-scale data and analyzing the potential performance degradation and KG quality issues. Thanks to the reviewer for introducing these interesting research questions, and we will discuss them in the Section "Limitation and Future Work" in our paper. > [W2] UrbanKGent may face privacy and fairness challenges, especially when handling personal or sensitive information. I wonder what the author thinks about privacy and fairness. Although these issues are not discussed in detail in the paper, in actual deployment, if not properly addressed, they can lead to violations of personal privacy or unfairly affect specific groups. Thanks for the reviewer's insightful question. Currently, we collect raw urban data from various public data providers (e.g., OSM, Wikipedia) to construct UrbanKGent. Although sensitive information is not a critical issue in these open-source data, we agree with the reviewer's concern regarding potential data privacy and fairness issues once UrbanKGent is deployed. To address this, we can further perform safety alignment [1] in UrbanKGent to control its outputs. In addition, we can subscribe to external LLM services to filter the malicious information of UrbanKGent's output. The above two strategies have been widely used in many online LLM reasoning services (e.g., ChatGPT, Ernie Bot, and Tongyi Qianwen), and we believe they would be the practical solution. We will further discuss this in the "Limitations and Future Work" section of our paper to provide more insights. **Reference**: [1] Ji, Jiaming, et al. "Beavertails: Towards improved safety alignment of llm via a human-preference dataset." *Advances in Neural Information Processing Systems* 36 (2024). > [W3] The UrbanKGent framework is mainly aimed at urban knowledge graph construction tasks. I wonder if the author has considered the applicability of other types of knowledge graph construction tasks. Can the UrbanKGent framework proposed in this paper be further extended to the knowledge graph construction in other fields? Thanks for your insightful comment and question! Yes, although the proposed framework is designated to the urban domain, the construction pipeline can be easily extended to the construction of knowledge graphs for other domains. One of the most straightforward ways is to modify the instruction template to adapt this framework to other domains. For example, we can replace the urban knowledge currently encoded in the RTE instruction template with other domain knowledge, to adapt our framework for the knowledge graph construction in other fields. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your responses. MY concerns are addressed. I will raise my score to accept.
Rebuttal 1: Rebuttal: **Dear Reviewer GYCX**: We thank you for the precious review time and valuable comments. To ease understanding, we have moved weakness 1 and question 1 here, which are related to UrbanKG entity and relation ontology statistics. Please refer to Table 13, Table 14, and Table 15 in the **PDF** for detailed information on UrbanKG entity and relation ontology statistics. > [W1] This paper focuses on urban knowledge graph construction tasks. However, the details of the urban knowledge graph are not clearly defined. For example, the ontology of urban KGs, entity types, and relation types are not discussed. This limits the understanding of the proposed framework for readers who are not familiar with urban knowledge graphs. > [ Q1] What is the ontology of urban KGs? We apologize for the confusion. The constructed UrbanKGs can be found in **Supplementary Material**. As the reviewer suggested, we provide entity and relation ontology statistics in the PDF. As shown in Table 13, both urban entity and relation can be pre-categorized into 4 coarse-grained ontologies: spatial, temporal, functional, and others. After multi-view entity recognition and relation extraction (shown in Figure-4(a) in our paper), the fine-grained entity ontologies (1,028 and 755 entity types of NYC-Large and CHI-Large UrbanKGs, respectively) and fine-grained relation ontologies (2,138 and 1,366 relation types of NYC-Large and CHI-Large UrbanKGs, respectively) are obtained. To ease understanding, we also provide illustrative entity ontology (e.g., University) and relation ontology (e.g., Locate-in) examples in Table 14 and Table 15, respectively. In addition, we also display the entity and relation ontology distribution of constructed UrbanKGs on the NYC-Large and CHI-Large datasets. More detailed ontology information can be found in the Supplementary Material. We will include detailed statistics about the UrbanKG ontology in the final version of our paper. We sincerely appreciate the reviewer's valuable suggestions. Best, NeurIPS 2024 Conference Submission 16941 Authors Pdf: /pdf/9ef1e38de03fb2da094554e115b7d8260bd34c0e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Reinforcement Learning with Euclidean Data Augmentation for State-Based Continuous Control
Accept (poster)
Summary: This paper introduces a novel data augmentation method for reinforcement learning in continuous control tasks. The key innovation is leveraging Euclidean symmetries inherent in these tasks by applying rotational transformations to the original states. The authors propose using a limb-based state representation instead of the standard joint-based one to make states more amenable to these transformations. Integrated with the DDPG algorithm, this method is evaluated on 10 tasks from the DeepMind Control Suite, demonstrating improvements in data efficiency and performance, especially for complex 3D tasks. The approach outperforms existing data augmentation techniques and proves computationally efficient compared to alternative methods. Strengths: Originality: - The idea of leveraging Euclidean symmetries for data augmentation in state-based RL is novel and underexplored. - The limb-based state representation is an innovative solution enabling effective Euclidean augmentation. Quality: - Comprehensive experiments on 10 continuous control tasks, including 2D and 3D variants. - Thorough ablation studies on the effect of augmentation ratio. - Comparison against relevant baselines including standard RL algorithms and previous augmentation methods. Clarity: - The paper is well-structured. - Limitations are discussed openly. Significance: - The method shows substantial improvements on challenging 3D tasks like Humanoid_run where standard methods struggle. Weaknesses: 1. The paper doesn't clearly address how the method handles environments where Euclidean data augmentation might violate constraints (e.g., obstacles in AntMaze from MuJoCo). 2. The data composition process is not fully explained. Section 4.3 doesn't specify if $B_\text{aug}$ transitions are added to $B$ or substitute points in $B$. 3. The study is limited to 0-100% augmentation, while exploring larger multipliers could be insightful as one transition can be rotated with different angles 4. The focus is primarily on rotation transformations. Exploring other transformations like translation and reflection could provide a more comprehensive analysis. 5. In 5 out of 9 tasks, $ρ_\text{aug}$ = 0% achieves the best performance, suggesting the limb-based representation may be more critical than the data augmentation itself. However, the choice of limb-based representation seems motivated primarily by enabling Euclidean DA, which may limit generalizability. In addition, the advantages of limb-based over joint-based representations are not thoroughly justified. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. There appears to be a typo in line 271 - it should likely read "transform states $s_t$ and $s_{t+1}$". 2. The description of rotations in Section 4.3 is confusing. Why are there two separate rotation steps mentioned? 3. Some recent state-based data augmentation methods are not well cited in the related work, including: - Hindsight experience replay (Andrychowicz et al., 2017) - Counterfactual data augmentation (Pitis et al., 2020) - MoCoDA (Pitis et al., 2022) - Guided Data Augmentation (GuDA) (Corrado et al., 2024) Particularly, GuDA, while focusing on offline-RL and including human guidance, describes a method for state-based data augmentation, including rotation, transformation, and reflection. Discussing these works could strengthen the paper's positioning in the field. 4. The paper focuses solely on the DDPG algorithm. Exploring compatibility with other popular RL algorithms would increase impact. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors adequately address limitations, noting the need for task-specific tuning and knowledge of strict Euclidean symmetries. They provide constructive suggestions for future work. Regarding societal impact, while the immediate concerns are limited for simulated tasks, a brief discussion on long-term implications for robotics, especially on real robots, could be valuable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review! We are glad to provide a response to address your concerns and look forward to follow-up discussions. **On Weaknesses 1** Our method (Euclidean rotations) can be straightforwardly applied to constraints like obstacles: one just needs to include constraints information in the task features (see our line 186, Section 4.1). For example, in AntMaze, the robot is placed in a maze to navigate from a start point to an end point. Because the shape of the maze (i.e., the wall obstacles) is fixed, we can represent its shape using a directional vector and transform it (e.g., rotate) during data augmentation. See Figure 1(c) in our rebuttal pdf for an illustration. **On Weaknesses 2** We here clarify that $B_{\rm aug}$ transitions are transformed to *substitute* the original data points. This ensures that, compared to no data augmentation, we keep the same batch size and gradient update to ensure fair comparison. We will make this clear in our revision. **On Weaknesses 3** As we have clarified for Weakness 2, the upper bound of $\rho_{\rm aug}$ is 100% because we do substitution. Using multipliers larger than 100% needs to increase batch size, which we would avoid to ensure fair comparison with the no-augmentation baseline. Note that, even with substitution, a transition can still be rotated with different angles in different gradient update iterations. **On Weaknesses 4** One should only choose transformations that respect the symmetries in the task of interest. We do only rotations about the z-axis (${\rm SO}_{\vec{g}}(3)$) because it is the only valid transformation respecting the symmetries in our tasks. The other Euclidean transformations, i.e., translations or reflections, are not valid because: - The robots in the tasks use egocentric representation (line 256) and therefore it’s already translation-invariant. - Reflections require gait symmetry (e.g., the left leg has the same length, stiffness, etc.), which the robots in these MuJoCo tasks do not necessarily have. Some work modified the robot to enforce gait symmetry before applying reflections (e.g., see Table 1 of Corrado & Hanna labeled [1] by Reviewer X7ZX), but we did not do it. **On Weaknesses 5 - performance** We first clarify that we performed 10 tasks, 9 in Figure 3 and the 10th in Figure 4. We provide a more detailed interpretation of the results to explain the effectiveness of our method. 4 out of the 10 tasks are 2D: Figure 3’s (a)(c)(e) and Figure 4. For (a)(c)(e), $\rho_{\rm aug}$ > 0% is no better than $\rho_{\rm aug}$ = 0%; for Figure 4 (Reacher_hard), $\rho_{\rm aug}$ > 0% is better. This can be explained by the nature of the tasks: - In (a)(c)(e), a robot learns to walk/run forward with its “legs” in the 2D xz-plane, and there are no rotation symmetries within the plane itself. Our method adds the y-axis and performs rotations about the z-axis, which is valid but does not generate more data within the xz-plane that would be useful for the original 2D task. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will provide limited benefit for these tasks. - In Figure 4 (Reacher_hard), a 2-limb robot is placed on the xy-plane (ground) and is tasked to rotate about the z-axis to reach a target point on the xy-plane with its tip. There are rich rotation symmetries within the 2D xy-plane, and our data augmentation generates more data therein. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will be beneficial. For the other 6 tasks which are 3D, $\rho_{\rm aug}$ > 0% is slightly better than $\rho_{\rm aug}$ = 0% (especially in early training) in Figure 3’s (b)(f) and is much better in the other 4 tasks. This is because these tasks all have rotation symmetries about z-axis. Note that these tasks include the 3D version of Figure 3’s (a)(c)(e). **On Weaknesses 5 - limb-based vs joint-based representations** Although simulators like MuJoCo often use joint-based representations as default, both representations are convenient to obtain. Given a joint-based representation, the computation of its corresponding limb-based representation is known as *forward kinematics*, which is well-studied in robotics and implemented in simulators such as MuJoCo. We also use MuJoCo’s original APIs for forward kinematics. Therefore, choosing limb-based representations should not limit generalizability. Regarding justifying limb-based over joint-based representations, it is non-trivial to provide a theory that proves advantages of one over the other (if any), especially in the context of deep RL. That's why we have performed our experiments to give empirical evidence. **On Question 1** Thank you. We will fix it. **On Question 2** There is only one rotation applied to a chosen transition (lines 274-277). We also mention rotation in lines 271-273 just to describe how the loss is computed. We will modify the description to make this clear. **On Question 3** Thank you for suggesting the papers as they are indeed related to our work. As these papers are also mentioned by Reviewer X7ZX, please refer to our global response that compares our work with those papers. The comparison clarifies our different and orthogonal contributions. **On Question 4** Thank you for the suggestion. Our data augmentation method can be straightforwardly applied to any off-policy algorithm. We have performed additional experiments applying it to SAC on the tasks of Walker_run and Hopper3D_hop in Figure 1(a)(b) in our rebuttal pdf. The results show our data augmentation has similar effects for SAC. These results provide evidence that our method is well compatible with other RL algorithms. **On Limitations** Thank you for the suggestion. We would not make assertive claims on real robots, since we did not provide evidence therein. However, as data is often even more scarce on real robots, we do believe any data augmentation method, including ours in this work, should be valued. --- Rebuttal 2: Comment: Thanks for the clear feedback. I think they have addressed most of my concerns. If time permits, I would still highly recommend running **"W3: The study is limited to 0-100% augmentation, while exploring larger multipliers could be insightful as one transition can be rotated with different angles."** It makes sense to ensure it is a substitution for a fair comparison with the baseline as mentioned in the response of weakness 2. However, in terms of understanding how far this method is beneficial on top of the base algorithm—given that collecting real trajectories is expensive while data augmentation is nearly zero cost—I think it would be interesting and would definitely make the paper stronger to see that for N real collected trajectories, [1, 10, 50, 100, ...] * N through augmentation on the N trajectories outperforms the base algorithm. --- Rebuttal Comment 2.1: Comment: Thank you! We are glad our reponse have addressed your concerns. We are working on the suggested experiments and will update you on our progress by discussion deadline. We agree that they will provide additional insights. --- Reply to Comment 2.1.1: Title: Updates on the suggested experiments (1/3) Comment: Dear reviewer, We have conducted additional experiments as suggested. The primary request is to investigate whether it is beneficial to rotate transitions in a batch with different angles, as the submission only has “multipliers” up to 100%. To fulfill this request, we have run the following DDPG variants based on our limb-based data augmentation method: - Sample a batch of $B$ limb-based transitions - Perform $N(>1)$ iterations of loss computations and gradient updates - For the first iterations, use the original, un-rotated $B$ transitions - For each of the rest $N-1$ iterations, rotate all $B$ transitions with a random angle per transition For fair comparison, we have run similar DDPG variants of $N$ “inner-loop” iterations with the original, joint-based representation: - Sample a batch of $B$ transitions of join-based representations - Perform $N(\geq 1)$ iterations of loss computations and gradient updates using the same B transitions This way, the standard DDPG in our submission is the (B=256, N=1) case.
Summary: This paper introduces a novel data augmentation strategy tailored for reinforcement learning (RL) agents operating in state-based continuous control environments. The method leverages limb-based state features rather than joint-based configurations, allowing for more effective augmentation. Experiments conducted on various tasks from the DeepMind Control Suite demonstrate significant improvements in data efficiency and performance over standard RL algorithms and existing augmentation methods. Strengths: The paper introduces a unique approach to data augmentation for RL in state-based continuous control, moving away from traditional perturbation methods to Euclidean transformations. This approach is innovative and addresses the limitations of existing methods. Weaknesses: Firstly, I am not an expert in this research field. I think this paper lacks some deeper theoretical support. The scenario covered by Theorem 1 is overly simplistic. The proposed method involves additional computations for applying Euclidean transformations and managing the limb-based state features. The paper could benefit from a more detailed discussion on the computational overhead and potential optimizations. Technical Quality: 3 Clarity: 2 Questions for Authors: Could the authors provide more details on how to tune the various parameters involved in the proposed method, such as the choice of Euclidean transformations and the proportion of augmented data? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors acknowledge the need for task-specific tuning of the hyperparameter and the requirement for knowledge of strict Euclidean symmetries. They have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review! We are glad to provide a response and look forward to follow-up discussions. **On Weaknesses - theoretical support** The paper does not state a Theorem 1. Can you further clarify your concern? Thanks. **On Weaknesses - computational overhead** Our method requires minimal additional computation. The only additional computation is transforming a subset of transitions in the mini-batch, as described in Section 4.3. Note that, compared to the standard method with no data augmentation, we keep the same batch size and gradient update to ensure no additional computation is required. The transformation is essentially matrix multiplication (involving 3D rotation matrices), which incurs negligible overhead in the training process, as we have shown in Figure 4 (bottom). **On Questions - choices of Euclidean transformations** The available Euclidean transformations are not tunable, because one should only choose transformations that respect the symmetries in the task of interest. We focus on rotation symmetries about the z-axis (${\rm SO}_{\vec{g}}(3)$) because it is the only available symmetries in our tasks. The other Euclidean transformations, i.e., translations or reflections, are not available because: - The robots in the tasks use egocentric representation (line 256) and therefore it’s already translation-invariant. - Reflections require gait symmetry (e.g., the left leg has the same length, stiffness, etc.), which the robots in the tasks do not necessarily have. Some prior work modified the robot to enforce gait symmetry before applying reflections (e.g., see Table 1 of Corrado & Hanna labeled as [1] by Reviewer X7ZX), but we did not do it. **On Questions - tuning hyperparameters** $\rho_{\rm aug}$ is our only additional hyperparameter on top of an off-the-shelf algorithm like DDPG. Its tuning is standard, no different than other hyperparameters: - We did grid search, which is standard and the same way as tuning other DDPG hyperparameters - We have discussed the effect of $\rho_{\rm aug}$ in Section 5.3, which gives insights on tuning strategy. In short, tasks with richer symmetries benefit from a larger $\rho_{\rm aug}$. - An automatic tuning method might be beneficial, which we leave for future work (Section 6). --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the response. Since I am not an expert in this field, I will maintain my current neutral score. Thank you again or your detailed responses to my review. --- Reply to Comment 1.1.1: Comment: Thank you!
Summary: This paper proposes a data augmentation technique that leverages Euclidean symmetries (e.g. rotational symmetry) in a task's dynamics to generate augmented data. When a task's state features do not have such symmetries, the paper also discusses how to define a new state representation with these symmetries (so that data augmentation can be applied). Empirically, the paper shows that RL is more data efficient with this data augmentation technique than without it on various DMControl tasks. Strengths: 1. The paper focuses on a class of augmentations that have been under-studied (compared to visual augmentations) 2. Redefining a task's state representation to enable data augmentation is an interesting concept to me. 3. Comparing the proposed augmentation strategy with an RL agent that learns with an equivariant network architecture was useful; I've always wondered how these two different approaches compared. Weaknesses: 1. **The story seems improperly situated in the data augmentation literature.** While it is true that most prior data augmentation works focus on perturbation-based augmentations, many prior works have exploited task symmetries and invariances to generate additional data that agrees with the task's dynamics and reward function [1-6]. Corrado et. al [1] calls these dynamics-invariant augmentations. 2. **It’s unclear if this change of representation is necessary for data augmentation.** Corrado et. al [1-2] perform data augmentation using SO(3) rotations in MuJoCo tasks similar to those studied in this paper *without* changing the task’s state features. 3. **Weak empirical results.** In 5/9 tasks, DDPG + data augmentation performs just as well as DDPG without data augmentation. With 5 seeds per curve, I have low confidence in the observed benefits in the remaining tasks; RL is notoriously high variance, and the variance between runs is enough to create statistically different distributions. 95% confidence belts assume the distributions of returns at each evaluation point are normally distributed, which is likely not the case. [1] Corrado & Hanna. "Understanding when Dynamics-Invariant Data Augmentations Benefit Model-free Reinforcement Learning Updates." ICLR 2024. [2] Corrado et. al. "Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning." arXiv:2310.18247 [3] Pitis et. al. “Counterfactual Data Augmentation using Locally Factored Dynamics.” NeurIPS 2020. [4] Pavlov et. al. "Run, Skeleton, Run: Skeletal Model in a Physics-Based Simulation." AAAI 2018. [5] Abdolhosseini et. al. “On Learning Symmetric Locomotion.” ACM SIGGRAPH 2019. [6] Adrychowicz et. al. "Hindsight Experience Replay." NeurIPS 2017. [7] Henderson et. al. "Deep Reinforcement Learning that Matters." AAAI 2018 **Minor Comments** 1. I think section 3.2 can be omitted; the augmentation framework described in section 4.3 can be applied to any off-policy RL algorithm. 6. Line 271: typo, I think the second $s_t$ should be $s_t\prime$ Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Line 40-43: Could the authors elaborate on what “uncorrelated” means here? 4. Line 159: Does anything change if the morphology tree contains a cycle? 5. Line 272: What does it mean to keep $a_t$ and $r_t$ invariant? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: The paper addresses limitations, though I think an additional limitation should be emphasized: not only may itbee difficult to specify a symmetry, symmetries might not even exist in some tasks. I don't believe this limitation takes anything away from the paper though; it's a core limitation of most non-perturbation-based data augmentation methods, and it's fine if these methods only apply to some tasks. That's simply the nature of data augmentation. Thankfully, in many real-world tasks (especially robotics tasks where you have an agent acting in 3D space) a human *can* identify symmetries. Pitis et. al [1] and Corrado et. al [2] discuss this point too. [1] Pitis et. al. “Counterfactual Data Augmentation using Locally Factored Dynamics.” NeurIPS 2020. [2] Corrado & Hanna. "Understanding when Dynamics-Invariant Data Augmentations Benefit Model-free Reinforcement Learning Updates." ICLR 2024. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review! We are happy to provide a response. We hope it can trigger your re-evaluation and look forward to follow-up discussions. **On Weakness 1** Thank you for suggesting the papers as they are indeed related to our work. As some of the papers are also mentioned by Reviewer 5Bxo, please refer to our global response that compares our work with those papers. The comparison clarifies our different and orthogonal contributions. **On Weakness 2** Indeed, one can perform SO(3) rotations on the original joint-based state representation. In our tasks, only the torso ($i=1$) has features for orientation (in its up to 6 DoFs) while features of other limbs ($i>1$) are just angles and angular velocities of the hinges. Therefore, under a rotation, only the torso’s orientation features are changed, while other features stay unchanged. This is what prior work such as Corrado et. al. did. However, we hypothesized this data augmentation by rotating original joint-based features would bring little benefit to locomotion tasks. This is because torso features make up only a fraction of all features when the number of limbs is large (which is the case in our tasks). Our limb-based representation instead rotates all limbs to provide richer augmentation. Our hypothesis is supported by results of Corrado & Hanna [1] (see Figure 16 of [1]: confidence intervals are overlapped for Ant; rotation underperforms than no-augmentation for Humanoid). This is also evidenced by our added results in Figure 3 in the rebuttal pdf: we have added curves corresponding to performing rotation data augmentation on original state features, which is less performant than ours. **On Weakness 3 - effectiveness** We first clarify that we have performed 10 tasks, 9 in Figure 3 and the 10th in Figure 4. We here provide a more detailed interpretation of the results to explain the effectiveness of our method. 4 out of the 10 tasks are 2D: Figure 3’s (a)(c)(e) and Figure 4. For (a)(c)(e), $\rho_{\rm aug}$ > 0% is no better than $\rho_{\rm aug}$ = 0%; for Figure 4 (Reacher_hard), $\rho_{\rm aug}$ > 0% is better. This can be explained by the nature of the tasks: - In (a)(c)(e), a robot learns to walk/run forward with its “legs” in the 2D xz-plane, and there are no rotation symmetries within the plane itself. Our method adds the y-axis and performs rotations about the z-axis, which is valid but does not generate more data within the xz-plane that would be useful for the original 2D task. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will provide limited benefit for these tasks. - In Figure 4 (Reacher_hard), a 2-limb robot is placed on the xy-plane (ground) and is tasked to rotate about the z-axis to reach a target point on the xy-plane with its tip. There are rich rotation symmetries within the 2D xy-plane, and our data augmentation generates more data therein. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will be beneficial. For the other 6 tasks which are 3D, $\rho_{\rm aug}$ > 0% is slightly better than $\rho_{\rm aug}$ = 0% (especially in early training) in Figure 3’s (b)(f) and is much better in the other 4 tasks. This is because these tasks all have rotation symmetries about z-axis, so our data augmentation is beneficial. Note that these tasks include the 3D version of Figure 3’s (a)(c)(e). To summarize, we did not cherry pick tasks where our method outperforms others; instead, we sampled a range of tasks with various degrees of symmetries to show when and why our method can bring benefits. **On Weakness 3 - seeds and confidence intervals** For the 4 tasks where $\rho_{\rm aug}$ > 0% is better, we have completed another 5 seeds during rebuttal. With the 10 seeds, Figure 2 in the rebuttal pdf updates the training curves with mean+95% CI. Figure 2 also gives inter-quartile means (IQMs) with 95% bootstrap confidence intervals, which is more robust to outliers (used in prior work like Corrado et. al. [2]). We believe these results give enough confidence on the effectiveness of our method. **On Minor Comment 1** Thank you for the suggestion. We will consider removing Section 3.2 in our revision. **On Minor Comment 2** Thank you. We will fix it. **On Question 1** We mean, although the original transition comes from the ground truth dynamics and reward functions, the perturbed transition does not necessarily. In this sense, they are “uncorrelated”. In the words of Corrado et. al., perturbation is not a Dynamics-Invariant data augmentation. **On Question 2** For morphology trees containing cycles, our key idea and methodology, i.e., ${\rm SO}_{\vec{g}}(3)$ data augmentation on limb-based representation, still applies well. One just needs to identify the kinematic features therein and perform appropriate transformations. We focus on no-cycle trees because most locomotion tasks (including all of our tasks) do not contain a cycle, and the torso/base often serves as the tree root. In our tasks, the root is special in the sense that it can have up to 6 DoFs, while other nodes only have hinge-like DoFs. Again, we focus on this case to make our method description clear, but our methodology applies to cycles. **On Question 3** That simply means we do not change them. Since actions a_t are scalar torques, they should not change under rotations. Rewards should also stay the same under rotations. --- Rebuttal Comment 1.1: Comment: Dear reviewer, As the discussion deadline approaches, we would like to know if our response has addressed your concerns. Should any concerns remain, we will gladly address them. Thank you again for reviewing our paper. --- Rebuttal Comment 1.2: Comment: Thank you for the detailed response! I'm glad to see the comparisons with the prior works; including a some discussion on them would better situate this paper in the literature. It would also emphasizes some of the novelty by clarifying how these augmentations differ from those in prior works, particularly this part of your response: > However, we hypothesized this data augmentation by rotating original joint-based features would bring little benefit to locomotion tasks. This is because torso features make up only a fraction of all features when the number of limbs is large (which is the case in our tasks). Our limb-based representation instead rotates all limbs to provide richer augmentation. Regarding experiments: It would be clearer just have the 3D tasks where augmentation helps in the main paper. You can explain that these augmentations would not be helpful in the 2D tasks, and then point the reader to an appendix containing the 2D experiments. I have raised my score because of these clarifications. All of my questions have been answered. Just a quick follow-up comment: I think a clearer alternative to "uncorrelated" would be to say something like "the augmented data generally does not agree with the task's dynamics" or "the augmented data is generally not dynamics-invariant." --- Reply to Comment 1.2.1: Comment: Thank you! We will adopt your suggestion on "uncorrelated".
Summary: This work proposes a novel data augmentation approach that leverages Euclidean symmetry for continuous control to improve the efficiency and performance of RL algorithms. The authors integrated their approach with DDPG and performed a series of comparison against the vanilla SAC algorithm, other augmentation techniques, and other equivariant methods. The results show some improvements in performance across a range of continuous control tasks, especially those with rich 3D motions and large number of limbs and joints. Strengths: 1. The proposed approach improved the learning efficiency of RL algorithms (demonstrated with DDPG in this work) without the need of changes to the algorithm, which lowers the barrier for adaptability for the community. 2. Euclidean transformations maintain the inherent physics of the task, ensuring that the augmented data are still representative of realistic scenarios. This preservation is crucial for the relevance and usefulness of the augmented data. Weaknesses: 1. This method is primarily applicable to tasks with clear Euclidean symmetries. So, it seems that in environments where such symmetries are not evident or relevant, the approach may not be applicable or effective. While the 'Limitation' section briefly touches upon this issue, it is not clear what categories of continuous control problems this approach will be applicable to. 2. The effectiveness of Euclidean transformations may vary depending on whether the task is set in a 2D or 3D space, with potentially limited benefits in more constrained settings (like planar movements). 3. As the 'Limitation' section suggests, practical application of the framework will be difficult due to the task-specific hyperparameter tuning requirements. While the author suggests adaptive hyperparameter tuning as a potential future work, some insights into what characteristics of the environment dynamics dictate the optimal choice of hyperparameters such as \rho_{aug} would be helpful. Also, in the context of practical application, switching from joint-based to limb-based configurations complicate the state representation and requires significant changes to the training environment. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 5.1, the authors say "To summarize, our method reliably improves DDPG and SAC, the state-of-the-art RL algorithms on the continuous control tasks." However, it looks like only vanilla SAC is used in this work for comparison. Hence, the mention of improving SAC should be removed. 2. In line 194 p^y_1 should be p^z_1. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Technical limitations are highlighted in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review! We are glad to provide a response and look forward to follow-up discussions. **On Weakness 1** Our work focuses on robot locomotion tasks as instances of continuous control, which clearly exhibit Euclidean symmetries. Other robotics tasks (e.g., navigation) and many applications that operate in the 3D physical space should also exhibit Euclidean symmetries. We agree that there are continuous control tasks that might not exhibit Euclidean symmetries, such as those in electrical and power engineering. We will update our "Limitation" section and other parts of the paper to clarify this. **On Weakness 2** We agree that the effectiveness of Euclidean transformations depends on the task, and our results indicate that it is even more subtle than just 2D vs 3D, for which we provide a more detailed interpretation below. In short, Euclidean transformations are effective if the task itself exhibits corresponding symmetries; otherwise, one should not expect these transformations to improve the performance in the first place. Our experiments include 10 tasks, 9 in Figure 3 and the 10th in Figure 4. 4 out of the 10 tasks are 2D: Figure 3’s (a)(c)(e) and Figure 4. For (a)(c)(e), $\rho_{\rm aug}$ > 0% is no better than rho_aug = 0%; for Figure 4 (Reacher_hard), $\rho_{\rm aug}$ > 0% is better. This can be explained by the nature of the tasks: - In (a)(c)(e), a robot learns to walk/run forward with its “legs” in the 2D xz-plane, and there are no rotation symmetries within the plane itself. Our method adds the y-axis and performs rotations about the z-axis, which is valid but does not generate more data within the xz-plane that would be useful for the original 2D task. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will provide limited benefit for these tasks. - In Figure 4 (Reacher_hard), a 2-limb robot is placed on the xy-plane (ground) and is tasked to rotate about the z-axis to reach a target point on the xy-plane with its tip. There are rich rotation symmetries within the 2D xy-plane, and our data augmentation generates more data therein. Therefore, it is expected that $\rho_{\rm aug}$ > 0% will be beneficial. For the other 6 tasks which are 3D, $\rho_{\rm aug}$ > 0% is slightly better than $\rho_{\rm aug}$ = 0% (especially in early training) in Figure 3’s (b)(f) and is much better in the other 4 tasks. This is because these tasks all have rotation symmetries about z-axis, so our data augmentation is beneficial. Note that these tasks include the 3D version of Figure 3’s (a)(c)(e). **On Weakness 3 - tuning $\rho_{\rm aug}$** Per our response to Weakness 2, our results have provided the following insights: Tasks with richer symmetries often benefit from larger values of $\rho_{\rm aug}$ (e.g., $\rho_{\rm aug}$=100% is best for Humanoid tasks). **On Weakness 3 - limb-based vs joint-based configurations** Although simulators like MuJoCo often use joint-based configurations as default, both configurations are convenient to obtain. Given a joint-based configuration, the computation of its corresponding limb-based configuration is known as “forward kinematics”, which is well-studied in robotics and implemented in simulators such as MuJoCo. We did use MuJoCo’s original APIs for forward kinematics. Therefore, we argue using limb-based configurations is not a significant issue in practice. **On Question 1** Thank you for the suggestion. We will remove "SAC" in the statement. **On Question 2** Thank you. We will fix it. --- Rebuttal Comment 1.1: Comment: Dear reviewer, As the discussion deadline approaches, we would like to know if our response has addressed your concerns. Should any concerns remain, we will gladly address them. Thank you again for reviewing our paper. --- Rebuttal Comment 1.2: Title: Response to rebuttal Comment: Thanks for the responses to my questions. I am sticking to my score. --- Reply to Comment 1.2.1: Comment: Thank you!
Rebuttal 1: Rebuttal: We attach here a pdf that contains additional results and illustrations, which we refer to in our response to individual reviews. We below compare our work with papers [1-6] suggested by reviwer X7ZX to clarifies our different and orthogonal contribution. Some of these papers are also suggested by reviewer 5Bxo. **Our contribution** Our contribution is a novel data augmentation transformation, namely ${\rm SO}_{\vec{g}}(3)$ transformation on limb-based state representations, for RL-based robot locomotion. Our experiments on a wide range of simulated locomotion tasks show the advantage of this transformation over alternatives in sample and computation efficiency. **Comparision with [1-6]** Our contribution is different from the papers [1-6] in the following sense: - [1,2] and our work both consider “dynamics-invariant” data augmentation transformations, but our focus is very different. [1,2] focus on better leveraging known, existing dynamics-invariant transformations, drawing their conclusions mostly from robot navigation and manipulation tasks (e.g., AntMaze, Soccer, panda-gym); while we propose a novel transformation for robot locomotion. - Specifically, [1] studies the question of when data augmentation is helpful and concludes the augmentation ratio is crucial (their contribution 3). Their paper’s main body focuses on robot navigation and manipulation, where transformations like goal-relabelling and translations are key to overcoming the sparse reward issue. [1] does use rotation too, in a toy 2D navigation task (Goal2D-v0) where the robot is a particle (no meaningful kinematics) and in the dense-reward MuJoCo locomotion tasks (their Appendix F) where the rotation is performed on joint-based representation and does not yield improvement over no-augmentation baseline (see our response to Weakness 2 for more details). - [2] uses transformations to randomly generate synthetic data and then asks humans to select high-quality ones. Such manual filtering is helpful when doing robot navigation, which [2] focuses on, because humans can easily tell high-quality trajectories based on distance to goal; but it is much more difficult for humans to tell effective joint movements for locomotion. Our method does not require human effort at all. - [3] (and their follow-up work MoCoDA [Pitis et al., 2022]) proposes a data augmentation transformation that requires local (causal) independence, so that augmentation can be performed via stitching independent trajectories from decomposed, independent parts, which is useful for tasks like particles moving and 2-arm robot with static base. We focus on locomotion tasks that do not exhibit sparse kinematic interactions between limbs and therefore cannot benefit much from [3]’s method. For example, the cheetah’s two legs are connected through the moveable torso, and therefore we cannot decompose and stitch their separate trajectories. - [4] focuses on the transformation of reflection that exploits bilateral gait symmetry. Reflections require gait symmetry (e.g., the left leg has the same length, stiffness, etc.), which our MuJoCo tasks do not necessarily have. Some work modifies the MuJoCo robot to enforce gait symmetry before applying reflections (e.g., see Table 1 of [1]). Our rotation transformation instead does not require gait symmetry. - [5] also focuses on reflections (they call them "mirror") for locomotion. They perform trajectory-level transformations for on-policy RL, which is technically not sound but they found it "not a critical issue in practice", while we do rotation-based transition-level transformations for off-policy RL. Moreover, their data augmentation does not yield significant improvement over the no-augmentation baseline (see their Figure 3, DUP vs BASE). - [6] addresses the problem of sparse rewards in reinforcement learning by the goal-relabelling transformation, which is different from our rotation transformation and does not bring much benefit to locomotion tasks. Pdf: /pdf/f477cc89c70449e77bcf78d820c2ba9aa4c474dc.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Streaming Bayes GFlowNets
Accept (poster)
Summary: Generative flow nets are extended to streaming inference by using the prediction of the previous network as the prior for the current one. Two training methods are proposed: streaming balance which uses squared difference of target and learned log posteriors, and VI which uses KL divergence. Analytic results for both methods bound the error in terms of updating error and error from the previous step. Three experiments show the method can accurately track the true posterior across updating steps. Strengths: The methods are elegant and sound and the experiment results are strong. This is an important alternative to streaming VI that I'm excited to try out myself. Weaknesses: Not a major weakness, but the extension to the streaming case (this paper's main contribution) is fairly straightforward. It was clear from the beginning that the predicted distribution from the previous step would provide the prior for the next, with the obvious adaptations of the balance and VI objectives. Both $p_B$ terms in the streaming balance condition (3) appear to be unnecessary. Prop 1 will hold without them. We just need a joint distribution over $(x,\tau)$ (equivalently, a distribution over $\tau$) with the desired marginal on $x$, and $f(\mathcal{D}_{t+1}|x) p^t_F(\tau)$ already has that property. Compare to the definition of the target distribution $p(\tau)$ in (5). Sec 3.2 just gives the objective and doesn’t explain how $p_F$ and $Z$ are learned (likewise with the argmax lines in Algo 1). Is it just SGD on $\mathcal{L}_{SB}$? The main text says nothing about how to construct the graph ($\mathcal{S}$ and $\boldsymbol{A}$). Details are given in the appendix -- in all cases the allowable paths build the desired objects from simpler ones -- but a sentence in the main text would help. Technical Quality: 4 Clarity: 3 Questions for Authors: (2): why do we need to sample from $p_B$ and importance weight, rather than sampling directly from $p_F$? (5),170,178: $\pi_{t+1}$ should be $f(\mathcal{D}_{t+1}|x)$, yes? The derivations all look correct if so. 181: what is $\delta$? $Z_{(t+1)}$ at 205 and $\hat{Z}_{t+1}$ are the same? More of a suggestion: could the method be extended to cases where the target objects grow at each time step? For example, phylogenetic trees would grow as new species are observed. It seems like a natural application of flow nets since you can expand the graph by appending a new terminal layer. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The set generation task is degenerate because the objective is additive. A model only needs to track the posterior on individual $x$s. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reading and appreciating our work. Below, we address each of your comments. > Both $𝑝\_{𝐵}$ terms in the streaming balance condition (3) appear to be unnecessary. Indeed, when $p_{B}^{t}(\cdot | s)$ is fixed, it does not depend on $t$ and the terms corresponding to the backward policy in Equation (3) cancel out. However, albeit relying on a fixed $p_{B}$ is an often implemented strategy for learning GFlowNets — that we adopt in our work —, $p_{B}$ can in principle be jointly learned with $p_{F}$. In this case, $p_{B}^{t}$ would also depend on $t$ and there would be no cancellations in Equation (3). Nonetheless, due to the lack of clear empirical evidence in the literature supporting the learning of $p_{B}$, we choose to fix it in all our experiments. We will emphasize this point and include the simplified version of Equation (3) in the revised manuscript. > Sec 3.2 just gives the objective and doesn’t explain how $𝑝\_{𝐹}$ and $𝑍$ are learned Thanks for noticing. You are correct: both $p_{F}$ and $Z$ in Section 3.2 and in Algorithm 1 are learned by minimizing the corresponding objectives via SGD. We will point this out in the updated manuscript. > The main text says nothing about how to construct the graph ($𝑆$ and $𝐴$). Details are given in the appendix -- in all cases the allowable paths build the desired objects from simpler ones -- but a sentence in the main text would help. Thank you for the suggestion. Indeed, we briefly discuss this in the lines 81-84 of section 2 preliminaries and in more detail in the appendix, but we will make this more explicit in the revised manuscript. The design of the graph ($S,A$) is application-dependent and should be considered on a problem-by-problem basis. As a general guideline, when the elements of the target’s support $\mathcal{X}$ can be described as being built from simpler partial objects, then, the initial state can correspond to an empty object and the non-terminal states would be partial compositions with an edge from $s$ to $s’$ meaning that $s’$ differs from $s$ by a single additional atomic component. For example, a phylogeny is a rooted tree where all the leaves correspond to the species observed and the internal nodes correspond to unobserved ancestors. The states in the support are single trees and the non-terminal states are forests, i.e. disconnected collections of trees, where the construction follows by joining two trees in the forest by an unobserved node. In other words, the state graph is implicitly defined by iteratively applying actions to the initial state. > (2): why do we need to sample from $p_{B}$ In fact, we could directly sample from $p_{F}$ and count the relative frequency of $x$ to estimate the marginal $p_{T}(x)$. However, our experience suggests that the estimator in Equation (2), which is also used by Malkin et al. [1] and Zhout et al. [2], generally yields better estimates of $p_{T}(x)$. > (5), 170, 178: $\pi_{𝑡+1}$ should be $𝑓(\mathcal{𝐷}_{𝑡+1}|𝑥)$, yes? Yes! Thank you for the observation. We will fix this in the revised manuscript. > 181: what is $\delta$? This is a typo; we meant $\nabla\_{\theta} \mathbb{E}\_{\tau \sim p_{F}^{t + 1}}[\gamma(\tau)]$ instead of $\mathbb{E}[\delta(\tau)]$. > $𝑍_{(𝑡+1)}$ at 205 and $\hat{𝑍}_{𝑡+1}$ are the same? Yes! This too is a typo. > More of a suggestion: could the method be extended to cases where the target objects grow at each time step? Thank you for the question. We do not believe that SB-GFlowNets can be immediately extended to accommodate the training of GFlowNets when the size of the generated objects grows. There are two main reasons for this. Firstly, the neural network that encodes the policy often receives a fixed-size input corresponding to a representation of the current state. When the size of this representation changes, it is unclear how the neural network should be updated. However, for many applications, this could be circumvented by using GNN- or DeepSet-based policy networks, which work well with inputs of varying size. Secondly, it is not totally clear what would happen with the terminal flows when additional nodes/transitions are added to the flow network. Which parts of the network remain balanced upon this addition? Should we learn a novel policy on the expanded network, or can we exploit the knowledge of past models to alleviate the learning problem? These questions are both important and interesting, deserving a work of their own. [1] GFlowNets and Variational Inference. Malkin et al. ICLR 2023. [2] PhyloGFN: Phylogenetic inference with generative flow networks. Zhou et al. ICLR 2024 --- Rebuttal Comment 1.1: Comment: Thanks for the clear responses. Great paper.
Summary: The authors provide a framework that allows training of GFlowNet models with streaming data by checkpointing a previously trained GFlowNet model and proposing the streaming balance condition. The problem as well as the approach is well motivated and theoretically sound and this work would be a valuable contribution to the research community. The authors conduct experiments on synthetic tasks and phylogenetic inference, and show improved scalability. Additionally, there is some theory that describes and bounds the approximation error based on errors on the checkpointed GFlowNet model and current estimation problem. Overall, I think the work is quite interesting but it would be nice to consider another experiment since currently most of the experiments are based on synthetic data. Strengths: - The authors provide useful theoretical contributions that bound the approximation errors into two terms: error from solving the current distribution matching problem and error coming from suboptimal solution to the previous distribution matching problem. - The problem of training sampling methods that can incorporate streaming data is quite important and relevant to the field, especially for Bayesian Inference methodologies. - The authors do provide a good set of experiments, and show the benefits of their proposed approach over the standard training of GFlowNets. Weaknesses: - I found Definition 2 to be quite confusing to read and it is still not clear to me how the equivalence holds. Could the authors clarify what the KL divergence is between, why is it valid and how does it circumvent the problem of parameterizing a partition function? - The original setup proposed by the authors follows from a trivial extension of the GFlowNets framework which begs to question the novelty of the work. - It is unclear how their formulation solves the problem of permutation invariance / iid treatment of observations. In particular, given two sets $D_1$ and $D_2$, we know that the posterior distribution $p(x | D_1, D_2)$ can be equivalently written as $$ p(x | D_1, D_2) = \frac{p(D_1, D_2 | x) p(x)}{p(D_1, D_2)} = \frac{p(D_1 | x) p(D_2 | x) p(x)}{p(D_1, D_2)} = \frac{p(D_1 | x) p(x|D_2)}{p(D_1 | D_2)} = \frac{p(D_2 | x) p(x|D_1)}{p(D_2 | D_1)} $$ Then, in particular, how is it maintained that given any split of the data $D$ into $D_1$ and $D_2$, the final GFN trained leads to the same solution. If the GFN learns the right solution, then it will follow simply, but it would be nice if the model could satisfy this permutation invariance inherently. Could the authors provide some ablations and control experiments to show to what extent is this constraint satisfied over $t$? - The authors should add an experiment on discovering causal graphs conditioned on observational samples, which would strengthen their paper considerably. - The authors should consider the following related work, which are either connected to GFlowNets, sampling from an unnormalized density, or modeling Bayesian posterior inference: Cranmer, Kyle, Johann Brehmer, and Gilles Louppe. "The frontier of simulation-based inference." Proceedings of the National Academy of Sciences 117.48 (2020): 30055-30062. Mittal, Sarthak, et al. "Exploring Exchangeable Dataset Amortization for Bayesian Posterior Inference." ICML 2023 Workshop on Structured Probabilistic Inference {\&} Generative Modeling. 2023. Zhang, Qinsheng, and Yongxin Chen. "Path integral sampler: a stochastic control approach for sampling." arXiv preprint arXiv:2111.15141 (2021). Richter, Lorenz, et al. "VarGrad: a low-variance gradient estimator for variational inference." Advances in Neural Information Processing Systems 33 (2020): 13481-13492. Sendera, Marcin, et al. "On diffusion models for amortized inference: Benchmarking and improving stochastic control and sampling." arXiv preprint arXiv:2402.05098 (2024). Berner, Julius, Lorenz Richter, and Karen Ullrich. "An optimal control perspective on diffusion-based generative modeling." arXiv preprint arXiv:2211.01364 (2022). Akhound-Sadegh, Tara, et al. "Iterated denoising energy matching for sampling from Boltzmann densities." arXiv preprint arXiv:2402.06121 (2024). Richter, Lorenz, Julius Berner, and Guan-Horng Liu. "Improved sampling via learned diffusions." arXiv preprint arXiv:2307.01198 (2023). Technical Quality: 3 Clarity: 2 Questions for Authors: - It is not clear what do the authors mean by $\mathbf{A}$ in their notation at the start of Section 2. - The authors seem to have a typo in the equation under Line 142, essentially the right hand equation should have $p_F^{t+1}$. - Can the authors provide a visualization of the Set generation task? If I understand correctly, the authors first sample $d$ points randomly and the corresponding $f_i(j)$ randomly as well where $i = 1, ..., K+1$ and $j = 1, ..., d$. Then they define $R_i$ over each set, and a tempered version of $R_i$ with temperature $\alpha$. Is that it? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately talked about the limitations and impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the suggestion of additional experiments and references, which we will include in the revised manuscript. We hope our additional experiments and clarifications enhance your perception of our work. Please let us know if we have left blank spots or if further clarifications are needed. > I found Definition 2 to be quite confusing to read and it is still not clear to me how the equivalence holds. Could the authors clarify what the KL divergence is between, why is it valid and how does it circumvent the problem of parameterizing a partition function? Thank you for the opportunity to clarify Definition 2. In fact, there was a typo. The denominator in Eq. 5 should be $p\_{F}^{t}(\tau) f(\mathcal{D}\_{t+1}|x)$ instead of $p\_{F}^{t}(\tau) \pi\_{t+1}(x)$. That being said, Eq. 5 is the KL divergence between the trajectory-level distribution we are learning $p\_{F}^{t + 1}(\tau)$ and that of the previous timestep $p\_{F}^{t}(\tau)$ weighted by the likelihood $f(\mathcal{D}\_{t+1}|x)$ for batch $t+1$. The corrected version of eq. 5 is: $$ \mathcal{L}\_{KL}(G\_{t+1} ; G\_{t}) = \mathbb{E}\_{\tau \sim p\_{F}^{(t + 1)}} \left[ \log \frac{p\_{F}^{(t + 1)}(\tau)}{p\_{F}^{(t)}(\tau) f(\mathcal{D}\_{t+1}|x) }\right] \overset{C}{=} \mathcal{D}\_{KL} \left[ p\_{F}^{(t + 1)}(\tau) || p(\tau) \right], $$ where $p(\tau) \propto p\_{F}^{t}(\tau) f(\mathcal{D}\_{t+1}|x)$. We hope this correction clarifies definition 2. Otherwise, please let us know. > The original setup proposed by the authors follows from a trivial extension of the GFlowNets framework which begs to question the novelty of the work. We believe our main contribution is proposing and analyzing the first general-purpose method for streaming approximate Bayesian inference for discrete distributions, including distributions with highly structured supports (e.g., graphs). Additionally, as far as we know, this is the first work explicitly addressing the problem of training GFlowNets in a dynamically evolving environment. > It is unclear how their formulation solves the problem of permutation invariance / iid treatment of observations Thanks for the interesting question. As you mentioned, the permutation invariance is clearly satisfied if we perfectly minimize $\mathcal{L}\_{SB}$ (Eq. 4) or $\mathcal{L}\_{KL}$ (Eq. 5) at each time step. However, when this is not the case, the learned distribution is not necessarily invariant to the order in which the datasets $D_{1}, \ldots, D\_{t}$ are observed. To illustrate this, we again examine the problem of phylogenetic inference for two different scenarios highlighted in Fig. 1a of the rebuttal PDF. In both scenarios, we consider that two datasets, $D_{1}$ and $D_{2}$, are observed in sequence and compare the distribution of an SB-GFlowNet trained on the tuples $(D_{1}, D_{2})$ and $(D_{2}, D_{1})$. Fig. 1a (left panel) shows the resulting distributions are not necessarily identical when the GFlowNets at $t = 1$ are not sufficiently well-trained. In contrast, Fig. 1a (right panel) highlights that both distributions are approximately the same when the GFlowNets at $t = 1$ and $t = 2$ are adequately trained. A similar observation holds for the set generation task in Fig. 1b. To the best of our knowledge, this sensibility to the observations’ ordering due to imperfectly approximated posteriors is inherent to any posterior propagation scheme (e.g., streaming VI). We will add this discussion and these experiments to the revised manuscript. > authors should add an experiment on discovering causal graphs conditioned on observational samples, which would strengthen their paper considerably. Thank you for the suggestion. We adopted Deleu et al.’s DAG-GFlowNet [1] to address the causal discovery problem with GFlowNets and followed the experimental setup for Fig. 3 of the same paper. Six novel batches of 200 samples were observed --- one at a time for each streaming update. Notably, Fig. 2 of the rebuttal PDF shows SB-GFlowNet accurately matches the target distribution on each streaming update. Similarly, Fig. 3 highlights that the probability of the true DAG characterizing the distribution of the observed data tends to increase along training. We will include these additional experiments in the revised manuscript. [1] Bayesian Structure Learning with GFlowNets. Deleu et al. UAI 2022 > authors should consider the following related work, which are either connected to GFlowNets, sampling from an unnormalized density, or modeling Bayesian posterior inference Thank you for the recommendations. We will properly include all related works in the revised manuscript. > It is not clear what do the authors mean by $𝐴$ in their notation at the start of Section 2. Indeed, we missed including a definition $A$, which represents the adjacency matrix of the DAG $\mathcal{G}$. We clarify this in the revised manuscript. > The authors seem to have a typo in the equation under Line 142, essentially the right hand equation should have $p\_{F}^{t + 1}$ Thanks for catching this typo. It should in fact be $p\_{F}^{t + 1}$ instead of $p\_{F}^{t}$. We will update the manuscript accordingly. > Can the authors provide a visualization of the Set generation task? We have included an illustration of the set generation task in Fig. 4 of the rebuttal PDF. This task consists of learning to sample from a distribution over sets by iteratively adding elements to an initially empty set. Regarding the definition of each reward $R\_{i}$, it is exactly as you described. Each $f\_{i}(j)$ is independently sampled and fixed before training and, for a set $S$, we let $\log R_{i}(S) = \sum\_{j \in S} f\_{i}(S)$ be the unnormalized log-probability of $S$. The additional presence of $\alpha$ is to evaluate the impact of $R_{i}$’s sparsity on the efficacy of the learning objectives — we repeated the streaming experiment applying different temperatures $\alpha$ to the rewards $R_{i}$ (Table 1), i.e., using $R_{i}^{1/\alpha}$. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thanks to the authors for providing a detailed and compelling rebuttal. While most of my concerns have been addressed, I would additionally like to point out that the work would be considerably stronger if the authors could consider a Bayesian linear regression model, with increasing number of observations seen on the X-axis and the KL divergence with the true posterior, which is available in closed form, on the Y-axis. However, I do understand that the discussion period is coming to an end soon and the authors might not have enough time to run this experiment for the discussion. Since all my other concerns have been addressed, I have updated the score accordingly.
Summary: This paper introduces Streaming Bayes GFlowNets (SB-GFlowNets), which enables streaming Bayesian inference over discrete parameter spaces, relying on the expressive power of GFlowNets as amortized samplers over discrete compositional objects. The process is akin to Bayesian streaming, where the posterior updates with each new data stream and the posterior from the batch before acts as a prior for the current batch. The authors use a GFlowNet to fit the initial posterior and update it training only on new data stream batches. To do so, the authors propose two different solutions: enforcing a streaming balance condition at each streaming step, or relying on direct divergence-based updates (letting go of the estimation of the normalizing constant). For the latter, the authors show how to obtain low-variance estimators for the gradient of the proposed KL divergence loss. Finally, the authors provide a theoretical performance analysis of their two proposed variants along with a diverse set of experiments to showcase the performance in each case, and show the gain in training time in the streaming setting compared to retraining for all the data at once at each step. Strengths: - The authors provide a complete study of all the important components within their work (experiments, runtime comparison to full retraining and practical performance enhancement etc). - The idea is interesting across many applications where data streams are processed continuously. - The paper is overall well written and easy to follow. Weaknesses: - I'm not sure where equation (9) comes from. Equating \$\mathcal{L}\_{SB}(\tau)\$ to 0 would result in \\[ p\_F^{(t+1)}(\tau) = \frac{Z\_t}{Z\_{t+1}} \cdot \frac{p\_B^{(t+1)}(\tau \vert x)}{p\_B^{(t)}(\tau \vert x)} \cdot p\_F^{(t)}(\tau) \cdot f(\mathcal{D}\_{t+1} \vert x) \\] Even by assuming \$(p\_F^t, p\_B^t, Z\_t)\$ satisfy TB \$(\mathcal{L}\_{TB} \rightarrow 0)\$, I'm not sure how that would directly yield equation (10). The proofs for proposition 2 also seem to start from (9). Can you elaborate on that? - To what extent is the assumption that new data is independent of past data under the posterior predictive? This question is especially important in cases where you try to sample from the marginal likelihood over some object (for instance for structure learning), where you lose that independence under the posterior given only the structure (and not parameters for example). I suspect this might be the reason for performance degradation across streaming steps in the Bayesian phylogenetic inference case, given that Felsenstein's algorithm assigns a marginal likelihood. **Minor Comments** - Line 30, $\{y_i\}_{i=1}^2$ should be defined/introduced better. - Line 134, space after comma. - In line 142, in the second TB condition, it should be $p_F^{(t+1)}(\tau)$ instead of $p_F^{(t)}(\tau)$. - $p_F^{(t+1)}$ should be evaluated at $\tau$ in equation (10). Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The authors adequately addressed limitations throughout the paper (for instance posterior error propagation etc) and through a separate section in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our work. We have corrected all the typos you pointed out. We hope our clarifications address your concerns and enhance your view of our work. > I'm not sure where equation (9) comes from ... Thank you for catching this typo, some of the indices were written incorrectly but the result shown in the paper is correct. We have fixed it and added additional clarifications to the paper that we will now explain here. Indeed, as you’ve written (9) should have $\frac{Z_t}{Z_{t+1}}$ instead of the written $\frac{Z_{t+1}}{Z_{t}}$ and on (10) the left hand side should be $\frac{p_{F}^{t + 1}(\tau)}{p_{B}^{t + 1}(\tau | x)}$. Then, by assuming that the balance condition is satisfied, we deduce that $Z_t\cdot p_F^t(\tau) = p_B^t(\tau|x)\pi_t(x)$, in other words, $\frac{p_F^t(\tau)}{p_B^t(\tau|x)} = \frac{\pi_t(x)}{Z_t}$. By doing this substitution on the corrected eq (9), we obtain the corrected eq (10): $$\frac{p_{F}^{t + 1}(\tau)}{p_{B}^{t + 1}(\tau | x)} = Z_{t + 1} \pi_{t}(x) f(\mathcal{D}_{t + 1} | x).$$ Now, as defined in the paper: $$p\_{\intercal}^{t + 1}(x)= \mathbb{E}\_{\tau \sim p\_{B}^{t + 1}(\cdot | x)}\left[\frac{p\_{F}^{t + 1}(\tau)}{p\_{B}^{t + 1}(\tau | x)}\right],$$ which allows us to conclude, as written in the submitted version, $ p_{\intercal}^{t + 1}(x) \propto \pi_{t + 1}(x)$. As for the proof of proposition 2, we fix typos and added extra comments to improve the clarity of presentation. Once again we have written $\frac{Z_{t+1}}{Z_{t}}$ instead of the correct $\frac{Z_t}{Z_{t+1}}$. Additionally, this proposition does not use either eq (9) or (10) in the proof. Due to a typo, the equation in lines 532-533 looks similar to eq (9), however, the corrected version, $p\_{\intercal}^{(t + 1), \star}(x) = \frac{Z\_{t + 1}}{Z\_{t}} p\_{\intercal}^{t}(x) f(\mathcal{D}\_{ t + 1 } | x)$ is not related due to the fact that it does not refer to either $p_{F}^{t}(\tau)$ or $p_{F}^{t+1}(\tau)$ but to $p_{\intercal}^{t}(x)$ and $p_{\intercal}^{t+1}(x)$. Once again, we note that these minor corrections do not change the proof significantly and our conclusions still hold. > To what extent is the assumption that new data is independent of past data under the posterior predictive? This question is especially important in cases where you try to sample from the marginal likelihood over some object (for instance for structure learning), where you lose that independence under the posterior given only the structure (and not parameters for example). I suspect this might be the reason for performance degradation across streaming steps in the Bayesian phylogenetic inference case, given that Felsenstein's algorithm assigns a marginal likelihood. First, we would like to clarify the assumptions of our model. In the joint distribution, we only require that each data observation is conditionally independent given the parameters, i.e. the variable we want to compute the posterior on. In other words, given a joint distribution: $$ p(\theta, \mathcal{D}\_1, \mathcal{D}\_2, \ldots, \mathcal{D}\_4) = p(\theta) \prod\_{i=1}^4 p(\mathcal{D}\_i | \theta), $$ this only assumes that, given $\theta$, there is independence between the data $\mathcal{D}_i$, however, if we marginalize $\theta$ out of this distribution, the data distribution would be correlated. The fact that the marginal distribution is correlated is not an issue for our method. In the Bayesian phylogenetics case, the $D_i$ comprises the i-th nucleobase for all biological species. We understand that our description in lines 302-304 might be misleading, since $D_1,\ldots, D_M$ are independent given the tree structure $T$ but $S_1, \ldots, S_N$ are not independent. We will clarify this in the revised manuscript. Please let us know if further clarifications are required. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my comments. I have raised my score to 7.
Summary: This paper introduces Streaming Bayes GFlowNets, a method for performing approximate Bayesian inference over discrete parameter spaces in streaming data settings. SB-GFlowNets allow efficient updating of posterior approximations as new data arrives. The method reduces training time compared to retraining GFlowNets from scratch and maintains comparable accuracy. In conclusion, SB-GFlowNets enable streaming variational inference for discrete parameters, opening up new applications of Bayesian methods to large-scale streaming data problems. Strengths: - SB-GFlowNets significantly reduce training time compared to retraining GFlowNets from scratch for each new data batch. This is particularly valuable for large-scale streaming data problems. - The paper provides a theoretical analysis of how errors propagate through posterior updates - The method is demonstrated to work well on various tasks (set generation, Bayesian linear preference learning, and online phylogenetic inference) Weaknesses: - As mentioned in the paper's limitations section, inappropriate approximations to earlier posteriors may propagate through time, potentially leading to increasingly inaccurate models. This is a form of catastrophic forgetting - Due to the error accumulation issue, there may sometimes need to retrain the model from an earlier checkpoint or using the full posterior - As the paper mentioned, for very sparse target distributions, the performance can degrade, particularly when using the KL-based training scheme Technical Quality: 3 Clarity: 3 Questions for Authors: - Do you think the accumulative error issue can be solved by cache previous gradient (borrowing tricks from continual learning)? - It would be great if you can just compare with some continual learning method as baselines. - In sparse target distributions, if not using KL-based training scheme, what are the possible schemes? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for thoughtful feedback. We hope our clarifications solve your concerns and elevate your appraisal of our work. Otherwise, we will gladly engage further during the discussion period. > Do you think the accumulative error issue can be solved by cache previous gradient (borrowing tricks from continual learning)? Thanks for the question. The accumulative error issue as described in the paper relates to the issue of the past GFlowNets ($t^\prime < t+1$) not being trained until they reach global optima, affecting the quality of the current GFN at time ($t+1$). As we briefly discussed in lines 196-198, our suggestion is to reuse a previous checkpoint at time ($S$) which has better training loss and use all of the data after that point ($t +1 > t^\prime > S$) to train a GFN for time $t^\prime$ until $t+1$. The closest parallel to this strategy in continual learning is _replay_. However, replay typically entails using data from past tasks/batches to avoid catastrophic forgetting. We would like to emphasize that there are fundamental differences between catastrophic forgetting and accumulative errors. The most straightforward is that forgetting may happen in continual learning even if we reach local optima consecutively for all data batches. In our case, we show in the paper when the models converge in our proposed losses, there is no issue with accumulative error. That being said, including arbitrary terms in our SB-GFlowNets generally entails losing our correctness guarantees. This is the case for replay terms, but also for regularization terms like the elastic weight consolidation. Nonetheless, it is interesting to note both our divergence-based loss in Eq. 5 can be written as $$ \mathbb{E}\_{p\_F^{(t+1)}}\left[ - \log \pi\_{t+1}(x) \right] + \mathcal{D}\_{\text{KL}}\left[p\_{F}^{(t+1)} \| p\_{F}^{(t)}\right], $$ where the KL term also penalizes forward policies that deviate from that of the previous time step. We hope this clarification addresses your question. Otherwise, we would be really happy if you could elaborate further on possible connections with continual learning that we could explore during the discussion period. > It would be great if you can just compare with some continual learning method as baselines. To the best of our knowledge, there is no continual learning method capable of learning discrete posteriors over discrete random variables. If you believe we are missing any relevant piece of work, we would gladly test against them and report our results during the discussion period and add it to the final manuscript. > In sparse target distributions, if not using KL-based training scheme, what are the possible schemes? As discussed in the paper, the issue with sparse target distributions is that GFlowNets trained with KL-divergence based criteria may fail to represent all modes of the final posterior distribution; we refer to this phenomenon as mode collapse in the paper. An alternative scheme that circumvents this issue (and is another contribution of our work) is to minimize the streaming balance loss (Eq. 4) in an off-policy fashion. In summary, if the target distribution is not sparse, we expect the KL-based training to converge faster, but, on the other hand, streaming balance condition based training can be applied to sparse target distributions. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my question. I am happy to improve my score.
Rebuttal 1: Rebuttal: Dear reviewers and AC, We are glad to know reviewers found our work to be valuable contribution to the research community (c2ea), our method to be elegant (bMoM) and our experiment results to be strong (bMoM), diverse (ZbQa), and convincing (xCCn) — opening a path for new applications of Bayesian methods to large-scale streaming data problem (w9BG). We are grateful for the reviewers' constructive feedback and suggestions to improve our work. We will incorporate all clarifications requested by the reviewers in the revised manuscript. At the request of reviewer c2ea, we also have run additional experiments in causal discovery. Best regards, Authors Pdf: /pdf/48bb280c70106b8050f7ec398a44c0937bd38a44.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper introduces a method for Bayesian inference in streaming data scenarios, particularly for discrete parameter spaces, called Streaming Bayes GFlowNets (SB-GFlowNets). The main contributions are: * Bayesian Streaming Inference: SB-GFlowNets enable the continuous updating of posterior distributions as new data arrives, without needing to recompute from scratch. * Addressing Intractability: The method addresses the challenge of approximating intractable posteriors in discrete state spaces, which is a limitation of existing variational inference (VI) techniques. * GFlowNet Utilization: SB-GFlowNets leverage GFlowNets, a class of amortized samplers, to approximate the initial posterior and update it incrementally with new data. * Case Studies: The effectiveness of SB-GFlowNets is demonstrated in linear preference learning and phylogenetic inference, showing its ability to sample from unnormalized posteriors efficiently in a streaming context. * Performance: The method is significantly faster compared to repeatedly training a GFlowNet from scratch for the full posterior. Strengths: Two technically novel training algorithms for streaming inference on GFlowNets based on modified balance condition and VI with a control variate. Presentation was reasonably clear and the experimental results were convincing. Removes limitation of GFLowNets, an increasingly important family of models, to only batched training. Weaknesses: A weakness of the paper is that it may not sufficiently highlight the significance of streaming inference compared to batch inference. This may be apparent to Bayesian practitioners but not so to a general ML audience. Are there more compelling applications of streaming inference? Technical Quality: 4 Clarity: 3 Questions for Authors: Sentence in lines 58-59 trails off. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions to improve our manuscript. We hope our answers elevate your appraisal of our work. Should you feel that any points require additional clarification, we are more than willing to engage further. > A weakness of the paper is that it may not sufficiently highlight the significance of streaming inference compared to batch inference. This may be apparent to Bayesian practitioners but not so to a general ML audience. Are there more compelling applications of streaming inference? Thank you for the suggestion. One especially compelling application is Bayesian phylogenetics, which we consider in our experiments. Streaming Bayes would allow researchers to update posteriors over the phylogeny tree as they decode new nucleobases in genetic sequences – without reprocessing previously decoded nucleobases. This is particularly convenient since, while there are GPU-accelerated algorithms to compute the likelihood [1-2], “making things fit within a GPU” to minimize communication overhead is a major concern in practical Bayesian phylogenetics. In practice, phylogenetic analyses might involve hundreds of thousands of nucleobases. More generally, in streaming data settings (i.e., ever-increasing datasets), updating approximate Bayesian posteriors in batch mode entails repeatedly estimating the true posterior from scratch. The major raison d'être for streaming Bayes methods is alleviating this computational bottleneck by reusing the previous posterior estimate as the prior of the next posterior estimate ("Today's posterior is tomorrow's prior'). We will incorporate this discussion in the introduction to make it more suitable for a broader ML audience. > Sentence in lines 58-59 trails off. Thank you for pointing it out, we accidentally commented out part of this sentence. The intended sentence is as follows: “Notably, despite their successful deployment in solving a wide range of problems ([8 , 9, 19 – 21, 29, 31, 55 ]) , previous approaches assumed that the target (posterior) distribution did not change in time. Hence, this is the first work handling the training of GFlowNets in dynamic environments.” [1] Ayres et al., BEAGLE 3: Improved Performance, Scaling, and Usability for a High-Performance Computing Library for Statistical Phylogenetics, Systematic Biology, 2019 [2] Suchard, Rambaut. Many-core algorithms for statistical phylogenetics. Bioinformatics, 2009. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I thank the authors for their thoughtful response to my questions and suggestions. That combined with the discussion with other reviewers below, I am glad to increase my score.
null
null
null
null
null
null
Breaking Semantic Artifacts for Generalized AI-generated Image Detection
Accept (poster)
Summary: This paper addresses the task of AI-generated image detection and specifically the challenge that arises by shifts in the semantics between training and test samples. It is demonstrated through experiments that different artifacts are produced by different generators, leading to performance drops in the cross-generator setting, and that the artifacts related to dataset semantics are inherited to the generated images leading to performance drops in the cross-scene setting. To this end a pipeline is proposed involving patch-based training instead of processing the whole-image or a patch-shuffled version of it. The performance of the trained detector is better than the baselines in the cross-scene setting and in the cross-generator setting (with ACC) but worse in the cross-generator setting (with AP). Strengths: S1-The problem of current detectors that overfit the dataset's semantics is important. S2-The preliminary experimental analysis of artifacts is enlightening. S3-The results support the effectiveness of the method. S4-The experimental analysis (comparisons, ablations) is sufficient to understand the model's performance. Weaknesses: W1-The problem of performance drop in cross-scene settings has already been discussed in (Dogoulis et al. 2023). W2-The method is extremely simple. A small CNN model processes the image patches (instead of the image), then aggregates the corresponding features and finally classifies the sample. Nothing methodologically novel. Also, its performance is very close and in some cases worse than the SotA NPR model. The initial findings seem very interesting, thus a more sophisticated approach would potentially perform even better in the cross-scene setting. W3-Big performance gaps by this method between ACC and AP are still observed although it is claimed that this is what the method addresses. E.g., in Table 5 --> ACC 72.58 AP 81.12. W4-Resizing the images to 256 reduces the synthetic artifacts, and it should be omitted. The fact that the training images are small makes the effect of resizing very small. I would like to see how much the results are changed if the authors omit this part of the data augmentation/pre-processing pipeline. Dogoulis, P., Kordopatis-Zilos, G., Kompatsiaris, I., & Papadopoulos, S. (2023, June). Improving synthetically generated image detection in cross-concept settings. In Proceedings of the 2nd ACM International Workshop on Multimedia AI against Disinformation (pp. 28-35). Technical Quality: 2 Clarity: 2 Questions for Authors: What is the performance of the model if resizing is completely omitted? How does the existing previous approach by Dogoulis et al. perform on your experimental design? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Sections 3.1 and 3.2 are not part of the methodology in my opinion, they should be placed under a separate motivation section as preliminary/insightful experiments. Section 3.3 although very small is still verbose and gives unnecessary details. A similar setting has already been proposed by Dogoulis et al. 2023 which should be cited and the differences with the proposed definition/approach should be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript and for providing detailed comments! Below are our responses to the comments provided: > Q1: Comparison to (Dogoulis et al. 2023), which has discussed the problem of performance drop in cross-scene settings. First, we acknowledge that the cross-scene generalization problem is well recognized, and our contribution is not to define the problem but to better solve it. Tables A and B in the PDF attached with the global response show that our method largely surpasses (Dogoulis et al. 2023) by **32.38\%** in Acc. and **29.65\%** in AP on open-world evaluation. See the following for the explanations of their poor performance and the experimental settings. The original setting in (Dogoulis et al. 2023) is less challenging because they only focus on generalization between different concept classes (e.g., objects) but do not involve more diverse contents, represented by the LAION, as in our work. Therefore, their assumption is essentially not sufficient for text-conditional generation. We notice that they originally select the top-10k images of 20k generated images as training sets, while we have 48k generated images. For a fair comparison, we consider two settings: A) top-10k images and B) top-24k images. We followed their official codes and selected ResNet-50 as the backbone. Gaussian blurring and JPEG compression are applied as data augmentation with a probability of 10\%. > Q2: Missing discussion and comparison on the influence of resizing the images. See our response in Q3 and Q4 of Reviewer Zwma. In sum, we add ablation experiments and find that padding is a better choice than image resizing. > Q3: Big performance gaps between Acc. and AP are still observed in Table 5. As seen in our response in Q3 and Q4 of Reviewer Zwma, the most considerable gaps on specific generators, e.g., SITD, SAN, CRN, IMLE, and WFIR can be solved by replacing the resizing with padding. For example, Table B shows that our method reaches the SOTA on cross-GAN/CNN (Acc. **80.13\%**, AP **84.97\%** vs. Acc. **72.58\%**, AP **81.38\%** of the second best method, NRP. The ACC-AP gap decreases from **8.8\%** to **4.84\%**. > Q4: The method is extremely simple and nothing is methodologically novel. Close performance compared with the SOTA baseline. Based on our new results in Table B, our method substantially improves the previous SOTA, NPR, from ACC. 75.38\% to 85.97\% in the open-world evaluation. We welcome further improvements on top of our work. However, we believe the simplicity of our method does not diminish its novelty. Conceptually, our novelty lies in introducing the concept of "semantic artifacts" and demonstrating that breaking "semantic artifacts" is the key to cross-scene generalization. Based on this finding, we propose a novel design of a patch shuffle-based end-to-end detection framework. > Q5: Sections 3.1 and 3.2 are not part of the methodology in my opinion, they should be placed under a separate motivation section as preliminary/insightful experiments. Section 3.3 although very small is still verbose and gives unnecessary details. We will further smooth the logic in Sections 3.1 and 3.2, and move unnecessary details to the Appendix. We agree that Sections 3.1 and 3.2 are not directly about the methodology, but we group them together with Section 3.3 to ensure a smooth transition from the motivation about "semantic artifact" to the method design. --- Rebuttal Comment 1.1: Comment: Dear reviewer iKNb, We hope our rebuttal has addressed your concerns. We are looking forward to your reply and are more than willing to provide responses to any further inquiries you may have. Thank you very much.
Summary: This paper identifies a significant drop in accuracy for existing detectors when generalizing across different image scenes, attributing failures to "semantic artifacts." To address this, a new approach is proposed that involves image patch shuffling and training a patch-based classifier, enhancing generalization. Strengths: This paper is relatively well-motivated as AI-generated image detection is a crucial issue. I also find the evaluations thorough. The strengths are as follows: The target issues of the paper are meaningful and worth exploring. The motivation is clear. The paper is easy to follow. Weaknesses: 1. The idea comparison of patchfor [6] and this paper is needed. 2. It is better to conduct an experiment using the images collected from fake news on the Internet. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! Below are our responses to the comments: > Q1: Idea-level comparison to PatchFor [6]. The key idea-level difference between PatchFor and our method is the use of patch shuffle. This is the core of our method, which ensures that the model completely removes the semantic information as a whole, thereby mitigating overfitting to semantic artifacts. Another technical difference is that we force our model to accept **only** patch inputs but PatchFor still accepts the full image, which contains rich semantic information. > Q2: Experiments using the images collected from fake news on the Internet. We have already followed the common practice [4,7,8] for datasets and experimental settings. Notably, these datasets include data from the Internet, such as Deepfake (sourced from YouTube videos) and WFIR (sourced from websites). For the fake news, we believe it is clearly a valuable direction for future work but technically, it is infeasible because the ground truth of the fake news on the Internet is hard to get, making the model training and evaluation not possible. --- Rebuttal Comment 1.1: Comment: Dear reviewer 4wrZ, We hope our rebuttal has addressed your concerns. We are looking forward to your reply and are more than willing to provide responses to any further inquiries you may have. Thank you very much.
Summary: The paper looks into the problem of AI generated image detection. Given the onset of diffusion models they highlight how previous approaches which claim to generalize fail in setting with new datasets and models. The authors also motivate by visualizing frequency spectrum images. The paper then proposes a patch based feature extraction approach followed by a classifier which works on the image patches to classify whether the image is real or generated. Using such an approach is able to remove issues which some of the previous approaches faced and helps generalize better. Strengths: The following are the strengths of the paper: 1. The paper is very well written and easy to follow. The authors explain different components well. 2. The papers motivation is sound and the authors are able to highlight drawbacks of existing approaches well and also provide some proof backing their claims. 3. On highlighting the problems, the authors propose a new approach to handle the task and incorporate components in it which make it generalizable to new generators. 4. The authors have a good suite of generators that they evaluate in to show effectiveness of their approach and also showcase cross scene and cross model generalization. Weaknesses: The following are the weaknesses of the work: 1. The authors propose patch based learnings as a way to make the approach more generalizable. While they show frequency based visualizations for normal images, similar comparison with patch shuffled as well as image patches is missing. 2. In L178-180, it is not clear what artifacts authors are referring to. Also L192-193 again refers to the same Figure 3 but it is unclear why the visualization should be more focused on point regions. It could be that the detector feels the full region is the cause of its prediction. 3. In L242-243 the authors mention that they resize the images. Missing discussion whether this resizing alters the artifacts in the images. A visualization similar to frequency visualizations would help clarify this. 4. Missing analysis of why the proposed approaches has considerable gaps with existing approaches on specific generators. For example in Table 5 for many generators the proposed approach is far behind the best approach. Similar discussion for the quantiative results should be provided to give some intuitions. 5. Missing comparisons: a) Missing comparison with Towards Discovery and Attribution of Open-world GAN Generated Images. b) A previous work Towards Discovery and Attribution of Open-world GAN Generated Images also looked at fingerprints from generators. Missing comparison and discussion about it. Small corrections: L119 bot -> both L119 synthetics -> synthetic Technical Quality: 3 Clarity: 3 Questions for Authors: I think the paper is well written and in general easy to follow and sound. At the same time I have mentioned a few clarification and comparisons in the weaknesses and would hope the authors answer some of them especially regarding more discussion about the presented quantitive numbers and missing comparisons with mentioned approaches. Some more analysis around frequency visualizations of patches and patch shuffled images would be good. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and comments! We are glad to hear that you believe this is important work and that it meets the standard for publication. Below are our responses to the comments provided: > Q1: Missing frequency visualizations on patch-shuffled images as well as image patches. Thank you for your suggestions. The frequency visualizations of shuffled images and image patches are provided in Figure A of the PDF attached with our global response. These visualizations support our original hypothesis. Specifically, based on the visualization of shuffled images, most artifacts are removed during the patch shuffling. For instance, the distinct artifacts between CelebA and LAION or between LDM-CelebA and LDM-LAION are significantly reduced. In addition, the visualizations of image patches reveal an intriguing finding that low-frequency features are weakened but high-frequency features (corresponding to artifacts) are enhanced. > Q2: Not clear what artifacts are in L178-180 and why the visualization in Figure 3 should be more focused on point regions. We will make it clear that the "artifacts" in L178-180 refer to "generator artifacts", as defined in Section 3.1. For visualizations in Figure 3, we will clarify that the generator artifacts generally correspond to more universal, point regions, which represent a deeper receptive field, meaning a higher-level feature extraction. So it can be seen that our method successfully directs the model to focus on these regions rather than the concentrated, semantic regions. > Q3: Missing discussion and visualization on the influence of image resizing. We acknowledge that the use of image resizing is not a particular design. In our additional experiments, we also test image padding and find it to be more effective. See Figure A and Tables A and B in the PDF attached with our global response. Tables A and B demonstrate the superior results of zero padding compared to resizing, particularly in cross-GAN/CNN generalization. In particular, the performance of our method has been largely boosted on SITD, SAN, CRN, IMEL, and WFIR. It can be attributed to their high image resolution, which introduces variations in artifacts and leads to the loss of low-level features when resizing is applied. Figure A supports this finding by showing that the frequency features of SITD (with image resolution of over 4,000×3,000) images change a lot after resizing. Note that on ganGAN, resizing is better than padding probably because their images have been resized during the original data collection. > Q4: Missing analysis of why the proposed approaches have considerable gaps with existing approaches on specific generators. The considerable gaps (on SITD, SAN, CRN, IMEL, and WFIR) have been explained in the above response. > Q5: Missing comparison and discussion about "Towards Discovery and Attribution of Open-world GAN Generated Images". The suggested paper and our work focus on different tasks. They aim at attribution and discovery of generated images, rather than our (generalized) detection. Therefore, a direct comparison is not possible. Technically, their method relies on out-of-distribution detection and clustering while ours directly destroys artifacts. > Q6: Small corrections: L119 bot -> both L119 synthetics -> synthetic Thanks. We will fix them. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I appreciate the authors' efforts in the rebuttal. Given the clarifications provided by the authors and the response and reviews to other reviewers, I will keep the positive initial rating I had provided. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our rebuttal and supporting the acceptance of the paper. We will certainly incorporate your suggestions into the final version.
Summary: The paper investigates the robustness and generalization of AI-generated image detectors, with a particular focus on the influence of semantic artifacts on detector performance. The authors highlight that semantic artifacts can significantly impact the effectiveness of AI detectors. To explore this phenomenon, the authors employed a novel approach by training a classifier on image patches rather than whole images. By training a classifier on image patches, the authors conducted experiments on 31 different families of AI-generated images. This study provides a unique and intriguing perspective on how semantic artifacts can affect the performance and generalization of AI detectors. Strengths: Originality: The paper presents a unique angle by examining how semantic artifacts impact the generalization of AI image detectors. While this finding is not entirely unexpected, it represents a novel and insightful contribution to the field. Quality: The paper successfully motivates the significance of semantic artifacts and their influence on generalization. However, the demonstration of this effect could be more thorough. Clarity: The paper is well-written and includes visualizations that effectively aid in understanding the concepts and findings presented. Significance: The paper demonstrates a notable increase in generalization performance, ranging from 3 to 6 points, across 31 test sets, including open-world scenarios. This improvement is significant and highlights the potential impact of addressing semantic artifacts in AI-generated image detection. The paper evaluated on up to 8 baselines, while NPR outperforms on open-world, the results generally is good. Weaknesses: Missing Ablations: The impact of patch size on the performance of the classifier is not explored. Including an ablation study on different patch sizes would provide a deeper understanding of its influence on the results. Missing Visualizations: Figure 3 lacks examples of real images, which is crucial for the reader to fully comprehend the differences and understand the context of the study. Including these examples would complete the visualization and enhance clarity. Missing Data Details: The paper does not provide sufficient details on how real images were selected for the study. Additionally, there is a lack of discussion on various data design choices. Addressing these points would strengthen the paper by clarifying the methodology and justifying the design decisions. Missing Citations: The paper should cite relevant works that discuss the use of patches to mitigate the influence of semantic features [1, 2]. [1] https://arxiv.org/abs/1905.13549 [2] https://openaccess.thecvf.com/content/CVPR2022/papers/Mao_Causal_Transportability_for_Visual_Recognition_CVPR_2022_paper.pdf Technical Quality: 2 Clarity: 2 Questions for Authors: Impact of Paired Data: What if paired data were used, where each real image is paired with an AI-generated image sharing the same semantic content? This pairing could be achieved by conditioning the generation process on the real image. By ensuring that both images share the same semantic features, we could eliminate semantic artifacts as a spurious factor. This approach might already solve the problem and lead to better generalization. The paper would benefit from discussing this potential strategy and its implications for improving the robustness of AI-generated image detectors. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript and for providing detailed comments! Below are our responses to your comments: > Q1: Missing ablations of patch size on the performance. We have already reported the ablation results of patch size (and model depth) in Appendix A.1.1, and we will add conclusions in the main text based on them. The superiority of our method holds for all tested patch sizes and model depths. Specifically, a too-large patch size enlarges the receptive field, potentially exacerbating the overfitting issues, while a too-small patch size destroys certain low-level semantic features, potentially causing underfitting. > Q2: Missing visualizations of real images in Figure 3. We add such visualizations to the attached PDF in our global response, as illustrated in Figure B, and we will include them in our updated manuscript. As expected, the CAM visualizations of all the detectors on real images (i.e., with the label "0") show almost no activated regions. This aligns with the high Acc. results of real images in cross-scene settings (see Table 1). > Q3: Missing details about the data selection and design. We have already detailed the dataset collection in Appendix A.4, and we will summarize them in the main body of our updated manuscript. Specifically, for DM-generated images, our selected generative models span a long time range, from 2020 to now. They fall into 3 categories: unconditional, class-conditional, and text-conditional. For real images, the training data of each generative model are used to ensure the same distribution to their generated images. > Q4: Missing citations of [1, 2], which discuss the use of patches to mitigate the influence of semantic features. Thank you for recommending these two papers, which can be used to better motivate our use of patches. We will cite them in our updated manuscript. > Q5: Data pairing as a potential baseline. This is indeed a good point! A similar idea has been adopted by very recent methods, e.g., DIRE [54], which models the error between an input image and its reconstructed counterpart by a pre-trained diffusion model. We have validated that our method outperforms DIRE by **25.07\%** in Acc. and **24.55\%** in AP on open-world evaluation (see Table 3). We believe data pairing has the following limitations: + It targets specific semantic objects, e.g., church, bedroom, human face, making it feasible only for unconditional or class-conditional generated images. However, for text-conditional generated images, which contain complicated semantics, paired data are hard to define and collect. + Although paired data can remove the semantic artifacts, the model may still learn the domain-specific information. Then, when it comes to different domains during testing, the performance decreases. For example, a generated human face normally contains most artifacts in its domain-specific regions, e.g., the lineament and hair. This assumption is supported by our results in Table 4, where DIRE (trained on CelebA) degrades from **83.25\%** Acc. on CelebA-LDM to **50.45\%** Acc. on LAION-LDM. In contrast, our method is both tailored to specific semantics and indiscriminately destroys the semantic artifacts. --- Rebuttal Comment 1.1: Title: Thank you for the reply. Comment: My questions are addressed and score updated. --- Reply to Comment 1.1.1: Comment: Thank you for your response and increasing the score. We will incorporate your feedback into the final version.
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful assessment of our work and for providing useful feedback and actionable suggestions. They found that our research makes novel and insightful contributions (reviewer a5zv), with a clear and sound motivation (reviewers a5zv, Zwma, and 4wrZ), an effective/generalizable method (reviewers a5zv, Zwma, and iKNb), thorough/sufficient experiments (reviewers 4wrZ and iKNb), and good writing (all reviewers). They mainly request more comparisons with recent research, discussion on the impact of preprocessing, and visualizations to further support our claims. To address them, we mainly provide: ● A comparison to the approach from "Dogoulis et al. Improving synthetically generated image detection in cross-concept settings" and an ablation of image resizing on the performance. (Tables A and B) (reviewer iKNb) ● Additional frequency visualizations on images from 4 real datasets and 4 generative models with different pre-processing pipelines. (Figure A) (reviewers Zwma and iKNb) ● CAM visualizations extracted from different detectors on real images of church or bedroom. (Figure B) (reviewer a5zv) All these new results support our original claims. **In particular, thanks to the suggestion from reviewers, the performance of our method has been largely improved by replacing the resizing with padding.** Pdf: /pdf/414690c22b6c9a9354c4c8909a677e53a03b87dd.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment
Accept (poster)
Summary: The paper proposes a new re-weighting strategy for training VLMs to improve visual-language alignment. The motivation is to assign higher weight to visually relevant tokens and lower weight to visually irrelevant and visually contradictory tokens. The re-weighting factor relies on the logit difference with and without image inputs for each token. Using the proposed CAL, the paper demonstrates general improvement in VQA and captioning tasks. Strengths: The motivation to dynamically assign weights to different types of tokens is intuitive and well-studied in the language domain. Several studies have attempted to apply this method to vision-language tasks in a zero-shot manner, which incurs high inference costs. This paper proposes training the model with this method instead. The approach is simple and effective, demonstrating general improvements over existing state-of-the-art models across benchmarks. The analysis further validates the effectiveness of the proposed method. Weaknesses: 1. **Lack of Clarification in Presentation**: Some content is confusing. For example, equation 2 is averaged across the entire sentence with length $l$, yet the description in lines 88-89 refers to the loss objective for the $i^{th}$ sample at token $t_j$, which seems more like the loss at a single token. Additionally, equation 4 lacks clarification; it's unclear how to apply average pooling with a window size from this equation. Some experiments also lack necessary context. For instance, it's unclear which model is evaluated in Figure 2a, and the ablation study in section 3.3 doesn't specify which model is used for evaluation. The main result in section 3.3 doesn't disucuss how to apply CAL into model training, at which stage for example. The annotation is inconsistent; in equation 4, it's $\tilde{w}^{i,t_j}$, while in equation 5, it's $\tilde{w}(i.t_k)$ 2. **More Experiments Are Required**: The paper presents decent improvements over some baselines, mainly in vision-language tasks. However, since the training recipe has changed, it would be better to also evaluate other tasks, such as language-only tasks, to see how the CAL affects the model's ability in standard language understanding. Additionally, the method seems quite relevant to hallucination tasks, so it would be beneficial to report results on those as well. 3. **Lack of Originality**: The motivation and methodology are already well-studied in language models [1,2], and prior studies have also applied changes to logits based on the presence or absence of images to measure the relevance of visual and language inputs. In this context, the paper simply applies the method during the training stage, instead of the inference stage as in [3]. Therefore, the novelty is somewhat limited. [1] Jiang S, Wolf T, Monz C, et al. TLDR: token loss dynamic reweighting for reducing repetitive utterance generation[J]. arXiv preprint arXiv:2003.11963, 2020. [2] Lin Z, Gou Z, Gong Y, et al. Rho-1: Not all tokens are what you need[J]. arXiv preprint arXiv:2404.07965, 2024. [3] Zhu L, Ji D, Chen T, et al. Ibd: Alleviating hallucinations in large vision-language models via image-biased decoding[J]. arXiv preprint arXiv:2402.18476, 2024. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. The trend shown in Figure 2a is based on which model? Is this a common phenomenon across existing VLMs? 2. How do you deal with cases where a semantic word has been split into two subparts, each with a different value, such as "tr" and "uck" in Figure 2a? 3. The CAL seems to rely heavily on the model's logits with and without image input. However, can the model accurately predict changes in logits, as shown in Section 2, during the initial alignment stages? 4. The training requires an additional forward pass of the LLM. Why is the training described as "lightweight" in the paper? Doesn't this double the cost? Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: not applied Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their constructive feedback and for recognizing the strengths of our work. We appreciate the detailed insights and suggestions provided. Below, we address the key points raised in your review. > **W1: Lack of Clarification in Presentation** We acknowledge the need for better clarification and consistency in the presentation. We will fix these typos in the revision. We would like to clarify some of the points for your concern. > **W2: More Experiments Are Required** Thank you for your constructive advice. Following most of the existing works on VLMs, we report the same VLM benchmarks to examine the cross-modal ability for better comparison. We also understand and agree with your concern that CAL might affect performance on language-only tasks. To address your concern, **we experimentally verified that CAL did not interfere the performance on language-only tasks**. We further conducted additional evaluations on language-only tasks including AGI-eval, BBH, and MMLU. These statistics show that CAL achieves comparable results, and even outperforms the baseline on BBH on LLaVA-15-13B. | Method | AGI-eval | BBH | MMLU | | -------------- | :------: | :----: | :---: | | LLaVA-15-7B | 38.14 | 40.78 | 50.56 | | LLaVA-15-7B + CAL | 37.65 | 40.33 | 50.63 | | LLaVA-15-13B | 39.55 | 49.27 | 55.11 | | LLaVA-15-13B + CAL | 39.52 | 49.67 | 55.14 | As for hallucination issues, our method prioritizes visually correlated tokens to enhance cross-modal alignment. Nevertheless, since we maintain a minimal weight for all tokens, some contradictory tokens inevitably continue to impact the training process, which can potentially lead to hallucinations as baseline model does. Furthermore, it's important to recognize that LLMs inherently exhibit hallucination issues. Merely enhancing cross-modal alignment, although beneficial, is not a comprehensive solution. This challenge necessitates the adoption of additional techniques tailored to address this specific problem. Besides, CAL shows comparable performance on POPE with the baseline models, as presented in the Appendix in Table 7. > **W3: Lack of Originality** Thank you for your feedback. We would like to clarify the originality of both our motivation and methodology. For motivation, we are the first to study token discrepancy in the cross-modality alignment stage of existing VLMs, while [1] and [2] focus on language-only scenarios. We are glad to include them in the related works, but we would like to clarify the significant difference from these works. Our work and [2] are concurrent but use different techniques: [2] trains a token-level selective model using high-quality data, while we select tokens based on contrastive image inputs. Unlike [1] and [2], our motivation is to enhance cross-modal alignment by prioritizing visually correlated tokens rather than handling noise/hard tokens. For similarity with contrastive-decoding methods, we emphasize that contrastive learning is widely used but applied differently across scenarios. Our innovation lies in token-level training data selection by identifying visually correlated tokens in training labels using contrastive learning and utilizing logit differences for token loss reweighting. > **Q1: Trend in Figure 2a** Thank you for your insightful question. The trend shown in Figure 2a is a common phenomenon across various VLMs, including LLaVA-15-7B-PT, LLaVA-15-7B-SFT, and MGM-7B-SFT. The high cosine similarity in weight assignment distribution across these models on ShareGPT4V dataset, as shown below, supports this: | Model | LLaVA15-7B-SFT | MGM-7B-SFT | | ------------------ | :------------: | :--------: | | LLaVA15-7B-PT | 0.9470 | 0.9352 | | LLaVA15-7B-SFT | - | 0.9191 | > **Q2: Handling Split Semantic Words** Thank you for your question. We do not strictly distinguish or design special methods for subwords since CAL is orthogonal to word tokenization. Currently, there is no theoretical basis to decompose sub-word level interactions within a semantic word. We consider the logit change at the token level. > **Q3: Model's Ability to Predict Logit Changes** Thank you for your feedback. The current standard practice in training VLMs primarily aims to map image features to the text embedding space. Before this alignment stage, the CLIP ViT is pre-trained on a billion-level dataset, and the LLM is trained on a trillion-level dataset. These models are highly capable of encoding images and handling complex language tasks. In the first phase of VLM training, it is common practice to freeze the ViT and the LLM, and only train a projector to align their features. This step can be quite straightforward and quick. The figure in Section 2 comes from the pre-trained LLaVA-NeXT-13B, demonstrating that the ability to distinguish tokens emerges early in the alignment process. > **Q4: Training Cost and "Lightweight" Claim** For the concern of "lightweight". **Experimentally**, as discussed in the Appendix (Table 10), CAL adds only a 20% increase in training time, far from "doubling". **Theoretically**, the additional forward pass of CAL neither needs gradient back-propagation nor needs storing the intermediate activation values, which is both computation time friendly and memory efficient. Besides, the second forward pass does not need image tokens as the original forward process does, resulting in reduced sequence length and increased speed. --- We hope these clarifications and additional details adequately address your concerns. Thank you once again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response in addressing my questions and concerns. The explanation effectively clarifies the concern regarding training costs, particularly noting that the second forward pass does not require storing intermediate activations or performing backward propagation. However, the claim of a 20% increase in time is not scientifically grounded, as it can vary depending on the dataset used for training. My third question was not fully understood by the authors. The inquiry was not about the training recipe for MLLMs like LLaVA. In the early stages of training, one might assume the model lacks the ability to generate captions from images but the LLMs can perfectly generate text from a partial caption. The question is whether the gap in predicted logits with and without the image is accurate in the initial stage where the model lacks good captioning capacity. It might be more appropriate to apply this training during the instruction-tuning stage, where it is assumed the model can generate captions after pretraining. Another way to frame my question is: considering that language models typically use RoPE as a positional embedding, how does prepending image tokens affect logits given the altered positional embedding of text tokens in this scenario? Overall, while some concerns remain, particularly regarding the limited originality of the methodology in light of prior works, I find the work interesting and believe it may inspire future research. As such, I have raised my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed feedback and for raising your score. We appreciate your recognition of our work's potential and your valuable suggestions. Regarding the computational cost, we can preprocess and store the CAL weights using the trained model offline, which will not increase the cost in subsequent training. This is one of our future improvement directions. In the pretraining stage, since the number of trainable parameters (only projector) is limited and there is a minimum weight for all text tokens, inaccuracies in predicting logits in the early stages do not significantly impact the overall performance. As shown in Table 3, applying CAL in both pretrainig and finetuning stages yields the best performance. Regarding the issue of positional embeddings, in the current version, we ensure that the positional embeddings for text tokens are consistent in both two scenarios. Once again, thank you for your valuable insights and support. We are committed to addressing these issues in our future work.
Summary: This paper introduces Contrastive Alignment (CAL), a straightforward yet effective re-weighting strategy designed to improve multimodal alignment. Specifically, the authors propose contrasting image inputs and calculating the differences in prediction logits for each text token to determine its training weights. Extensive experiments are conducted to demonstrate the effectiveness of CAL across various tasks, providing a robust validation of the proposed approach. Strengths: 1. The Contrastive Alignment (CAL) proposed in this paper is an efficient method for multimodal alignment, particularly beneficial when training data is heavily contaminated with noise. The dynamic adjustment of training weights represents a lightweight yet innovative solution to this problem. 2. The experimental section of this paper is thorough. It includes extensive quantitative analysis and demonstrates the performance of CAL across a broad range of scenarios. These results not only allow for a deep understanding of the CAL method but also provide valuable insights into its practical applications. Weaknesses: 1. While CAL is innovative for aligning multimodal data, its robustness and comprehensiveness in complex task settings, such as multimodal question answering (VQA), appear limited. The method ensures correct alignment but may inadvertently impair other model capabilities, such as reasoning ability or retention of original world knowledge. This issue is critical in VQA scenarios where data often include elements not directly related to image content, potentially conflicting with it and incorporating extensive external knowledge. The application of CAL in such contexts might suppress training on these critical aspects, introducing uncontrollable biases and worsening model performance. Although CAL shows promise in simpler tasks like captioning and grounding, its performance in VQA tasks is inconsistent, and its effectiveness in addressing hallucination issues, such as in the POPE dataset, is negligible. 2. CAL’s effectiveness appears contingent upon the pre-existing performance of the underlying multimodal model. Since it calculates visually correlated weights based on the model itself, any hallucinations or errors present in the base model are not only perpetuated but potentially exacerbated by the CAL method. This limitation restricts CAL’s applicability and caps its performance, as there is no mechanism within CAL to correct these amplified errors. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Why MGM-HD-13B perform worse that -7B model in caption tasks as shown in Tab-2 in the paper ? 2. The visually correlation weights in CAL are smoothed using average pooling with a window size of W. How is W determined? Is this parameter set consistently across all datasets, or does it require adjustment for each training run to optimize performance? 3. The paper mentions that CAL adds minimal computational overhead compared to other data scaling strategies. However, could the authors provide more quantitative examples or an intuitive comparison? It seems that processing very long texts could significantly increase the overhead. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their constructive feedback and for recognizing the strengths of our paper. We appreciate the detailed insights and suggestions provided. Below, we address the key points raised in your review. > **W1: Robustness in Complex Task Settings Like VQA** Actually, aligning with previous works, we have conducted extensive validation on various VQA tasks, as shown in Table 1(Doc, Chart, Text VQA, mmstar, mmtbench and etc). And our methods surpass the baseline in most cases. As for the retention of original world knowledge, we present the performance on OKVQA[1], which is used to test the model's ability to answer questions requiring external knowledge beyond what is explicitly shown in the image. The CAL models exceed baseline models in most cases. | Model | Baseline | +CAL | |-------|-------------|-----------------| | LLaVA-15 7B | 56.46 | 58.77 | | LLaVA-15 13B| 53.09 | 56.20 | | LLaVA-NEXT 7B | 56.08 | 55.26 | | LLaVA-NEXT 13B| 55.09 | 55.69 | We also evaluate the pure text capability of the baseline and CAL models. Among the tests used, BBH is specifically employed to assess reasoning ability. There are no significant differences compared to the baseline. | Method | AGI-eval | BBH(For reasoning)| MMLU | | -------------- | :------: | :----: | :---: | | LLaVA-15-7B | 38.14 | 40.78 | 50.56 | | LLaVA-15-7B + CAL | 37.65 | 40.33 | 50.63 | | LLaVA-15-13B | 39.55 | 49.27 | 55.11 | | LLaVA-15-13B + CAL | 39.52 | 49.67 | 55.14 | As for hallucination issues, our method prioritizes visually correlated tokens to enhance cross-modal alignment. Nevertheless, since we maintain a minimal weight for all tokens, some contradictory tokens inevitably continue to impact the training process, which can potentially lead to hallucinations as baseline model does. Furthermore, it's important to recognize that LLMs inherently exhibit hallucination issues. Merely enhancing cross-modal alignment, although beneficial, is not a comprehensive solution. This challenge necessitates the adoption of additional techniques tailored to address this specific problem. > **W2: Dependency on Base Model Performance and Amplification of Existing Errors** Thank you for your observation regarding CAL's dependence on the base model's performance. While it is true that CAL could potentially amplify errors, we would like to clarify that **this scenario is rare**. The amplification of errors only occur when there are both **hallucinations in the training dataset** and **such hallucinations are also captured by CLIP**, which is rare in practice. Moreover, as demonstrated in our **experiments on the POPE** (for hallucination evaluation) dataset, our method achieves comparable performance with the baseline. | POPE Performance | Baseline | +CAL | | Baseline | +CAL | |----------------------|----------|-------|---------------|----------|-------| | MGM 7B | 85.7 | 87.5 | LLaVA-15 7B | 86.2 | 85.5 | | MGM 13B | 86.2 | 86.4 | LLaVA-15 13B | 85.7 | 85.8 | | MGM-HD 7B | 85.7 | 87.0 | LLaVA-NEXT 7B | 86.8 | 86.7 | | MGM-HD 13B | 86.3 | 86.4 | LLaVA-NEXT 13B| 87.2 | 87.2 | > **Q1: Performance Discrepancy Between MGM-HD-13B and MGM-HD-7B** The performance discrepancy arose from the initial zero-shot evaluation approach in MGM, where results were largely influenced by output style. We retrained the MGM-HD model using the LLAVA-15 dataset, which includes training data from both COCO and TextCaps. The retrained results showed that the performance trend in MGM aligns with the LLAVA models, resolving the discrepancy. | Model | COCOCaps | TextCaps | Model | COCOCaps | TextCaps | | -------------- | -------- | -------- | -------------- | -------- | -------- | | MGM 7B | 107.54 | 108.17 | MGM 7B + CAL | 110.33 | 113.33 | | MGM 13B | 113.00 | 112.47 | MGM 13B + CAL | 113.91 | 118.10 | > **Q2: Smoothing Window Size (W) Determination** We fixed the window size (W) to 3 in all our experiments to smooth the weight distribution. An additional experiment where we removed the average pooling step showed slightly inferior performance, supporting our chosen. We will include these findings and a discussion on the fixed window size in the revised paper. | Benchmark | ChartQA | Docvqa| SQA-I | COCO Caption| TextCaps| OCRBench | Refcocog val| | -------------- | --- | --- | --- | --- | --- | --- | --- | | w/ AvgPool | 67.2 | 80.1 | 71.5 | 120.6 | 124.4 | 574 | 80.4 | | w/o AvgPool | 66.3 | 79.5 | 72.4 | 116.7 | 123.8 | 581 | 79.5| > **Q3: Computational Overhead of CAL** As detailed in Table 10, CAL generally introduces an additional computational cost of approximately 20%. This overhead is primarily due to **an extra forward pass without gradient computation**. In exceptionally long texts scenarios, the original forward and backward process also need to deal with long sequences. Therefore, although the absolute overhead increases in such scenarios, the relative increase compared to shorter samples remains consistent. --- [1] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge --- Rebuttal Comment 1.1: Comment: Thanks for the response, most of my concerns have been addressed. However, for an algorithm that introduce additional computational costs, mere performance enhancements in "most" scenarios are insufficient, particularly as we observe that the CAL method results in performance degradation in the LLaVA-Next model. This phenomenon could be anticipated given the lack of interpretability of CAL in complex settings, or as discussed in W2, the introduction of uncontrollable biases stemming from inherent limitations of the base model. Despite its flaws, this work challenges the dominant paradigm of pure supervised learning in multimodal training, making it a valuable contribution to the field. In recognition of its potential, I will raise my score. I urge the authors to provide more quantitative analyses in future version of the paper, such as exploring the weight distribution highlighted by Reviewer DwVB, or presenting scenarios where CAL fails to enhance performance, to deepen our understanding of these issues. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed feedback and for raising your score. We appreciate your recognition of our work's potential and your valuable suggestions. Regarding the computational cost, we can preprocess and store the CAL weights using the trained model offline, which will not increase the cost in subsequent training. This is one of our future improvement directions. In future versions of our paper, we will add more analyses, including exploring weight distribution and scenarios where CAL may not enhance performance. In our subsequent work, we will strive to address the uncertainties introduced by the base model.
Summary: This paper points out that parts of samples in the broadly used datasets contain visually contradictory text tokens. To mitigate the sub-optimal cross-modal alignment in VLMs, this proposed method is to assign distinct contributions for each text token based on its visual correlation. Strengths: 1. The proposed method is easy to implement and effective. The idea of assigning distinct weights for each token is intuitive. 2. This paper is well organized and provides clear preliminaries, helping readers understand the method. 3. CAL achieves consistent and solid performance across different benchmarks. Weaknesses: Please see the Questions Technical Quality: 4 Clarity: 3 Questions for Authors: 1. It’s unclear how to calculate the prediction logit distribution o without input I for the VLM like CLIP. Is this method not applicable to the pure image-text matching VLM like CLIP? 2. There’s no l in Equation 4. Does it mean W in Sec. 3.1 implementation details “we set l in Equation 4 to 3 for all experiments”? 3. For Figure 3, it is better to describe what the dashed line represents as well. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their positive feedback and for highlighting the strengths of our work. We appreciate the detailed insights and suggestions provided. Below, we address the key points raised in your review. > **Q1: How to Use on CLIP** Thank you for your question regarding the calculation of the prediction logit distribution (o) without input (I) for Vision-Language Models (VLMs) like CLIP. For models like CLIP, we can assess the image-text relation by evaluating the change in CLIP score between the original and the noise-added images. This approach aligns with the essence of our method, which is focused on contrastive learning. > **Q2: There’s No 'l' in Equation 4. Does It Mean 'W' in Sec. 3.1?** Thank you for pointing out this issue. We acknowledge the typo in our manuscript and will correct it in the revised version of the paper. > **Q3: Dashed Line in Figure 3** The dashed line represents the asymptote. We will update the figure description in the revised version of the paper to clearly indicate this and prevent any further confusion. --- We hope that these clarifications and additional details adequately address your concerns. Thank you once again for your valuable feedback. --- Rebuttal Comment 1.1: Title: Response to rebuttal by authors Comment: Thanks for the response. The Answer for Q1 is still a vague idea in its infancy. In future work, the authors may implement the CAL on CLIP by using "noise-added images" to decide the token scores and see if it works. I choose to keep my original positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback and for maintaining your positive rating. We appreciate your suggestion and will explore implementing CAL on CLIP with noise-added images in future work. --- Rebuttal 2: Comment: Hi Reviewer tsLt, Could you take a look at the authors' rebuttal and finalize your rating? Thanks, AC
Summary: This study proposes a reweighting strategy, namely contrastive alignment, to enhance model learning on visually correlated tokens. Specifically, the authors divided the tokens into three sub-groups, virtually related, non-related, and contradictory, and assigned different weights to each group. The weight is calculated based on the predicted contrastive logit (w/o visuals) and further post-processed by clamping and average pooling. This study is evaluated on vision question answering, image caption, and grounding tasks using two foundation model structures, LLaVA and Mini-Gemini. Strengths: -- The paper is well-written. -- Deploying a reweighting strategy on different types of tokens seems feasible for performance improvement. -- The division of token groups is straightforward yet reasonable. Weaknesses: -- This study does not compare with any baseline models within the topic of resisting noisy tokens, such as [*] and [**]. -- In [*], a study with similar aims was presented. In particular, they also studied the image-token consistency. Thus, detailed baseline comparisons, including overall performance and token consistency evaluation/comparison, are expected. -- The authors claim that the proposed strategy is simple yet efficient. However, it may rely upon several strong assumptions, such as high capability requirements of the pre-trained model. These assumptions are not presented, discussed, and validated in the paper. -- Weight distribution on different token sub-groups is not presented. This would contribute to the validation of weighting. [*] Gou, Yunhao, et al. "Leveraging per image-token consistency for vision-language pre-training." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [**] Wu, Cheng-En, et al. "Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels?." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: See above weakness. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: As I can see, this paper has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your constructive feedback and highlighting the strengths of our work. We appreciate the detailed insights and suggestions provided. Below, we address the key points raised in your review. > **W1: Lack of Baseline Comparisons** We are happy to include these papers in the related work, however, we would like to clarify that these works might not be suitable for comparison as baseline methods in this paper. First of all, the two methodologies mentioned are primarily **designed for visual-language representation tasks, focusing on classification and retrival tasks**. However, CAL is designed for generation task, so there are no common benchmarks for comparision. Secifically, Gou et al. (2023) use a method involving masked images and a **BERT-like** approach to identify salient tokens, which is not applicable to the **causal decoder-only large language models** used in our method. Wu et al. (2023) focus on enhancing the CLIP pretraining scheme, which is different from the multimodal generation tasks we address. Secondly, our method focuses on ensuring better alignment between visual and textual feature spaces by prioritizing visually correlated tokens. While our method does reduce the influence of noisy tokens, its primary aim is to diminish the impact of irrelevant tokens, which constitute a much larger proportion. Therefore, addressing noise tokens is not its central goal. > **W2: Strong Assumptions** Thank you for pointing out this concern regarding the assumptions in our proposed strategy. 1. The basic assumption of CAL is the **high image-text matching capability of Vision Transformer** and a **well-trained Large Language Model**, which are already facts in current structures of Generation VLMs (like LLaVA). The current standard practice in training VLMs primarily aims to map image features to the text embedding space. Before this alignment stage, the CLIP ViT is pre-trained on a billion-level dataset, and the LLM is trained on a trillion-level dataset. They are of great capability to encode images and deal with complex language tasks. In comparison, the data (about 1M samples in LLaVA) used during the VLM training stage mainly serves to align the feature spaces of the two models and to enable the model to answer image-related questions. 2. Another assumption of CAL is that **after training a simple projector only, the VLM is capable of distinguishing visually correlated tokens**. In the first phase of VLM training, it is common practice to freeze the ViT and the LLM, and only train a projector to align their features. In the second phase, we finetune the model using high-quality data to enable it to answer image-related questions. To validate this assumption, **we finetune CAL models using two types of pre-trained versions: one with the original pre-trained model (only train the projector) and one with the fully trained baseline model (after the finetuning phase)**. We then compared the performance of these two versions and found no significant differences in performance. We will include a more detailed discussion of these assumptions and their validation in the revised version of the paper to address this concern explicitly. | Model | Pretrain | ChartQA | Docvqa | SQA-I | COCO Caption | TextCaps | OCRBench | Refcocog val | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Baseline(LLaVA-Next-13B) | Original Pretrain | 63.8 | 78.4 | 71.8 | 118.5 | 118.2 | 553 | 79.8 | | CAL | Original Pretrain | 67.2 | 80.1 | 71.5 | 120.6 | 124.4 | 574 | 80.4 | | CAL | Baseline Pretrain | 66.3 | 79.5 | 72.4 | 116.7 | 123.8 | 581 | 80.2 | > **W3: Weight Distribution Not Presented** Thank you for your insightful comment on the weight distribution across different token sub-groups. We appreciate your recognition of our token classification approach. However, it is important to note that there is no existing labeled dataset that directly identifies these three types of tokens. Given the impracticality of manually annotating a large volume of data at the token level within a short time frame, we utilized the RLHF-V[1] dataset, **which includes annotations for incorrect words in captions**. Using this dataset, we conduct a statistical analysis and find that **the average logits difference (diff) for erroneous tokens is significantly lower than that for correct tokens**. This finding supports our hypothesis about the effectiveness of our weighting mechanism. We will include these details and present the weight distribution analysis in the revised version of the paper to provide a more comprehensive validation of our weighting approach. It is worthy to mention that since this dataset is specially collected, the number of false tokens is much larger than other tokens. (LLaVA-15) | Token Type | Token Num | avg_diff_value (7B model) | avg_diff_value (13B model) | | --- | --- | --- | --- | | False token | 344197 | 0.163 | 0.277 | | Correct token | 76783 | 2.271 | 2.10 | --- We hope that these clarifications and additional details adequately address your concerns. Thank you once again for your valuable feedback. [1] Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback --- Rebuttal Comment 1.1: Comment: The authors partially addressed my concerns. So, I increased my rate. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for raising your score. We appreciate your recognition of our work's potential and your valuable suggestions. --- Rebuttal 2: Comment: Hi Reviewer DwVB, Could you take a look at the authors' rebuttal and finalize your rating? Thanks, AC
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
Accept (poster)
Summary: This work presented a new consistency based framework for video diffusion model distillation. Specifically, the adversarial loss is leveraged to enhance the video quality and consistency distillation loss is performed in the motion embedding space to learn the video motion patterns effectively. In addition, the authors proposed mixed trajectory distillation to ensure better alignment between training and inference phases. The experimental results demonstrate the proposed approach could produce more visual-pleasing results compared with previous distillation methods. Strengths: 1. The proposed disentangled motion-appearance distillation is reasonable and effective. 2. The generated results in Fig. 5 and supp are very promising. 3. The quantitive comparisons in Table 1 and 2 are convincing. Weaknesses: 1. The adversarial loss is not stable and could the authors employ other manners such as perceptual loss? Technical Quality: 3 Clarity: 3 Questions for Authors: Could the proposed algorithm achieves satisfactory performance on other video generation tasks such as StableVideoDiffusion (Image-to-video) and AnimateAnyone (human video generation)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The proposed algorithm is not evaluated on high-resolution video generation diffusion models such as (1024x576 or 768x768). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your recognition of our novel contributions and promising results. We follow your advice to conduct the following set of experiments. **Weakness 1: Perceptual loss instead of adversarial loss.** Following your suggestion, we conducted experiments comparing perceptual loss with our approach, using ModelScopeT2V teacher on the WebVid mini val set. The results are summarized in the table below. Our observations are: The adversarial loss outperforms using 1 and 2 sampling steps, benefiting low-step sampling by producing sharper details. Both losses perform similarly using 4 and 8 steps. We do not observe apparent training instability using adversarial loss, thanks to the well-trained feature extractor DINO v2 and the discriminator gradient penalty loss. We will add more analysis and details in the revised version. | Method | FVD @ 1 step | 2 steps | 4 steps | 8 steps | CLIPSIM @ 1 step | 2 steps | 4 steps | 8 steps | | ---------------- | ------------ | ------- | ------- | ------- | ---------------- | --------- | --------- | --------- | | Adversarial loss | **703** | **650** | 767 | 760 | **28.70** | **30.37** | **30.90** | **30.77** | | Perceptual loss | 826 | 693 | **748** | **742** | 27.31 | 29.32 | 30.47 | 30.71 | **Question 1: Results on image-to-video and human video generation.** Following your advice, we conduct the following image-to-video distillation experiment using Stable Video Diffusion (SVD). The quantitative results on MSRVTT are listed below. We make the following observations. We outperform Euler and AnimateLCM on FVD across most sampling steps. Compared to AnimateLCM, which finetunes the entire SVD (1.6B parameters), we achieve better performance by training only 5.9% of parameters using a lightweight LoRA (95M parameters). We will add these results to our revised manuscript and make our SVD-based MCM checkpoint publicly available. | Method | # Params | FVD @ 1 step | 2 steps | 4 steps | 8 steps | | -------------- | -------- | ------------ | ------- | ------- | ------- | | Euler | - | 1639 | 1633 | 1268 | 1043 | | AnimateLCM | 1.62B | 772 | **442** | 253 | 242 | | **MCM (ours)** | **95M** | **749** | 463 | **246** | **235** | Due to limited rebuttal time, we demonstrate our method's compatibility with ControlNet for controllable human video generation. Examples are provided in the attached PDF. This integration highlights our MCM’s versatility and potential for broader applications. We will include additional results in the revised version. **Limitation 1: High-resolution video generation evaluation.** Following your advice, we conducted high-resolution video generation evaluations at 768x768 resolution, using our AnimateDiff-based MCM on the WebVid mini val set. The results below demonstrate that our MCM achieves state-of-the-art performance in high-resolution video generation. We've included promising examples of these high-resolution outputs in the PDF attached to our general response. | Method | FVD @ 1 step | 2 steps | 4 steps | 8 steps | CLIPSIM @ 1 step | 2 steps | 4 steps | 8 steps | | --------------------- | ------------ | -------- | ------- | ------- | ---------------- | --------- | --------- | --------- | | DDIM | 5364 | 2654 | 1378 | 973 | 20.23 | 20.53 | 23.67 | 28.93 | | DPM++ | 2371 | 1208 | 973 | 990 | 21.85 | 24.81 | 28.63 | **30.15** | | LCM | 1273 | 1065 | 979 | 986 | 27.81 | 28.90 | 29.87 | 29.93 | | AnimateLCM | 1673 | 1348 | 1078 | 979 | 25.03 | 27.72 | 29.01 | 29.23 | | AnimateDiff-Lightning | 1374 | 1367 | 1297 | 1370 | 28.34 | 29.22 | 29.78 | 29.89 | | **MCM (ours)** | **1108** | **1037** | **962** | **897** | **29.86** | **30.21** | **30.75** | 29.98 | --- Rebuttal Comment 1.1: Comment: Thanks for the response. My concerns have been addressed well.
Summary: This paper proposed a single-stage video diffusion distillation method that can disentangle motion and appearance learning, thus improving frame appearance using various high-quality image data. The proposed mixed trajectory distillation mitigates the training-inference differences in terms of video quality. The extensive experiments demonstrate the superior performance in enhancing frame quality in the video diffusion model. Strengths: 1. The proposed disentangled motion distillation and mixed trajectory distillation are intuitive and novel. 2. The experiments are thorough. They are conducted across various datasets and show superior results in terms of video diffusion distillation. The ablation study shows the effectiveness of the proposed disentangled motion distribution and mixed trajectory distribution modules. 3. The paper is well-written and easy to follow. Weaknesses: 1. Motion jittering in the supplementary video. It's probably caused by the teacher model but the authors could better discuss the way to alleviate it. 2. In Fig. 6, there is no caption to indicate which one is the result of the proposed methods and which one is the designed two-stage baseline. What are the differences between the first row and the second row? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. In Fig. 6, why do the "Ours w/ Webvid" results also have watermarks? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of this paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our MCM’s contributions and providing constructive feedback. We respond to your concerns below. **Weakness 1: Motion jittering.** Thank you for the feedback! The motion jittering is largely inherited from the teacher model (the same phenomenon was also reported in AnimateDiff-Lightning [35]). To mitigate this, we could use the following methods. - Increase the hyperparameter $\lambda_{\text{real}}$ in mixed trajectory distillation, so that MCM could learn more information from real videos instead of jittering teacher output. - Apply additional temporal losses to maintain stable video outputs, such as constraints on the brightness or optical flow changes. We will add additional discussion in the revised version. **Weakness 2: Fig. 6 caption.** Our Fig. 6 shows only our MCM results adapted to different image dataset styles, not two-stage results. The first and second rows represent the first and last video frames, respectively, similar to Fig. 5. We'll clarify this in the revision. **Question 1: Watermark in “Ours w/ WebVid” in Fig. 6.** Thank you for noticing the watermark. This is because all WebVid videos contain “shutterstock” watermarks. We use WebVid for a fair comparison with ModelScopeT2V-based video diffusion distillation methods, such as DDIM, DPM++, and LCM. We will elaborate more on this in the revised version. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response. It has addressed my concerns.
Summary: This paper proposes a video diffusion distillation method that disentangles motion and appearance learning. Basically, it proposes to enhance the appearance generation with high-quality image data and distill motion knowledge from the video teacher model. Strengths: 1. The proposed method can distill motion knowledge from video diffusion models and improve the appearance quality through disentangled motion distillation. 2. The mixed trajectory distillation is proposed to improve training-inference alignment and enhance generation quality. 3. This paper is technically clear and the organization is good. Weaknesses: 1. The introduction of gaps between the training and inference distillation inputs is not so straightforward. 2. The introduction to related work needs to be significantly enhanced, especially in terms of the idea of decoupling appearance and motion, which is no longer uncommon and has many related works. 3. It simply provides the conclusion that "learnable representation works the best" without giving specific analysis as to why. Such an analysis may be more helpful for following research. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Broader impacts and limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for identifying our key strengths and providing insightful comments. We address your concerns point by point. **Weakness 1: Gaps in training and inference distillation inputs.** Thank you for raising the concern! There exist two major gaps in the mixed trajectory distillation. - Distribution mismatch gap. During training, we assume all inputs are noisy low-quality videos; during inference, the input will be noisy high-quality videos. - Information leakage gap. During training, the original consistency distillation takes as input noisy videos that contain ground-truth video information; during inference, the sampling will start from pure noise, containing zero signal. Our mixed trajectory distillation simultaneously addresses these two problems. We will make this clear in the revised version. **Weakness 2: Related work in decoupled appearance and motion.** Thank you for the suggestions! While disentangle appearance and motion is well-established in video understanding, our work addresses unique challenges in video diffusion distillation. - Video diffusion models suffer from long sampling time due to the additional temporal dimension. - Popular open-source video datasets contain low-quality frames, such as watermarks, motion blur, and low resolution. - Our MCM simultaneously achieves video diffusion acceleration and frame quality improvement by leveraging additional high-quality image data. - We introduce disentangled motion distillation and mixed trajectory distillation to overcome specific challenges in video diffusion models. We will expand our related work section to include additional references and contextualize our contributions in video diffusion distillation. **Weakness 3: Analysis on learnable motion representation.** Thank you for the advice! Our Table 5 shows that most representations, except latent low-frequency components, reduce FVD and improve CLIPSIM at low sampling steps. This indicates that disentangling motion from the raw latent space for consistency learning enhances video quality. The learnable motion representation performs best, outperforming handcrafted ones. We attribute this to its ability to: - Adaptively learn optimal motion features, capturing complex motion patterns that generic handcrafted representations might miss. - Better separate motion from appearance, reducing conflicts in learning high-quality frames. This learned disentanglement allows for more effective motion consistency modeling while preserving frame appearance quality. We will elaborate more on this in the final version. --- Rebuttal 2: Comment: The authors have addressed most of my concerns. However, the response to Weakness 2 is still very sketchy. I suggest the authors provide more detailed discussions in the revised paper. I decide to keep my initial positive rating.
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for identifying our novel contribution in video diffusion distillation, promising qualitative and quantitative results, and well-written paper. We address all concerns in the individual responses below. Please find the attached PDF file for our additional qualitative results. All training/inference code and model checkpoints used in our original manuscript and rebuttal will be made publicly available. Pdf: /pdf/f2ccaea19b3e1cf763fc952ab43fc3226a1ef22b.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
GaussianCut: Interactive segmentation via graph cut for 3D Gaussian Splatting
Accept (poster)
Summary: The paper proposes a method for interactive multiview segmentation of scenes represented as a set of 3D gaussians (3D gaussian splatting). The method takes as an input multiple views of a scene, constructs 3D Gaussian splatting representation of the scene, takes interaction from the user (scribbles, points, or text prompts), uses a video segmentation method (prior work) to get object masks from user input in each view, and then constructs a graph with nodes corresponding to the gaussians. A binary energy function on this graph is designed which encourages similar gaussians (both in space and color) to be connected with edges of higher weight. Also for each Gaussian there is a likelihood term representing how likely it is to be foreground/background. Finally this energy function is optimized with a graph cut and gives a separation of the cloud of gaussians into two sets: foreground/background. After separation in 2 sets, foreground/background can be visualized separately using the corresponding set of Gaussians. Experimental results outperform prior work. Strengths: The proposed method is well motivated. The energy function makes sense. Weaknesses: Computational cost is not real time. Mapping user input to gaussians could be better described. For someone not familiar with 3D Gaussian splatting prior work, it is hard to understand how it works. An illustration could help. The cluster similarity in unary terms (Eq. 3) seems to have a big impart on the performance, but is kind of ad-hock not so well motivated. The problem might be a result of 'shortcutting' bias in the energy formulation (cuts separating a small component are cheaper). What you are doing here is kind of reminiscent of saying that pixels close to the foreground scribbles should be more likely in the foreground (if I think of an analogy to 2D segmentation). Technical Quality: 3 Clarity: 3 Questions for Authors: Without the mask refinement method (just from user scribbles), by how much the performance goes down? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and kind words about our work. > Computational cost is not real time. The reviewer is right in noting that the graph cut algorithm does not run in real-time. However, once segmented into foreground and background Gausisans, it can be rendered in real-time. We kindly refer the reviewer to Table 2 (global response) to see a detailed analysis of pre-processing, training, and segmentation time for our method. Since our method does not perform changes to the 3DGS optimization, the overall time is much less than the baselines. Exploring real-time segmentation is an exciting future direction for this project. > Mapping user input to gaussians could be better described. In 3DGS, we can keep track of the Gaussians splatting to a particular pixel in 2D while rasterizing from that viewpoint. So for a particular Gaussian, we can compute the ratio of pixels splatted in the foreground (as per the 2D mask) and the total number of pixels splatted by that Gaussian. This gives us the coarse estimate. Thank you for highlighting this. We will add an equation and an illustration to make this more clear to the reader. > Intuition for the cluster similarity term The intuition behind adding a cluster similarity term is to improve the accuracy of foreground identification. If a Gaussian is in a region with other Gaussians that are likely foreground, it might also be foreground. We start with a coarse estimate of each Gaussian's likelihood of being foreground or background, derived from coarse splatting. However, this initial estimate can have errors due to inaccuracies in the 2D segmentation masks. Nodes with high confidence (weights $w_g$ close to 1) of belonging in the foreground can serve as prototypes for other similar nodes. Directly finding the closest high-confidence node for each Gaussian is computationally expensive. Therefore, we cluster these high-confidence nodes to reduce the computational load. We also present an ablation study on the number of clusters in Table 8. After experimenting with various heuristics, this clustering approach proved to be quite effective. >Without the mask refinement method (just from user scribbles), by how much the performance go down? We kindly refer the reviewer to Figure 12 which shows this effect qualitatively. If we do not use any 2D mask (just rely on the user scribbles), we only get the area around the scribble from coarse splatting. Applying graphcut has much more significant benefits in this case. We show the quantitative results on five scenes from the LLFF dataset (scribbles following NVOS bnchmark). The numbers in the table are IoU, classification accuracy. | Scene | Scribbles | Scribbles (with graphcut) | GaussianCut | |----------|------------------|--------------------------|--------------------| | Fern | 8.17, 74.50 | 47.97, 84.52 | 83.06, 94.60 | | Flower | 7.48, 79.16 | 85.30, 97.61 | 95.37, 98.91 | | Fortress | 15.12, 84.15 | 95.67, 99.56 | 97.95, 99.61 | | Trex | 6.74, 88.53 | 50.44, 91.96 | 83.43, 97.83 | | Orchids | 6.17, 84.84 | 85.25, 97.55 | 95.80, 99.31 | The performance drop is natural as the default implementation uses masks from multiple views but graph cut on 3D Gaussians can still retrieve major parts of the objects even with just the scribbles. We can include more qualitative results in the appendix to show this.
Summary: The paper proposes GaussianCut for interactive multi-view scene segmentation using 3D Gaussian Splatting. It first accepts user input (clicks, scribbles or text, similar to SAM) on single images, and them aim to segment the corresponding 3D Gaussians. The method constructs a graph based on scene Gaussian, and then using graph-cut algorithm to minimize the energy function. Segmentation tracking method are employed to provide the initial segmentation masks. The experiments are performed on LLFF, Shiny, SPIn-NeRF and 3D-OVS. Strengths: 1. The paper has a good presentation and clear writing, where figures are clean for the readers to understand the method pipeline. 2. The paper has a good motivation on extracting objects from explicit 3D Gaussians. Graph cut is well known for image segmentation on pixels, and the extension to 3D Gaussians is interesting. 3. The quantitative results in Table 1, 2, 3 and 4 show the reasonable segmentation performance brought by GaussianCut. Weaknesses: 1. Although extracting objects from explicit 3D Gaussians has a good motivation, this has been studied in previous works like LangSplat [a] and Gaussian Grouping [b], but these two very relevant methods are missing quantitative comparisons in the main paper. It is not clear what are the advantages of the proposed Gaussian Cut comparing to [a] and [b]. In Table 10 (should move to the main paper), [a][b] are compared but the performance of Gaussian Grouping is still on par or even better than the proposed GaussianCut. To understand the paper novelty, the paper should clarify these main differences / advantages in the main paper, and also include the detailed running speed comparison. 2. Using video tracking masks to obtain coarse segmentation is not new, which has been explored in [b, c], but not compared. Since [a][b] both lift SAM's masks to 3D, these 2 methods can also perform click/scribble based 3D segmentation 3. The limited performance improvement of the introduced spatial, color and cluster similarities in Table 5, which show the improvement of the proposed n-links and t-links are minor. Also, the extension of Graph cut from 2d pixels to 3D Gaussians seems very straight-forward. 4. More comparisons on benchmarks like Replica (setting proposed by Panoptic Lifting [d]) or Lerf-Mask [b] are desired. [a] Langsplat: 3d language gaussian splatting. CVPR, 2024. [b] Gaussian Grouping: Segment and Edit Anything in 3D Scenes. ECCV, 2024. [c] CoSSegGaussians: Compact and Swift Scene Segmenting 3D Gaussians with Dual Feature Fusion. arXiv, 2024. [d] Panoptic Lifting for 3D Scene Understanding with Neural Fields. CVPR, 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: How will GaussianCut perform when video segmentation/track fail due to large motion? which may lead the coarse segmentation contain large portions of errors. How many Gaussians are considered during the graph construction? The most concerning part for me is the tech novelty of the paper. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitation is discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for accessing our work and providing valuable feedback. We provide clarifications to the concerns and questions they raise. > Justify the tech novelty. Graph cut from 2d pixels to 3D Gaussians seems very straightforward. Extending graph cut from 2D pixels to 3D Gaussians involves several non-trivial design choices and findings. We kindly refer the reviewer to the global response. > Comparison with LangSplat and Gaussian Grouping and detailed running speed comparison We kindly refer the reviewer to Table 1 (global response) for comparison with LangSplat and Gaussian Grouping, and a detailed speed comparison in Table 2 (global response). For the 3D-OVS dataset, we provided a comparison with these two baselines (Table 10). We would like to clarify that the last line of Table 10 is the average over the Lawn scene (not the overall average). Our overall average is much higher than other baselines as shown below. A reason behind this is that Gaussian Grouping and LangSpalt optimize features for all the Gaussians and it therefore limits interactivity with specific objects. Interactivity here refers to the choices made when selecting the 2D supervision mask. Once a supervision mask is chosen, the underlying feature field is optimized based on that. On the other hand, for GaussianCut, the 2D masks are chosen based on the user input. Overall average on the 3D-OVS dataset. The reported metric is IoU. | 3D-OVS | Gaussian Grouping | LangSplat | CGC | Ours | |---|----|--|-------|-------| | Average IoU | 82.92 | 67.78 | 87.50 | 94.38 | Since the code for CoSSegGaussians is not public yet, we can’t provide a comparison against that. > Dependence on video segmentation model We use video segmentation models for better performance, but our method can work with just one mask as well (Table 4 and Figure 1 in the rebuttal pdf). While our model performance does suffer when using a single mask, it can still perform reasonable segmentation. Langsplat, Gaussian Grouping, CoSSegGaussians, however, require a segmentation mask for all training views. > Limited improvement from proposed n-links and t-links Results in Table 5 show the average performance on 7 scenes. While the effect of each individual component might not seem a lot on average, the effect can be significant depending on the scene. We show two scenes from the SPIn-NeRF dataset where removing the n-links can have a significant effect. GaussianCut also retrieves fine details (as shown in Figure 4 and 10) which are otherwise missed when just using the semantic maps. The numbers in the table are IoU / classification accuracy. | Scene | Coarse | w/o n spatial | w/o n color | w/o cluster | GaussianCut | |--------|----------------|----------------|--------------|----------------|-----------------| | Lego | 88.03/99.63 | 88.43/99.66 | 88.52/99.66 | 89.18/99.69 | 89.18/99.70 | | Truck | 93.32/97.76 | 95.47/98.49 | 95.60/98.54 | 95.67/98.57 | 95.70/98.60 | > Comparison on more datasets like Replica and Lerf-mask We thank the reviewer for suggesting additional datasets. We show quantitative results on four datasets: LLFF from NVOS, SPIn-NeRF, 3D-OVS, and Shiny, and have also compared performance against SA3D, ISRF, SAGA, SAGD, LangSplat, Gaussian Grouping. In addition, we take scenes from mip-NeRF and LERF (Figure 8) to show qualitative results. If needed, we can provide quantitative results on more benchmarks for the revised paper. >How will GaussianCut perform when video segmentation/track fails due to large motion? which may lead the coarse segmentation containing large portions of errors. We kindly refer the reviewer to Figure 5-7. We have also added extreme case (no video segmentation model) qualitative result in Figure 1 (rebuttal pdf). The quantitative results (IoU / classification accuracy) are shown below: | Scene | Single Image (Coarse) | Single Image (graphcut) | GaussianCut | |--------|-----------------------|-------------------------|--------------| | Truck | 55.63 / 83.37 | 71.83 / 90.23 | 95.7 / 98.6 | | Lego | 72.92 / 98.88 | 79.98 /99.26 | 89.2 / 99.7 | > How many Gaussians are considered during the graph construction? We consider all the Gaussians to construct the graph. The number is same as base 3DGS model, ranging from ~850k to 4M for all of our scenes (time taken between 40 sec to 4 mins). --- Rebuttal Comment 1.1: Comment: I read and appreciate the authors' response to my review. After thoroughly considering the feedback from the other reviewers, I am inclined to uphold my original score as "borderline reject" due to the paper's unclear tech contributions in the submitted paper writing and limited performance improvement. To understand the paper novelty, in the next revision, the paper should highlight / clarify the main differences / advantages to existing works in the main paper, and also include the detailed running speed comparison in the paper as well. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their suggestion. As per the reviewers' suggestion, we did a detailed run-time comparison against the feature learning methods during rebuttal. We initially did not compare the runtime against feature-based methods as we operate directly on a pre-trained 3DGS model. Since none of the published prior work do a training-free segmentation of 3DGS model, we did not have a direct baseline to compare against. To make our contribution clear, we have highlighted that our approach is training-free in the abstract (line 15), in Figure 1, in the method section (lines 125-126) and the discussion section (lines 319-320). We have differentiated our work from feature-based methods (lines 84-85 and lines 99-100) in the related work section. While we work on improving our work, we were curious to know what other running speeds the reviewer is interested in seeing beyond the ones provided in this discussion period? Regarding the limited performance improvement, we would again like to highlight that for a training-free approach to perform on par or even better than training-based baselines is not only novel but also surprising. As for improvement, we show a considerable improvement on 3D-OVS dataset (+6.88 absolute IoU gain) and a smaller improvement on NVOS (+1.6 absolute IoU) and SPIn-NeRF (+0.5 absolute IoU). The reason for this is that the performance on the two latter benchmarks is already pretty high even for the baselines and thus, the room for improvement is smaller as this benchmark is becoming saturated.
Summary: This paper presents GaussianCut, a new method for foreground/background segmentation of 3D Gaussian scenes. GaussianCut relies on 2D image/video segmentation masks, which are generated for a subset of the training images. GaussianCut propagates these masks to the 3D Gaussians by projecting each Gaussian onto each mask and averaging the corresponding mask values. It then assembles the Gaussians into a graph where each Gaussian is connected to its nearest neighbors, defines several energy terms (for nodes and edges), and uses these energy terms alongside a graph cut to partition the Gaussians. Unlike several baselines, GaussianCut can be applied to pre-trained 3D Gaussian scenes without retraining/fine-tuning. Strengths: - In Figure 4, the qualitative results of the proposed method look much sharper than those of the baselines. Also, the quality metrics for the proposed method improve over the baselines slightly. - Figures 5 and 7 make it clear that the proposed method is fairly robust to bad 2D input segmentations. - The paper is clear and easy to follow. - The proposed method is simple. Weaknesses: - One major weakness is that none of the tables include the runtimes of the proposed method and the baselines. This is important because segmentation is arguably much more useful if it runs at interactive speeds. The paper also does not list the runtime of the simpler coarse splatting variant/ablation. I think it is crucial for the paper to list both the training-time overhead and segmentation runtime of each method. - Another major weakness is that the paper includes/omits baselines in the tables very inconsistently. For example, SAGD and ISRF are featured in Figure 4, but not in Table 3. On the other hand, MVSeg and SAGA are featured in the table, but not the figure. I would be much more confident in the results if the baselines were always included (when appropriate). Overall, I would be happy to raise the paper's score if the above two major weaknesses were addressed. My score for contribution is held back by the fact that some of the baselines (e.g., SAGA) are significantly faster to run despite having similar quality. - Another smaller weakness is that the paper does not include visualizations of the energy terms, high-confidence clusters, etc. These would be very helpful for building the reader's intuition for the method. Minor nitpicks: - Line 55/56: The following sentence should perhaps include citations: "Recent works have also explored segmentation with Gaussians as the choice for scene representation." - Line 101: I was confused about what is meant by "decomposing the boundary Gaussians." - Line 144: "Guassians" - Line 169: The graph is defined as $(|\mathcal{G}|, \mathcal{E})$. This should be $(\mathcal{G}, \mathcal{E})$ instead. - Line 170: The definition of the neighborhood is listed as $\mathcal{N} \subseteq |\mathcal{G}| \times |\mathcal{G}|$, but the neighborhood is defined in terms of nodes ($\mathcal{N} \subseteq \mathcal{G}$) and not edges (which would be $\mathcal{N} \subseteq \mathcal{G} \times \mathcal{G}$ anyways) in the next sentence. - Line 264: Units (dB) should be listed for the PSNR differences. - Line 296-297: The time cost for segmentation technically grows linearly, but the constant factor is big enough that this doesn't matter. I would update the sentence here to be more precise. - The single mask ablation (line 298) should probably be included in Table 5. Technical Quality: 3 Clarity: 3 Questions for Authors: - Line 171-172: The authors state that "Gaussians that map to the same object would be closer spatially." This seems reasonable, but isn't always the case. For example, dull specular highlights are often represented via transparent surfaces and "clouds" inside objects. Does the proposed method handle these cases well? - The proposed method has a number of hyperparameters. How sensitive is it to these hyperparameters? - How good is the coarse splatting baseline with well-chosen hyperparameters? I think visualizing a sweep of the threshold in the appendix would convince the reader that the proposed approach works better no matter what threshold is chosen. - How are the frames ordered before they are passed to the video segmentation model? How sensitive is the method to this ordering? - To what extent does the proposed method's performance rely on SAM-Track's quality? Do any baselines rely on different video segmentation models, and if so, how do the metrics change when those are updated to use SAM-Track? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We have provided clarifications throughout and added additional experiments to address the concerns they raised. We appreciate the reviewers’ attention to detail and will incorporate all the suggestions made by them. > Runtimes of proposed method and baselines We have added all the runtimes suggested by the reviewer and we kindly refer them to Table 2 (global response). > Inconsistent baselines We ran all the baselines on the SPIn-NeRF dataset (10 scenes). For ISRF, we get OOM issues at 1008 resolution for 360-degree inward scenes so we resize it to 1/4th resolution. | Method | IoU | Acc | |--|-|-| | MVSeg | 90.9 | 98.9 | | SA3D| 92.4 | 98.9 | | SAGD | 89.7 | 98.1 | | SAGA | 88.0 | 98.5 | | ISRF| 71.5 | 95.5 | | Coarse Splatting | 91.9 | 98.9 | | GaussianCut | 92.9 | 99.2 | For the qualitative results in Figure 4, since MVSeg (part of SPIn-NeRF) is designed for inpainting task, extracting the segmentation module was challenging. For SAGA, we run out of memory for the garden scene in RTX 4090 GPU. It's because the scene has considerably more images (185 images, ~4.4 million Gaussians) and SAGA stores a 32-dimensional feature for each Gaussian. Therefore, we were not able to include SAGA in Figure 4. We will also include all the baselines on all the datasets in the final version. > Visualizations of the energy terms We will be happy to provide visualizations that can improve the clarity of our method. However, it is not clear to us how energy function can be visualized. The weights are assigned on edges, not nodes, which makes visualization tougher. Also, since the energy function is not optimized over time (it is minimized using min-cut algorithms), we can not plot the overall energy either. We would like to request the reviewer to provide suggestions on how meaningful visualizations can be included. > I was confused about what is meant by "decomposing the boundary Gaussians." SAGD[a] also does not require any segmentation-aware training given an optimized 3DGS model. They decompose Gaussians (split a single Gaussian into two) that may lie at the boundary of an object > The single mask ablation (line 298) should probably be included in Table 5. We provide the ablation with a single mask for the NVOS benchmark below. We will also include this in Table 5 in the revised paper. | Method | IoU | |---|--| | Single mask | 86.6 | | Coarse | 91.2 | | GaussianCut | 92.6 | > "Gaussians that map to the same object would be closer spatially." assumption justification Our proposed method is scene agnostic, i.e., it works regardless of the distribution of the underlying Gaussians. With the recent advances in point-based and Gaussian-based techniques, these artifacts might be mitigated and since our model works on a pre-trained 3DGS, it can adapt to such advances. This assumption worked well for the all scenes we tested. > Sensitivity to hyper-parameters We share a default setting in the paper which performs reasonably on all our datasets. The sensitivity of each parameter can be very scene-dependent. For instance, in a scene where parts of an object have different colors, a very high weight on the color similarity can affect adversely. We show the effect of $\lambda$ (controls the pull of neighboring vs terminal edges) and $\sigma$ (decay constant of the similarity function) on two scenes. The reported metric is IoU. | Scene| $\lambda=0.5$ | $\lambda=1$ | $\lambda=2$ | $\lambda=4$ | |--|-|-|--|--| | Fortress | 97.67 | 97.99 | 97.95 | 97.80 | | Lego | 89.15 | 89.18 | 89.18 | 88.49 | | Scene | $\sigma=0.5$ | $\sigma=1$ | $\sigma=2$ | $\sigma=4$ | |--|-|-|-|-| | Fortress | 96.12 | 97.95 | 97.56 | 96.04 | | Lego | 89.20| 89.18 | 89.18 | 89.19 | > Coarse splatting baseline with well-chosen hyperparameters For the four 360-degree inward scenes in the SPIn-NeRF benchmark, we show a sweep of the threshold (default is 0.3). GaussianCut outperforms all the thresholds considered for coarse splatting. It is worth noting that adjusting the threshold of coarse splatting also improves the quality of graph cut (as we directly use these for terminal links weights). | Threshold | IoU | Acc | |-|-|-| | Coarse@0.1 | 88.47 $\pm$ 4.85 | 98.96 $\pm$ 0.53 | | Coarse@0.3 | 89.67 $\pm$ 3.18 | 98.94 $\pm$ 0.72 | | Coarse@0.5 | 87.76 $\pm$ 3.06 | 98.45 $\pm$ 1.50 | | Coarse@0.7 | 83.30 $\pm$ 6.04 | 97.58 $\pm$ 2.84 | | Coarse@0.9 | 72.13 $\pm$ 11.26 | 96.08 $\pm$ 4.69 | | GaussianCut w/ Coarse@0.3 | 90.55 $\pm$ 3.76 | 99.18 $\pm$ 0.41 | >How are the frames ordered before they are passed to the video segmentation model? How sensitive is the method to this ordering? We run the camera on a fixed trajectory to obtain rendering from different viewpoints (spiral trajectory for front-facing and round for 360-degree inward scene). We limit the number of frames to 30 for front-facing and 40 for 360-degree scenes. However, all the training images can also be used for coarse splatting (Table 6). Although it might not be preferred for scenes that have a large number of training images. SAM-Track is quite good for unordered multi-frame images as well. The results in Table 6 were obtained after directly giving all training images to SAM-Track. > To what extent does the proposed method's performance rely on SAM-Track's quality? Do any baselines rely on different video segmentation models, and if so, how do the metrics change when those are updated to use SAM-Track? All our experiments are based on SAM-Track and we did not experiment with another model. Our method can work on any video segmentation model and its quality can affect the final performance. Although we start with SAM-Track, our mask quality is improved significantly (Figure 11). [a] SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition, arXiV, 2024 --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing detailed runtimes, more baseline results, additional hyperparameter sweeps, and comparisons against the coarse splatting baselines. These additional details adequately address my concerns about the evaluation, and so I have raised my overall score from 5 to 6. I think the paper's direct impact may be limited by its less-than-interactive runtime and the fact that Gaussian segmentation is a very specific niche, but runtime could be improved in follow-up research on training-free Gaussian segmentation, and so I think the paper is worth publishing. I would encourage the other reviewers to consider raising their scores above borderline reject. While the proposed method has the disadvantage of not providing interactive segmentation speed, it takes a fundamentally different approach to segmentation compared to the baselines (training-based vs. training-free), and this is valuable in and of itself. The proposed method has the potential to serve as a stepping stone towards more practical segmentation approaches, and with the additional results the authors have provided, I think the reader will have a good sense of the method's strengths and weaknesses. In other words, although the proposed method is not perfect, I think the paper merits more than a borderline reject. --- Reply to Comment 1.1.1: Comment: Thanks for your kind feedback and for recognizing the value of our work!
Summary: This paper proposes GaussianCut for interactive 3D segmentation. The GaussianCut takes a trained 3DGS model and the user prompt as inputs. The SAM model first transfers the user prompt into an initial mask. Then the 3DGS model renders multiple view images and an existing video-tracking model is used for segmenting 2D masks across multiple views. With the masks on multiple views, the splatted Gaussians are identified with two likelihood parameters. Then the graph-cut method is applied for the Gaussian points where each Gaussian is a node, and the edge models the foreground and background relations. Results on multiple benchmarks show the effectiveness of the proposed method. Strengths: - This paper is well-written and easy to follow. - Using Graph Cut to segment the 3D Gaussians makes sense and sounds interesting. - The result on multiple benchmarks shows the proposed method achieves new SOTA performance. Weaknesses: - The proposed method is kind of straightforward. The framework of GaussianCut is a combination of all existing models. The SAM model is used to obtain masks from user prompts. The SAM-Track model is used to generate Multiview masks. 3DGS model can explicitly model relations between 3D Gaussians and 2D pixels. Graph Cut model is used to separate the Gaussian points. This paper should explain the key contributions of GaussianCut itself. - Details of associating 3D Gaussians with 2D masks should be given. As alpha blending will assign different weights to different Gaussians for a single pixel, For the Gaussians splatted into one pixel, are they assigned different weights just for this pixel? Or a hard (binary) assignment is used? - The time cost of the proposed method is shown in Tables 6 and 7. However, the comparisons with other SOTA methods in terms of time cost should be included to illustrate the speed advantage/disadvantage of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: The novelty of the proposed method should be justified. The proposed method is more like a post-processing. Some important method details and comparisons are missing. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review. We would like to respond to the questions and concerns that they have posed. > Key contributions clarification We kindly refer the reviewer to the global response which provides a detailed explanation of our key contributions. > Details of associating 3D Gaussians with 2D masks Gaussians are assigned coarse estimates based on their weights during alpha-rendering at rasterization, i.e., for the Gaussians splatted onto one pixel, it is assigned a different weight just for this pixel. This weight is based on its contribution at alpha blending. However, we had experimented with binary weights as well and we did not observe any major performance difference. The binary weights also give a similar performance in our experiments. We will include these details about the mapping of masks to the Gaussians in the revised paper. Thank you for the feedback. > Time cost analysis For a detailed speed analysis, we kindly refer the reviewer to Table 2 (global response). --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The reviewer still retains the following concerns: 1. Novelty. This paper extends the graph cut-based segmentation method into 3D Gaussians, which haven't been explored before. However, 3D Gaussians are essentially a bunch of 3D points. Using Graph cut for 3D segmentation including point cloud segmentation has been explored [1,2]. The current technological contribution is limited. 2. It's still unclear how the soft/hard weights are combined in the proposed method. Also, what are the experimental results of these two different choices? [1] Zhang, Zihui, et al. "Growsp: Unsupervised semantic segmentation of 3d point clouds." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [2] Guo, Haoyu, et al. "Sam-guided graph cut for 3d instance segmentation." arXiv preprint arXiv:2312.08372 (2023). --- Rebuttal 2: Comment: We thank the reviewer for the follow-up. > Coarse splatting We elaborate on the coarse splatting in further detail below: Consider an optimized 3DGS model for a scene, $\mathcal{G}$. For $n$ viewpoints, we obtain masks $\mathcal{M}$ := {$M^j$}$_{j=1}^n$ from a video segmentation model corresponding to $n$ viewpoints $\mathcal{I}$ := {$I^j$}$_{j=1}^n$ . $M^j$ indicates the set of foreground pixels in the viewpoint $I^j$. For each Gaussian $g \in \mathcal{G}$, we maintain a weight, $w_g$, that indicates the likelihood of this Gaussian belonging to the foreground. To obtain the likelihood term $w_{g}^j$ pertaining to mask $j$ for Gaussian $g$, we unproject the posed image $I^j$ back to the Gaussians using inverse rendering and utilizing the mask information, $$ w_g^j = \frac{\sum_{\textbf{p} \in M^j } \sigma_g(\textbf{p})T_g^j(\textbf{p}) }{\sum_{\textbf{p} \in {I}^j} \sigma_g(\textbf{p})T_g^j(\textbf{p})} $$ where $\sigma_g(\textbf{p})$ and $T_g^j(\textbf{p})$ denote the opacity and transmittance from pixel $\textbf{p}$ for Gaussian $g$. If $g$ does not contribute to $\textbf{p}$, the transmittance is taken to be $0$. Combining over all the masks, $$ w_g = \frac{\sum_{j}\sum_{\textbf{p} \in M^j } \sigma_g(\textbf{p})T_g^j(\textbf{p})}{\sum_j\sum_{\textbf{p} \in {I}^j} \sigma_g(\textbf{p})T_g^j(\textbf{p}) } $$ As mentioned in the paper, we use the same formulation as proposed by GaussianEditor [a] and kindly refer the reviewer to their paper for further details. For binary weights (hard assignment), we simply keep a count of the number of foreground and background pixels the Gaussian $g$ splats to in $I^j$. $$ w_g = \frac{\sum_{j}\sum_{\textbf{p} \in M^j } \mathbb{I}(T_g^j > 0) }{\sum_j\sum_{\textbf{p} \in {I}^j} \mathbb{I}(T_g^j > 0)} $$ This $w_g$ is used directly in Equation 3 in the paper. We show the IoU on several scenes comparing soft assignments and hard assignments. Since soft assignment has marginally better performance, it is our default implementation. | Scene | Soft assignment | Hard assignment | |--------------------|-----------------|-----------------| | Fern (NVOS) | 83.06 | 82.56 | | Fortress (NVOS) | 97.94 | 98.12 | | Leaves (NVOS) | 95.95 | 95.60 | | Lego (SPIn-NeRF) | 89.18 | 88.95 | | Pinecone (SPIn-NeRF)| 91.89 | 91.99 | > Novelty concerns Regarding the novelty, the graph construction proposed in our method is significantly different from the point cloud literature. Our proposed method is much simpler and both the approaches mentioned by the reviewer involve training networks to obtain the segments. - GrowSP: Unsupervised Semantic Segmentation of 3D Point Clouds: This method does not employ graph cut for segmentation. It learns per-point features, extracts superpoints, progressively grows those superpoints, and performs clustering for segmentation. - SAM-guided Graph Cut for 3D Instance Segmentation: While this method does use graph cut on point clouds, their formulation differs significantly from our approach. They apply graph cut on superpoints which are obtained from another pre-segmentation model. Therefore, when an object to be selected is part of a superpoint, this technique is unable to segment it out. Our approach, on the other hand, provides finer user control as we do not rely on any point cloud pre-segmentation module. Their primary method involves training a graph neural network (GNN) using high-quality pseudo labels whereas our approach is training-free. Their non-GNN baseline also requires masks from multiple views to assign edge weights. Our proposed approach can work even with a single mask. Moreover, since this method stores SAM features, adapting it for 3DGS would require a much higher memory footprint than our approach (please see discussion section). More generally, while some advancements in point cloud segmentation can be tranferrable for 3DGS, our coarse splatting (section 3.3) and terminal links (section 3.4) are heavily tailored for 3DGS. [a] GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting, CVPR 2024 --- Rebuttal Comment 2.1: Comment: Thanks for the detailed response. The reviewer appreciates it. Please include those details in the revised version. I will increase my rating accordingly. --- Reply to Comment 2.1.1: Comment: We thank the reviewer for the positive feedback and for expressing the intention to increase the rating. We appreciate the constructive review and will ensure that the discussed changes are included in the revised version of the paper.
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful and valuable comments and appreciate that they found our paper well-written, easy to follow (DeYw, 24kC, BR3k), and well-motivated (MadC, DeYw). We are delighted to see that the reviewers recognize the performance improvement of our training-free approach on 3D segmentation benchmarks (Qza2, BR3k, 24kC, MadC), and found the extension of graph cuts to 3D Gaussian Splatting interesting (BR3k, DeYw). We want to use this general response to clearly highlight our key contributions following the suggestions of reviewers BR3k and DeYw, and differentiate our work from feature-based learning approaches (Qza2, DeYw). We ran additional experiments to compare against feature-based baselines (LangSplat[a] and Gaussian Grouping[b]) following the suggestion of reviewers Qza2 and DeYw and show that our model outperforms these baselines. Additionally, we also show a detailed time analysis in Table 2 (global response) following suggestions from reviewers BR3k and 24kC. We found the feedback constructive and will happily incorporate changes suggested by the reviewers. > Key contribution (DeYw, BR3k) Our key contribution is in proposing a method for object selection that is training-free given a pretrained 3DGS model. - **Training-free**: Our method is a post hoc technique and unlike prior work on optimization of per-feature Gaussian [a,b], our technique saves significant optimization time (Table 2) and memory (as we only store one additional parameter per Gaussian). This is not only novel in the sense of a new capability but it is also a surprising result. Given the larger computational budget of training-based approaches, one would expect them to achieve higher performance. - **Extension of graph cuts to 3D Gaussians**: Our work also contributes to Graph cut-based segmentation research. Although thoroughly explored for image segmentation, the extension to 3D Gaussians is non-trivial and has not been considered in previous work. The energy function proposed in our work contains several non-trivial design choices, including, modeling distances to neighbors (which is typically not considered in images), and n-links and t-links design. - **Leverage underlying geometry information**: 2D semantic maps (like SAM), while very robust, can be inefficient at segmenting finer details (Figure 11). Gaussians optimized for a scene capture the geometry of the scene, and our method utilizes these (using the color and position similarity) to retrieve fine details. Feature field-based methods can also miss finer details (e.g., plant decorations in Figure 4). > Comparison against feature-based methods (Qza2, DeYw) Per-Gaussian feature optimization baselines[a-c] alter the fitting process of 3DGS by adding an additional attribute for each Gaussian. Below, we compare our approach with feature-based methods - **Use case**: Feature-based methods learn features for everything in the scene. While useful, it can limit the flexibility of interactivity with a single object. Our method is more flexible in choosing specific object(s) using positive/negative clicks, or scribbles as we generate the 2D masks after the user interaction. - **Optimization time**: The fitting time of feature-based methods (Table 2 global response) as well as the memory footprint of storing additional features increases significantly, which might not be desirable in all applications. - **Reliance on video segmentation model**: Since we use the 2D masks only while rasterization, we do not require masks from all the viewpoints and can also work with just a single mask (Figure 12, Table 4). However, feature-based models require masks for all training views. - **Complimentary rather than competing**: Rather than seeing feature-based methods as a replacement for our method, we see them as complementary. Our energy function can be modified to also include a feature similarity term in equation 2. We see this as an interesting extension of our work. Table 1: NVOS results using Gaussian Grouping (GG) and LangSplat (LS). Our approach gives an overall better performance. LangSplat also fails to give good segmentation results for two scenes (Figure 2 of rebuttal pdf). | Scene | IoU (GG) | IoU (LS) | IoU (ours) | |-|-|-|-| | Horns | 93.61 | 95.99 | 97.03 | | Fern | 87.06 | 83.97|83.07| | Orchids | 86.80 | 96.25| 95.81 | | Flower | 95.27 | 95.24 | 95.37 | | Leaves | 93.00| 29.26 | 95.96 | | Fortress | 97.06 | 97.70 | 97.95 | | Trex | 81.66 | 19.46 | 83.44 | | Average | 90.64 | 73.98 | 92.66 | > Run time analysis (BR3k, 24kC, MadC) We compare the run time of our method with [a-d]. We take an average of the run time over 7 scenes from the NVOS benchmark. We divided the run time analysis into 3 stages: pre-processing (this step involves obtaining features for images), fitting time (which involves optimizing the 3DGS/NeRF model), and segmentation time (time between obtaining user prompts and producing segmentation output). Since we operate directly on the pre-trained 3DGS model, our fitting time is significantly lower than other 3DGS-based approaches. Table 2: Segmentation time (in seconds) on NVOS benchmark |Method|Preprocessing time|Fitting time|Segmentation time|Performance (IoU)| |-|-|-|-|--| |SA3D (NeRF-based)|0|309.14 $\pm$ 18.85|33.89 $\pm$ 13.04|90.3 | |Gaussian Grouping|13.72 $\pm$ 4.63|2096.07 $\pm$ 251.96|0.55 $\pm$ 0.09|90.6| |LangSplat| 2000.34 $\pm$ 1222.19|1346.92 $\pm$247.00 |0.82 $\pm$ 0.02 | 74.0| | SAGA|71.17 $\pm$ 22.74|1448.50 $\pm$ 205.07|0.35 $\pm$ 0.05 |90.9| | Coarse Splatting| 6.11 $\pm$ 0.38 | 510.97 $\pm$ 106.42 |19.48 $\pm$ 4.31 | 91.2 | | GaussianCut | 6.11 $\pm$ 0.38 | 510.97 $\pm$ 106.42| 88.77 $\pm$ 33.68 | 92.5 | [a]Langsplat: 3d language gaussian splatting. CVPR, 2024. \ [b]Gaussian Grouping: Segment and Edit Anything in 3D Scenes. ECCV, 2024. \ [c]Segment Any 3D Gaussians. arXiV, 2023 \ [d]Segment Anything in 3D with NeRFs, NeurIPS, 2023 Pdf: /pdf/9d7db9a5547d4685be0ad56a8c69b409e67a2841.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper presents a method for segmentation in 3D Gaussian Splatting. Given a prompt in 2D, the proposed approach is capable of segmenting objects of interest from the 3D Gaussians. Specifically, the method first performs 2D segmentation in all training views, which are then propagated into 3D using a technique similar to visual culling. This achieves a coarse 3D segmentation in the initial stage. Subsequently, the method employs graph cuts to refine the 3D segmentation. The paper validates the method on multiple datasets using 2D segmentation metrics and demonstrates the potential of the proposed approach. Strengths: The paper proposes refining 3D segmentation via graph cuts using carefully-designed energy functions. This step is practical and effective. It’s surprising to see that the method, even without 2D mask supervision, can still outperform baselines like LERF. Weaknesses: The paper has several weaknesses as below: Writing: - The method is not well-motivated in the introduction section. Specifically, in lines 24-39, the problem defined in the paper is hard to understand. The authors might want to rewrite this section to make the motivation clearer. - Part of the implementation is not clear to me. Specifically, I’m uncertain if the method is training-free for segmentation when provided with a pretrained GS model. If so, it would be beneficial to make this clear, as being training-free is a significant strength compared to other methods. Method: - I’m not entirely convinced by the training-free approach in the proposed method. In my view, learning a per-Gaussian feature for segmentation offers more flexibility. Therefore, I suggest the authors justify their method by demonstrating what the proposed method can achieve that learnable features cannot. Experiments: - The important comparison between baselines that use 3DGS such as GaussianEditor and LangSplat. The paper only reports the baselines that use NeRF, which is generally worse than 3DGS in the segmentation task. In other words, the baselines are insufficient to justify the proposed method. - Both qualitative and quantitative results of 3D segmentation are missing in the result section. From my viewpoint, it’s essential to this paper since the proposed method aims to perform 3D segmentation. While 2D segmentation partially shows the performance of the proposed method, it’s not the correct metric to validate the proposed method. The paper should not avoid this evaluation because 3D GT segmentation is missing in the datasets used in the paper. In this case, I would suggest the author trying some other datasets like ScanNet where 3D GT is available. Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the concerns that I described above. During rebuttal, I would expect author justify better the proposed methods according to my suggestions I made above. Before, I currently vote for reject. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, they did it well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for their feedback on our paper. We thank the reviewer’s suggestion for helping to improve the clarity of our work > Clearer motivation for the introduction Our work is motivated by leveraging the explicit nature of 3DGS representation. With user inputs and the underlying geometry of the scene captured by the Gaussians, objects can be segmented from a scene, without requiring any additional optimization of the 3DGS model. We provide a clearer motivation below and we will include the changes in the revised version of the paper. 3D Gaussian Splatting represents a scene using a set of Gaussians, thereby offering an explicit representation of the scene. Prior works in 3DGS scene segmentation involve augmenting the set of Gaussian with a per-Gaussian feature, that is optimized during the Gaussian fitting, supervised by 2D features. These features provide semantic information and can be used for segmentation. Since 3DGS stores the parameters of each Gaussian explicitly, the size of the feature embedding exacerbates the already high memory footprint of the method. Therefore, such methods have relied on learning low-dimensional features per Gaussian. While this enables a 3D consistent segmentation, optimizing a per-Gaussian feature significantly increases the fitting time of the Gaussians to the scene. Our proposed methodology is a post hoc technique that operates directly on an optimized 3DGS field without requiring any segmentation-aware fitting. We directly tap into the representation and map each Gaussian to its corresponding object(s). We do this by formulating the Gaussians in a scene as an undirected graph and partitioning the set of Gaussians allowing for the extraction of subsets that represent specific objects. > Is the method training-free for segmentation when provided with a pretrained GS model? Yes, our approach is training-free when provided with a pre-trained GS model. We have highlighted this in the abstract and we kindly refer the reviewer to Figure 1 which shows that GaussianCut operates directly on a pre-trained GS model and user inputs. We will also modify our writing to explain this more clearly. To summarize our implementation, we take an already optimized GS model with user inputs on any one image. The user input is processed into dense masks and the masks are also propagated to multiple images using a video segmentation model. We do a "coarse splatting", which assigns each Gaussian a likelihood ratio, which is the fraction of pixels splatted in the foreground (as per the 2D mask) and the total number of pixels splatted by that Gaussian. This step does not require any further GS optimization. It is computed by simply rasterizing each view that has a corresponding 2D segmentation mask. Finally, we formulate a graph from the Gaussians that uses this additional likelihood term and the inherent properties of the Gaussians already learned in the initial fitting. As mentioned by the reviewer, this is indeed significant as it provides segmentation without requiring any changes to the fitting process. As noted in Table 2 (global response), it also saves substantial time compared to optimizing features. > Comparison with other 3DGS and per-Gaussian feature baselines We thank the reviewer for suggesting additional baselines. We would like to clarify that we do provide 3DGS-based baselines. We provide SAGA[a] and SAGD[b] results on the LLFF and SPIn-NeRF datasets (Tables 1, 3, 9), both of which are based on 3DGS. We also compare against Gaussian Grouping, LangSplat, and CGC (all of which are based on 3DGS) in Table 10 (appendix). We had not compared against them on all datasets because our method is focused more on interactive segmentation and these methods propose learning a feature field for all the scene elements (a more detailed distinction between these baselines and our method is provided in the global response). Based on the feedback, we have reported Gaussian Grouping and LangSplat comparison on the NVOS benchmark in Table 1 (global response). > 3D evaluations missing We completely agree with the reviewer that 3D GT is indeed a better metric to evaluate 3DGS segmentation. As noted by the reviewer, most datasets and approaches do not provide such ground-truth and benchmarks for this evaluation. SPIn-NeRF, Shiny, and 3D-OVS, while not 3D consistent, provide masks for multiple views to show the efficacy of methods to some extent. Obtaining the ScanNet dataset requires approval from the authors, which could take up to a week. Since we did not have that much time and the 3DGS baselines (SAGA, LangSplat, Gaussian Grouping) have a larger runtime, especially for scenes with a higher number of images, we could not include the quantitative results for the rebuttal. We will include results from Replica or ScanNet in the final paper. [a] Segment Any 3D Gaussians. arXiV, 2023 \ [b] SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition, arXiV, 2024
null
null
null
null
null
null
ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Accept (poster)
Summary: The paper proposes to augment hallucination annotation dataset and improve the performance of the hallucination annotator simultaneously in an iterative self-training framework. During each iteration, they use a Expectation-Maximization Algorithm for data annotation and hallucination annotator training. The trained hallucination annotator can be further used for downstream tasks such hallucination evaluation and hallucination mitigation. Extensive experiments were performed to show the superiority in hallucination detection and the effectiveness of the proposed approach. Strengths: - The paper is very easy and pleasant to read. Each section and their subsections are well connected and explained without adding much redundant content. Figure 2 is especially illustrative. - I like how they explain a rather practical work in a theoretical manner using all the math equations in Section 3. It makes the work more sound with less confusions. - Although the model name is ANAH-v2, it actually brings big novelty compared to the original ANAH model. The model is in a self-training manner and the training prompt is expanded to do three tasks. - Large amount of experiments show impressive performance on hallucination detection. The thorough ablation study also shows the importance of each component. Weaknesses: - My biggest concern for this paper is that the model essentially does three things: Factual Existence Judgment, Reference Information Extraction, and Hallucination-Type Judgment. Each step relies on the results from previous step. For example, if the extracted reference information is limited, it will greatly impact the hallucination judgement as well. The experiments mainly focus on the third step without discussions on the previous two steps. So I’m not certain how stable the model actually is. - The hallucination mitigation is a little weak, it basically generates multiple responses and select the response with the least portion of hallucination sentences. There are multiple other ways to mitigate hallucinations that are not compared in Table 8. Technical Quality: 3 Clarity: 4 Questions for Authors: - Typo in ine 76-77, "we reduce the hallucination of the final LLM generations from 25% to 37%." - Line 156, when you use the majority vote to select the the most common hallucination type, wouldn't it be dominanted by "No Hallucination"? As you are also showing in Table 7, the hallucination rate for InternLM2 is less than 20%. If you generate n candidates (depending on how big your n is) and use a majority votes, not many hallucination types will be selected. - How many iterations did you do to get the dataset in Table 1? - In Table 2, any discussions on GPT4 having much better RougeL and BertScore? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations are discussed in Appendix which actually addresses some of the questions I was gonna ask, such as more benchmarks or more backbone models for the hallucination annotator. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comment. Following are our responses to each individual comment (which are highlighted in italics). ### **Response to Weakness1 about stability of the annotation steps:** > *My biggest concern for this paper is that the model essentially does three things: Factual Existence Judgment, Reference Information Extraction, and Hallucination-Type Judgment. Each step relies on the results from previous step. For example, if the extracted reference information is limited, it will greatly impact the hallucination judgement as well.* We acknowledge that there exists the possibility that such previous steps may have influenced the final judgement. Therefore, in addition to majority voting on the final step of data construction, we also perform consistency checks (lines 157-161) on the previous steps to ensure quality. Meanwhile, we utilize RougeL, Bertscore to evaluate the performance of Factual Existence Judgment (Step1) and Reference Information Extraction (Step2), and we find that these two metrics in Table 2 increase steadily as the iteration progresses. These evaluations can show the stability and effectiveness of previous steps. In addition, Hallucination-Type Judgment (Step 3) relies on the results from the previous steps. Therefore, the results for Step 3 (F1 and ACC in Table 2) can also reflect the robustness and accuracy of the previous steps. ### **Response to Weakness2 about mitigation:** > *The hallucination mitigation is a little weak, it basically generates multiple responses and select the response with the least portion of hallucination sentences. There are multiple other ways to mitigate hallucinations that are not compared in Table 8.* Our primary focus is on automatically scaling up the annotation dataset and building annotators. The mitigation section aims to show the potential of our annotator for mitigation, rather than to achieve SOTA results. Therefore, we only used a simple re-ranking strategy to mitigate the hallucination of LLM, and the promising results in Table 8 proved that our annotator could be used for hallucination mitigation. In the future, we will explore methods to apply our annotators in mitigation, such as RLAIF. ### **Response to Question1 about typo error:** > *Typo in ine 76-77, "we reduce the hallucination of the final LLM generations from 25% to 37%."* We use "reduce" because the metric NLI described here is inversely related to the level of hallucinations. For clarification, we will change the presentation to "we reduce hallucination, with the NLI metric increasing from 25% to 37% on HaluEval" in the next version. ### **Response to Question2 about balance:** > *Line 156, when you use the majority vote to select the the most common hallucination type, wouldn't it be dominanted by "No Hallucination"? As you are also showing in Table 7, the hallucination rate for InternLM2 is less than 20%. If you generate n candidates and use a majority votes, not many hallucination types will be selected.* To clarify, the hallucination rate of InternLM2 less than 20% in Table 7 is achieved at settings that provide reference when generating response. In the setting of "w/o reference", the hallucination rate of InternLM2 is ~80% and the responses both with and without reference are mixed together to construct the dataset. Therefore, although the majority voting may enhance such bias in the single setting, the overall dataset is balenced. For example, in the training data at Stage 2, the ratio of hallucinations to non-hallucinations is **52.53 : 47.47**. In addition, the results in Table 3 show our majority-vote method improves accuracy. We will add a corresponding discussion in the next version. ### **Response to Question3 about iteration:** > *How many iterations did you do to get the dataset in Table 1?* 3 iterations ### **Response to Question4 about GPT4 performance:** > *In Table 2, any discussions on GPT4 having much better RougeL and BertScore?* As we mentioned in footnote 2, GPT-4 pre-annotation in ANAH makes the RougeL and BERTScore higher than the zero-shot setting in this work. In order to analyze the causes of this phenomenon more clearly. During the construction of ANAH [1], GPT-4 is used for the initial pre-annotation. Subsequently, humans refine these pre-annotations, and humans tend to not change the pre-annotations. This methodology inherently aligns the final 'golden' answers closely with the outputs by GPT-4. In addition, we added an LLM-based Evaluation to exclude the similarity due to pre-annotations. Specifically, we use FactScore [4] to assess the consistency of generated reference points with the source document. Below is Table 2 from our paper, which contains the newly added metric FactScore (column 3). The results of the FactScore indicates that the reliability of our model's generated reference points progressively improves and ultimately exceeds that of GPT4. This trend is consistent with F1 and ACC, reflecting the reliability of the FactScore. | Model | F1 | ACC | **FactScore** | RougeL | BertScore | |----------------|-------|-------|---------------|--------|-----------| | GPT-4 | 87.11 | 86.97 | **84.39** | 86.32 | 96.21 | | ANAH-7B | 78.69 | 79.92 | **80.60** | 58.51 | 87.27 | | ANAH-20B | 80.49 | 81.01 | **81.51** | 58.82 | 88.44 | | ANAH-V2-Stage1 | 84.45 | 84.85 | **83.63** | 60.10 | 88.43 | | ANAH-V2-Stage2 | 87.75 | 88.18 | **84.54** | 67.28 | 90.80 | | ANAH-V2-Stage3 | 89.30 | 89.55 | **86.36** | 69.44 | 91.43 | We will add a corresponding discussion and evaluation results in the next version. [1] Ji Z, Gu Y, et al. ANAH: Analytical Annotation of Hallucinations in Large Language Models[J]. ACL, 2024. [2] Min S, Krishna K, Lyu X, et al. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation[J]. arXiv preprint arXiv:2305.14251, 2023. --- Rebuttal 2: Title: Please let us know if your concerns have been addressed Comment: Dear Reviewer i3ot, We would like to thank you for the thoughtful and constructive feedback and appreciate that you agree on the strengths of our paper. During the rebuttal, **we have provided more details and analysis to address your concerns.** As the discussion phase is nearing its end, we are warmly concerned whether our rebuttal addresses your concerns. **It would be appreciated if you could raise your score on our paper if we address your concerns.** We thank you again for your effort in reviewing our paper. Best regards, Authors of Submission 10034.
Summary: This paper introduces an iterative self-training framework to address hallucinations in large language models (LLMs), enhancing the accuracy of annotators and scaling up hallucination detection datasets. The framework utilizes the Expectation Maximization algorithm to progressively improve the hallucination annotator's performance by annotating a scaled dataset and training a more accurate annotator in each iteration. Experimental results demonstrate that the enhanced annotator surpasses GPT-4 in hallucination detection, achieving state-of-the-art results on HaluEval and HalluQA. Strengths: 1. The paper addresses a unique perspective on the issue, as many current works focus on hallucination detection and mitigation. The focus on automatically constructing high-quality hallucination datasets is crucial for addressing hallucinations in large language models. This is an important and valuable contribution to the field. 2. The structure of the paper is clear, and the writing is concise. The experimental section is detailed and thorough, enhancing the credibility of the results. Weaknesses: 1. The rationale behind using the EM algorithm to solve this problem is not clearly articulated. What considerations led to this choice? 2. How does the paper ensure that the EM algorithm converges through iterations? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The difficulty in automatically constructing high-quality hallucination datasets is not clearly explained. The relevant works in this area, the unresolved issues, and why these difficulties persist are not thoroughly discussed by the authors in the paper. 2. The rationale behind using the EM algorithm to solve this problem is not clearly articulated. What considerations led to this choice? 3. How does the paper ensure that the EM algorithm converges through iterations? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful comments. Following are our responses to each individual comment (which are highlighted in italics). ### **Response to Weakness1 about contributions:** > *The description of the methodological contributions in the paper is not very clear. The EM algorithm and the self-consistency method mentioned in the article are existing works.* Our primary contribution is an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset and improves the accuracy of the hallucination annotator. Within this framework, the EM algorithm serves as our theoretical foundation. Additionally, we identified the need for a pipeline during the E-Step to produce more stable output. So we selected self-consistency as a method to achieve this stability. To the best of our knowledge, we are the first to present work on the automated construction of large-scale hallucination datasets. Moreover, the large-scale hallucination dataset and high-precision hallucination annotation model that we finally obtained can serve as the foundation for more research in the future, which we believe is very meaningful. ### **Response to Weakness2 and Question1 about the difficulty of automatic data construction:** > *The difficulty in automatically constructing high-quality hallucination datasets is not clearly explained. The relevant works in this area, the unresolved issues, and why these difficulties persist are not thoroughly discussed by the authors in the paper.* The difficulty of automatic hallucination data construction is the low performance of automatic annotators. The relevant works in this area, the unresolved issues, and why these difficulties persist were discussed in the Introduction (lines 32-35) and Related Work (lines 94-100). To solve these difficulties, we first proposed a progressively self-iterative labeling method that automatically scales up the dataset of fine-grained hallucination annotations, and we also proved the effectiveness of our method. ### **Response to Weakness3 and Question2 about the reason for using EM:** > *The rationale behind using the EM algorithm to solve this problem is not clearly articulated. What considerations led to this choice?* Our task can be formalized as optimizing two hidden variables, the hallucination annotator parameters and the data labels, as described in Section 3.2 (lines 138-140). We believe that the form of our task corresponds exactly to the EM algorithm [1]. ### **Response to Weakness4 and Question3 about EM convergence:** > *How does the paper ensure that the EM algorithm converges through iterations?* EM is a convergent algorithm, as demonstrated in [2]. And, as illustrated by our experimental results, our approach has shown progressive improvement in annotation performance (Table 2) and generalization capabilities (Table 6) through iterations. [1] Dempster A P, Laird N M, Rubin D B. Maximum likelihood from incomplete data via the EM algorithm[J]. Journal of the royal statistical society: series B (methodological), 1977, 39(1): 1-22. [2] Wu C F J. On the convergence properties of the EM algorithm[J]. The Annals of statistics, 1983: 95-103.
Summary: The authors propose an innovative approach to tackle the persistent issue of hallucinations in large language models (LLMs) during long-form question-answering tasks. Current methods for detecting and mitigating these hallucinations are constrained by limited data and high labor costs. To address this, the paper introduces an iterative self-training framework that scales up the hallucination annotation dataset while simultaneously enhancing the accuracy of the annotators. Using the Expectation Maximization algorithm, the process involves multiple iterations where a pipeline annotates a growing dataset, and the improved annotator is used for the next cycle. This approach not only leads to a highly accurate annotator that surpasses GPT-4 but also effectively reduces hallucinations in LLM outputs. The results show significant improvements in key benchmarks, offering a scalable and efficient solution for managing LLM hallucinations. Strengths: 1. One of the major strengths of this paper is its scalable framework. By automating the annotation process and iterating through larger datasets, the authors manage to overcome the usual limitations of manual data labeling, making the approach both cost-effective and efficient. 2. The application of the Expectation Maximization algorithm to improve annotator accuracy through iterative training is a standout feature. This method not only refines the annotations with each cycle but also ensures that the annotator becomes progressively more reliable, which is a clever and effective use of existing statistical techniques. Weaknesses: 1. One of the big drawbacks of the EM algorithm is its sensitivity to initial conditions. If you don't start with the right parameters, the algorithm can easily get stuck in a local maximum instead of finding the best possible solution, which can be frustrating. 2. The EM algorithm can be quite demanding in terms of computational resources. Each iteration involves a lot of number crunching, which means it can be slow and resource-intensive, especially when dealing with large datasets or complex models. 3. The authors could enhance their literature review by including several highly relevant papers on data annotation. The annotation task has been discussed in [1] [2] [3] [4] [5]. [1] https://arxiv.org/abs/2310.04668 [2] https://arxiv.org/abs/2303.15056 [3] https://arxiv.org/abs/2306.04349 [4] https://dl.acm.org/doi/pdf/10.1145/3613904.3642834 [5] https://dl.acm.org/doi/pdf/10.1145/3594536.3595161 Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comment. Following are our responses to each individual comment (which are highlighted in italics). ### **Response to Weakness1 about initial condition:** > *One of the big drawbacks of the EM algorithm is its sensitivity to initial conditions. If you don't start with the right parameters, the algorithm can easily get stuck in a local maximum instead of finding the best possible solution, which can be frustrating.* We acknowledge the sensitivity of the EM algorithm to initial conditions. To address this problem, the first step of our framework was to train an annotator model with a high-quality, human-labeled hallucination dataset, ANAH [1]. We then obtained a model with annotation accuracy close to GPT4, and we used this high-accuracy model as a starting point for subsequent EM operations. Meanwhile, to ensure the stability of the convergence process, we use a progressive scaling strategy. Tables 2 and 4 show the effectiveness of this approach. In Table 2, the performance of the labeler improves continuously with data scaling, and in Table 4, the progressive approach is much more effective than the non-progressive approach. Although we cannot claim that we will eventually converge to a globally optimal solution, a flat convergence region would be nice according to the theoretical analysis [2, 3]. One measure of flatness is generalisability [4, 5], and Table 6 shows that our model has excellent generalisability. Thus, we believe that our method achieves a promising result. We will add a corresponding discussion in the next version. ### **Response to Weakness2 about computational resources:** > *The EM algorithm can be quite demanding in terms of computational resources. Each iteration involves a lot of number crunching, which means it can be slow and resource-intensive, especially when dealing with large datasets or complex models.* We agree that the iterative algorithm requires computational effort. However, it is important to consider our context where building a fine-grained hallucination annotation dataset requires prohibitively high costs and labor intensity. Using the "manual + GPT4-assisted" annotation model, as described in ANAH [1] (0.9 USD and 20 minutes per annotation), it would take 177,237 USD and 65,643 hours to reach the size of the dataset in our work. However, our method uses 32 A100 GPUs to iteratively train the 7B model. It took approximately 100 hours for inference and training. Based on the price of the computing platform Lambda (1.29 USD per GPU per hour), it only costs 4,128 USD. So we believe this is a better trade-off between computing resources and labour+API costs, which is acceptable. Moreover, the large-scale hallucination dataset and high-precision hallucination annotation model that we finally obtained can serve as the foundation for more research in the future, which we think is very meaningful. We will add a corresponding discussion in the next version. ### **Response to Weakness3 about relevant papers:** > *The authors could enhance their literature review by including several highly relevant papers on data annotation. The annotation task has been discussed in [6-10].* Thanks for your suggestion! We discuss the papers you mentioned below. [6] proposed a pipeline for annotating nodes on a graph without labels. [7, 8, 9] discussed the superiority of using GPT4 or ChatGPT as annotators. [10] introduces a self-supervised method using GPT for data annotation. Different from them, we introduce an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset and improves the accuracy of the hallucination annotator. We will add these relevant papers to our related work. [1] Ji Z, Gu Y, et al. ANAH: Analytical Annotation of Hallucinations in Large Language Models[J]. ACL, 2024. [2] Hochreiter S, Schmidhuber J. Flat minima[J]. Neural computation, 1997, 9(1): 1-42. [3] Hochreiter S, Schmidhuber J. Simplifying neural nets by discovering flat minima[J]. Advances in neural information processing systems, 1994, 7. [4] Keskar N S, Mudigere D, Nocedal J, et al. On large-batch training for deep learning: Generalization gap and sharp minima[J]. arXiv preprint arXiv:1609.04836, 2016. [5] Dziugaite G K, Roy D M. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data[J]. arXiv preprint arXiv:1703.11008, 2017. [6] Chen Z, Mao H, Wen H, et al. Label-free node classification on graphs with large language models (llms)[J]. arXiv preprint arXiv:2310.04668, 2023. [7] Gilardi F, Alizadeh M, Kubli M. ChatGPT outperforms crowd workers for text-annotation tasks[J]. Proceedings of the National Academy of Sciences, 2023, 120(30): e2305016120. [8] He Z, Huang C Y, Ding C K C, et al. If in a Crowdsourced Data Annotation Pipeline, a GPT-4[C]//Proceedings of the CHI Conference on Human Factors in Computing Systems. 2024: 1-25. [9] Savelka J. Unlocking practical applications in legal domain: Evaluation of gpt for zero-shot semantic annotation of legal texts[C]//Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law. 2023: 447-451. [10] Pei X, Li Y, Xu C. Gpt self-supervision for a better data annotator[J]. arXiv preprint arXiv:2306.04349, 2023. --- Rebuttal Comment 1.1: Comment: The authors have satisfactorily addressed most of my problems. Most concerns has been addressed and some senarios may out of scope of this paper. I have raised my score. Once again, I want to express my gratitude for your hard work and commitment. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing the rating to 6 (Weak Accept). We are happy that our discussions on algorithm performance, computational resources, and related works are convincing. We will include these discussions in the final manuscript.
Summary: This paper proposes an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset and improves the accuracy of the hallucination annotator. The framework is based on the expectation maximization algorithm, alternately annotating a scaled dataset and training a more accurate hallucination annotator on the dataset. A 7 billion model trained by this framework can surpass GPT-4 and obtains state-of-the-art hallucination detection results on HaluEval and HalluQA by zero-shot inference. Strengths: - The paper is well motivated by the fact that large language model hallucination significantly hinders applications but its annotation is difficult and very labor intensive. - The solution based on self-training effectively and feasibly addresses the above challenge. (1) The paper defines a procedure of analytical hallucination annotation, that aligns with human cognitive processes. (2) Staged multi-dimensional data scaling, collecting synthetic data from more large language models and for more numbers of topics and questions, ensures the richness of the dataset. (3) Leveraging EM algorithm is suitable. - Strong empirical results are shown for both in-domain hallucination detection and on existing benchmarks, HaluEval and HalluQA. - Very well-written paper. Weaknesses: The RougeL and BertScore may not be the most capable metrics for evaluating generated texts. Technical Quality: 4 Clarity: 4 Questions for Authors: Is it possible to leverage large language model based evaluation for hallucination detection? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Appendix E describes the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing our efforts! Here are the answers to your question (which are highlighted in italics) regarding the metrics used to evaluate generated texts. ### **Response to Weakness and Question:** > *The RougeL and BertScore may not be the most capable metrics for evaluating generated texts. Is it possible to leverage large language model based evaluation for hallucination detection?* We use RougeL and BertScore as metrics because they are classical and popular metrics in NLG [1-3]. In line with your suggestion, we additionally added an LLM-based evaluation metric. Specifically, we use FactScore [4] to assess the consistency of generated reference points with the source document. Below is Table 2 from our paper, which contains the newly added metric FactScore (column 3). The results of the FactScore indicates that the reliability of our model's generated reference points progressively improves and ultimately exceeds that of GPT4. This trend is consistent with F1 and ACC, reflecting the reliability of the FactScore. | Model | F1 | ACC | **FactScore** | RougeL | BertScore | |----------------|-------|-------|---------------|--------|-----------| | GPT-4 | 87.11 | 86.97 | **84.39** | 86.32 | 96.21 | | ANAH-7B | 78.69 | 79.92 | **80.60** | 58.51 | 87.27 | | ANAH-20B | 80.49 | 81.01 | **81.51** | 58.82 | 88.44 | | ANAH-V2-Stage1 | 84.45 | 84.85 | **83.63** | 60.10 | 88.43 | | ANAH-V2-Stage2 | 87.75 | 88.18 | **84.54** | 67.28 | 90.80 | | ANAH-V2-Stage3 | 89.30 | 89.55 | **86.36** | 69.44 | 91.43 | We will add a corresponding discussion and evaluation results in the next version. [1] Sai, Ananya B., Akash Kumar Mohankumar, and Mitesh M. Khapra. "A survey of evaluation metrics used for NLG systems." ACM Computing Surveys (CSUR) 55.2 (2022): 1-39. [2] Li, Junyi, et al. "HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models." EMNLP 2023. [3] Pagnoni, Artidoro, Vidhisha Balachandran, and Yulia Tsvetkov. "Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics." ACL 2021. [4] Min S, Krishna K, Lyu X, et al. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation[J]. arXiv preprint arXiv:2305.14251, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My question was addressed and I keep the rating. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for keeping the 8 (Strong Accept) rating. We are happy that our discussions and evaluation results about metrics have addressed your question. We will include these discussions in the final manuscript.
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback from the reviewers! We are honored that our work can be reviewed as: - The paper is "well motivated" and addresses the critical issue (R-5wgD). - It provides a novel, effective and feasible solution (R-nXFU & 4rBJ & i3ot), and "obtains strong and thorough empirical results" (R-nXFU & 5wgD & i3ot). - The paper is "well written" (R-nXFU & 5wgD & i3ot) and well explained by figures and equations (R-i3ot). For each question from all reviewers, we have provided a specific response in the relevant section below. Any additional clarification and discussion suggested by the reviewers will be included in the revised version.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Occupancy-based Policy Gradient: Estimation, Convergence, and Optimality
Accept (poster)
Summary: With a focus on "occupancy-based" methods for RL, the authors propose a model-free, policy gradient method for policy optimization in finite-horizon MDPs via occupancy estimation. Guided by the representation of return in terms of the state-visitation probability and reward, they express the gradient of the return in terms of gradient of the log of the state-visitation probability. They propose to estimate the latter term via squared loss regression (see eq 2). This estimate is then used to approximate the gradient of the return which is then plugged in to the usual policy gradient objective. In the online RL setting, they provide gradient estimation guarantees and convergence to a near-optimal policy. For offline RL, due to insufficient coverage of the offline dataset, they provide guarantees on the estimate of the gradient of the clipped return. **Update**: I have revised my score based on the following considerations: 1. The authors satisfactory response to the rebuttal, 2. The unique approach to policy gradients, which may be insightful in advancing research on policy gradient methods, 3. While rigidity of their assumption in the offline setting may limit its broader applicability, it also leaves room for further improvement. 4. The authors willingness to address the other reviewers suggestions on notation and presentation. Strengths: 1. To the best of my knowledge, the authors propose a novel perspective to policy gradients in online and offline RL based on computing the return gradient by first estimating the gradient of occupancy measures. 2. Their method for the online setting is technically sound with local and global convergence claims supported by theoretical analysis and relevant assumptions. Weaknesses: For the offline setting, the authors introduce new assumptions in the appendix which appear to influence their main result in Theorem 16. Precisely, in the proof of Theorem 16, the authors make use of Lemmas which refer to assumption 7 on page 37 and assumption 8 on page 38 to respectively control the MLE estimation error and density ratio estimation error. As such the paper is not self-contained and the results are rather difficult to verify. Technical Quality: 3 Clarity: 3 Questions for Authors: Just some additional comments: 1. Line 70: should be $h'\geq h$. 2. In Line 151, do you mean $\psi^{\pi}$ instead of $\psi$? Also, the first expression $d^{\pi}=\mu(s)^{T}\psi$ seems inconsistent. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None. This is a purely theory paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback. --- **re: "Introduce new assumptions in the appendix which appear to influence their main results"** The omission of Asm. 7 and 8 from the preconditions of Theorem 16 was an oversight, rather than a deliberate choice to obscure dependencies. We will include Asm. 7 and 8 in the preconditions of Theorem 16 for the next revision, and space allowing, will move the definitions of Asm. 7 and 8 (and surrounding discussions) into the main text. We made the difficult decision of not including Asm. 7 and 8 (or Algs. 4 and 5) in the main text due to space constraints, given the breadth of our results in both offline and online policy gradient. Our reasoning was that - Alg. 4 (requiring Asm. 7) and Alg. 5 (requiring Asm. 8) are algorithms from established papers and have been analyzed therein. - They are orthogonal to the novelty of our paper, which uses algorithms from the aforementioned works as subroutines. - Most importantly, we wanted to highlight the distinguishing parts of our analysis given limited space. In addition, they are weaker in some sense than Asm. 5 (expanded on below), which we mentioned briefly in L310, “We focus on discussing Asm. 5 for the offline gradient function class, which requires a stronger level of expressiveness.” Given extra space, we will expand this discussion based on the following points sourced from the Appendix: > Assumption 8 From L1002-1005: “The weight function class completeness assumption is shown Asm. 8, and is satisfied in low-rank MDPs using linear-over-linear function classes that have pseudo-dimension bounded by MDP rank. It can be seen as a 1-dimensional version of Asm. 5 where $\rho = 1$, and in that sense strictly weaker.” >Assumption 7 This is the standard realizability guarantee for maximum likelihood estimation or supervised learning (it is analogous to [Theorem 21, AKKS20] and also required by [HCJ23]), and only requires that the function class includes the ground-truth function. In comparison, Asm. 5 and 8 involve multiple functions. --- **re: "Paper is not self-contained and the results are rather difficult to verify"** Aside from the above issue (for which our previous response proposes a self-contained fix), we believe that the paper is self-contained and verifiable. For all algorithms and results in the appendix we have included rigorous proofs, and assumptions with justification. However, we empathize with the sentiment of your comment in the sense that our results rely on algorithms from established papers as subroutines, which results in layers of analysis. With more space, we hope to provide more details and results on this in the main body, per the existing descriptions, guarantees, and proofs in Appendices D and E. --- **Questions** Yes, it should be $d^\pi = \mu(s)^\top \psi^\pi$. Thank you for catching these typos. We will correct them in our paper. --- **References** [AKKS20] Alekh Agarwal, Sham Kakade, Akshay Krishnamurthy, and Wen Sun. “Flambe: Structural complexity and representation learning of low rank mdps”. In: Advances in Neural Information Processing Systems (2020). [HCJ23] Audrey Huang, Jinglin Chen, and Nan Jiang. "Reinforcement learning in low-rank mdps with density features." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I am satisfied with their clarification of the Assumptions and willingness to move them (including Assumption 6) to the main text. I have also read other review responses.
Summary: This paper introduces policy gradient algorithms that focus on estimating the gradient of the *occupancy measure* with respect to the policy parameters. In this framework, one can easily extend convergence analysis beyond the standard RL objective (e.g., maximization of expected return) to a much broader class of occupancy objectives, such as imitation learning or pure exploration. In addition to an online algorithm, the authors design an algorithm for the offline setting based on pessimistic corrections to density ratios, and provide bounds on the policy gradient error for both under function approximation. In the case of the offline algorithm, this paper demonstrates convergence bounds under weaker assumptions than existing methods. In the case of the online algorithm, the paper further provides conditions under which their policy gradient algorithm achieves global optimality. Strengths: This paper is very interesting and extemely thorough. I really like the idea of estimating the occupancy gradient in order to generalize over objectives for sequential decision-making; these seems quite powerful, and has the potential of being very impactful. Effectively, here the authors have provided a meta-algorithm (with convergence bounds) for a much broader class of algorithms than standard policy gradient methods, which can unify several fields of research. The Offline OccuPG method was really neat, and makes great use of the existing FORC method for estimating clipped density ratios. Weaknesses: The biggest weakness for the paper, in my opinion, is that it could benefit from extra clarification at many points. Realistically, this was probably omitted in favor of making room for more content (the paper is very dense), but I suggest the authors add some guiding discussions. Particularly, I think the following should be clarified: - **Motivation**: My first instinct after having read the draft was "this was very interesting to read, but how is the field of RL better off in light of these results?". The conclusion and introductory sections place a lot of emphasis on the fact that this paper presents the first algorithms/convergence results based on estimating the occupancy measure, but it's not immediately clear why this is so impactful. Ultimately, my impression is that the following two points (especially the first) are the real winners: 1. The ability to optimize a general class of functionals; 2. Weaker assumptions for convergence with the offline algorithm. I believe emphasizing these points more, and motivating them, can really help the reader appreciate the results, but I found these points to get a little lost in the sea of math. - **Comparison to existing work**: While I understand that this paper is the first to analyze convergence of PG methods based on occupancy measures, the paper focuses largely on the application of optimizing expected return. As such, as someone that is not intimately familiar with convergence results for PG methods, it would have been helpful to have a more concrete comparison between the bounds presented in this work and existing ones. - **Misc. writing/clarity issues**: Some math and/or algorithmic details were ambiguous and/or unclear, listed below. This made the logic a little difficult to follow at times. ## Minor issues On line 28, "In answer" -\> "In response". In Algorithm 1, $\mathcal{D}^{\mathrm{grad}}$ is not properly defined. On line 2 of the algorithm, it is implied that it is seeded the same way as $\mathcal{D}^{\rm reg}$, but on line 7 it appears that $\mathcal{D}^{\rm grad}$ is supposed to contain reward data (unlike $\mathcal{D}^{\rm reg}$). In Algorithm 1, I think the definition of $\mathcal{D}^{\mathrm{reg}}$ has a typo. It says $\mathcal{D}_h^{\mathrm{reg}} = \{(s\_h, a\_h, s\_{h+1})\}\_{i=1}^n$, but the index $i$ does not appear anywhere. My guess is that it should really be something like $\mathcal{D}\_h^{\mathrm{reg}} = \{(s\_h^{(i)}, a\_h^{(i)}, s\_{h+1}^{(i)})\}\_{i=1}^n$, where e.g. $\{s^{(i)}\_h\}$ is the sequence of states encounted over the course of rollout $i$. In Theorem 2, $\mathsf{pd}\_{\mathcal{G}}$ looks confusing (not clear that it is $\mathsf{p}$ times $\mathsf{d}\_{\mathcal{G}}$ as opposed to a variable called $\mathsf{pd}\_{\mathcal{G}}$). Is it necessary to use sans serif font for $\mathsf{p}$? In definition 46, I believe there is a notational issue. It says $\mathcal{F}\subset\mathbb{R}^{\mathcal{X}}$, but then you have a term $\mathrm{sign}(f(x_1^n - c))$ for $f\in\mathcal{F}$. But $x_1^n - c\not\in\mathcal{X}$. Are you actually mapping $f$ over the dimensions of $x_1^n - c$? Also, there is a typo here, $c$ should be $v$. Technical Quality: 4 Clarity: 3 Questions for Authors: In Corollary 6, what does it mean to run OccuPG with initial distribution $\nu$? On line 238, you defined $\bar{\pi}_h = (\pi\land C^\mathbf{a}_h\pi^D_h)$. I am having some trouble interpreting this. Firstly, should it not be $(\pi_h\land C^\mathbf{a}_h\pi^D_h)$? Then, if that's the case, is this equivalent to $$\begin{align*} \bar{\pi}_h(a\mid s) = \min\{\pi_h(a\mid s), C^a_h\pi^D_h(a\mid s)\}? \end{align*}$$ Given that the text focuses almost entirely on the problem of maximizing expected return, how do the bounds given in this paper compare to those of existing results? Is there any benefit to running OccuPG if you only care about expected return (especially in the online setting)? How tight are your convergence bounds, particularly in the case of optimizing general functionals? In Lemma 5, when should one expect to have $\mathcal{C}^{\pi^*}<\infty$? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: No major issues in this regard. Two limitations come to mind that were not thoroughly addressed: 1. For optimization of general functionals, convergence results depend on a Lipschitz assumption on the functional that is only discussed on the appendix. How generally is this assumption satisfied? 2. For the global convergence result (Lemma 5), there is a very strict assumption on the initial state distribution, which would not often be satisfied in practice (to my understanding). That said, other theoretical works have made similar assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful feedback, and for your appreciation of our work. We will refine our presentation per your comments, and include additional discussions according to the points below. --- **Motivation** Yes, these are the main contributions of the online and offline algorithms. We will make them more prominent in our revision. --- **Comparison to previous work** For both the online and offline settings, our results can be divided into two parts, estimation and optimization. Once the estimation and gradient domination conditions (e.g., Eq.(4) for online) are established, the optimization analyses largely follow from existing techniques in the literature. Overall, the dependencies on sample size $n$, Lipschitz constant $\beta$, and iterations $T$ match what we expect to see based on previous work and the optimization literature [Bec17, KU20, AKLM21, NZJZW22, BR24]. For coverage coefficients, the online version (Definition 3) is analogous to those in existing work (ours is finite-horizon while [AKLM21] is infinite-horizon). For offline gradient estimation, the results in [NZJZW22] pay for coverage of all policies (Assumption 6.1) while ours is single-policy (Theorem 16). Beyond that, our setting of occupancy-based gradient estimation is novel, so a direct comparison for the gradient estimation bounds is not readily available. In addition, few previous works provide end-to-end analysis incorporating both estimation/statistical error and optimization error (e.g., [AKLM21] establish global convergence with true gradients, while [NZJZW22] estimate off-policy gradients but do not use them for policy gradient). --- **Questions** > In Corollary 6, what does it mean to run OccuPG with initial distribution $\nu$? This means that the initial state is drawn from $\nu$, i.e., $s_1 \sim \nu$. The rest of the trajectory is generated by rolling out the policy in the true MDP. We will revise our paper to make this more explicit. > Definition of $\bar\pi_h$ in L238 We believe our use of the $h$ subscript might have caused some confusion (discussed below), but overall yes. By $\bar\pi_h = \left(\pi \wedge C_h^{\mathsf{a}}\pi^D_h \right)$ we do mean $\bar\pi_h(a_h|s_h) = \min \left\lbrace \pi(a_h|s_h), C_h^{\mathsf{a}} \pi^D_h(a_h|s_h) \right\rbrace$, and we will make this more explicit in L238. With regards to $\pi(a_h|s_h)$ vs. $\pi_h(a_h|s_h)$, for notational compactness we assumed that the same state cannot appear in multiple timesteps (L60-62). So $\pi(a_h|s_h)$ only refers to the policy’s action at timestep $h$, and is de facto equivalent to $\pi_h(a_h|s_h)$. In the general offline dataset from Def. 7, where each $h$ has a different dataset, $\pi^D_h$ was intended to indicate the data collection policy for timestep $h$. This might have caused some confusion, and we will clarify our notation accordingly. > Benefit to running OccuPG for expected return (esp. online)? In the online setting with expected return, we view OccuPG as complementary to value-based PG, in the sense that the former can be beneficial when occupancy modeling is more feasible or aligned with existing inductive biases (e.g., on occupancy representations). However, it is possible that calculating $\nabla\log d^\pi$ is more challenging than auto-differentiating through the value-based policy gradient. This is something that we are interested in exploring experimentally. More importantly, OccuPG serves as the conceptual framework for our offline PG algorithm Off-OccuPG, and here we do see improvements in sample complexity because, unlike previous works [LSAB19, NZJZW22], we get away with single-policy coverage in gradient estimation. > Tightness of convergence bounds, particularly in the case of optimizing general functionals This is an interesting question, and currently we are not sure. For occupancy gradient estimation, we do not expect that the rate of $n^{-½}$ can be improved, and our dependence in $\mathsf{p}$ matches similar works on (off-policy) gradient estimation [NZJZW22]. It’s possible that $H$ factors can be reduced by more sophisticated handling of error compounding. For general functionals, the generality of our bound (obtained through the Lipschitz factor, which doesn’t take into account properties of the objective such as curvature) means that it is likely not tight. > In Lemma 5, when should one expect to have $\mathcal{C}^{\pi^*} < \infty$? $\mathcal{C}^{\pi^*} < \infty$ if we have access to some distribution over states $\nu \in \Delta(\mathcal{S})$, such that $\nu(s) \gg \sum_h d^{\pi^*}(s)$ for all $s$. In other words, it places nonzero mass on all states visited by the optimal policy. At the worst, one can always guarantee $\mathcal{C}^{\pi^*}$ is finite (though potentially very large) by setting $\nu = \mathrm{uniform}(\mathcal{S})$ to be uniform over all states. In practice, one may be able to craft a more refined $\nu$ that results in smaller $\mathcal{C}^{\pi^*}$ given some expert or domain knowledge. --- **References** [AKLM21] Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. “On the theory of policy gradient methods: Optimality, approximation, and distribution shift”. In: The Journal of Machine Learning Research 22.1 (2021) [Bec17] Amir Beck. First-order methods in optimization. SIAM, 2017. [BR24] Jalaj Bhandari and Daniel Russo. “Global optimality guarantees for policy gradient methods”. In: Operations Research (2024) [LSAB19] Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. “Off-policy policy gradient with state distribution correction” [KU20] Nathan Kallus and Masatoshi Uehara. “Statistically efficient off-policy policy gradients”. In: International Conference on Machine Learning. PMLR. 2020 [NZJZW22] Chengzhuo Ni, Ruiqi Zhang, Xiang Ji, Xuezhou Zhang, and Mengdi Wang. “Optimal Estimation of Policy Gradient via Double Fitted Iteration”. In: International Conference on Machine Learning. PMLR. 2022 --- Rebuttal Comment 1.1: Comment: Thanks for the authors for the great and detailed response. Upon reading the rebuttal and the other reviews, I maintain my view that this is a really nice paper that should be accepted for publication.
Summary: This paper investigates the convergence of model-free policy gradient (PG) methods that compute the gradient through occupancy. Strengths: The convergence analysis and the theorem appear to be sound. Weaknesses: In my opinion, the paper makes overclaims. The convergence depends on a coverage coefficient, which removes the need for exploration. Therefore, the obtained bound does not indicate the number of samples needed for exploration when the initial policy is not good. Additionally, the proposed algorithm does not address the problem of exploration. Technical Quality: 3 Clarity: 3 Questions for Authors: See the 'Weakness' part. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See the 'Weakness' part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your comments. However, we respectfully disagree that our paper over-claims its results. The reviewer’s comment that “the coverage coefficient… removes the need of exploration” likely results from confusion over the term “exploration”, whose meaning in the PG literature is different from that in PAC-RL. Our use of the term is aligned with the literature, and our results’ dependence on the coverage coefficient is also standard. Results of a similar nature are commonly found in the literature, and we elaborate on the details below. --- **re: "The convergence depends on a coverage coefficient, which removes the need for exploration"** In (online) policy gradient analysis, the coverage coefficient $\mathcal{C}^{\pi^*}$ (Lemma 5) expresses the difficulty of the exploration problem in policy optimization. L184-188 discusses this in-depth. There may be some confusion over terminology because our algorithm, along with routine PG methods such as REINFORCE and PPO, do not actively explore in the sense of PAC exploration (e.g., UCB and other optimism-in-face-of-uncertainty algorithms). Rather, exploration here refers to the difficulty of finding a good policy within the policy class from online interactions with the environment. This usage of the phrase “exploration” matches foundational works in the PG literature, for which $\mathcal{C}^{\pi^*}$ is also the standard notion of online PG complexity. They are identical to Definition 3.1 from the seminal work of [AKLM21], and the paragraph above it (modulo differences in infinite and finite horizon MDPs). Definition 3 of [BR24] is another point of comparison. All of these coefficients depend on how well an initial state distribution covers the optimal policy’s occupancy. That said, we should have defined “exploration” more explicitly in the abstract/introduction to reduce confusion, and will revise accordingly. This is a simple fix that can be lifted from L184-188, and that is aligned with standards and terminology from existing work. --- **re: "the obtained bound does not indicate the number of samples needed for exploration when the initial policy is not good"** First of all, the coverage coefficient is defined with respect to a _distribution over initial/starting states_. This is _not an initial policy, or even a policy_. Moreover, a good coverage coefficient does not necessarily imply a good initial policy; the initial policy can still have poor performance, and we rely on gradient ascent to find a better policy through online interactions. This is what the PG literature means by “exploration”. --- **re: "Additionally, the proposed algorithm does not address the problem of exploration"** As argued above, the coverage coefficient in our online bounds characterizes the difficulty of exploration faced by policy gradient optimization algorithms, on par with the existing literature. However, developing algorithms that actively explore to improve policy optimization is a valuable direction of future work, given that in practice the initial state distribution may not be very exploratory (aka have poor coverage over $d^{\pi^*}$ per Lemma 5). --- **Coverage coefficient in offline setting** Lastly, our results cover both online and offline PG, and the latter makes up half of the paper (pages 6-9). In the offline learning setting, the learner cannot interact with the environment and must learn from the given data, hence _the problem of exploration simply does not exist there_. As with all offline results, a coverage coefficient is expected in the guarantee, reflecting the quality of the offline data (since, if the data is poor, there is nothing one can do). In fact, the kind of coverage coefficient we use is already improved over previous _offline_ PG results; previous results require all-policy coverage even just for estimating the gradient [LSAB19, NZJZW22], whereas our estimation guarantee only depends on single-policy coverage (Theorem 16). --- **References** [AKLM21] Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. “On the theory of policy gradient methods: Optimality, approximation, and distribution shift”. In: The Journal of Machine Learning Research 22.1 (2021), pp. 4431–4506. [BR24] Jalaj Bhandari and Daniel Russo. “Global optimality guarantees for policy gradient methods”. In: Operations Research (2024) [LSAB19] Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. “Off-policy policy gradient with state distribution correction”. In: arXiv preprint arXiv:1904.08473 (2019). [NZJZW22] Chengzhuo Ni, Ruiqi Zhang, Xiang Ji, Xuezhou Zhang, and Mengdi Wang. “Optimal Estimation of Policy Gradient via Double Fitted Iteration”. In: International Conference on Machine Learning. PMLR. 2022, pp. 16724–16783.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
No-Regret M${}^{\natural}$-Concave Function Maximization: Stochastic Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting
Accept (poster)
Summary: This paper considers online learning variants of ${\rm M}^\natural$-concave function maximization. These types of function generalize maximum flows on graphs (where the variable is the vector of source values on each node), gross substitute valuations in economics, and have applications in resource allocation. This paper has two main results: - A learning strategy with $O(K N^{1/3} T^{2/3})$-regret in the stochastic setting (here we can query a function at a given input and observe its output perturbed by mean 0, 1-subgaussian noise and we have $T$ rounds to perform these queries). This is obtained by showing that the greedy algorithm is "robust to local errors", and is the key technical step for this result. The regret bound follows by an application of known results for pure-exploration + the explore-then-commit paradigm. - NP-hardness for the adversarial setting. Unless P=NP, it is impossible for any polynomial time learner to achieve sub-linear regret in the adversarial setting. This is done via a clever reduction from the 3-matroid-intersection problem. Strengths: - This paper considers online learning for a very general class of functions, which makes the results potentially widely applicable. - The "robust to local errors" Theorem for $\rm{M}^\natural$-concave functions may be of independent interest. - The proofs are well written and easy to follow, which is challenging to do for the very technical topic at hand. - The NP-hardness result is interesting and shows a sharp difference between the stochastic and adversarial case for this problem. Weaknesses: - For the upper bound, the techniques are fairly straightforward once we have Theorem 3.1. I am wondering if more can be done, e.g. regret that is sublinear in $K$, improving the bound for $T$ to $O(\sqrt{T})$, or sublinear approximate regret in the adversarial setting. These suggestions have been pointed out by the authors themselves, but they are relevant questions. - The stochastic online setting for this problem could be better motivated than the paragraph in lines 122-126. More concrete examples would be helpful to clarify the applicability of these results beyond the nice mathematics. - No experimental evaluation. While this paper makes a nice theoretical contribution, it could be helpful to demonstrate that the theory is applicable to a problem that prior methods were not suitable for. Technical Quality: 3 Clarity: 2 Questions for Authors: Please clarify the relationship between ${\rm M}^{\natural}$ concave maximization and submodular function maximization. I found it hard to understand the relationship between these as it pertains to the results in this paper and the discussion therein. For example, the conclusion states that if the domain is restricted to $\{0,1\}^V$ then we get online submodular function maximization as a special case. Here, the offline problem is NP-hard so I'm not sure how to interpret the fact that we can get sublinear-regret with respect to $\mathbb{E}[f^*(x^*)]$ with a polynomial-time algorithm. Some very minor comments: - ${\rm M}^\natural$-convexity is used in section 5, but not defined in the paper Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed and discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful to the reviewer for the thoughtful comments and positive evaluation. First, we would like to address the following comment regarding weakness. > Weaknesses: > - For the upper bound, the techniques are fairly straightforward once we have Theorem 3.1. I am wondering if more can be done, e.g. regret that is sublinear in $K$, improving the bound for $T$ to $O(\sqrt{T})$, or sublinear approximate regret in the adversarial setting. These suggestions have been pointed out by the authors themselves, but they are relevant questions. We appreciate the feedback and insights provided. As regards improving the $O(KN^{1/3}T^{2/3})$ regret bound in Theorem 4.3 for the stochastic bandit setting, we have discovered a recent preprint by Tajdini et al. (2023) that presents a relevant lower bound. In the submodular case, their lower bound implies that significant improvements to the $O(KN^{1/3}T^{2/3})$ regret bound are impossible without admitting exponential factors in $K$. While their result does not directly apply to our $\text{M}^\natural$-concave setting, we conjecture that a similar lower bound exists and that our $O(KN^{1/3}T^{2/3})$ regret bound is tight unless we admit exponential factors in $K$. For more details, please refer to the above [global response](https://openreview.net/forum?id=NnoAj91HZX&noteId=RjP8EjP159). Next, we would like to answer the following question. > Questions: > > Please clarify the relationship between $\text{M}^\natural$-concave maximization and submodular function maximization. I found it hard to understand the relationship between these as it pertains to the results in this paper and the discussion therein. For example, the conclusion states that if the domain is restricted to ${0, 1}^V$ then we get online submodular function maximization as a special case. Here, the offline problem is NP-hard so I'm not sure how to interpret the fact that we can get sublinear-regret with respect to $\mathbb{E}[f^*(x^*)]$ with a polynomial-time algorithm. We wish to clarify that the correct relationship we intended to describe in the conclusion is: $$ \text{class of $\text{M}^\natural$-concave functions on $\\{0, 1\\}^V$}\subseteq \text{class of subomdular functions}, $$ which is the opposite of the relationship mentioned in the reviewer's comment. We apologize for any confusion and appreciate it if you could notify us of any incorrect expressions in our manuscript. Importantly, $\text{M}^\natural$-concave maximization on $\\{0, 1\\}^V$ is a special case that the greedy algorithm can *exactly* solve in polynomial time. This has been established by Murota and Shioura [33] and can also be derived from our Theorem 3.1 with $\mathrm{err}(i_k \mid x_{k-1}) = 0$. Since submodular maximization forms a larger problem class, sublinear regret with respect to $\mathbb{E}[f^*(x^*)]$ for stochastic bandit $\text{M}^\natural$-concave maximization does *not* imply exact algorithms for submodular maximization. Therefore, our sublinear regret bounds in Section 4 do not contradict any known results in $\text{M}^\natural$-concave or submodular maximization. We hope this clarification has effectively addressed the reviewer's concerns. Please do not hesitate to let us know if further questions remain. > Some very minor comments: > - $\text{M}^\natural$-convexity is used in section 5, but not defined in the paper We thank the reviewer for pointing this out. We will clarify that the $\text{M}^\natural$-convexity is defined by the negative of $\text{M}^\natural$-concavity, analogous to the standard convexity--concavity relationship. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to my specific questions and the general response above. For the M-concave vs. submodular discussion above my issue was one of slight confusion with what is written in the text. Your response has cleared this up. My overall evaluation stays the same.
Summary: The authors consider online $M^\sharp$-concave optimization, similar to problems like online convex optimization and online DR-submodular optimization. $M^\sharp$-concave function classes include resource allocation, valuation, and flow problems, and unlike DR-submodular functions can be exactly optimized (at least under a cardinality constraint) by a simple greedy algorithm. The authors show standard online adaptation of the greedy algorithm analogous to similar work for submodular functions, though prove the robustness of the offline algorithm to oracle value errors in a manner distinct from previous robustness analyses (such as for submodular functions). The authors also show that unlike OCO and online DR-submodular optimization, the adversarial setting is fundamentally harder than the stochastic setting – one cannot (with poly-time per round complexity) achieve sublinear regret. Strengths: 1. The authors consider an interesting class of functions for online optimization, which are related to, but much easier to optimize than, (DR-)submodular functions. They show an (potentially quite) interesting dichotomy in hardness between the online stochastic setting and the online adversarial setting, unlike related classes like online convex optimization or online DR-submodular optimization. For the online stochastic setting they get analogous results to prior work for (DR-)submodular, but show through a matroid-intersection construction that one cannot obtain sublinear regret. 2. The robustness analysis is distinct from that used in submodular bandit papers. As described, the standard proof(s) for offline cardinality-constrained $M^\sharp$-concave maximization does not have a structure that lends itself to accounting for the function value impact in sequential mistakes by the greedy algorithm. Thus, the authors develop a new proof that clearly shows such additive accumulation, and subsequently permits almost direct adaptation to the online stochastic setting. (one minor note - line 191 “ours is different from them in that it involves no approximation factors” that aspect I don’t see in and of itself as a meaningful distinction since the offline problem is not NP-hard; from my reading it is how the offline proof is structured as to whether it can be easily modified to account for value oracle errors that matters.) 3. I found the paper overall well written, with good organization, logical flow, discussions, etc. Weaknesses: 1. The stochastic setting algorithms and analysis adapting a greedy algorithm from the offline setting (including an ETC method getting $T^{2/3}$) are straightforward (once a robustness result is in hand), though the authors do acknowledge that in the main paper. 2. This is a somewhat minor point – since the need for a novel robustness analysis arises due to the offline proof of the greedy algorithm’s not already being in a manner that can be easily adapted to account for local errors (which is unlike the proof of $1-1/e$ for the greedy approximation algorithm for cardinality constrained submodular maximization, as noted in Appendix A), I think it would have been helpful to include the proof(s) for constrained $M^\sharp$-concave maximization (specialized to cardinality constraint) so that this need would be more apparent, that no standard proof has a sequential exchange argument that would already lend itself more readily to analyzing robustness and that the robustness result used in the offline setting with exact value oracles clearly constitutes a distinct proof. 3. minor point – it is not clear to what extent the positive result in the stochastic setting can be generalized beyond a simple cardinality constraint. 4. minor point – there is no implementation or experiments for toy flow/valuation/resource allocation problems. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is known about the hardness of offline optimization for maximizing sums of (possibly non-monotone) $M^\sharp$-concave functions? For the adversarial setting, the authors use an interesting construction based on matroid intersection to prove hardness for no-approximation sublinear regret (with poly-time per round computation). But I wonder if it could have been reached in a more straightforward manner. Namely, for the stochastic setting, the regret is based on a single $M^\sharp$-concave function $f^*$ for which in the offline setting the greedy algorithm can find the optimal solution. In the adversarial setting, however, the regret is based on a sum of $M^\sharp$-concave functions and in line 100 we are told that in general the class of $M^\sharp$ functions is not closed under addition. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's dedication to reviewing our paper and providing many insightful comments. First, we would like to respond to some comments regarding weaknesses. > Weaknesses: > > 1. The stochastic setting algorithms and analysis adapting a greedy algorithm from the offline setting (including an ETC method getting $T^{-2/3}$) are straightforward (once a robustness result is in hand), though the authors do acknowledge that in the main paper. We acknowledge the reviewer's point. Indeed, while our algorithm is straightforward, significantly improving our $O(T^{2/3})$ regret seems challenging unless we admit exponential dependence on $K$. Please refer to the above [global response](https://openreview.net/forum?id=NnoAj91HZX&noteId=RjP8EjP159) for details. > 2. This is a somewhat minor point – since the need for a novel robustness analysis arises due to the offline proof of the greedy algorithm’s not already being in a manner that can be easily adapted to account for local errors (which is unlike the proof of $1-1/e$ for the greedy approximation algorithm for cardinality constrained submodular maximization, as noted in Appendix A), I think it would have been helpful to include the proof(s) for constrained $M^\sharp$-concave maximization (specialized to cardinality constraint) so that this need would be more apparent, that no standard proof has a sequential exchange argument that would already lend itself more readily to analyzing robustness and that the robustness result used in the offline setting with exact value oracles clearly constitutes a distinct proof. We appreciate this valuable suggestion. The original proof by Murota and Shioura [33 Section 3], which is based on a convexity argument along the trajectory of an augmenting-type algorithm, is a quite different approach that does not readily align with the robustness analysis. To add further, we wish to highlight that our proof of Theorem 3.1, with $\mathrm{err}(i_k \mid x_{k-1}) = 0$, serves as an alternative proof of the optimality of the greedy algorithm for the offline $\text{M}^\natural$-concave maximization. > 3. minor point – it is not clear to what extent the positive result in the stochastic setting can be generalized beyond a simple cardinality constraint. This is an interesting future direction. Indeed, in the offline setting, the greedy-type algorithm can solve more general $\text{M}^\natural$-concave maximization problems. When it comes to online/bandit settings, extending the robustness analysis akin to Theorem 3.1 will be a key point of investigation. Next, we would like to respond to the following question. > Questions: > > 1. What is known about the hardness of offline optimization for maximizing sums of (possibly non-monotone) $M^\sharp$-concave functions? For the adversarial setting, the authors use an interesting construction based on matroid intersection to prove hardness for no-approximation sublinear regret (with poly-time per round computation). But I wonder if it could have been reached in a more straightforward manner. Namely, for the stochastic setting, the regret is based on a single $M^\sharp$-concave function $f^*$ for which in the offline setting the greedy algorithm can find the optimal solution. In the adversarial setting, however, the regret is based on a sum of $M^\sharp$-concave functions and in line 100 we are told that in general the class of $M^\sharp$ functions is not closed under addition. In the offline setting, maximizing the sum of more than two $\text{M}^\natural$-concave functions is NP-hard in general. This is recognized as folklore within the field of discrete convex analysis; those familiar with $\text{M}^\natural$-concavity would readily infer this NP-hardness due to the connection to the 3-matroid intersection, by encoding a base family of a matroid with an $\text{M}^\natural$-concave indicator function $f:\\{0, 1\\}^V \to \\{0, -\infty\\}$. To our knowledge, however, no explicit proof exists in the literature (although the NP-hardness is mentioned in a seminar material by Kazuo Murota, titled "Discrete Convex Analysis," used in the 2015 Summer School at Hausdorff Institute of Mathematics). Our work would be the first to provide an explicit proof, although the NP-hardness itself is already widely recognized. More importantly, our Lemma 5.5 implies a slightly stronger result: even if $\text{M}^\natural$-concave functions on $\\{0, 1\\}^V$ are restricted to take values in $[0, 1]$ (forbidden taking $-\infty$), maximizing the sum of more than two $\text{M}^\natural$-concave functions remains NP-hard. This restriction is crucial to ensure that our NP-hardness result is meaningful because, if $f_t$ can take $-\infty$, the learner who does not know $f_t$ a priori cannot achieve even a bounded regret, making the hardness result (Theorem 5.2) vacuous. The reduction from the 3-matroid intersection with $\text{M}^\natural$-concave functions that do not take $-\infty$ has not been even recognized in the literature. Thus, our Lemma 5.5, despite being somewhat specific, provides a new crucial technical result. We believe that no significantly more straightforward approach could establish meaningful NP-hardness. We hope this clarification has effectively addressed the reviewer's question and highlighted the value of our technical contributions. Please do not hesitate to reach out during the discussion period if further questions remain. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. Thanks to the authors for their response. I have decided to increase my score. I do not have further questions.
Summary: This paper studies the online version of M-concave function maximization. With the help of a new lemma that bounds local errors for greedy algorithms, the authors derive an efficient $T^{-1/2}$ simple regret algorithm together with a $T^{2/3}$ regret algorithm for stochastic settings. Moreover, the authors show an interesting computational hardness result for adversarial settings with full information. Strengths: 1. This paper studies a new online learning problem and proposes an efficient algorithm with sublinear regret for stochastic settings. 2. A new computational hardness result for adversarial settings with full information is given, which introduces a new understanding of online optimization. Weaknesses: 1. The regret bound for stochastic setting is not tight. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the main technical challenge in getting $\sqrt{K}$ regret for stochastic settings? Specifically, what is the difficulty when implementing the UCB-type exploration on the fly rather than doing an explore-then-commit algorithm? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate the reviewer's effort in reviewing our paper and providing thoughtful comments. Below is our answer to the question. > Questions: > > 1. What is the main technical challenge in getting $\sqrt{K}$ regret for stochastic settings? Specifically, what is the difficulty when implementing the UCB-type exploration on the fly rather than doing an explore-then-commit algorithm? We appreciate this insightful question. Our response assumes the question refers to $\sqrt{T}$ regret (not $\sqrt{K}$). Please let us know if this is not your intended question! The fundamental difficulty lies in the fact that there are exponentially many arms. Indeed, if the domain is restricted to $\\{0, 1\\}^V$, we can achieve a $\sqrt{T}$ regret bound by regarding all subsets of size up to $K$ as arms and naively applying a UCB-type algorithm to this multi-armed bandit problem. However, this approach leads to a regret bound of $O\left(\sqrt{\binom{N}{K}T}\right)$, which depends exponentially on $K$, and the resulting algorithm requires exponential time per round in general. By contrast, the explore-then-commit strategy can achieve the $O(KN^{1/3}T^{2/3})$ regret bound as in Theorem 4.3, albeit with the worse dependence on $T$, and the algorithm runs in polynomial time per round. As detailed in the [global response](https://openreview.net/forum?id=NnoAj91HZX&noteId=RjP8EjP159), a recent lower bound by Tajdini et al. (2023) implies that, in the submodular maximization case, we cannot do significantly better than the naive UCB applied to $\binom{N}{K}$ arms and the explore-then-commit strategy. While there is a slight possibility of achieving better regret bounds by focusing on the $\text{M}^\natural$-concave case, we conjecture that a similar lower bound likely exists and that our $O(KN^{1/3}T^{2/3})$ regret is essentially tight unless we allow the regret bound to depend exponentially on $K$. We hope this explanation helps the reviewer better understand the challenge in stochastic bandit $\text{M}^\natural$-concave maximization and casts our results in a more favorable light. Please do not hesitate to ask further questions during the discussion period. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Yeah, my question is about the difficulty of getting $\sqrt{T}$ regret. I do not have further questions now.
Summary: This paper studies the online optimization problem with $M^\natural$-concave function, which has many real-world implications such as maximum-flow on bipartite graphs, gross substitute valuation, resource allocation and bandit problem. $M^\natural$-concave function forms the fundamental basis of discrete concave analysis. In the stochastic bandit setting, they present algorithms with $O(T^-1/2)$ simple regret and $O(T^2/3)$ cumulative regret. These results leverage the robustness of the greedy algorithm to local errors, a significant technical contribution of the paper. Additionally, the author also presents the result in the adversarial online learning setting, the paper proves that achieving sub-linear regret is NP-hard, even with full-information feedback, which establishes a distinct difference from the offline setting. Strengths: The paper focuses on $M^\natural$-concave functions, which is a crucial class in discrete convex analysis with wide applications, thus benefiting the analysis on a large variety community. The author brings both stochastic bandit and adversarial full-information settings. Especially for the adversarial setting, at least linear regret justification will guide us to avoid building sub-linear regret algorithms. This theorem provides a profound understanding of the challenges and encourages us to think more potential alternative model assumptions and solutions for optimizing $M^\natural$-concave functions in the online scenarios. Weaknesses: No obvious weaknesses. Technical Quality: 3 Clarity: 3 Questions for Authors: I am more curious about the conclusion in the bandit example. 1. In theorem 4.2, does $sReg_T$ is equivalent to the error in the Best-arm identification problem? 2. Does the bit-O term in theorem 4.2 and 4.3 hide the logarithmic term? 3. The theorem 4.3 claims that the proposed algorithm for stochastic bandit achieves cumulative regret as $O(K N^{1/2} T^{2/3}$. While according to [1], the tight lower bound of a consistent bandit algorithm is $\Theta(K^{1/2}T^{1/2})$. What kind of factor do you think causes the gap between your upper and lower bound? [1] Auer, Peter, et al. "The nonstochastic multiarmed bandit problem." *SIAM journal on computing* 32.1 (2002): 48-77. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's insightful comments and positive evaluation. We address the questions below. > 1. In theorem 4.2, does $sReg_T$ is equivalent to the error in the Best-arm identification problem? Our ${\rm sReg}_T$ is essentially the same as the error measure in the best-arm identification, like the one used by Rejwan and Mansour [41]. To clarify the connection to Rejwan and Mansour [41] mentioned in the conclusion section, there is a bit of difference: they consider the $(\varepsilon, \delta)$-PAC guarantee, i.e., the high probability bound on the expected error, whereas our ${\rm sReg}_T$ focuses solely on the expected error. Nevertheless, their sample complexity lower bound [41, Theorem 10] of $T\gtrsim\Omega(N/\varepsilon^2)$ applies to any $K\le N/2$ and $\delta \in (0, 1/2)$, and hence Markov's inequality implies that our bound of ${\rm sReg}_T = O(K^{3/2}\sqrt{N/T})$ in Theorem 4.2 is tight with respect to $N$ and $T$ when $K=O(1)$. > 2. Does the bit-O term in theorem 4.2 and 4.3 hide the logarithmic term? The big-O notation in Theorems 4.2 and 4.3 does not hide any logarithmic terms. Our algorithms in Section 4 employ the minimax optimal strategy (MOSS) (e.g., Lattimore and Szepesvári [24, Chapter 9]), whose regret bound is free of $\log T$ factors that are typically present in the regret bounds of standard UCB-type algorithms. Consequently, our regret bounds do not involve logarithmic factors. To add further, this strategy could slightly improve previous explore-then-commit approaches for stochastic bandit submodular maximization [37, 38], which contain $\log T$ factors. > 3. The theorem 4.3 claims that the proposed algorithm for stochastic bandit achieves cumulative regret as $O(KN^{1/2}T^{2/3})$. While according to [1], the tight lower bound of a consistent bandit algorithm is $\Theta(K^{1/2}T^{1/2})$. What kind of factor do you think causes the gap between your upper and lower bound? > > [1] Auer, Peter, et al. "The nonstochastic multiarmed bandit problem." SIAM journal on computing 32.1 (2002): 48-77. We greatly value this insightful question. In our opinion, the primary challenge stems from the presence of exponentially many arms: even if we restrict the domain to $\\{0, 1\\}^V$, there are about $N^K$ arms. This leads to the following dilemma: 1. If we aim to achieve $O(\sqrt{T})$ regret, we may naively apply a UCB-type algorithm to the multi-armed bandit problem with about $N^K$ arms. However, this approach results in $O(\sqrt{N^K T})$ regret, and the algorithm requires exponential time per round in general. 2. Alternatively, we can design more efficient strategies based on offline algorithms. In our case, we have designed a no-regret strategy by executing a UCB-type algorithm in each iteration in the greedy algorithm, as described in Section 4. This method employs $K$ no-regret learners; however, only a single bandit feedback is available in each round. Due to this limited feedback, we need sufficient exploration across $K$ iterations in the greedy algorithm, as the explore-then-commit strategy does. Consequently, we incur $T^{2/3}$ regret as in Theorem 4.3. Furthermore, as detailed in the above [global response](https://openreview.net/forum?id=NnoAj91HZX&noteId=RjP8EjP159), a recent lower-bound result by Tajdini et al. (2023) suggests that we cannot do significantly better than the above two strategies in the case of submodular maximization. While there might be a slight possibility of achieving better regret by focusing on the $\text{M}^\natural$-concave case, we conjecture that we cannot achieve $\sqrt{T}$ regret that depends only polynomially on $K$ and $N$. We hope our responses have adequately addressed the reviewer's questions. Please do not hesitate to reach out during the discussion period if there are any further questions. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed explanation of the tightness of the $O(KN^{1/3}T^{2/3})$ regret bound in Theorem 4.3. Please add a formal justification in your paper based on your findings from Tajdini et al. (2023) if you can. I will not change my score since I have already supported this paper in my original review. --- Rebuttal 2: Comment: We deeply appreciate the reviewer's continued support of our paper. We will make every effort to include a rigorous discussion on the tightness of our $O(KN^{1/3}T^{2/3})$-regret bound based on Tajdini et al. (2023). Just to be thorough, we would like to add to our previous response that the difficulty in achieving $O(\sqrt{T})$-regret arises not only from the exponentially many arms but also from the *non-linearity* of $\text{M}^\natural$-concave reward functions. While combinatorial bandits with linear rewards admit $O(\sqrt{T})$-regret algorithms such as COMBAND (Cesa-Bianchi & Lugosi, 2012), achieving $O(\sqrt{T})$-regret that depends polynomially on the problem size is significantly more challenging when dealing with non-linear rewards, as suggested by Tajdini et al. (2023) for the case of submodular rewards.
Rebuttal 1: Rebuttal: ## **Global response: a discussion on the tightness of the $O(KN^{1/3}T^{2/3})$ regret bound in Theorem 4.3** We sincerely thank all reviewers for their efforts in reviewing our paper and providing invaluable feedback. We are pleased to see that all reviewers have positively evaluated our work. Upon reviewing the comments, we noted that multiple reviewers are curious about whether our $O(KN^{1/3}T^{2/3})$ regret upper bound in Theorem 4.3 for stochastic bandit $\text{M}^\natural$-concave maximization is improvable. We revisited this issue and discovered an interesting recent preprint by Tajdini et al. (2023), titled "Minimax Optimal Submodular Optimization with Bandit Feedback." This paper studies stochastic bandit monotone submodular maximization with a ground set of size $N$ and a cardinality constraint of $K$. They showed that there is a lower bound of $$ \Omega\left(\min_{i\le K}\ (K-i)N^{1/3}T^{2/3} + \sqrt{\binom{N-K}{i}T}\right) $$ on *robust greedy regret*, which intuitively compares the learner's actual reward with that of the output, denoted by $S_{\rm gr}$, of the greedy algorithm applied to the underlying true function. This lower bound implies that $O(KN^{1/3}T^{2/3})$ robust greedy regret is inevitable when $T$ is small in the submodular case; furthermore, the explore-then-commit strategy can achieve this regret bound. The $\sqrt{\binom{N-K}{i}T}$ term can be interpreted as the regret bound achieved by regrading all $\binom{N-K}{i}$ subsets as arms and using a UCB-type algorithm. Thus, the lower bound represents the best mix of the two regret terms achieved by the explore-then-commit strategy and the UCB applied to exponentially many arms. Currently, we have confirmed that the proof for establishing the lower bound by Tajdini et al. (2023) does not directly apply to our stochastic bandit $\text{M}^\natural$-concave maximization problem. Specifically, the function used by Tajdini et al. (2023) for obtaining the lower bound is submodular but not $\text{M}^\natural$-concave. Nevertheless, the situations of Tajdini et al. (2023) and our problem in Section 4, with the domain restricted to $\\{0, 1\\}^V$, are remarkably similar: 1. Since the greedy algorithm applied to the unknown true $\text{M}^\natural$-concave function $f^*$ can find an optimal solution $x^*$, we have $x^* = S_{\rm gr}$ and hence the notion of robust greedy regret in Tajdini et al. (2023) essentially coincides with the standard regret in our case. 2. The explore-then-commit strategy and the UCB with exponentially many arms also achieve the $O(KN^{1/3}T^{2/3})$ and $O\left( \sqrt{\binom{N-K}{i}T} \right)$ regret bounds, respectively, in the $\text{M}^\natural$-concave case, where the former is the very result our Theorem 4.3 states. Considering these facts, it is highly plausible that we can construct a hard instance of stochastic bandit $\text{M}^\natural$-concave maximization similar to Tajdini et al. (2023) to establish the same regret lower bound. Thus, we conjecture that our $O(KN^{1/3}T^{2/3})$ regret bound in Theorem 4.3 is tight in $K$, $N$, and $T$ if we want to avoid exponential factors, such as $N^K$, regardless of the value of $T$, which is a common desideratum in the context of combinatorial bandits. We will include an extensive discussion of this open problem in our revised manuscript. We deeply thank dedicated reviewers, whose insightful comments have brought our attention to this interesting connection to Tajdini et al. (2023). We hope the information provided above helps reviewers understand that our $O(KN^{1/3}T^{2/3})$ regret bound in Theorem 4.3 may be close to being tight.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Multi-Instance Partial-Label Learning with Margin Adjustment
Accept (poster)
Summary: The paper proposes an approach to tackle the multi-instance partial-label learning (MIPL) problem. In MIPL, each training sample is represented as a bag of instances associated with a set of candidate labels. The existing MIPL algorithms may gives the high prediction probability on non-candidate label, which can lead to suboptimal performance. They propose MIPLMA algorithm addresses this issue by introducing a margin-aware attention mechanism and a margin-compliant loss function. The algorithm dynamically adjusts the margins for attention scores, while the loss function ensures that the margins between predicted probabilities on candidate and non-candidate label sets are properly constrained. The paper presents experimental results on benchmark and real-world datasets that demonstrate the superior performance of MIPLMA compared to existing MIPL algorithms, as well as other multi-instance algorithms and partial-label learning algorithms. Strengths: 1. The paper is well-organized and easy to comprehend. 2. The experiments are generally comprehensive. 3. Theoretical analysis contributes to establishing the effectiveness of the proposed model. Weaknesses: 1. The margin-aware attention is performed by setting a changing temperature parameter which seems a trival idea. And the details of how the gap of attention scores help the model to decrease the prediction probability on non-candidate labels are also not given. 2. The form of margin-compliant loss seems not easy to be optimized in neural network framework, the paper should give more details on the proposed loss function. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The initial weights are uniformly distributed. After the training in first epoch, the weights change. Subsequent epochs can only make the margin of the distribution larger, so the results are greatly affected by the first epoch. If there are errors in the training of the first few epochs, Will this error accumulate? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and positive feedback on our paper. We are pleased that you found the paper well-organized, comprehensive, and theoretically supported. Below, we have summarized your comments and provided our responses accordingly. > The details of how the gap of attention scores help the model to decrease the prediction probability on non-candidate labels are also not given. Our paper utilizes margin-compliant loss to increase the gap between prediction probabilities for candidate and non-candidate labels, thus reducing the probability for non-candidate labels. To examine the impact of attention score gaps, we introduce MIPLMA-NOTEM, a variant of MIPLMA without the temperature parameter in the attention mechanism. In experiments with the MNIST-MIPL dataset ($r=1$), both models yield similar prediction probabilities of approximately 0.088 for non-candidate labels, showing no significant difference. Visualization of attention scores for three multi-instance bags from the training set, shown in `Figure R2` of the attached PDF, where the horizontal axis denotes the indices of instances, while the vertical axis represents the corresponding attention scores. Red and blue colors indicate attention scores assigned to positive and negative instances, respectively. The results reveals that the temperature parameter effectively increases the gap between attention scores for positive and negative instances. Furthermore, `Figure R3` of the attached PDF compares feature visualizations on the test set. MIPLMA produces well-clustered features by class, while MIPLMA-NOTEM exhibits errors, such as misclassifications where points from the first class incorrectly approach clusters of other classes. Therefore, while the temperature parameter may not directly impact prediction probabilities for non-candidate labels, it enhances feature representation accuracy. We will include these findings and discussions in the revised manuscript. > The margin-compliant loss seems not easy to be optimized in neural network framework. More details are required. The core of the margin-compliant loss involves finding the maximum predicted probabilities for the candidate and non-candidate label sets, concatenating the margins, and calculating their mean and standard deviation. First, we use `torch.max` to obtain the maximum predicted probabilities for both sets. Then, we employ `torch.cat` to concatenate the margins in the current and previous batches. Finally, we calculate the mean and standard deviation of the margins using `.mean` and `.std` operations, followed by the value of the margin-compliant loss. All these operations, including `torch.max`, are differentiable in PyTorch. We will provide additional details on the margin-compliant loss in the revised manuscript to clarify its optimization within the neural network framework. > Will errors in the first few epochs affect subsequent training? In MIPL, where prior knowledge of true labels in the candidate label sets is unavailable, initializing candidate label weights with averages is reasonable. Although early errors might accumulate and impact later training, our dynamic disambiguation loss reduces the influence of early predictions by assigning them lower weights, as shown in Eq. (8). To further investigate the effectiveness of our dynamic disambiguation loss, we also introduce a variant without the dynamic disambiguation, MIPLMA-$\alpha$, where $\alpha^{(t)}$ is set to 0 for all $t \in \\{1,2,\cdots,T\\}$. The table below shows the performance of MIPLMA and MIPLMA-$\alpha$ on benchmark datasets. MIPLMA-$\alpha$ performs well on simpler disambiguation tasks (e.g., $r=1$ or $r=2$) but shows decreased performance compared to MIPLMA as the number of false positive labels increases (e.g., $r=3$). The gap between MIPLMA and MIPLMA-$\alpha$ widens with greater disambiguation challenges. These results suggest that our dynamic disambiguation loss is effective iin mitigating the errors in the first few epochs. | Algorithm | r | MNIST-MIPL | FMNIST-MIPL | Birdsong-MIPL | SIVAL-MIPL | | --------------- | ---- | :--------: | :---------: | :-----------: | :--------: | | | 1 | .985±.010 | .915±.016 | .776±.020 | .703±.026 | | MIPLMA | 2 | .979±.014 | .867±.028 | .762±.015 | .668±.031 | | | 3 | .749±.103 | .654±.055 | .746±.013 | .627±.024 | | | 1 | .991±.025 | .897±.022 | .772±.020 | .704±.022 | | MIPLMA-$\alpha$ | 2 | .949±.056 | .841±.021 | .765±.027 | .658±.021 | | | 3 | .661±.161 | .538±.053 | .721±.042 | .586±.026 | --- Rebuttal Comment 1.1: Title: Looking Forward to Your Feedback Comment: Dear Reviewer Unv6, Thank you for your valuable feedback, which has greatly improved our paper. We have carefully addressed your concerns by providing visualizations of attention scores and bag-level features, detailing the optimization of the margin-compliant loss, and demonstrating how our dynamic disambiguation loss mitigates early errors. Should any issues remain, we are happy to discuss them further. Best regards, The Authors
Summary: In this paper, the authors observe the presence of ‘margin-violations’ in the MIPL problem, which manifest in two aspects: between positive and negative instances, and between candidate and non-candidate labels. Therefore, the paper proposes modifications to the attention mechanism and enhancement of the loss function to rectify and widen the margin. Strengths: 1. This paper aims to address a challenging scenario, specifically the MIPL problem, and identifies an intriguing phenomenon known as ‘margin violations’. 2. This work proposes MIPLMA to solve this issue, which combines several effective but not complicated tricks. 3. The authors conduct extensive experiments and compare with a number of methods to valid the effectiveness of the proposal. Weaknesses: 1. In Figure 1, the authors compare the performances of MIPLMA and DEMIPL, however, according to the results in Table 2 and Table 3, ELIMIPL surpasses DEMIPL on FMNIST-MIPL, and both ELIMIPL and MIPLGP perform better than DEMIPL on CRC-MIPL-Row dataset, so the comparison with these two methods would be more beneficial. 2. When discussing the linear-decay strategy for temperature in Equation 3, the sudden introduction of the constant coefficient 0.95 requires some clarification for better understanding. 3. In Sec.3.1.1, the authors claim that they are the first to employ ResNet on CRC-MIPL dataset, so it is suggested that the authors should list the detailed baseline performances on C-R34-16 and C-R34-25 datatset bag generators, just as in Table.3. 4. The authors assert that “features learned by ResNet-34 outperform those obtained via image bag generators in terms of classification performance.” However, it remains unclear whether this still holds on other benchmarks such as FMNIST-MIPL. To strengthen this claim, it would be beneficial to add further experiments or discussions on this. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed and positive feedback on our paper. We are glad you found our approach to addressing ‘margin violations’ in the MIPL problem intriguing. Below, we address your comments and questions. > In Figure 1, a comparison with ELIMIPL and MIPLGP would be more beneficial. MIPLGP follows the instance-space paradigm, which assigns augmented candidate label sets of bags to each instance and aggregates bag-level labels from instance-level labels. As a result, we cannot observe the phenomenon of margin violations in the instance and label spaces. For ELIMIPL, we will update Figure 1 to include its results in the revised manuscript. > The constant coefficient 0.95 in Equation 3 requires clarification. The decay rate of $0.95$ for the temperature parameter in Equation 3 was selected based on preliminary experiments, which indicated that the performance of the margin-aware attention mechanism is relatively stable across a range of decay rates. To maintain consistency, we fixed the decay rate at $0.95$ for all datasets. We have conducted additional experiments on the FMNIST-MIPL dataset with decay rates from $0.9$ to $0.99$, as shown in `Figure R1` of the attached PDF. These experiments revealed classification accuracies between $0.907$ and $0.927$. We will provide these details and results in the revised manuscript. > List detailed baseline performances on C-R34-16 and C-R34-25 datasets, as in Table 3. Thank you for your suggestion. The following table shows the classification accuracies (mean±std) for MIPLMA, ELIMIPL, and DEMIPL on the C-R34-16 and C-R34-25 datasets. MIPLMA shows a slight improvement over ELIMIPL and DEMIPL on the C-R34-16 dataset, and a more significant advantage on the C-R34-25 dataset. We will include more baseline performances in Table 3 of the revised manuscript. | Algorithm | C-R34-16 | C-R34-25 | | --------- | :-------: | :-------: | | MIPLMA | .631±.008 | .685±.011 | | ELIMIPL | .628±.009 | .663±.009 | | DEMIPL | .625±.008 | .650±.010 | > It remains unclear whether the superior performance of features learned by ResNet-34 holds on other benchmarks such as FMNIST-MIPL. The instance-level features of the Birdsong-MIPL and SIVAL-MIPL datasets are preprocessed, which prevents the use of DCNNs like ResNet-34 for feature learning. We have tested LeNet and ResNet-34 on the MNIST-MIPL and FMNIST-MIPL datasets, but the classification accuracies were unsatisfactory. We believe this is due to the relatively simple nature of the features in these datasets, which are adequately captured by simple networks like the two-layer CNN used in our study. While deep convolutional neural networks might perform better on fully supervised MNIST and FMNIST datasets, their benefits may be less evident in weakly supervised scenarios. --- Rebuttal Comment 1.1: Title: Looking Forward to Your Feedback Comment: Dear Reviewer b79F, Thank you for your constructive feedback, which has significantly enhanced our manuscript. We have addressed your concerns by conducting experiments on the temperature parameter decay rate, providing results for the C-R34-16 and C-R34-25 datasets, and offering further explanations on the feature extractor. If there are any remaining issues, we are open to further discussion. Best regards, The Authors
Summary: This paper deals with an emerging learning framework, i.e., multi-instance partial-label learning framework, which can be regarded as an extension and combination of multi-instance learning and partial-label learning. Overall, such dual inexact supervision makes the MIPLL difficult to resolve. This paper introduces a margin-aware attention mechanism and margin-compliant loss to deal with such issue. Strengths: 1. The motivation is clear and straightforward; 2. The paper is well-written and easy-to-follow. 3. Empirical results show satisfactory performance on several benchmarks as compared with existing methods. Weaknesses: 1. I am confused with the definition of MIPLL as the authors claimed in the paper: "positive instances refer to the instances that belong to the true label in the setting of MIPL",I agree with the definition of positive instances. However, for the negative instances, " while negative instances represent the remaining instances in the bag that are not associated with any label in the label space". Does the author indicates that the negative instances are not associated with the whole label space or just the candidate label set for the given bag? I think the authors should be careful with this definition. I don't think the whole label space is the correct description w.r.t the negative instances. 2. The so called margin-aware attention mechanism and the margin-compliant loss are not novel, which in my opinion have already been utilized in other well-defined issues. So the novelty cannot reach the bar of NeruIPS. 3. In addition, although the multi-instance partial label learning framework is relatively novel and interesting, there has already existed at least two pioneer works, which again decreases the novelty of this work. 4. As compared with existing methods, the proposed algorithm cannot achieve the dominant performance on all the datasets under all the settings. Technical Quality: 2 Clarity: 2 Questions for Authors: N/A Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive feedback. In the following, we address your comments and concerns. > Definition of negative instances. In line with previous MIPL work such as MIPLGP, DEMIPL, and ELIMIPL, we do not associate negative instances with the label space. For the MNIST-MIPL dataset, positive instances are drawn from the target classes $\\{0, 2, 4, 6, 8\\}$, while negative instances come from the reserved classes $\\{1, 3, 5, 7, 9\\}$. Positive instances in a multi-instance bag are sampled from target classes, and negative instances are sampled from reserved classes. Therefore, the true label of a multi-instance bag corresponds to the actual label, with candidate labels sampled from the remaining target classes. Let us consider two scenarios: 1. Negative instances not associated with the labels space: If a test multi-instance bag has positive instances from class $0$ and negative instances from $\\{1, 3, 5, 7, 9\\}$, correct classification requires the classifier to predict class $0$. 2. Negative instances associated with the label space: If a test multi-instance bag has positive instances from $\\{0, 2\\}$ and negative instances from $\\{1, 3, 5, 7, 9\\}$, predicting either class $0$ or class $2$ is not considered wrong. However, this conflicts with the multi-class classification principle, which assumes each sample belongs to a single class. Not associating negative instances with the label space is reasonable for MIPL applications. For example, For the CRC-MIPL dataset, positive instances are cell types like lymphocytes and colorectal adenocarcinoma epithelium, while negative instances refer to background information, such as non-cellular areas or areas without significant tissue. If background is included in the label space, the true label for each multi-instance bag should reflect both the background and the cell type class, conflicting with the multi-class classification principle that each sample should belong to a single class. > The margin-aware attention mechanism and margin-compliant loss are not novel. We recognize that the attention mechanism and margin-based loss have been used in MIL and PLL, respectively. However, our paper introduces a novel perspective by identifying margin violations in both the instance and label spaces with a dual margin adjustment strategy. Existing margin-based PLL methods, such as PL-SVM and M3PL, have two main drawbacks: they rely on iterative optimization, making integration with neural networks difficult, and they only maximize the mean margin of predicted probabilities. In contrast, our margin-compliant loss integrates seamlessly with neural networks and both maximizes the mean margin and minimizes the standard deviation of predicted probabilities. While the attention mechanism and the margin-based methods are not novel in MIL or PLL, their effective integration and enhancement within the MIPL framework represent a significant contribution. We will elaborate on these unique aspects and improvements in the revised manuscript. > Existing works decrease the novelty of this work. Existing works, such as MIPLGP, DEMIPL, and ELIMIPL, provide important prior contributions to the MIPL framework. These can be divided into two categories: (1) MIPLGP uses a Gaussian Process Regression model for fitting transformed MIPL data, but it is sensitive to negative instances and requires more computational resources; (2) DEMIPL and ELIMIPL leverage attention mechanisms to learn global feature representations, yet they encounter margin violations in both instance and label spaces. Our work distinguishes itself by being the first to identify and address margin violations in MIPL. We propose an effective dual margin adjustment strategy to counter these issues. Additionally, we introduce CRC-MIPL datasets with deep feature extractors, specifically the C-R34-16 and C-R34-25 datasets, which is also a novel contribution in MIPL. > The proposed algorithm does not achieve dominant performance on all the datasets under all the settings. In machine learning, the "No Free Lunch" theorem states that no algorithm can outperform all others across every task and data distribution. Table A4 in the Appendix presents the results of pairwise t-tests at a significance level of 0.05 between our proposed MIPLMA and the comparison algorithms. Out of $430$ comparisons, MIPLMA outperforms the comparison algorithms in $412$ cases and shows no significant difference in $15$ cases. Thus, while MIPLMA may not dominate in every scenario, it demonstrates substantial superiority in the majority of cases. --- Rebuttal Comment 1.1: Title: Looking Forward to Your Feedback Comment: Dear Reviewer BX8M, Thank you for your valuable feedback, which has significantly improved our paper. We have carefully addressed your concerns regarding the definition of negative instances and the novelty of our work. We have reviewed and discussed two related MIPL studies [A, B] with different negative instance setups, which will be discussed in the revised manuscript. Should any issues remain, we welcome further discussion. Best regards, The Authors [A] Wang et al. On learning latent models with multi-instance weak supervision. NeurIPS 2023. [B] Wang et al. On characterizing and mitigating imbalances in multi-instance partial label learning. arXiv:2407. --- Rebuttal Comment 1.2: Comment: Thanks for the author's rebuttal, and most of my concerns have been fixed. So I would like to raise my score. --- Reply to Comment 1.2.1: Comment: Thank you for your positive feedback!
Summary: In this paper, the learning scenario of multi-instance partial learning (MIPL) is studied. The paper proposes a new approach for MIPL, whose key technical idea is margin regularization. The paper points out that an important issue for previous MIPL approaches is the ignorance of margin information in both the outputs of attention modules and final predictors, leading to incorrect attention weights and prediction confidence. To address this issue, the paper proposes a novel margin loss for MIPL, whose effectiveness is verified under various MIPL benchmark datasets. Strengths: 1. In my view, the idea of margin regularization for MIPL is quite interesting, in special the regularization for the margins of attention weights. Even though attention weights are usually considered as a reflection of feature correlations, the accuracy of this correlation is difficult to quantify and receives less study. I think the margin measure is a good point for this quantification, and this idea can be indeed useful in the MIPL problem. 2. The proposed method revives the technique of margin distribution optimization. I like this idea due to its elegance in quantifying margin information. This also leads to the future possibility of theoretical studies for MIPL since the margin distribution framework is well theoretical grounded. 3. The experimental results are fruitful. The performance gain over baseline methods is significant. Weaknesses: 1. The discussion on the scheme of modifying the strength of margin regularization can be augmented. Technical Quality: 3 Clarity: 3 Questions for Authors: I suggest including more discussions on setting the parameter \lambda. In my view, this parameter could significantly affect the learning performance. In my understanding, this parameter can be kept small at the beginning and be gradually increased. The reason is discussed in the paper: at the beginning, the model is less accurate, making the margin less informative, and the thing turns different as learning process proceeds. I find that currently, \lambda is kept as a constant. So I would like to ask for more discussions on the interpretations for this choice. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and positive feedback on our paper. We are glad you found the idea of margin regularization and our results valuable. Below, we answer your questions. > More discussion on setting the parameter $\lambda$. The parameter could significantly affect learning performance and might be better if varied during training rather than kept constant. To address the concern about the parameter $\lambda$, we conducted further study using a dynamic adjustment strategy defined as $\lambda(t) = \min\\{\frac{t}{T^\prime} \lambda^\prime, \lambda^\prime\\}$, where $t$ denotes the current epoch, and $T^\prime$ and $\lambda^\prime$ control the rate of increase and the maximum value of $\lambda$, respectively. We name this model MIPLMA-$\lambda$. When $T^\prime=1$, MIPLMA-$\lambda$ is equivalent to MIPLMA. We conducted experiments on the MNIST-MIPL dataset with $T^\prime$ set to $\\{1,10,50,100\\}$, using the same $\lambda^\prime$ as in MIPLMA. Results show that dynamically adjusting $\lambda$ can sometimes improve classification accuracy compared to a constant $\lambda$; however, in other cases it may perform worse. Notably, when $r=3$, MIPLMA-$\lambda$ tends to enhance accuracy more significantly. We sincerely appreciate your suggestion for dynamic adjustment of $\lambda$. Designing effective adjustment strategies is indeed a valuable research direction. We will explore more refined dynamic adjustment methods for $\lambda$ in the future. | Algorithm | $r$ | $T^\prime=1$ | $T^\prime=10$ | $T^\prime=50$ | $T^\prime=90$ | | ---------------- | :--: | :----------: | :-----------: | :-----------: | :-----------: | | | $1$ | .985±.010 | .957±.056 | .986±.007 | .987±.009 | | MIPLMA-$\lambda$ | $2$ | .979±.014 | .970±.015 | .979±.011 | .977±.014 | | | $3$ | .749±.103 | .763±.089 | .753±.101 | .738±.105 | --- Rebuttal 2: Title: Looking Forward to Your Feedback Comment: Dear Reviewer zb3e, Thank you for your insightful feedback, which has greatly enhanced our paper. We have carefully addressed your concerns, including conducting experiments on the dynamic adjustment strategy for the parameter $\lambda$. If you have any further questions or concerns, we would be pleased to discuss them. Best regards, The Authors
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and valuable feedback on our submission. As we cannot resubmit the paper during the rebuttal period, we have attached a one-page PDF with additional experimental results for the new PLL algorithm POP on both benchmark and real-world datasets, results with varying temperature decay rates, and visualizations of attention scores and bag-level features. We will include these details in the revised manuscript. Please let us know if further details or explanations are needed. Pdf: /pdf/0cb3332fa92b631b2032206be7b01cb27cb3c81c.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper proposes MIPLMA, a new Multi-Instance Partial Label algorithm that focuses on margin adjustments in both instance and label space. While computing the bag level representations, it introduces a temperature parameter in the margin-aware attention mechanism to widen the gap between attention scores for positive and negative instances. It also employs a margin loss to maximise the margin between the highest predicted probability on the candidate label set and on the non-candidate label set. Strengths: - Introduces and adapts margin-based approaches from weakly supervised learning paradigms for multi-Instance PLL - The paper is well-written, and I was able to understand it clearly Weaknesses: - In many cases, either the number of classes is low or the number of false positives is low, which raises concerns regarding the generalization of the algorithm - Paper has not been compared with some of the recent PLL algorithms like PICO [A], etc. A discussion on why such comparisons were not done would be helpful. Maybe adaptation was difficult, or they focused on a single modality? - Contributions are limited, it is a direct adaptation of margin-based losses to the problem - The applications of this setup need to be more convincing to me. They do give one application on a Cancer detection dataset, however, I believe it could be achieved through an architecture capable of processing higher-resolution images. [A] Wang et al. PiCO: Contrastive Label Disambiguation for Partial Label Learning. ICLR 2022 Technical Quality: 3 Clarity: 3 Questions for Authors: - The creation of the synthetic datasets is unclear. While Table 1 presents details regarding the number of instances in a bag, the distribution proportion of true positives in a bag is not mentioned. This could have implications for the performance of the algorithm. - In section 3.1.2 (line 200), It is mentioned that “PLL algorithms can be equipped with either the linear model or multi-layer perceptrons (MLP) as backbone networks”. However, DNNs like ResNet can be used to learn feature representations. Ideally, the comparison should be made using the same feature extractors, the ones mentioned in lines 208 - 213. - I would willing to reconsider my ratings post the author rebuttal Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable comments aimed at improving our paper. Below, we provide a summary of your comments along with our corresponding responses. > Concerns regarding generalization due to low class numbers or false positives. We conducted additional experiments on the SIVAL-MIPL dataset with higher number of false-positive labels ($r \in \\{4,5,6,7,8,9\\}$). The results in the following table show that MIPLMA achieves superior performance in all cases, which demonstrate its robustness to various number of false positive labels. | Algorithm | $r=4$ | $r=5$ | $r=6$ | $r=7$ | $r=8$ | $r=9$ | | --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | | MIPLMA | .618±.012 | .584±.019 | .524±.029 | .510±.030 | .431±.026 | .371±.051 | | ELIMIPL | .559±.038 | .507±.022 | .439±.025 | .392±.031 | .343±.029 | .339±.030 | | DEMIPL | .470±.036 | .413±.020 | .352±.032 | .323±.029 | .266±.022 | .275±.021 | > Lack of comparison with recent PLL algorithms like PiCO. We did not compare our method with PiCO for two main reasons: - PiCO relies on data augmentation to learn diverse features of partially labeled images. However, the features of the Birdsong-MIPL, SIVAL-MIPL, and CRC-MIPL datasets are preprocessed tabular data. This makes it challenging to apply data augmentation-based PLL methods to these datasets. - The instances in the MNIST-MIPL, FMNIST-MIPL, C-R34-16, and C-R34-25 datasets are raw images, which allows data augmentation-based PLL methods to learn features for each instance. However, these methods cannot aggregate a bag of instances into a unified feature representation as they lack an attention mechanism or similar aggregation method. In our submission, we compared our method with five PLL methods published between 2020 and 2022. We have now included an additional comparison with a PLL method called POP (ICML 2023). As shown in `Table R1` and `R2` in the attached PDF, the main results indicate that while POP performs better than other PLL methods in most cases, it is inferior to our MIPLMA. We will include the above discussion and the results of POP in the revised manuscript. > A direct adaptation of margin-based losses. We acknowledge that margin-based losses have been studied in existing methods in PLL such as PL-SVM and M3PL. However, these methods have two drawbacks that make them unsuitable for the MIPL problem: they rely on iterative optimization strategies, making them difficult to integrate with neural networks; furthermore, they only maximize the mean margin of the predicted probabilities. Our proposed margin-compliant loss integrates naturally easily with neural networks and simultaneously maximizes the mean margin while minimizing the variance of predicted probabilities. To further illustrate this, we propose a variant of MIPLMA named MIPLML, which directly utilizes the margin loss $\mathcal{L}\_{\text{ml}}$ (Eq. 9). Specifically, the loss functions of MIPLMA and MIPLML are $\mathcal{L}=\mathcal{L}\_{\text{d}} + \lambda \mathcal{L}\_{\text{m}}$ and $\mathcal{L}=\mathcal{L}\_{\text{d}} + \lambda \mathcal{L}\_{\text{ml}}$, respectively. The results show MIPLMA's superiority, demonstrating that a direct adaptation of margin-based losses achieve worse performance. | Algorithm | C-Row | C-SBN | C-KMeans | C-SIFT | | --------- | :-------: | :-------: | :-------: | :-------: | | MIPLMA | .444±.010 | .526±.009 | .557±.010 | .553±.009 | | MIPLML | .424±.006 | .505±.007 | .544±.009 | .545±.009 | > Applications need to be more convincing. In medical diagnostics, whole slide images (WSI) are high-resolution microscopy images of stained tissue slides. These images are extremely large, often reaching gigapixel size. Neural networks cannot learn an entire WSI due to their architecture and GPU limitations. Typically, WSIs are divided into smaller, lower-resolution tiles (multi-instances) to enable neural networks to learn meaningful features. In the future, we plan to collect higher-resolution WSI datasets to extend the MIPL applications. > The creation of the synthetic datasets is unclear. For the MNIST-MIPL, FMNIST-MIPL, and Birdsong-MIPL datasets, we can control the distribution of true instances. The SIVAL-MIPL dataset, derived from the classical MIL dataset SIVAL, is created by adding false positive labels to each multi-instance bag. We calculate and report the proportions of positive instances in each bag, including their maximum, minimum, mean, and median values. This detailed distribution information will be included in the revised manuscript. | | MNIST-MIPL | FMNIST-MIPL | Birdsong-MIPL | SIVAL-MIPL | | ------- | :--------: | :---------: | :-----------: | :--------: | | Maximum | 9.1% | 9.1% | 10.0% | 90.62% | | Minimum | 7.0% | 7.0% | 6.9% | 3.12% | | Mean | 8.0% | 8.0% | 8.3% | 25.6% | | Median | 8.11% | 8.11% | 8.3% | 21.9% | > The comparison should be made using the same feature extractors. For the MNIST-MIPL and FMNIST-MIPL datasets, we use a two-layer CNN and a fully connected layer to extract instance-level features, which are then aggregated into a unified representation using the attention mechanism. The feature extractors for MIPLMA, ELIMIPL, and DEMIPL are consistent. However, the PLL methods cannot directly handle multi-instance bags or individual instances in MIPL because they lack of an aggregation mechanism to yield bag-level features and the candidate label sets are unknown during training. Therefore, it is not feasible to apply the same feature extractors across MIPL and PLLmethods. References: [B] Xu et al. Progressive purification for instance-dependent partial label learning. ICML 2023. --- Rebuttal 2: Title: Looking Forward to Your Feedback Comment: Dear Reviewer H7jM, Thank you for your thoughtful and detailed feedback, which has greatly improved our manuscript. We have carefully addressed your concerns by providing additional results on false positives in the SIVAL-MIPL dataset, comparing our method with the POP PLL method, and designing a variant of MIPLMA. We also included explanations about MIPL applications and the creation of synthetic datasets. Should any issues remain unresolved, we are happy to discuss them further. Best regards, The Authors --- Rebuttal Comment 2.1: Title: Post rebuttal response Comment: The author response addresses some of my concerns and I have updated my rating accordingly. However, the last point is still unclear. Line 200-201, are misleading, why limit to linear or MLP backbone. I see no difficulty in a using a resnet backbone for instance (for example LWS based algorithm do use ResNet backbone in the experiments). And even if the authors are using pre-trained backbones, the adaption and fine-tuning is certainly possible. --- Rebuttal 3: Title: Further Explanation on the Feature Extractor Comment: Thank you for your feedback and for updating your rating. We acknowledge that various PLL methods, such as POP [B], PRODEN [C], and LWS [D], use different backbones like linear models, MLP, and ResNet depending on the dataset. For example, PRODEN and LWS mainly use linear and MLP models for MNIST and FMNIST datasets, while ResNet and convNet are employed for CIFAR-10 dataset. In our experiments, LeNet and ResNet-34 performed worse than our two-layer CNN on the MNIST-MIPL and FMNIST-MIPL datasets. This is likely due to the relatively simple nature of the features in these datasets, where deeper networks like ResNet may not provide significant advantages in weakly supervised scenarios. Additionally, the Birdsong-MIPL and SIVAL-MIPL datasets are preprocessed tabular data, which makes ResNet unsuitable for these datasets. For the C-R34-16 and C-R34-25 datasets, we used ResNet-34 as the feature extractor for both MIPLMA and the compared PLL algorithms. The table below presents the results of these algorithms, where Mean and MaxMin represent the aggregation strategies used to transform instance-level features into bag-level features. The results indicate that MIPLMA outperforms the compared PLL algorithms, even with the same feature extractor. We will include more detailed experimental results in the revised manuscript. Additionally, we plan to collect higher-resolution WSI datasets and the use of pre-trained backbones, including large language models, for better feature learning. | Algorithm | C-R34-16 | C-R34-25 | | --------------- | :-----------: | :-----------: | | MIPLMA | **.631±.008** | **.685±.011** | | LWS (Mean) | .593±.018 | .614±.021 | | LWS (MaxMin) | .462±.014 | .460±.016 | | PRODEN (Mean) | .537±.015 | .585±.019 | | PRODEN (MaxMin) | .434±.010 | .444±.012 | | POP (Mean) | .549±.014 | .591±.014 | | POP (MaxMin) | .450±.009 | .462±.015 | References: [B] Xu et al. Progressive purification for instance-dependent partial label learning. ICML 2023. [C] Lv et al. Progressive identification of true labels for partial-label learning. ICML 2020. [D] Wen et al. Leveraged weighted loss for partial label learning. ICML 2021.
null
null
null
null
null
null
Safe and Sparse Newton Method for Entropic-Regularized Optimal Transport
Accept (poster)
Summary: This paper proposed a new Newton-type algorithm for entropic-regularized optimal transport (OT) which utlizes sparsification and safeguard techniques and achieves global convergence and local quadratic convergence for the entropic-OT problem. Numerical experiments are provided to verify the effectiveness of the proposed method. Strengths: The paper is well-rounded and well-presented. The paper addresses the computational issue caused by a dense Hessian by sparsification, and addresses the singularity issue by a safeguard mechanism. Both the local and global convergences are provided. Numerical experiments are provided to show the effectiveness of the proposed method. Weaknesses: (Please answer the Questions section directly) The safeguard mechanism in this paper is actually a standard trust region technique; The numerical experiment settings could be improved. Technical Quality: 3 Clarity: 3 Questions for Authors: In general, I think this is a technically solid paper. The authors identify the issue of applying second order methods to entropic-OT and use sparsification and safeguard to make it applicable. I didn't check the detail of the proof, but believe it to be correct since these all follow the standard techniques. I'd recommend this work to be accepted. I have the following comments and questions: 1. Regarding Theorem 2 which studies the positive-definiteness of the sparsified matrix $H_{\delta}$: what is the largest and smallest eignvalues if we don't conduct Algorithm 1, meaning we use the original Hessian in (5) directly? The author should add some discussion on this to consolidate their point of "safely used to compute the Newton search directions". 2. When the authors talk about "safe", does it mean Theorem 2 or the trust region type update on line 7-13 in Algorithm 2, or both? 3. In the experiments for MNIST and ImageNet, the authors should provide the dimension information corresponds to the theory part, such as what $n$ and $m$ are. 4. In all images, Newton method seems to be very slow. Is this all due to the costly process of solving the dense linear system? 5. What is $\tilde{T}$ in (5)? I feel that it's not clearly defined. Is it corresponds to $\tilde{\beta}$ which is $T$ having one row/column removed? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors are clear about their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weaknesses** Thanks for the comments. First of all, we shall clarify that SSNS is directly inspired by the regularized Newton method [R6] and the Levenberg–Marquardt algorithm [R7,R8], which have close relations to the trust-region methods but with some subtle differences. Specifically, standard trust-region methods first set a trust-region radius $\Delta_k$ in each iteration, and then (approximately) solve a subproblem to determine the search direction. In SSNS, we update the shift parameter $\lambda_k$ instead, and the search direction has a closed form. The formula of search directions in trust-region methods has a similar form, but its shift $\lambda_k$ is implicitly determined by $\Delta_k$, and typically requires some root-finding techniques. Also, SSNS allows for a step size selection procedure as in Algorithm 3, thus enhancing the flexibility. Also per the suggestions of reviewers, we have improved the numerical experiments as explained in the global rebuttal. ### **Questions** 1. Recall that in Theorem 2 we allow $\delta=0$, which corresponds to the case that we use the genuine Hessian matrix. In this setting, Algorithm 2 reduces to a conventional regularized Newton method, but its per-iteration cost would be huge. Theorem 2 is an improvement to existing sparsifed Newton methods such as [R3], since previous works do not guarantee that the sparsified Hessian is invertible (but note that the true Hessian matrix is always invertible by Theorem 2). 2. In our original design of the algorithm, "safe" mainly refers to the positive definiteness of the sparsified Hessian matrix. But from the experiments, we also find that the shift parameter (which has a close relation to trust-region techniques) is very useful, as the classical Newton method fails in some examples due to numerical instability. 3. Thanks for the suggestion. We have added such information in our local revision. Basically it is $n=m=784$ for (Fashion-)MNIST and $n\approx m\approx 1000$ for ImageNet data. 4. Yes. This can be inferred from the fact that the number of iterations of the Newton method is very small, but its runtime is large. The huge per-iteration cost is the consequence of a dense linear system. 5. Yes. We have defined this notation in the beginning of Section 2, meaning removing the last column of $A$. [R3] Tang, X., Shavlovsky, M., Rahmanian, H., Tardini, E., Thekumparampil, K. K., Xiao, T., & Ying, L. (2024). Accelerating Sinkhorn algorithm with sparse Newton iterations. [R6] Li, D. H., Fukushima, M., Qi, L., & Yamashita, N. (2004). Regularized Newton methods for convex minimization problems with singular solutions. [R7] Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. [R8] Marquardt, D. W. (1963). An algorithm for least-squares estimation of nonlinear parameters. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I decide to keep my score as I think this is a good work and the authors do clarify most of my concerns. --- Reply to Comment 1.1.1: Comment: Thanks for the feedback!
Summary: This paper proposes to solve the entropy-regularized OT problem by proposing a customized Newton method. More specifically, the authors propose a new way to sparsify the Hessian matrix as well as adaptive mechanisms to adjust the hyper-parameters in each iteration. Experimental results are also encouraging. Strengths: 1. The new sparsification method is novel and different from [31]. I like the fact that you can prove that this method positive definite matrices. 2. The self-adaptive mechanism to adjust Newton method hyper-parameters look interesting. I am not 100% positive (considering that there are already line search methods to guarantee convergence of Newton methods), but this part seems novel to me. The authors only cite very old papers [16, 17, 21] as motivations. Weaknesses: 1. The major part is the scalability of the proposed method. Newton-variants methods are second-order methods. The largest scale the authors have tried is on the transformed features of ImageNet. The feature dimension is only 30. If we use the Sinkhorn algorithm, I believe we can handle much large scales. 2. The experimental results are only sample results based on very few samples from the dataset. I am concerned that the results could be cherry-picked. It would be much more comprehensive if the authors can present training efficiency improvement on the overall dataset. 3. There is no experimental comparison with SNS [31] even though this work takes a step further on top of SNS. Looking at the SNS paper, I believe it is very easy to implement. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the computational complexity of Algorithm 1 Sparsification? 2. What is the computational complexity of each iteration in Algorithm 2? 3. Can you compare with the SNS method and report results in tables during rebuttal? 4. The SNS paper uses the metric Log of Optimality Gap while you use the metric Log of Marginal Errors? What is the difference? Can you report your results in terms of Log of Optimality Gap as well during rebuttal? 5. Can you also replicate the Table 3 and Table 4 in the SNS paper? Basically conduct perturbation study on the entropy regularization parameters. 6. For the imageNet dataset, how well does your proposed method perform if you change the final feature dimension from 30 to 60 and 90, respectively? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the comments. Below are our point-by-point responses. ### **Weaknesses** 1. The dimension of the features does not impact the scale the problem. The extracted features are only used to compute the cost matrix, which has a size of $n\times m$. We do not make the feature dimension $d$ too large mainly because it is commonly accepted that the Euclidean distance suffers from the curse of dimensionality. In the global rebuttal, we have added experiments to study the impact of feature dimension and the scalability of algorithms on very large problems. 2. As we have mentioned in the global rebuttal and other threads, to enhance the reproducibility, we do not pick the pairs of images. Instead, the randomly selected image IDs are taken from the prior literature [R1] that studies quadratically regularized OT, and we follow their experiment setting to study entropic-regularized OT. We have included different examples in Figures 4 and 5 in Appendix A.3. Also, to investigate the scalability of algorithms on very large problems, we have added experiments to test the performance for $n=m=1000,5000,10000$. 3. We agree that the overall framework of SNS [R3] is easy to understand, but the major challenges for directly comparing with SNS are: (a) there is no publicly available code of SNS to reproduce the experiments; (b) in [R3], the algorithm has many hyperparameters, and we have no clear scheme to set those hyperparameters. For example, how many Sinkhorn iterations to run before switching to the Newton step, how large the sparsification parameter should be, what the fallback strategy should be when the linear system is not invertible, etc. In fact, one of the major motivations of this article is to make the SNS framework more adaptive and practical, and we position our SSNS algorithm as a concrete and practical variant of the original SNS method. ### **Questions** 1. A trivial upper bound of the computational complexity of Algorithm 1 is $O(n^2\log(n))$, assuming $n=m$. This is done by sorting the values of each column or row of $T$. However, if we have an estimated upper bound $\varrho$ of the density of the sparsified $T$, meaning that each column or row of $T$ has at most $\varrho n$ nonzero elements after sparsification, then the cost can be reduced to $O(n^2+\varrho n^2\log(\varrho n))$. This is done by selecting the largest $\varrho n$ elements in each column or row of $T$, and only sort values within these $\varrho n$ elements. In a typical setting, $\varrho$ is very small due to the approximate sparsity of $T$. 2. Each iteration of Algorithm 2 has a computational cost of $O(n^2)$ to compute the gradient and objective function value, plus the cost in Algorithm 1, plus the cost in solving the sparse linear system. When $T$ is very sparse, the number of nonzero elements of $T$ may be as small as $O(n)$, so the overall computation is dominated by the first and second parts, *i.e.*, $O(n^2)$. 3. As we have explained above, there are certain practical difficulties in directly comparing with SNS. But we have tried our best to provide more experiment results similar to those in the SNS paper, as introduced in the points below. 4. A typical definition of the optimality gap is the value $f(x_k)-f(x^*)$, where $f(\cdot)$ is the objective function, and $x^*$ is the optimal point. The marginal error in entropic-regularized OT coincides with the gradient norm $\Vert g(x_k) \Vert$. We do not use the optimality gap because the ground-truth optimal point $x^*$ is in general unknown, and needs to be approximated using existing solvers. But this process also introduces rounding errors, and may be very sensitive when $x_k$ is close to convergence. Instead, $\Vert g(x_k) \Vert$ is easy to compute, and has an absolute lower bound of zero. Overall, we think that the marginal error is a more objective and easy-to-compute criterion to evaluate convergence. 5. Yes, we have added such experiments in the global rebuttal. 6. Per the suggestion, we have added such experiments in the global rebuttal (Figure R2). [R1] Pasechnyuk, D. A., Persiianov, M., Dvurechensky, P., & Gasnikov, A. (2023). Algorithms for Euclidean-regularised optimal transport. [R3] Tang, X., Shavlovsky, M., Rahmanian, H., Tardini, E., Thekumparampil, K. K., Xiao, T., & Ying, L. (2024). Accelerating Sinkhorn algorithm with sparse newton iterations. --- Rebuttal Comment 1.1: Comment: Thank you very much for your reply. Most of my questions have answered, except the requested comparison with [31], which is the following work: [31] Tang, X., Shavlovsky, M., Rahmanian, H., Tardini, E., Thekumparampil, K. K., Xiao, T., & Ying, L. (2024). Accelerating Sinkhorn algorithm with sparse newton iterations. This might sound a little bit too harsh, but I have some leftover question on whether there are any author overlaps between this submission and [31]. If there is an author overlap, then you defintely have access to the code base in [31]. Some research groups do this so that they can publish another paper with impressive results, without having to beat against their previous method. I apologize if this sounds too mean. However, I wouldn't know this because of NeurIPS's double blind policy. I will give you the benefit of doubt and trust you that there is no author overlap and you do not have access to the codebase of [31]. I do want to leave this comment here, just in case this possibility happens. As for now, I will maintain my score since it is already above the acceptance threshold. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We really appreciate your comments and totally understand your concern. To avoid violating the double blind policy, we have communicated with the Area Chairs to handle this issue. What we can comment here is that we have tried our best to enhance the reproducibility and integrity of this work, and the comparison with [31] is subject to some practical difficulties. Given this situation, we have added similar experiments in [31] per your suggestions, which are contained in the global rebuttal. Thanks for your understanding.
Summary: This paper proposes a Newton-based algorithm to solve the entropic optimal transport problem on the basis of samples. The approach hinges on a "sparsification" scheme for the Hessian (which is explained in Algorithm 1) that retains many favorable properties such as an approximation error due to the sparsification (Theorem 1) and positive definite-ness (Theorem 2). The conclude with numerical experiments on image-data, where they demonstrate that their algorithm significantly outperforms other algorithms for EOT on the examples they considered. Strengths: The paper is well-written (with very few typos), with the overall story and methodology quite clear. Weaknesses: The experiments are mildly underwhelming, and also a bit inconsistent. In the first set of experiments, the authors are running their new (E)OT algorithm between two images, where the pixel frequency denotes the weights for the EOT problem. In the second set of experiments, where they perform (E)OT between two classes of images, there they flatten the images, performing some pre-processing so they lie in d=30, and use $n\simeq m \simeq 1000$ samples to constitute the weight vectors ($1/n\bm{1}$ and $1/m\bm{1}$). These are relatively small-scale in the number of samples $n$, and I am inclined to believe that it does not scale well when $n=10000$ or more, which the block-coordinate-descent approach (i.e., the vanilla Sinkhorn algorithm) does allow for. I could be wrong, but this is the more realistic/modern use-case for EOT, which is not covered in the experiments. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. - Following the above, is it possible to provide a runtime in the style of Sinkhorn given a prescribed tolerance level? - Can this approach be adapted for unbalanced entropic OT? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weaknesses** Thanks for the comments. As we have explained in the global rebuttal and other threads, the experiments are intentionally designed to reflect two typical uses of OT, one for image morphing and interpolation, and the other for computing statistical distances. In our local revision, we have added a section in the appendix to explain the motivation of these experiments. As for the scalability, we have added a new set of experiments at the scales of $n=m=1000,5000,10000$, as explained in the global rebuttal. The results show that SSNS is capable of solving entropic-regularized OT on very large problems, and is more efficient than BCD in terms of run time and number of iterations. ### **Questions** 1. We suppose the "runtime" here refers to the theoretical computational complexity based on the global convergence speed. First of all, to the best of our knowledge, most convergence rate results for Newton-type methods are local, and there is few on the global convergence rate. However, there is a simple "ensemble" method to safeguard the global convergence speed. Note that our Algorithm 2 never increases the objective function value in any iteration, so we can append a Sinkhorn iteration after every $N$ Newton-type steps, where $N$ is a fixed integer. In this way, we obtain at least a Sinkhorn-like convergence speed, but are likely to make larger progresses during the Newton steps. This idea is also mentioned in Section 3.2, page 41 of [R4]. 2. Following the formulation in [R5], entropic-regularized unbalanced OT also has a smooth dual objective function (equation (4) of [R5]). Its difference with the balanced version is that the linear term in equation (4) of our article, $$-\alpha^T a-\beta^T b=-\sum_{i=1}^n \alpha_i a_i-\sum_{j=1}^m \beta_j b_j,$$ is replaced by sums of exponentials, $$\tau \sum_{i=1}^n e^{-\alpha_i/\tau} a_i+\tau \sum_{j=1}^m e^{-\beta_j/\tau} b_j.$$ Clearly, this means that only the diagonal elements of the Hessian matrix would be different from the setting in our article, and we think that the proposed sparsification scheme and the Newton-type algorithm still apply. [R4] Nocedal, J., & Wright, S. J. (2006). Numerical optimization. [R5] Pham, K., Le, K., Ho, N., Pham, T., & Bui, H. (2020). On unbalanced optimal transport: An analysis of Sinkhorn algorithm. --- Rebuttal Comment 1.1: Comment: Thanks for the comments. I will still keep my score as-is.
Summary: The paper proposes a Newton method to solve the entropic-regularized optimal transform (OT) problem. The method includes a novel strategy for the sparse approximation of the Hessian to reduce computational complexity compared to the classical Newton method and applying a diagonal shift on the sparse Hessian to avoid singularity. The sparse approximation of the Hessian is a simple procedure based on removing the small off-diagonal block elements; the authors have theoretically proven that the approximation error is bounded and that the sparse approximate is always positive definite. The authors have demonstrated the efficacy of their method on two OT problems. Strengths: 1. The paper is very well-written, organized and clear. I congratulate the authors for writing a paper that is this easy to read while they make significant theoretical contributions. 2. The proposed method is practical with only few tuning parameters. 3. The theoretical guarantees of the proposed method are impressive and this is important in eliminating the limitations of prior art. 4. The numerical experiments clearly demonstrate the advantages of the proposed method in the error performance and computational complexity. Weaknesses: I have not found any major weaknesses. Minor comments/typos: 1. The authors have not explained how they chose the pair of images for their first experiment. Is it random? Does the algorithm have similar performance for other images? 2. The authors have usen $K$ for both the number of small elements and the number of iterations. Please use a different symbol for either one to avoid confusion. 3. Line 67: “proved” should be “proven.” 4. Line 110: The full name of “BFGS” is missing. 5. What the authors mean by “safe” only becomes clear at line 169. I would suggest making this clarification earlier in the paper. Maybe, Contribution #2 could be rephrased so that is clear that “safe” refers to avoiding singularity. Technical Quality: 4 Clarity: 4 Questions for Authors: I find the first experiment setting a bit confusing. Could the authors give the motivation behind this experiment? Why are the pixel values used as probabilities? Please see Weaknesses for some other questions on this experiment. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have mentioned the limitation of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments. Below are our point-by-point responses for the questions. ### **Weaknesses** 1. To enhance the reproducibility, we do not pick the pairs of images. Instead, the randomly selected image IDs are taken from the prior literature [R1] that studies quadratically regularized OT, and we follow their experiment setting to study entropic-regularized OT. In fact, we have already included other image pairs in Figures 4 and 5 in Appendix A.3, and the IDs of image pairs can be seen from the titles of images. In the rebuttal PDF we have added more test cases in Figure R1. 2-4. Thanks for the suggestions. We have fixed these issues in our local revision. 5. Thanks for the suggestion. We call our proposed algorithm a safe and sparse Newton method, in the sense that the linear systems for computing the search directions are always positive definite. This property addresses the invertibility issues of existing sparsified Newton methods, and is crucial for practical implementation. In our local manuscript, we have added explanations in the introduction, above the contribution section. ### **Questions** We are trying to demonstrate two typical uses of OT. In the first experiment, images are vectorized as density vectors, and OT is a used as a tool for image morphing and interpolation [R2]. The experiment setting is already used in previous literature such as [R1] and [R3]. The second experiment uses OT as a statistical distance to measure the difference between two distributions. In this setting, each image is one observation of a distribution, and we use OT to compute the (approximate) Wasserstein distance between two classes of images. In our local revision, we have added a section in the appendix to explain the motivation of these experiments. [R1] Pasechnyuk, D. A., Persiianov, M., Dvurechensky, P., & Gasnikov, A. (2023). Algorithms for Euclidean-regularised optimal transport. [R2] Papadakis, N. (2015). Optimal transport for image processing. [R3] Tang, X., Shavlovsky, M., Rahmanian, H., Tardini, E., Thekumparampil, K. K., Xiao, T., & Ying, L. (2024). Accelerating Sinkhorn algorithm with sparse newton iterations. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications!
Rebuttal 1: Rebuttal: # To All Reviewers Thank you all reviewers for the encouraging and insightful comments. We appreciate the time and effort the reviewers have dedicated to providing valuable feedback on our manuscript. In this round, we have made every effort to address the comments of the reviewers. **The point-by-point responses are provided in the reply**. Additionally, we would like to take this opportunity to make some global clarifications as well as improvements we have made during this period. ### **Explaining what "safe" means** We call our proposed algorithm a safe and sparse Newton method, in the sense that the linear systems for computing the search directions are always positive definite. This property addresses the invertibility issues of existing sparsified Newton methods, and is crucial for practical implementation. ### **Motivations on the experiment setting** We are trying to demonstrate two typical uses of OT when designing the numerical experiments. In the first experiment, images are vectorized as density vectors, and OT is a used as a tool for image morphing and interpolation [R2]. The experiment setting is already used in previous literature such as [R1] and [R3]. The second experiment uses OT as a statistical distance to measure the difference between two distributions. In this setting, each image is one observation of a distribution, and we use OT to compute the (approximate) Wasserstein distance between two classes of images. ### **Improvements on experiments** Per the suggestions of reviewers, we have improved the numerical experiments on the following aspects: 1. We make it clear that to enhance **the reproducibility**, we do not pick the pairs of images in the numerical experiments. Instead, the randomly selected image IDs are taken from the prior literature [R1] that studies quadratically regularized OT, and we follow their experiment setting to study entropic-regularized OT. In the article, we have included other image pairs in Figures 4 and 5, and in the rebuttal PDF we have added more test cases in Figure R1. 2. We have studied **the impact of feature dimension** $d$ in the ImageNet experiment (Figure R2, rebuttal PDF). The plots imply that the convergence property of SSNS is robust to the feature dimension of input images. 3. We have studied **the impact of regularization parameters** on the performance of optimization algorithms. The table below shows the performance comparison between BCD and SSNS under different regularization parameters for the ImageNet experiment in Section 5. The convergence tolerance is set to $\varepsilon_{tol}=10^{-8}$, and the cost matrix is based on the $\ell_{1}$-distance. |$\log_{10}(\eta)$|BCD Time (s)|BCD Iterations|SSNS Time (s)|SSNS Iterations| |-|-|-|-|-| |-2 | 1.628| 217|1.523|13 | |-2.25|>3.765|>500|0.960|20 | |-2.5 |>3.765|>500|0.461|30 | |-2.75|>3.766|>500|0.383|57 | |-3 |>3.767|>500|0.771|120| The table below shows the case using cost matrices based on the squared Euclidean distances. |$\log_{10}(\eta)$|BCD Time (s)|BCD Iterations|SSNS Time (s)|SSNS Iterations| |-|-|-|-|-| |-2 |0.438 |59 |2.235|11| |-2.25|0.853 |114 |1.066|15| |-2.5 |3.327 |443 |0.997|23| |-2.75|>3.773|>500|0.529|35| |-3 |>3.773|>500|0.458|68| The results show that BCD is very sensitive to the value of $\eta$. When $\eta$ is large, BCD may demonstrate some computational advantages, but when $\eta$ is small, BCD typically fails to meet the error tolerance within 500 iterations. The pattern of SSNS shows some interesting points: when $\eta$ becomes smaller, the number of iterations also increases, but the overall runtime of SSNS may even decrease. This is because smaller $\eta$ values typically result in more sparse Hessian approximations, thus leading to faster sparse linear system solving. These findings are consistent with our explanations in Section 5. 4. We have investigated **the scalability** of SSNS on very large OT problems. We consider a synthetic OT problem that can generate data with arbitrary dimensions. The basic setting is to approximate the OT between two continuous distributions: the source is an exponential distribution with mean one, and the target is a normal mixture distribution $0.2\cdot N(1,0.2)+0.8\cdot N(3,0.5)$. We discretize the problem in the following way: let $x_i=5(i-1)/(n-1)$, $i=1,\ldots,n$, and $y_j=5(j-1)/(m-1)$, $j=1,\ldots,m$, which are equally-spaced points on [0, 5]. Define the cost matrix as $M_{ij}=(x_i-y_j)^{2}$. Let $f_1$ and $f_2$ be the density functions of the source and target distributions, respectively. Then we set $\tilde{a}_i=f_1(x_i)$, $\tilde{b}_j=f_2(y_j)$, $a\_i=\tilde{a}\_i/\left(\sum\_{k=1}^n\tilde{a}\_k\right)$, and $b\_j=\tilde{b}\_j/\left(\sum\_{k=1}^m\tilde{b}\_k\right)$. Similar to the experiment setting in Section 5, we normalize the cost matrix and set $\eta=0.001$. We then solve the entropic-regularized OT problem using BCD and SSNS at the scales of $n=m=1000,5000$, and 10000. The results are visualized in Figure R3 in the rebuttal PDF, whose pattern is clear: BCD demonstrates a linear-like convergence rate, and SSNS has a fast convergence speed consistent with the theoretical quadratic rate. Thanks to the Hessian sparsification, SSNS does not suffer from a high per-iteration cost, so overall it provides an efficient solver for entropic-regularized OT even on very large problems. [R1] Pasechnyuk, D. A., Persiianov, M., Dvurechensky, P., & Gasnikov, A. (2023). Algorithms for Euclidean-regularised optimal transport. [R2] Papadakis, N. (2015). Optimal transport for image processing. [R3] Tang, X., Shavlovsky, M., Rahmanian, H., Tardini, E., Thekumparampil, K. K., Xiao, T., & Ying, L. (2024). Accelerating Sinkhorn algorithm with sparse newton iterations. Pdf: /pdf/7f003ad8beb89da4ccd861f40daf7e50d03ddb8a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
When does perceptual alignment benefit vision representations?
Accept (poster)
Summary: The paper investigates the benefits of aligning vision model representations with human perceptual judgments to improve their performance across various computer vision tasks. The study fine-tunes state-of-the-art models using a dataset of human similarity judgments and demonstrates that this alignment enhances performance in tasks like semantic segmentation, depth estimation, and instance retrieval. The results suggest that integrating human perceptual knowledge as an inductive bias can improve vision model representations without significantly sacrificing performance on other tasks, including specialized domains like medical imaging and 3D environments. Strengths: 1. The experiment evaluation is comprehensive. The paper evaluates the impact of human perceptual alignment on a wide range of computer vision tasks, including semantic segmentation, depth estimation, instance retrieval, and counting. The paper also use multiple state-of-the-art models (e.g., CLIP, DINO, DINOv2, SynCLR) and a detailed experimental setup. 2. The idea of introduces alignment for vision models with human perceptual judgments is reasonable and innovative. 3. The analysis is insightful. The paper provides a nuanced discussion on the benefits and limitations of human perceptual alignment, offering insights into how different levels of perceptual judgments (low-level, mid-level, high-level) impact model performance on various tasks. Weaknesses: Writing Part: 1. There is a question mark in a citation on line 95. 2. The order of the references is shuffled; reference pages usually come before the supplementary files. 3. The paper lacks a conclusion section. Suggestions for Experiments: 1. More experiments on advanced vision-language models (VLMs) are expected, such as Llava, MiniGPT-4, and instructBLIP. 2. While not necessarily a weakness, it would be interesting to see the performance improvement on CLIP-blind image pairs, as introduced in the paper "Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs." Technical Quality: 2 Clarity: 2 Questions for Authors: How does the patch-level objective differ from the image-level objective? Does it show different performance on various downstream tasks? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Introduced in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments. We are glad that the reviewer found our evaluation to be comprehensive, the paper idea innovative, and the analysis insightful. We address questions and concerns below. **Writing comments** We thank the reviewer for their helpful notes and will fix the citation typos in revision. We intend for the “Discussion” section at the end of the paper to serve as the conclusion, and can revise it to clarify this – and expand upon our conclusions – in revision. **Further experiments on VLMs** We thank the reviewer for the suggestion. In preliminary experiments, we found that LLaVA, MiniGPT-4, and instructBLIP were not well suited to in-context prompting (potentially due to their instruction tuning and optimization for other tasks). However, we replicated our RAG experiment on IDEFICS2, a recently released 8B multimodal model achieving state-of-the-art results across several multimodal benchmarks [2]. Our results are shown in Fig. 1 of the PDF attached to our global response. We find that across the same four datasets evaluated in Section 4.2 of the paper, performance consistently improves when using NIGHTS-tuned models in the RAG pipeline. The exceptions are DINOv2 with the Diabetic Retinopathy dataset and DINO/DINOv2 for the SVHN dataset. Overall, these results support and validate our results on OpenFlamingo; we will add them to our revision. **CLIP-Blind image pairs** We thank the reviewer for suggesting the MMVP-VLM benchmark [2]. Unfortunately, evaluating on this benchmark requires computing the similarity of text-image pairs. We only fine-tuned the CLIP vision encoder; thus it no longer shares the same feature space as the text encoder, making it difficult to fairly run a multimodal evaluation and interpret any positive or negative results. **Clarification on patch-level objective** We provide further details and intuition regarding patch-level training in section 2 of the global response. Below we address specific questions: *How does the patch-level objective differ from the image-level objective? Does it show different performance on various downstream tasks?* Finetuned patch tokens were necessary to evaluate NIGHTS fine-tuning on dense tasks (depth estimation, segmentation). In preliminary experiments, we found that the image-level objective did not induce any significant changes in the patch tokens. The patch-level objective is functionally the same as the image-level objective, the only difference being that it propagates the similarity annotations more directly to the patch tokens. Note that only the fine-tuned patch tokens are needed for segmentation/depth-estimation, and only the CLS token for image-level tasks (all others). [1] Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh. What matters when building vision-language models? arXiv preprint arXiv:2405.02246, 2024b. [2] S. Tong, Z. Liu, Y. Zhai, Y. Ma, Y. LeCun, and S. Xie, “Eyes wide shut? exploring the visual shortcomings of multimodal llms,” CoRR, vol. abs/2401.06209, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I will maintain my score at this stage. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer VQay, We appreciate your positive feedback and are very grateful for your suggestions and questions that have made our paper stronger. Thank you once again for dedicating your time and effort to reviewing our work and providing us with insightful suggestions!
Summary: This paper investigates the effects of aligning pretrained vision models to human judgments. Image-level and patch-level learning objectives are proposed for fine-tuning the pretrained models. The experimental results demonstrate that fine-tuning pretrained models (such as CLIP, DINO, and DINOv2) on the NIGHTS dataset leads to performance improvements in various downstream tasks, including semantic segmentation, depth estimation, retrieval-augmented generation (RAG), counting, and instance retrieval. Ablation experiment regarding the fine-tuning dataset is conducted to demonstrate the perceptual qualities embedded in the NIGHTS dataset. Strengths: 1. The proposed perceptual alignment process is simple yet effective, leading to performance improvements across various downstream tasks. 2. This paper discussed the effects of aligning vision models using datasets with different levels of variations, providing emprical evidence for further research. Weaknesses: 1. This paper demonstrates the effects of perceptual alignment by showing the performance improvement in downstream tasks. However, the presented evidence cannot fully explain "how" perceptual alignment affects model features as general-purpose representations. Specifically, it is unclear what the difference is between the features before and after perceptual alignment, and why such differences lead to performance improvements. 2. This paper investigates the effects of aligning vision models using dataset with mid-level variations leads to a better “general-purpose representation”. However, the reasons why using datasets with such mid-level variations (instead of datasets with high or low levels of variations) for alignment results in better representations remain unclear. 3. Experiments in this paper are conducted using ViT based models. However, the effects of perceptual alignment for CNN based models are not explored. 4. This paper compares the effects of aligning vision models using datasets with low, mid, and high levels of variations on counting and instance retrieval tasks. However, to better demonstrate the high perceptual qualities embedded in the NIGHTS dataset, comparisons should be conducted in a wider range of downstream tasks. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors discuss some of the limitations of their work in Section 5. But I would like them to consider some of my concerns above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments. We address questions and concerns below. **How does perceptual alignment affect model features?** We emphasize that this paper aims to answer this question in terms of the competency of representations. There is a rich precedence of understanding and comparing representations via their competencies at downstream tasks [1-2]. Evaluating general-purpose features on transfer tasks – particularly with simple methods such as KNNs and linear probes – allows us to quantify what the features capture and make decodable. We further flesh out our empirical evaluation in the global response, section 1.1, where we additionally compare different levels and types of perceptual alignment across segmentation, depth estimation, and classification. We hope that our additional evaluations provide insight into which datasets and tasks are represented in a useful way by finetuned models. We also provide further discussion of the learned feature space in section 3. We finally note that while we probe representations in terms of competency, understanding them in terms of their mechanism is an important direction for future work. **Why is mid-level alignment better than other levels?** We address this question directly in our global response, section 3. Please let us know if further detail is needed, or there are follow-ups. **Perceptual alignment for CNNs** We thank the reviewer for their suggestion. We assess the effects of perceptual alignment with NIGHTS on two popular CNNs: ResNet50 and ConvNeXt-B, using the same loss described in the paper*. We evaluate on counting and instance retrieval tasks; our results are shown below: **Counting (Clevr-Count):** | Model | RMSE | MAE | |---------|-------|-------| | ConvNeXt| 2.045| 1.522| | ConvNeXt-HA|**1.631**| 1.193| | ResNet | 3.140| 2.551| | ResNet-HA | **1.729**| **1.282**| **Instance Retrieval (DF2)** **: | Model | Top-1 | Top-3 | Top-5| |---------|-------|-------|-------| | ConvNeXt| 2.12| 3.56| 4.59 | | ConvNeXt-HA|**2.81**|**4.8**| **5.98** | ResNet | 0.018 | **0.12** | 0.14 | | ResNet-HA | 0.018 | 0.074 | **0.17**| *Due to time constraints we were unable to implement LoRA tuning for the CNNs, and instead trained MLPs on top of the final-layer improvements. Previous work [3] indicated training on NIGHTS using MLPs can steer alignment in the right direction but may be less effective; nonetheless we still see downstream improvements using this method. **We report results for both ConvNeXt and ResNet, however acknowledge that the accuracy numbers for ResNet are likely too small to draw conclusions from. **Comparisons against other perceptual datasets.** We thank the reviewer for their suggestion. In addition to our comparison on counting and instance retrieval, we further evaluate models trained on BAPPS, THINGS, and ImageNet triplets on a wider range of downstream tasks: segmentation, depth estimation, and a and a selection of natural/specialized/structured datasets from VTAB. Our full results can be found in the global response, section 1.1, and we will include these in our revision. [1] Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, and Phillip Isola. Learning vision from models rivals learning vision from data. ArXiv, abs/2312.17742, 2023. [2] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICLR, 2021 [3] Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard Zhang, Tali Dekel, and Phillip Isola. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. In NeurIPS, 2023 --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement. Thanks --- Rebuttal 2: Comment: Thanks for your feedback. The hypotheses outlined in Section 3 are interesting and have partly addressed my concerns regarding "why mid-level alignment can lead to better general-purpose representations". I will raise my score accordingly. --- Rebuttal Comment 2.1: Title: Thank you Comment: Dear Reviewer rQrx, We appreciate your positive feedback and are truly delighted to see that our response has addressed your concerns and questions. Thank you once again for dedicating your time and effort to reviewing our work and providing us with insightful suggestions!
Summary: This paper aligns representations with human perception on mid-level semantics to improve performance on various downstream tasks. Specifically, it does this by pre-training models on additional synthetic image triplets, where the visual similarity within each triplet is annotated by human subjects. Strengths: - The dataset this paper used for additional pre-training is rather small compared to the standard datasets, yet authors show that there is still a noticeable improvement on multiple tasks. Weaknesses: - Lacks general instructions on when aligning with human perception would benefit representation learning. From the title, I would expect a series of studies providing insights into which levels of alignment benefit or impair different types of tasks. The empirical study in its current version is not comprehensive enough to compensate for the lack of theoretical novelty compared to previous work [1]. [1] *"Improving neural network representations using human similarity judgment,"* Oh et al., NeurIPS 2023 - It would be better if the authors could provide some empirical analysis to give insights into why mid-level alignment is better than other levels. Technical Quality: 2 Clarity: 3 Questions for Authors: - Regarding dense prediction tasks, there are cases where pre-training with human alignment impairs performance, e.g. DINOv2(-HA) on VOC, etc. The authors stated that DINOv2 has seen these data, and considering that DINOv2-HA has also seen these data, how does this explain the performance drop? Since the only difference in the dataset for models with or without HA should be NIGHTS, this needs clarification. - It seems odd that the performance change from using ImageNet and THINGS for pre-training is reversed in Figure 8 for the counting task, and pre-training on THINGS looks extremely bad for retrieval tasks. Can the authors provide an explanation for this phenomenon? - Compared with previous work [1], the authors use additional patch-level alignment during pre-training. Isn't there some redundancy between the average pooling of patches and the [CLS] token? How does patch-level alignment affect performance? Additionally, since some patches from negative samples and reference images are similar, it seems counterintuitive to push them away in the embedding space. [1] *"Improving neural network representations using human similarity judgment,"* Oh et al., NeurIPS 2023 Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments. We address questions and concerns below. **Which levels of alignment benefit or impair different tasks?** To strengthen our empirical results, we extend our comparison of different “levels” of alignment (i.e. fine-tuning on THINGS, BAPPS, ImageNet) to the majority of tasks evaluated in the paper: counting, retrieval, segmentation, depth estimation, and a selection of natural/specialized/structured datasets from VTAB. We refer the reviewer to section 1.1 of the global response for our new results and observations. We also examine how the *strength* of alignment (i.e. number of training steps) impacts performance, in section 1.2 of the global response. Taken together with our paper, we glean the following insights: - Finetuning on NIGHTS generally benefits representations over base models on object counting, segmentation, depth estimation, instance retrieval, RAG, and some structured classification datasets (such as smallnORB). - Fine-tuning on perceptual datasets that are solely low-level (BAPPS) or high-level (THINGS) typically performs worse than fine-tuning on NIGHTS, and in many cases worse than the base models themselves. This demonstrates that representation quality depends on the type of perceptual data, and in fact some forms of perceptual alignment can harm representations. - Perceptual alignment impairs performance on most natural classification datasets, particularly fine-grained. This phenomenon is consistent across perceptual datasets; we discuss potential reasons in section 6 of the paper. Low-level alignment (BAPPS) seems to best preserve base model performance in this case. - A small amount of alignment may yield the most benefits; training for >1000 timesteps appears to cause a decline in retrieval performance. **Why is mid-level alignment better than other levels?** We address this question directly in our global response, section 3. Please let us know if further detail or follow-up is needed. **Performance drop on dense tasks.** The reviewer is correct that NIGHTS is the only difference for models with/without HA. We flag downstream datasets that the pretrained model has seen because if a dataset is already in-distribution for a backbone, fine-tuning on different data may be more likely to change the feature space such that that dataset is more out-of-distribution. We do not see this as the sole explanation for the results, simply a possible factor. We will clarify this in our revision. **Why does THINGS hurt retrieval?** The triplet choices in THINGS reflect coarse-grained/high-level semantic similarity, i.e., the concepts that humans use for judging object similarity [1, 2]. Thus, THINGS is ill-suited to retrieval tasks (of which counting is one) because they rely on the local/fine-grained similarity structure of representations, rather than the global/coarse-grained similarity structure. Muttenthaler et al. [3] found that learning a linear transform to match the THINGS triplet odd-one-out choices on top of ImageNet-trained models deteriorates downstream performance for tasks where local similarity structure is important, such as few-shot learning on single domain datasets. The authors observed that changing a model’s representation space without preserving the local similarity structure of the pretrained representations decreased few-shot learning performance, and harmed their nearest neighbor structures. Fu et al. [4] further found that fine-tuning vision models on NIGHTS, which better reflects human local similarity structure, leads to decreased performance on THINGS. This indicates that the concepts captured in NIGHTS – which are helpful for retrieval – are different from those useful for THINGS. **Questions on patch-level training** We provide further details and intuition regarding patch-level training in section 2 of the global response. Below we address specific questions: *Isn’t there redundancy between the average pooling of patches and the CLS token?* While there is some redundancy among patch and CLS features, they do not encode identical information as the former is a uniform pooling over the image space, and the latter does not have such a constraint (previously, Zhai et al. [5] train the SigLIP model and use attention-pooled patch tokens as their global embedding instead of using a separate CLS token). In our work, we take the simplest approach to include all feature information available to tune on, and we find it effective for dense prediction tasks. *Since some patches from negative samples and reference images are similar, it seems counterintuitive to push them away in the embedding space.* We agree with the reviewer; this was our intuition for supervising on the average-pooled patches, rather than using a dense contrastive loss. Average pooling allows us to supervise on a global representation, but also propagates that supervision to the patch tokens, enabling evaluation on dense tasks. In preliminary experiments, we found that patch-level alignment without pooling led to poor results, likely for this reason. [1] Hebart, M.N., Zheng, C.Y., Pereira, F. et al. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nat Hum Behav 4, 1173–1185 (2020). [2] Muttenthaler, L., Dippel, J., Linhardt, L., Vandermeulen, R. A., & Kornblith, S. (2022). Human alignment of neural network representations. In ICLR 2023. [3] Muttenthaler, L., Linhardt, L., Dippel, J., Vandermeulen, R. A., Hermann, K., Lampinen, A., & Kornblith, S. Improving neural network representations using human similarity judgments. In NeurIPS, 2023. [4] Fu, S., Tamir, N., Sundaram, S., Chai, L., Zhang, R., Dekel, Tali., and Isola, P. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. In NeurIPS, 2023. [5] Zhai, Xiaohua, et al. "Sigmoid loss for language image pre-training." In ICCV, 2023. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement. Thanks
Summary: As the title clearly indicates, this article investigates how aligning vision model representations to human perceptual judgments impacts the performance of models relying on these representations for downstream tasks such as dense prediction (semantic segmentation and depth estimation), retrieval-augmented generation, object counting and instance retrieval. Given a vision model backbone, it is finetuned with LoRA on the NIGHTS dataset, which contains human similarity judgments over synthetically-generated image triplets (NeurIPS 2023). For each considered downstream task, several backbones are compared to their "human-aligned version" with globally better results when the backbone is fine-tuned on human perception. Strengths: * the scientific question raised by the paper is simple but interesting and the study shows it is worth investigation. The problem is clearly stated and introduced, the method and protocol are well explained and the article is well written overall. This bullet point is not just a false pretext to artificially add a "strength" to the review, it is a real pleasure to read. * the method to fine-tune the backbones relies on well-known and standard methods and tools (visual transformer, triplet loss...), which leads to a convincing methodology. The reader is not confused by an incomprehensible labyrinthine system and can therefore fully concentrate on the results of the study. * the experimental part is impressive, with an evaluation of a large variety of tasks, with various backbones as well. This is obviously a strong part of the paper: although the original idea was simple and interesting, the general quality of the study is finally supported by all these experiments. The supplementary material and the code (zip file) allow a precise investigation of the experiments and ensure reproducibility. * Section 4.5 investigates the use of alternative datasets to fine-tune the model. Some were proposed by previous works and others were built by the author to test a specific question. All of them have a size similar to NIGHTS. The resulting analysis is interesting and is a nice complement to the study. Weaknesses: * the alignment is performed both at the image level (section 3.2) and patch level (section 3.3) and the authors claim (lines 164-166) that "Additionally, we find that local patch-level representations can be improved by tuning on image-level perceptual judgments, and show performance increases on semantic segmentation and depth estimation". However, in the following all experiments only compare a backbone with its "human aligned version" (HA) and it is not clear what is the contribution of the alignment at a global and a local level. On line 148, the text says that the model consists to "train heads for semantic segmentation and depth estimation [at the local level]". Globally, the manuscript could be more explicit on this "details": is the local patch level used for training for segmentation and depth estimation only or is it the same training protocol (with global and local levels) for all experiments? If so, what is the contribution of adding (or not) the heads that are trained at a local level? * one may regret that the negative result on VTAB is not reported in the main paper and only discussed in the Limitation without further investigation. However, one must recognize that the paper already contains significant experimental works, that allow to support the main demonstration. * the fact that the same backbones are used in several experiments/tasks is interesting but sometimes it leads to models that are very weak baselines in comparison to the state of the art. In particular, in Section 4.3, specialized class-agnostic models on few-shot counting have much better performance than those reported (less than 0.15 MAE/RMSE on FSC147 for SafeCount or CounTR) **minor** * the experimental protocol for "object counting" reports that $k$ is determined on the training set among 4 values then the best one is used on the test set. Reporting the performance for each $k$ on the training set, or at least the one that was retained on the test set, would be useful. * line 95: a reference is missing Technical Quality: 4 Clarity: 4 Questions for Authors: * is the local patch level used for training for segmentation and depth estimation only or is it the same training protocol (with global and local levels) for all experiments? If so, what is the contribution of adding (or not) the heads that are trained at a local level? * will the code (zip file) be released with the paper? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: An experiment on the VTAB benchmark, whose results are reported in the supplementary material, leads to lower performance with fine-tuned representations. This case is significantly discussed in Section 5, with several hypotheses proposed to explain the phenomenon. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments. We are glad that the reviewer finds the method convincing, the paper well-written, and the experiments comprehensive. We address specific concerns and questions below. **Global v. local representations** We provide details regarding training patch tokens in the global response, Section 4 of the paper, and further clarifications below. Global representations refer to the output embedding for CLIP/OpenCLIP, and the CLS token for all other backbones. These alone are used for counting, instance retrieval, RAG, and classification, as they contain information across the image. In these global tasks, we either compute k-Nearest Neighbors over these global representations, or train linear probes. Local features are used for dense tasks (segmentation, depth estimation); good spatial features are necessary, as the output depends on per-pixel labels rather than image-level labels. Following the procedures from [1], we train segmentation and depth estimation heads on top of only the local features. We thank the reviewer for raising these questions, and will clarify our methodology in our revision. **VTAB results** We appreciate that the reviewer brought up this concern yet recognized the significant coverage of experiments in the main paper. Our main contribution is to empirically identify how tuning on perceptual datasets impacts transfer performance; for a thorough investigation, this includes negative impact. We acknowledge, however, that readers may be most directly interested in where this alignment succeeds, and structure the paper to highlight these applications while retaining all our findings. **Comparison to state-of-the-art task-specific models** As the reviewer notes, we use the same backbones across all experiments and tasks. This was done to evaluate general-purpose representations, and we acknowledge that these may be outperformed in cases such as counting by task-specific models. We do not aim to achieve state-of-the-art performance on any single task; rather, we demonstrate how a representation becomes comparatively better (or worse) over an array of multiple tasks. By evaluating the competency of a single representation over multiple tasks – which would not be possible with task-specific models – we gain insight into how general-purpose feature spaces are affected by alignment. **Counting experiment parameters** Below we report the training performance for each value of $k$ in the counting task. *DINO:* | k | Acc. | RMSE | MAE | |---------|-------|-------|-------| | 1 | 0.267 | 1.749 | 1.313 | | 3 | 0.284 | 1.753 | 1.292 | | **5** | 0.285 | 1.692 | 1.255 | | 10 | 0.277 | 1.742 | 1.293 | *DINO-HA:* | k | Acc. | RMSE | MAE | |---------|-------|-------|-------| | 1 | 0.271 | 1.706 | 1.331 | | 3 | 0.288 | 1.701 | 1.284 | | 5 | 0.289 | 1.694 | 1.251 | | **10** | 0.290 | 1.708 | 1.261 | **Code release** Our code will be fully open-sourced along with the release of our paper. We also appreciate the reviewer’s note of a missing reference and will address this in revision. [1] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." arXiv preprint arXiv:2304.07193 (2023). --- Rebuttal Comment 1.1: Title: Thank you for feedback Comment: I acknowledge the authors for their feedback and encourage them to include in their camera ready version as much as possible clarification provided in the rebuttal. I also read the other reviews and I note that my perception is significantly more positive than the average other reviews. Nevertheless, the weaknesses raised do not seem sufficiently convincing to me, and I therefore maintain that this article deserves to be brought to the attention of the community. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer 9gB3, Thank you for your positive feedback throughout the entire reviewing process; we are truly delighted to see appreciation for this line of work. Thank you once again for dedicating your time and effort to reviewing our work and providing us with insightful suggestions!
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful questions and feedback. We are glad they found: * The paper is clear, well-written, and easy to follow. [kSUr, 9gB3] * The experiments are comprehensive. [kSUr, 9gB3, VQay, rQrx] * The analysis is insightful and interesting. [9gB3, VQay] * The paper studies a simple but interesting scientific question worth investigating [kSUr, 9gB3]; the key method is convincing and effective [9gB3, rQrx, VQay] We present key results and details here, and respond to individual questions in reviewer-specific responses. **1. What are the benefits/drawbacks of different levels of perceptual alignment?** **1.1 Ablating perceptual datasets** We extend our dataset ablation to our full range of tasks. We LoRA-tune DINO on triplets from BAPPS, THINGS, and ImageNet, using the procedure from Section 4.5. We emphasize that fine-tuning on these datasets ablates the *level* and *type* of perceptual alignment. BAPPS contains judgments of low-level distortions, THINGS contains semantic-level distortions, and ImageNet contains no perceptual judgments, instead grouping images by semantic category. We evaluate models tuned on these datasets on segmentation, depth estimation, and a VTAB subset. Results are in Fig. 2-4 of the attached PDF. We observe: - For all dense tasks, training on NIGHTS outperforms all other datasets. The dataset ranking varies across evaluations, but NIGHTS-tuned models consistently transfer best. - For many tasks, training on ImageNet triplets or the base model, outperforms THINGS or BAPPS. This indicates that the types of perceptual judgments in NIGHTS specifically are helpful, whereas training solely on low-level or high-level variations may harm representations. - On VTAB datasets, base models perform best. The exception is sNORB (pose prediction), for which NIGHTS is best. Amongst perceptual datasets, NIGHTS is sometimes outperformed by BAPPS/ImageNet. **1.2 Ablating training time** We further evaluate how the *strength* of alignment – i.e. training loss when finetuning on perceptual datasets – impacts features. We evaluate DINO checkpoints tuned for increasing # steps on NIGHTS/BAPPS/THINGS/ImageNet on instance retrieval. Results are in Fig. 5 of the PDF. We observe: - Tuning on NIGHTS outperforms other datasets over the full training trajectory. - Performance rises significantly with a small amount of alignment to NIGHTS, however it consistently trends down after >1000 steps. This indicates that a small amount of alignment is helpful for the downstream task, however overfitting to human preferences may be harmful. **2. Clarification on patch token training** For dense prediction, we modify the original training objective for stronger supervision on patch tokens, which are output alongside the CLS token and are the basis for our segmentation and depth maps. Although supervising only on the CLS token can still modify patch tokens (as tuning model weights affects all feature outputs), our initial experiments showed that the CLS-based objective was too global to change the patch tokens in any impactful way. Thus, we switch to a loss more directly tied to local features. In more detail: our local objective only differs from the global objective in how the feature is extracted: instead of computing $L(\texttt{CLS}_A,\texttt{CLS}_B)$, we compute $L(cat[\texttt{CLS}_A, pool(\texttt{PATCH}_A)], cat[\texttt{CLS}_B, pool(\texttt{PATCH}_B)])$. $\texttt{CLS}$ is of dimension $(1, d)$, $\texttt{PATCH}$ is $(s, s, d)$ where $s$ is the number of patches along each spatial dimension, and we spatially average the patch tokens to get dimension $(1, d)$. We then concatenate the CLS and pooled patch tokens to get dimension $(1, 2d)$. Only our experiments in 4.1 (semantic segmentation and depth estimation) use this patch objective, as they are the only ones that require local features; all other applications in the paper are evaluated with CLS tokens (original objective detailed in 3.2). We appreciate the reviewer feedback to clarify this, and will do so in our revision. **3. Why does tuning on NIGHTS outperform high/low-level datasets?** We hypothesize that variations found in both BAPPS and THINGS are restricted to solely high- or low-level, whereas the mid-level judgments in NIGHTS cover some measure of both (see Fig. 11 in paper for examples of the difference). Previous work [1] has found that similarity judgments by perceptual metrics trained on BAPPS correlate better to low-level metrics such as color than semantic attributes. THINGS contains solely semantic distortions, reflecting the concepts humans use to judge object-level similarity rather than visual concepts. In contrast, NIGHTS contains a broad set of visual variations including style, pose, color, and count. Previous work [1] has found that models fine-tuned on NIGHTS seem to attend to both low-level attributes and semantic attributes . Aligning a feature space to these concepts may be useful for visual tasks requiring some semantic knowledge, such as retrieval, counting, segmentation, etc. This hypothesis may also explain why tuning on NIGHTS hurts performance on fine-grained tasks, in which images that are perceptually similar may belong to different categories. The main contribution of our paper is to empirically identify how tuning on mid-level variations, in comparison to other perceptual datasets, impacts transfer performance across a wide variety of downstream tasks. Further understanding the mechanism of the respective learned representations is a rich avenue for future work. We greatly appreciate that the reviewers see perceptual alignment as an interesting scientific question worth investigating. We hope that our paper encourages research in this emerging topic and sparks fruitful discussion. [1] Fu, S., Tamir, N., Sundaram, S., Chai, L., Zhang, R., Dekel, Tali., and Isola, P. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. In NeurIPS, 2023. Pdf: /pdf/4f4ff3b64d5b770a3e88873d0d4f86922f04b84e.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper studies the question of when perceptual alignment supervision makes better vision representations. They fine-tune existing pre-trained vision backbones (eg, CLIP and DINO) with LoRA on NIGHTS, a synthetic triplet-based dataset of human similarity judgments. Models fine-tuned in this way generally perform better on vision tasks, ranging from dense prediction, RAG, counting, and object retrieval. However, the results can be the opposite when evaluating classification tasks, especially natural classes. They also considered fine-tuning on different types of human preference datasets, and found mid-level supervision to be more beneficial. Strengths: 1. The tasks considered in this paper for evaluation are comprehensive and cover many different properties of the representation. 2. The delivery of this paper is clear and easy to follow. Key motivation, setting, and conclusions of each part are easy to spot. 3. Evaluating human preference for vision representations is an emerging topic and interesting for research. Weaknesses: 1. My major concern is that although the topic is somehow interesting and the evaluation is comprehensive, many conclusions are expected and unsurprising. Provided in the form of triplets, the supervision of human preference is just another form of "classes", being very fine-grained and having flexible class definitions. This is similar to the learning signal of self-supervised contrastive learning, with a better form of data augmentation. In this regard, it is expected that human preference makes generally better representations. 2. Regarding the performance drop in fine-grained classification (especially natural classes), this might be a result of the domain gap. Given that the model is fine-tuned on all synthetic images, it is unsurprising that discriminating natural images is harder afterward. The definition of human preference can be ambiguous, and the dataset for fine-tuning can have distracting factors (eg, synthetic distribution). These factors are not tackled well in the design of comparisons and could make some conclusions unreliable. Technical Quality: 3 Clarity: 3 Questions for Authors: I still find this paper could spark some interesting questions. More investigations on these aspects might increase the significance of this paper. 1. When can human preference be harmful? One motivation of this paper is to provide some empirical guidance for future vision research, if human preference is introduced as the way in the language community. As I mentioned above, the benefit of it to vision is not very surprising, but the prevention of possible risks could be valuable. This paper has shown some and more in this direction might be more valuable than the benefits (as most improvements are just marginal). 2. How should human preference be better defined and categorized? A precise definition of human preference is hard to obtain, but important. This paper has considered the level of supervision, and a discussion of other aspects could help readers. 3. What distracting factors exist in current human preference datasets and how to isolate the effects of them? This is important to ensure reliable conclusions. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in the paper. Suggestions are flagged above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and appreciate that they found the evaluation comprehensive, the paper clear, and the topic interesting. We address questions/concerns below. **“It is expected that human preference make generally better representations.”** We broadly agree with the reviewer that human preference is a form of fine-grained “classes”, with flexible class definitions, depending on the level/type of similarity judgments. Our key contribution is to elucidate how these classes should be defined to benefit representations: we find that the particular preferences in NIGHTS improve performance across many tasks. A key finding, in fact, is that *not all human preferences are always better*. In the global response section 1.1, and section 4.5 of the paper, we compare finetuning on multiple types of human preferences (both high- and low-level). Finetuning on NIGHTS outperforms other perceptual datasets (BAPPS, THINGS) across segmentation, depth estimation, retrieval, and counting. Finetuning on triplets formulated without any human preferences (from ImageNet) often outperforms models fine-tuned on BAPPS/THINGS, showing that some human preferences harm representations. **Synthetic-Real domain gap** We briefly discuss the synthetic-real domain gap in section 4.5. We ablate the use of synthetic images in NIGHTS by including SynCLR in our experiments, which was pre-trained solely on generated images. SynCLR exhibits the same performance drops across fine-grained/natural datasets as real-trained backbones, indicating that they must result from the perceptual alignment. We will clarify this in our revision. **When can human preference be harmful?** As detailed further in the next section, “human preference” lacks a rigorous definition in vision and language literatures; it can include aesthetic preferences and other harmful types of annotations. The scope of our work is to evaluate human perceptual alignment, which extracts human preference at a psychophysical level rather than a more cognitively-penetrable level. Nevertheless, we empirically observe several cases in which human preferences may harm representations: - In the global response, section 1.2, we show that overfitting to perceptual datasets harms performance. - In our dataset ablations (global response section 1.1, paper section 4.5) we find that finetuning on perceptual datasets with solely high-level (THINGS) or low-level (BAPPS) variations hurts performance for many tasks. - Finetuning on NIGHTS seems to degrade how well representations to distinguish between distinct fine-grained categories that are very visually similar (paper section 6.1). As mentioned above, due our evaluation of the synthetically-pretrained SynCLR, we attribute the performance drop to perceptual alignment. There are other possibilities for harm outside the scope of the type of perceptual annotations we study: - Human preferences may reflect unwanted biases. A long-standing problem in both language and vision is that biases (e.g. gender, racial) reflected in Internet language/images are inherited by large models, and reflected in their embeddings. One can imagine similar phenomena in visual preferences [3]. - Humans may disagree on a preference label, or even disagree with themselves if asked at different times. This may lead to noisy data if not filtered carefully, thus harming representations [1]. - Without sufficient demographic diversity in the annotator group, emerging biases may be reflected in the model. For example, some RLHF-trained language models have been shown to develop a bias towards the opinions of high-income, liberal individuals over their non-RLHF counterparts. [2] **How should human preference be better defined and categorized?** We thank the reviewer for pointing this out; it is important to define human preferences in vision. In the language space, human preferences largely refers to ethical judgments, or aspects of the user interaction experience. In vision, we define preferences as concepts (or categories) that humans use for making image similarity judgments [1]. By training on these preferences, we make models learn a concept space that is aligned with the concepts that humans use to navigate the visual world. We will clarify this in our paper. In addition, we will refer the reader to a recent review paper, “Getting aligned on representational alignment” [1] that clarifies definitions related to aligning vision representations with human judgments. We also note that aesthetic judgments (i.e. concepts that humans use to determine visual appeal) have been used to improve diffusion models [3]. We consider this a separate category of annotation, however can refer the reader to relevant works. **Distracting factors in human preference datasets** The signal-to-noise ratio of human cognitive data (may it be behavioral or neural) is often low. Thus, it is important to perform a denoising step before using the data downstream. For behavioral data this can be achieved by collecting a large number of judgments and filtering out the low quality judgments or a strong regularization technique. Distracting factors may relate to long response times or other confounding factors. Isolating such effects could be achieved by showing human participants triplets of different kinds, with varying background or object complexity where complexity can be the visual richness of the scene(s) or the number of objects. [1] I. Sucholutsky, L. Muttenthaler, A. Weller, A. Peng, A. Bobu, B. Kim, B. C. Love, E. Grant, J. Achterberg, J. B. Tenenbaum, et al. Getting aligned on representational alignment. arXiv, 2023. [2] Santurkar, S., Durmus, E., Ladhak, F., Lee, C., Liang, P. and Hashimoto, T.. Whose opinions do language models reflect?. In ICML, 2023. [3] Y. Kirstain, A. Polyak, U. Singer, S. Matiana, J. Penna, and O. Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. 2023. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer interaction period has started. Please read the responses provided by the authors, respond to them early on in the discussion, and discuss points of disagreement. Thanks --- Rebuttal Comment 1.2: Comment: Thanks to the authors for the detailed response. I found them profound and have substantially addressed my concerns and questions. I hope the additional discussions (including the general response) can be reflected in the next version of this paper. Overall I think the contents of this paper should be shared with the community, and I'll raise my score accordingly. --- Reply to Comment 1.2.1: Title: Thank you Comment: Dear Reviewer kSUr, We appreciate your positive feedback and are truly delighted to see that our response has addressed your concerns and questions. Thank you once again for dedicating your time and effort to reviewing our work and providing us with insightful suggestions!
null
null
null
null
null
null
Just Add $100 More: Augmenting Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem
Accept (poster)
Summary: This paper proposes a new method to augment pseudo-LiDAR point cloud to resolve the class-imbalance problem. It consists of generated 3D models of minority classes from video or miniaturized models with 3D reconstruction NERFs. Such models are then sampled from the target LiDAR and added to a Real LiDAR point cloud. LiDAR intensity are generated using a CycleGAN. Experiments have been conducted on several Dataset: KITTI, nuScenes and Lyft. It shows that the proposed method is competitive with the state-of-the-art and outperforms on minority classes. Strengths: The paper is well written and easy to follow. The related works section is well written and provides a good overview of the state-of-the-art methods. However, I suggest adding one general paragraph on Data Augmentation and it mains techniques. The proposed model combined several high level techniques such as 3D reconstruction and CycleGAN. The overall architecture is well explained and the intuition behind the method is clear. Due to the limited space, some details are given in the supplementary material. The experimental part is two-fold: 1) the comparison with the state-of-the-art methods, and 2) the ablation study. The proposed model increases GT-Aug performances about 1% on both mAP and NDS. The improvment remains for several models. It demonstrates the genericity of the proposed augmentation pipeline. More SOTA models Weaknesses: Regarding the comparison with the state-of-the-art methods, the proposed method is evaluated on several datasets and compared with two data augmentation methods: GT-Aug (vanilla synthesis-based LiDAR data augmentation) and Real-Aug. GT-Aug is more a baseline than a competitor and to my point of view, the only competitor is Real-Aug in this experiment. Technical Quality: 3 Clarity: 3 Questions for Authors: An ablation study is proposed with several experiments. It is interesting to see thant using 3DGS instead of Nerf increases the data augmentation process. It could be interesting to see the impact of each step of the pipeline on the final performance by replacing each step by a simpler one. What happen if the luminance generation part is removed or replace by a simpler method? What happen if the 3D reconstruction step is produced with less images? What happen if the object level data alignment is removed and replaced by a general alignment? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. --- **[S1] Additional Paragraph for Related Work** This is a great suggestion. We will add more paragraphs to the Related Work Section, summarizing the literature on data augmentation techniques. --- **[W1] Additional Comparison against Other Approaches** This is a good question. We chose Real-Aug as our main competitor because it is the latest work and shows promising scores in the public nuScenes leaderboard. However, we fully agree with the reviewer that additional comparison with other augmentation techniques is needed. Thus, as shown in Table VII below, we conduct experiments to analyze the data augmentation effect with other recent approaches, including LiDAR-Aug [F], CA-Aug [G], and 3D-VField [H]. Our experiment confirms that all approaches help improve detection accuracy against the baseline. Notably, Real-Aug and (our proposed) PGT-Aug generally outperform the other approaches. We further provide qualitative analysis in Figure III in our rebuttal pdf file. **Table VII. Performance comparison with other data augmentation approaches.** | | AP\_car 70 (40 recall) | | | | ----- | :---: | :---: | :---: | | | Easy | Mod | Hard | | Baseline | 88.08 | 74.85 | 70.55 | | GT-Aug | 87.80 | 78.36 | 75.41 | | LiDAR-Aug \[G\] | 87.75 | 78.24 | 75.35 | | 3D-VField \[H\] | 87.05 | 77.13 | 75.55 | | CA-Aug \[F\] | 88.82 | 78.66 | 75.75 | | Real-Aug (Our impl.) | 88.13 | 78.97 | 76.06 | | PGT-Aug | 90.00 | 79.45 | 76.35 | **Experiment Details** We compare PGT-Aug with other 3D data augmentation methods on the public KITTI [8] Car benchmark. We create pseudo LiDAR point clouds for our method by randomly sampling ten cars from the CO3D dataset. The instances used are as follows: 106_12650_23736, 106_12662_23043, 157_17286_33548, 185_19982_37678, 194_20899_41094, 216_22796_47484, 216_22827_48422, 421_58405_112551, 206_21799_45886, and 421_58407_112553. [F] Context-Aware Data Augmentation for LiDAR 3D Object Detection (ICIP 2023)\ [G] LiDAR-Aug: A General Rendering-based Augmentation Framework for 3D Object Detection (CVPR 2021)\ [H] 3D-VField: Adversarial augmentation of point clouds for domain generalization in 3d object detection (CVPR 2022) --- **[Q1] More Ablation Studies** We conduct the following additional experiments: 1) without luminance generation (constant intensity), 2) instance generation according to the change in the number of images, and 3) without object level data alignment (random sampling, without rigid motion model). On the second row of Table VIII, the removal of luminance (intensity) generation in pseudo LiDAR leads to performance drop in downstream detection tasks. Also, as the percentage of multiview images used during 3D reconstruction decreases, the overall detection performance decreases accordingly. Finally, we conduct experiments on general alignment by replacing object-alignment with random sampling points from dense RGB point clouds and removing motion models. Both experiments show suboptimal performance compared to PGT-Aug. **Table VIII. Ablations in intensity, the number of images, data alignment** | | Car | Ped | Barrier | T.C. | Bus | C.V. | Trailer | Truck | Motor | Bicycle | mAP | NDS | | :---- | :-----: | :-----: | :-----: | :-----: |:-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | | Ours | 85.4 | 85.4 | 68.0 | 71.1 | 72.1 | 24.2 | 40.4 | 59.8 | 70.3 | 58.3 | 63.52 | 69.11 | | constant intensity | 85.4 | 85.0 | 68.1 | 71.5 | 72.6 | 23.6 | 38.9 | 60.0 | 68.3 | 54.2 | 62.75 | 68.75 | | 25% of the number of image | 85.3 | 85.0 | 67.3 | 71.1 | 73.4 | 23.1 | 40.1 | 59.2 | 70.2 | 56.4 | 63.12 | 68.78 | | 50% of the number of image | 85.5 | 84.9 | 68.5 | 71.4 | 72.0 | 23.1 | 41.4 | 59.8 | 69.5 | 56.4 | 63.26 | 69.03 | | no-motion | 85.4 | 85.2 | 67.8 | 71.5 | 71.9 | 23.7 | 41.0 | 60.1 | 69.0 | 56.6 | 63.22 | 68.80| | random sampling | 85.4 | 85.1 | 67.2 | 71.4 | 73.2 | 22.8 | 39.2 | 59.8 | 68.1 | 55.4 | 62.76 | 68.56 |
Summary: The paper presents a novel pipeline for LiDAR-based object detectors to solve the class imbalance problem by generating pseudo-LiDAR samples (from multi-view images of miniatures and public videos of an object) and augmenting them during training to balance the performance gap across classes. The augmentation technique proposed in this paper demonstrate the superiority and generality on nuScenes, KITTI and Lyft datasets. Strengths: 1. The augmentation technique proposed in this paper demonstrate the superiority and generality through performance improvements in extensive experiments conducted on popular benchmarks, i.e., nuScenes, KITTI, and Lyft, especially for the datasets with large domain gaps captured by different LiDAR configurations. 2. The paper is well-written and easy to follow, especially the part explaining the background. 3. It presents good experimental results and intuitive visualizations, convincingly demonstrating its effectiveness. Weaknesses: 1. There is a lack of comparative experiments with other methods that aim for class imbalance problems. 2. The motivation of this paper is not clear. There is a need to discuss the necessity of using data augmentation methods to solve class imbalance problems. Why can't we use some loss-based or strategy-based methods to handle class imbalance issues? 3. The detail of the framework is not clear. For instance, there is an Intensity Domain Alignment in the framework, but what it is in detail? e.g., the structure of it, and how it works. 4. Is PGT-Aug not friendly to the majority of classes? As shown in Table 2, PGT-Aug's performance is close to that of Real-Aug for the majority of classes. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. There are many works dealing with long-tail problems or class imbalance that are not based on data augmentation. Can authors discuss their application to this problem? 2. Compared to Real-Aug, how is the time cost of the proposed method (PGT-Aug) while achieving less than a 1-point increasement in both metrics mAP and NDS? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discussed the existence of domain discrepancies in both datasets and categories. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. --- **[W1, Q1] Comparison between other methods that aim for class imbalance problems** As the reviewer pointed out, there are other comparative methods dealing with class imbalance using loss-based methods [D, E] without adding data. We attach the comparison results with [D, E] in Table V. To match their baselines, we conducted experiments on PointPillar model. Additionally, we re-implemented Class-Balanced Loss [D] with beta 0.999 and resampled the number of objects. While we find out that loss-balancing methods such as Dynamic Weight Average [E] and CB Loss [D] are effective in enhancing minority-class performance, PGT-Aug, a data-augmentation based method, brings the largest performance gain against other approaches. Also, [E] experimentally proves that its GT sampling-based data augmentation was more effective than loss-based method in improving detection performance. **Table V. Comparison between other methods that aim for class imbalance problems** | | Car | Ped | Barrier | T.C. | Truck | Bus | C.V. | Trailer | Motor | Bicycle | mAP | | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | CBLoss \[D\] | 82.7 | 74.7 | 54.0 | 52.1 | 51.2 | 61.7 | 17.9 | 30.5 | 48.7 | 20.5 | 49.4 | | DWA \[E\] | 81.0 | 72.3 | 50.2 | 50.1 | 49.0 | 63.4 | 10.7 | 34.3 | 32.9 | 6.9 | 44.6 | | PGT-Aug | 83.0 | 71.8 | 54.8 | 51.1 | 54.9 | 69.7 | 20.2 | 39.5 | 49.6 | 14.5 | 50.9 | | PGT Aug \+ CBLoss | 82.7 | 74.7 | 56.5 | 55.9 | 54.4 | 68.7 | 20.9 | 34.1 | 53.5 | 20.5 | 52.2 | [D] Class-Balanced Loss Based on Effective Number of Samples (CVPR 2019)\ [E] Resolving Class Imbalance for LiDAR-based Object Detector by Dynamic Weight Average and Contextual Ground Truth Sampling (WACV 2023) --- **[W2, Q1] The motivation of this paper** In 3D object detection, there have been many studies aimed at addressing class imbalance problems by modifying model structures (CBGS) or adjusting loss function (DWA). The most widely used method is the over-sampling known as GT-Aug, which involves inserting objects from other frames into the current frame. However, this method is limited to inserting objects from a predefined pool (training set), which restricts learning intra-class diversity, and generating or collecting 3D data for various objects has been extremely challenging and expensive. Recent advancements in differentiable 3D reconstruction techniques, such as NeRF and Gaussian Splatting, have made it possible to reconstruct dense 3D points at a lower cost. By leveraging these 3D reconstruction techniques, we proposed a novel and practical pipeline that overcomes the limitations of traditional insertion methods, particularly in terms of cost and diversity. --- **[W3] The detail of the framework** Due to page limits, we have elaborated the details of our Intensity Domain Alignment network in line 559 of supplementary material. In summary, we adopt CycleGAN framework to learn unpaired translation between RGB and intensity. We designed generators and discriminators with PointNeXt encoders. It receives nuScenes long-tail class samples as real data, and we create fake data by translating, rotating, resizing, and projecting long-tail class RGB samples to the same (x,y,z,l,w,h, theta) of real data. Our generator is trained to generate fake intensity values from RGB features, while discriminator is trained to discriminate between real and fake intensity. --- **[W4] The effect of PGT-Aug on the majority of classes** Our primary objective is to generate and insert the minority classes instead of the majority classes to relieve the class imbalance issue. Thus, we anticipated that the performance of the minority would improve while the performance of the majority classes would either remain stable or improve slightly due to the synergy effect of addressing the imbalance. To verify the effectiveness of the pipeline to majority classes, we conducted experiments on KITTI dataset as shown in Table VI. We reconstructed 10 cars of 3D RGB point clouds from CO3D dataset [I], (See Figure III in Rebuttal-Supp.) and created about 16,000 pseudo LiDARs of Car class. We apply pseudo LiDARs along with GT LiDARs during augmentation, and our PGT-Aug performance largely outperforms GT-Aug, Real-Aug, and other augmentation methods in car detection benchmark. Due to writing limit, please refer to Experiment Details (jNRg). **Table VI. The effect of PGT-Aug on the majority class** | | AP\_car 70 (40 recall) | | | | ---- | :---: | :---: | :---: | | | Easy | Mod | Hard | | Baseline | 88.08 | 74.85 | 70.55 | | GT-Aug | 87.80 | 78.36 | 75.41 | | LiDAR-Aug \[G\] | 87.75 | 78.24 | 75.35 | | 3D-VField \[H\] | 87.05 | 77.13 | 75.55 | | CA-Aug \[F\] | 88.82 | 78.66 | 75.75 | | Real-AUG (Our impl.) | 88.13 | 78.97 | 76.06 | | PGT-AUG | 90.00 | 79.45 | 76.35 | [F] Context-Aware Data Augmentation for LiDAR 3D Object Detection (ICIP 2023)\ [G] LiDAR-Aug: A General Rendering-based Augmentation Framework for 3D Object Detection (CVPR 2021)\ [H] 3D-VField: Adversarial augmentation of point clouds for domain generalization in 3d object detection (CVPR 2022)\ [I] Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction (ICCV 2021) --- **[Q2] Memory usage and inference time** If Object-level Domain Alignment is performed during training of detection, it may take additional time compared to Real-Aug. However, in 3D object detection methods, objects to be inserted were stored before training, and objects were loaded and inserted during training. Loading these objects from disk to memory in order to insert them into the scene takes time, and the memory and time complexity for the batch is O(n). Therefore, the time cost is the same as Real-Aug. Even though real-time performance was not a major consideration, we will add this discussion to the paper according to the valuable comments.
Summary: This paper introduces Pseudo Ground Truth Augmentation (PGT-Aug), a novel data augmentation technique for LiDAR-based 3D object detection. The goal of PGT-Aug is to address class imbalance in training datasets by generating diverse point clouds for minority-class objects. - **PGT-Aug:** a novel cost-effective pipeline for LiDAR-based object detectors to solve the class imbalance problem by - (i) generating pseudo-LiDAR samples - (ii) augmenting them during training to balance the performance gap across classes. - **Reduce the domain gap:** use spatial distribution matching and data-driven intensity adjustments to achieve - **a novel map-aware augmentation technique:** placing an object into the appropriate locations in the given scene). Strengths: 1. Writing is good. A cool manuscript name. 2. Ample experiment. Weaknesses: 1. There is code provided, but it lacks documentation and is difficult to use to help understand and visualize the results 2. Lack of novelty(not sure), see details in question 4. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. About data collection. For public videos from YouTube can you deal with the dynamic object. 2. In line 138-140, I know Plenoxels and 3DGS are view-dependent representations, but why it is not fully visible or uniformly high-density. 3. what is the degree of spherical harmonic coefficients author use. Why not just use to zero degree of SH to solve question 2. 4. Now there are a lot of autonomous driving simulator work or object-level NeRF/3DGS reconstruction work or shape-gpt/mesh-gpt, can also achieve small objects clone and modify, you can talk about your work and their differences and advantages. 5. NuScenes tests whether contains information about the added object **I'm willing to change the grade if address my concerns.** Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author has said it clearly in the article limitation Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. --- **[W1] Code Documentations** Our apologies. We will revise the current documentation and make them easy-to-follow for researchers to use and understand our code easily. Plus, we will add a Jupyter Notebook-based tutorial to guide (i) how to reproduce our model, (ii) how to use our code, and (iii) how to visualize figures (shown in the paper). --- **[Q1] Data Collection from Videos with Dynamic Objects** This is a good question. Our method relies on SfM to optimize camera poses for 3D reconstruction. However, such SfM would not triangulate well with images containing dynamic objects. This would be why we reconstruct the 3D shape of stationary objects, followed by using a rigid body motion model to represent dynamic objects. However, this must be an interesting direction worth exploring in our future work. --- **[Q2, Q3] View Consistent Representation** This is a great question. We agree that the zero-degree spherical harmonic would have a similar effect to ours. However, we do not use it because (1) we empirically observed that our approach is more robust at capturing the original object's color and areas of dark shades (see our Figure I in rebuttal supp). (2) We wanted our model more generally applicable to various generative models, which may and may not use spherical harmonics. Additionally, (3) in terms of FID score, we observe our approach is generally better. As shown in Table IV, we compare ours with a variant model (with plenoxel's SH coefficient set to 0) to see the quality of generated pseudo LiDAR point clouds in terms of FID score. Our experiment further confirms our approach generates point clouds more similar to the actual LiDAR points. We will add discussion on this. **Table IV. FID score evaluation between SH coefficient 0 and ours** | | Truck | Bus | C.V. | Trailer | Motor | Bicycle | Average FID | | :---- | :----: | :----: | :----: | :----: | :----: | :----: | :----: | | SH coefficient 0 | 14.9 | 13.2 | 8.0 | 7.2 | 2.3 | 3.0 | 8.1 | | Ours | 13.2 | 13.2 | 7.6 | 7.3 | 2.1 | 2.1 | 7.7 | --- **[W2, Q4] Differences to previous simulator works or generative models** By leveraging current 3D reconstruction methods, our main goal is creating a novel and practical pipeline that overcomes the limitations of existing 3D object detections such as class imbalance problems, lack of annotations, in terms of cost and diversity. In other words, our pipeline is not limited to a specific generative model and can be applied to any model. The reasons we used explicit 3D representation models (3DGS or Plenoxel) in the paper instead of the models mentioned by the reviewer are as follows. According to [B] (see section Limitations and Failure Cases in supplementary material), Text-to-3D Generation (Shape-GPT, mesh-GPT, etc.) models tend to collapse modes when the target image distribution is overly concentrated in a single peak, resulting in abnormal 3D objects that are strongly related to specific views (Janus problem). Also, as shown in Figure II of Rebuttal-Supp., recent work of multi-view 3D reconstruction pipeline, DUSt3R [C] often fails to recover the details of miniature-scale objects. However, in order to place the restored objects in various positions, it is necessary to restore the entire shape of the object and create a bounding box. Therefore, we chose models that can reliably reconstruct the entire shape and perform robustly with relatively small objects. [B] Taming Mode Collapse in Score Distillation for Text-to-3D Generation (CVPR 2024)\ [C] DUSt3R: Geometric 3D Vision Made Easy (CVPR 2024) --- **[Q5] NuScenes Setting** All detectors were tested under the same conditions, meaning no further information about our generated objects was used during the evaluation process, i.e., our model only uses the generated objects during training.
Summary: This paper introduces Pseudo Ground Truth augmentation (PGT-Aug) to address class imbalance in LiDAR-based 3D object detection. PGT-Aug generates diverse pseudo LiDAR point clouds from low-cost miniatures or real-world videos and involves three steps: 3D instance reconstruction, object-level domain alignment, and context-aware placement. Extensive experiments on nuScenes, KITTI, and Lyft benchmarks demonstrate its effectiveness, especially for datasets with large domain gaps. Strengths: - **Originality & Practical Impact**: Introduces PGT-Aug, a novel, cost-effective pipeline for addressing class imbalance in LiDAR-based object detectors using pseudo-LiDAR samples from videos and miniatures. - **Quality & Clarity**: Offers a thorough methodology with clear explanations and robust evaluations, enhancing the accuracy and robustness of object detectors. Weaknesses: - As seen in line 532, this paper does not collect as much data compared to large datasets like KITTI. I question the benefits and improvements brought by this work in terms of data collection. - The proposed method requires the use of many pre-trained models, such as the unpaired domain transfer model (L179) and a rigid body motion model (L217). First, the computational efficiency and real-time applicability of these added models need to be addressed. Given the added complexity of the aforementioned models, understanding the computational trade-offs and optimizations required for practical deployment is essential (e.g., the memory usage and inference time), but these aspects are not sufficiently covered in the paper. Second, the impact of the performance of these pre-trained models on the proposed method needs to be discussed and evaluated. - The authors use YouTube videos and "crawled data using the following keywords on Google" (L533). They should obtain permission from the data/content owners. Simply citing the sources in the paper is clearly not sufficient. Minor issue: - "and r is 0.1" should be better move to L.190, since there are no r in Eq. (3) Overall, I really like the interesting idea in this paper. If the authors can address my issues in the rebuttal, I am willing to raise my score. Technical Quality: 3 Clarity: 3 Questions for Authors: How was the map information (L200) obtained? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: no potential negative societal impact of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! Please see our response below. --- **[W1] Data Size** Our dataset is created to provide pseudo-LiDAR point clouds of minority-class objects, which can be augmented into typical driving datasets (e.g., nuScenes, KITTI, and Lyft) to compensate for the class imbalance problem. Thus, the volume of our dataset should be smaller than that of these datasets but (we believe) sufficient to compensate for the class imbalance problem, as we reported in our experiments. Moreover, we would emphasize that (i) we provide an automated pipeline to generate such pseudo-LiDAR point clouds, enabling the community to use it to produce continuously growing datasets with various objects. (ii) We can generate view-dependent pseudo-LiDAR point clouds that can flexibly be placed anywhere in the scene, significantly improving data efficiency. We will clarify this in the final version of this paper. --- **[W2-1] Efficiency and Real-time Applicability of Pre-trained Models** This is a good question. We would emphasize that (1) pseudo LiDAR point clouds are generated and stored offline, and (2) these generated point clouds are loaded (from memory) and augmented during the training of 3D object detectors. Thus, the need to run pre-trained models (e.g., unpaired domain transfer model) in real-time would be less significant. Further, in the following Table I and II, we analyze the processing time of each model and memory usage for generating pseudo LiDAR point clouds of different classes, i.e., construction vehicles, trucks, trailers, etc. This confirms point clouds can efficiently be generated through our pipeline, taking less than 300ms in total. We will discuss this in detail in the final version of this paper. **Table I. Average Processing Time (per instance, in msec)** | Class | C.V | Truck | Trailer | Motor | Bicycle | Bus | | :---- | -----: | -----: | -----: | -----: | -----: | ----: | | Intensity Estimation | 150 | 140 | 250 | 40 | 34 | 178 | | View Dependent Point Sampling | 67 | 44 | 40 | 30 | 6 | 78 | | Rigid body motion | 8.80 | 8.65 | 8.60 | 8.53 | 8.55 | 9.09 | **Table II. Average Memory Usage (MB)** | Class | C.V | Truck | Trailer | Motor | Bicycle | Bus | | :---- | -----: | -----: | -----: | -----: | -----: | -----: | | Memory usage | 4.006 | 4.052 | 4.098 | 4.013 | 4.012 | 4.074 | --- **[W2-2] Ablation Studies with Pre-trained Models** To demonstrate the impact of pre-trained models (e.g., unpaired domain transfer model and rigid body motion model), we further conduct ablation studies as follows: (1) We compare ours with a variant model without luminance generation (i.e., using constant intensity to see the impact of the unpaired domain transfer model). (2) We compare ours with a variant model without our rigid body motion model. In Table III below, we observe both experiments confirm the impacts of using these pre-trained models, showing a degradation without these models. We will discuss this more thoroughly. **Table III. Ablations in luminance generation and motion model** | | Car | Ped | Barrier | T.C. | Bus | C.V. | Trailer | Truck | Motor | Bicycle | mAP | NDS | | :---- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | | Ours | 85.4 | 85.4 | 68.0 | 71.1 | 72.1 | 24.2 | 40.4 | 59.8 | 70.3 | 58.3 | 63.52 | 69.11 | | with constant intensity \[1\] | 85.4 | 85.0 | 68.1 | 71.5 | 72.6 | 23.6 | 38.9 | 60.0 | 68.3 | 54.2 | 62.75 | 68.75 | | without rigid body motion model \[2\]| 85.4 | 85.2 | 67.8 | 71.5 | 71.9 | 23.7 | 41.0 | 60.1 | 69.0 | 56.6 | 63.22 | 68.80| --- **[W3] Data/content permission issue** This is a good comment. We also take this copyright issue seriously. First of all, we will not download and re-release those video sources. Instead, we will release a list of YouTube links. Further, as the reviewer suggests, we have contacted the copyright holders to obtain permission to share their video links publicly. We will make sure to resolve this copyright issue upon publication. --- **[W4] Minor Comment** We agree with the reviewer. We will move "and r is 0.1" to L190. --- **[Q1] Map information** nuScenes dataset provides the BEV map, annotating commonly-observed map features such as road segments, lanes, crosswalks, walkways, stop lines, and parking lots. For all scenes, we generate an ego-vehicle-centered map in a top-down coordinate system, and over 34k maps are generated. Note that a range of 102.4m x 102.4m is set for generating a map where ego-vehicle is centered (effective forward sensing range of ego-vehicle is 51.2m). KITTI and Lyft datasets do not provide such map information (as mentioned in L291). Thus, we utilize a LiDAR-based ground segmentation method called Patchworks++ [A] to estimate ground. We will further clarify this in the final version of this paper. [A] Patchwork++: Fast and robust ground segmentation method for 3D LiDAR scans (IROS 2022) --- Rebuttal 2: Title: Raised my rating Comment: I appreciate the rebuttal and additional experiments. I have read the comments from other reviewers as well as the corresponding rebuttal. The rebuttal has largely addressed my concerns. In particular, I am very satisfied with the experiments and discussion related to [W1] and [W2-1]. Good work! Therefore, I have raised my rating to weak accept. I really look forward to seeing the revised version in the near future. --- Rebuttal Comment 2.1: Title: Thank you for your response Comment: We are pleased to hear that our response has addressed your concerns. Thank you for your valuable comments, which will help improve the quality of the paper. If you have any further question, please do not hesitate to let us know.\ Thank you very much.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and their thoughtful comments and questions. We are encouraged that the reviewers find that: “very well-written and easy to follow” (R-iAKq, R-eXGm, R-kswe, R-jNRg); our method being described as “novel, cost-effective pipeline” (R-iAKq), “superiority and generality” (R-kswe), “well explained and a clear intuition behind the method” (R- jNRg), and “thorough methodology” (R-iAKq); our experiments were commended as “ample” (R-eXGm), “extensive” (R-kswe), “good” (R-kswe), backed by “robust evaluations” (R-iAKq), “intuitive visualizations” (R-kswe), demonstrating “genericity” (R-jNRg) and “effectiveness” (R-kswe). We attempted our best to address the questions as time allowed. We believe the comments and revisions have strengthened the paper, and we thank all the reviewers for their help. Please find individual responses to your questions below. Summary of major changes: 1. We add new experiments on KITTI [8] and nuScenes [9] to demonstrate its superiority compared to other augmentation methods and the effectiveness to solve class imbalance problems, respectively. 2. We provide additional ablation studies to show the impact of each individual model on the proposed pipeline. 3. We add detailed visualizations and explanations to highlight our contributions and facilitate a clear understanding of the proposed method. Pdf: /pdf/d293d9c6bada85097eede5295f422360ba05b457.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
No Free Lunch Theorem and Black-Box Complexity Analysis for Adversarial Optimisation
Accept (poster)
Summary: This paper considers the complexity of finding (Pure Strategy) Nash Equilibrium in black-box optimization for adversarial optimization. By denoting the loss of a learned solution $x\in X$ on a possible test case $y\in Y$ as $g(x,y)$, the authors show that no algorithm is on average better than any other in finding a PSNE of $g\in \mathcal F$, given that $\mathcal F$ is closed under permutation, via constructing an isomorphism result which roughly says that if the algorithm doesn't at the first time query the unique PSNE $(x^\ast,y^\ast)$, then every possible sub-problem is isomorphic to another. This NFL result distinguishes finding a NE from finding an "always optimal" or "worst-case optimal" solution. The authors then derive the query complexity lower bounds for general black-box adversarial optimization and two-player zero-sum bimatrix games, and also their implications to some two-player zero-sum games. Strengths: 1. This NFL-style result distinguishes finding NE from finding an "always optimal" or "worst-case optimal" solution, which is important towards understanding the various solution concepts in adversarial optimization. 2. Via the NFL-style result, various query complexity lower bounds are also derived for adversarial optimization and two-player games. Weaknesses: 1. This paper assumes a unique PSNE in $g$, but it may not be this case in ML (e.g., $X$ corresponds to a set of over-parameterized NN and $y$ is some train/test loss). 1. While the main text looks pretty technically rigorous, it is a little hard to read. For example, in a sub-problem definition, what does the $b_1,b_2$ usually mean and how are they constructed? The answer to this question doesn't seem to appear anywhere in the main text as well. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the proof of Theorem 3.1, the authors mentioned "$b_1$ and $b_2$ are defined in Definition 6". Is this really correct? I don't feel Definition 6 infers the construction of $b_1$ and $b_2$. 2. In Theorem 4.1, the condition that "each query has at most k ≥ 2 possible answers" seems pretty strong -- what if the noise on $g(x,y)$ is a continuous r.v.? 3. Can you discuss more regarding the "always optimal" solution concept in Definition 2? Is it likely to exist for a general adversarial optimization problem? If not, why did the original authors consider it? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Included in sec. 5 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **RwGpW2. While the main text looks pretty technically rigorous, it is a little hard to read ...** A: We agree with the reviewer that the role of $b_1$ and $b_2$ was not fully explained. We will improve this in the final version. In particular, Definition 5 can be improved as follows: Let $O\subseteq \mathbb{R}$. Let $\mathcal{F}$ be any subset of the set of payoff functions $g:\mathcal{X} \times \mathcal{Y} \rightarrow O$ with a unique Nash Equilibrium and such that $\mathcal{F}$ is closed under permutation. For any given $(x_1,y_1) \in \mathcal{X} \times \mathcal{Y} $, any function $b_1:\mathcal{Y} \rightarrow O$, and any function $b_2:\mathcal{X} \rightarrow O$, we define a sub-problem class $\mathcal{F} \left((x_1,y_1),(b_1,b_2) \right)$ with respect to $\mathcal{F}$ as follows: $f \in \mathcal{F} \left((x_1,y_1),(b_1,b_2) \right)$ iff there exists a $g \in \mathcal{F}$ such that (1) $g(x_1,y)=b_1(y)$ for all $y \in \mathcal{Y}$; (2) $g(x,y_1)=b_2(x)$ for all $x \in \mathcal{X}$; (3) $f$ is a restrction of $g$ on $\left(\mathcal{X} \setminus \\{x_1\\} \right) \times \left( \mathcal{Y} \setminus \\{y_1\\} \right)$. The formal definition of restriction/extension can be found in the response of PuqfQ1. In terms of the construction of $b_1, b_2$, we explain in detail as follows. To get the intuition of what $b_1,b_2$ means in our analysis, we extend the discussion of **Section E** here. As shown in **Figure 2, Section E, Appendix**, after querying $(x_1,y_1)$, both strategies of $x_1,y_1$ are known. We define $b_1(y):=g(x_1,y)$ for all $y \in \mathcal{Y}$ and $b_2(x):=g(x,y_1)$ for all $x \in \mathcal{X}$. Note that the definition of $b_1, b_2$ depends on $g \in \mathcal{F}$. We consider a black-box model where the algorithm learns all the blue entries in payoff matrix $P$ (Figure 2) when querying the entry ($x_1,y_1)$ (i.e., row $x_1$ and column $y_1$). If there is no Nash equilibrium among the blue entries, we exclude this row and column in the payoff matrix and restrict the problem to a smaller sub-problem. Now, we define $b_1,b_2$ formally: We denote the set of all functions from a set $\mathcal{X}$ to a set $O$ by $\mathcal{H}(\mathcal{X},O)$ and the set of all well-defined functions from a set $\mathcal{Y}$ to a set $O$ by $\mathcal{H}(\mathcal{Y},O)$. Given a subset $\mathcal{F}$ of all the payoff functions $g: \mathcal{X} \times \mathcal{Y} \rightarrow O$ with a unique NE where $\mathcal{F}$ is closed under permutation. For all $(x_0,y_0) \in \mathcal{X} \times \mathcal{Y}$, define \begin{align*} B_{x_0}^{(1)}:&= \\{b_1 \in \mathcal{H}(\mathcal{Y},O) \mid \text{there exists $g \in \mathcal{F}$ s.t. } b_1(y)=g(x_0,y) \text{ for all $y \in \mathcal{Y}$} \\}; \\\\ B_{y_0}^{(2)}:&= \\{b_2 \in \mathcal{H}(\mathcal{X},O) \mid \text{there exists $g \in \mathcal{F}$ s.t. } b_2(x)=g(x,y_0) \text{ for all $x \in \mathcal{X}$}\\}. \end{align*} Let $(x_1,y_1) \in \mathcal{X} \times \mathcal{Y}$ be the first query point that Algorithm 1 makes, we consider $b_1 \in B_{x_1}^{(1)}$ and $b_2 \in B_{y_1}^{(2)}$. Now, they should be well-defined in the proof of Theorem 3.1. We will include this explanation in the proof to improve the accessibility of our paper. **RwGpQ1. In the proof of Theorem 3.1, the authors mentioned, "b1, b2 are defined in Definition 6". Is this really correct? I don’t feel Definition 6 infers the construction of b1 and b2.** A: Yes, you are right. There is a typo; it should be Definition 5 rather than 6. We will correct this in the updated version. The construction part can be checked in the response to RwGpW2. **RwGpQ3. Can you discuss more regarding the "always optimal" solution concept in Definition 2? Is it likely to exist for a general adversarial optimization problem? If not, why did the original authors consider it?** A: Yes, we agree with the reviewer that an "always optimal" solution can hardly exist for a general adversarial optimisation problem. Even in their previous paper, Service and Tauritz [12] acknowledge this by stating: “in many real-world problems, there are no candidate solutions which perform best over all possible test cases. That is, there is often a trade-off between performance on test cases.” We believe they consider this “ideal solution concept" to theoretically demonstrate the existence of a "free lunch" with respect to it. While their result and the result of Wolpert and Macready [14] can be considered worthwhile contributions to No Free Lunch (NFL) in adversarial optimisation, we argue that it is necessary to consider other solution concepts. In fact, we are not satisfied with these "free lunch" results since an "always optimal" solution rarely exists in real-world applications, and such a "free lunch" result might sometimes be misleading. It is important to recognise that every approach, including coevolutionary approaches or other black-box adversarial optimisation algorithms have limitations. This is one of the key messages of our paper. We will extend our discussion in the updated version by including the response above. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses. I appreciate that.
Summary: In this paper, the authors theoretically analyze the query complexity for general black-box Adversarial Optimisation under the "closed under permutation" assumption. A no-free-lunch theorem is proved for all algorithms in achieving the same average performance of all possible problem instances (or problem instances c.u.p.) with a unique Nash Equilibrium in a two-player zero-sum game setting. Moreover, the authors provide Lower Bounds for General Black-Box Adversarial Optimisation (with possible answers assumption ) and Two-player Zero-Sum Bimatrix Games, respectively. Strengths: 1. The paper is well-written and well-organized. 2. The theoretical results are general to cover many interesting adversarial optimization cases. The no-free-lunch theorem shows the difficulty of optimizing the whole family of problem instances in two-player zero-sum games. The lower bound of query complexity for General Black-Box Adversarial Optimisation (with possible answers assumption for each query) and Two-player Zero-Sum Bimatrix Games are also interesting. Weaknesses: 1. The technical challenge of the analysis for black-box adversarial optimization over the standard adversarial optimization analysis is not clearly discussed. What the key technique employed for the black-box cases compared with the standard Nash Equilibrium analysis for two-player zero-sum games is not clear. 2. The theoretical analysis is restricted to finite (discrete) problems instead of continuous problems. This restriction may limit the possible application of the theoretical results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the key technical challenge for the theoretical analysis of black-box adversarial optimization compared with white-box adversarial optimization? 2. What is the key technique employed in this paper to solve the challenge compared with the standard Nash Equilibrium analysis for two-player zero-sum games or adversarial optimization analysis? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **7LPoW1+Q1.** - **The technical challenge of the analysis for black-box adversarial optimization vs the standard adversarial optimization analysis** - **The key technique employed for the black-box cases for two-player zero-sum games** - **Black-box adversarial optimisation vs white-box adversarial optimisation** A: The detailed discussion of key technical contributions, challenges and technique can be found in **Sections 1.2 and 1.3**. There are different definitions of white-box in the literature. To answer the question, it is necessary to define the meaning of white-box adversarial optimisation. For comparison, we need to define white-box adversarial optimisation. We have defined black-box adversarial optimisation in **lines 23-30** and **Algorithm 1**. Since the reviewer did not specify which interpretation they had in mind precisely, we consider two possible interpretations: (1) **White-box adversarial optimisation with full access to the payoff function:** This means the payoff matrix in the two-player zero-sum game is given and known. In this case, finding the Nash Equilibrium (NE) can be formulated as a linear programming problem, which can be solved in polynomial time using algorithms such as the ellipsoid method or interior-point method [2, 9]. (2)**White-box adversarial optimisation with gradient access:** The gradient is accessible if the payoff function is differentiable. A well-known method for solving the saddle-point problem is gradient descent ascent (GDA) and its variants, including stochastic/optimistic GDA, which have been extensively studied [8, 4]. Compared with these two types of white-box adversarial optimisation, black-box adversarial optimi- sation considered in this paper only allows querying the payoff function in each iteration, and the payoff function is usually not explicitly known. The tools used in these two white-box cases are not applicable in our scenario. This is why we employ tools from game theory **(see lines 66-69)** and introduce black-box complexity to analyse the black-box adversarial model **(see lines 70-74)**. **7LPoQ2. The key technique employed in this paper to solve the challenge compared with the standard Nash Equilibrium analysis for two-player zero-sum games or adversarial optimization analysis** A: For the standard Nash Equilibrium analysis for two-player zero-sum games, the payoff function is explicitly known. As mentioned in our response, the tools used in such cases are not applicable here. We have also discussed other works, such as the computational complexity of computing NE in general-sum games being PPAD-complete and the complexity of fictitious play in two-player potential games with a unique NE, in **lines 568-583**. The key technique employed in our paper involves using game-theoretic tools and Yao’s principle to analyse black-box adversarial models. This approach is necessary because, unlike in standard NE analysis, we only have access to the payoff function through queries, and the function is not explicitly known and discrete (hence, no gradient is accessible). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. My concern has been well addressed. I have no further questions.
Summary: Black-box optimization is a critical area in the field of optimization. The original No Free Lunch (NFL) theorem highlights the inherent limitations of traditional black-box optimization and learning algorithms, establishing a theoretical basis for these methods. The paper addresses a long-standing problem in NFL analysis for adversarial (maximin) optimization. The authors try to prove a new NFL theorem for general black-box adversarial optimization, specifically when Nash Equilibrium (NE) is used as the solution concept. This implies that when NE is the goal, the average performance of all black-box adversarial optimization algorithms is equivalent. Using Yao’s maximin principle and the new NFL theorem, the paper provides general lower bounds for the query complexity required to find Nash Equilibrium in adversarial optimization. The authors prove the theoretical impossibility of a universally effective adversarial optimization algorithm and highlight the impact of solution concept selection. It introduces black-box complexity to assess the difficulty of learning the unique optimum and solving two-player zero-sum games, laying the foundation for future research on adversarial optimization. Strengths: S1. The paper demonstrates a high level of theoretical rigor in proving an NFL Theorem for general black-box adversarial optimization, specifically focusing on Nash Equilibrium as the solution concept. The proofs and technical details are sufficiently detailed. S2. The paper effectively formulates the problem statement by emphasizing the importance of solution concepts in adversarial optimization. The paper contributes by addressing adversarial optimization and providing a new perspective on the limitations of black-box algorithms in this context. Weaknesses: W1. The notations used in the paper are not clear in some places. For example, in line 92, v is introduced as a real number. Then for i in [1,n], what is v_i in line 93? In the double-line equation between line 149 and line 150, the notation is not clear. Terms like 'extension' and 'restriction' of a function should ideally be defined earlier in the paper. Here, it is only somewhat discussed in the appendix. W2. The paper seems to focus more on theoretical analysis and proofs, lacking empirical validation or practical demonstrations of the proposed concepts and findings. Incorporating experimental results could enhance the credibility and applicability of the research. W3. For the theoretical analysis discussed in the paper, there is only one practical application provided viz., two-player zero-sum games with a unique Nash Equilibrium (NE). The authors introduce certain complexity in this game by considering plateaus. However, the application is still a trivial one and a study on a broader range of adversarial optimization problems with different solution concepts would have been insightful. W4. The paper cannot rely on the appendix. The text should be self-contained. The appendix contains many crucial information needed for understanding and judging the paper. I didn't particularly like the current structure of the paper, where related work has ended up in the appendix. W5. In the checklist, the authors mention that from a practical viewpoint, more careful benchmark selections are suggested for use in many black-box optimization applications that solve maximin problems with complicated constraints. However, an elaborate discussion on the practical applicability is not discussed in the paper. Such a discussion would have been insightful. Technical Quality: 3 Clarity: 3 Questions for Authors: Please clarify the points in the Weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the limitations of the proposed method are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **PuqfW1. The notations used in the paper are not clear in some places.** A: We will improve the accessibility of this paper in the updated version. There is a typo here, $v \in \mathbb{R}^n$ rather than $v \in \mathbb{R}$ and $v_i$ is the $i$-th opponent of $v$. We refer to $g$ as the extension of $g'$ in line 149 and 150 and it is defined in (1). Note that restrictions and extension are two well-established mathematical terms. We define them separately here and will include these formal definitions in the preliminaries later. Def (restriction): Let $f : X \to Y$ be a function from a set $X$ to a set $Y$. If $A$ is a subset of $X$, then the **restriction** of $f$ to $A$ is the function: \begin{align*} f|_A : A &\to Y \\\\ x &\mapsto f(x). \end{align*} Def (extension): Let $f : X \to Y$ be a function and $A$ and $B$ be sets such that $X \subseteq A$ and $Y \subseteq B$. An **extension** of $f$ to $A$ is a function $g : A \to B$ such that $f(x) = g(x)$ for all $x \in X$. Alternatively, $g$ is an **extension** of $f$ to $A$ if $f$ is the restriction of $g$ to $X$. **PuqfW4. The paper cannot rely on the appendix. The text should be self-contained.** A: We agree that the appendix is not the appropriate place for related work. In the revised version of the paper, we will move the extra related work from the appendix to the main text where necessary and compress the main text. However, it is not feasible to include all proof details in the main body of any comprehensive theory paper. As is traditional for theoretical papers in AI/ML conferences, detailed proofs are deferred to the appendix (e.g., see [11, 5, 3, 4]). We will include proof sketches or explanations in the main text, as seen in line 183, to ensure the paper remains self-contained. **PuqfW5. In the checklist, the authors mention that from a practical viewpoint, ... However, an elaborate discussion on the practical applicability is not discussed in the paper.** A: As a theory paper, we focus on rigorous proof and analysis of general adversarial optimisation problems. For the discussion of careful benchmark selections in black-box adversarial optimisation, we have used binary voting games (two-player zero-sum games) as a practical illustration of BBC analysis. The detailed discussion of the practical applicability of black-box complexity analysis can also be checked in **Section 4.4.3**. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I understand that this work is mostly theoretical and this paper has the potential to set the foundation in this domain. However, I am still concerned that: (a) the main text is hard to read (which is agreed upon by Reviewer RwGp) + there are some discrepancies in the notations and (b) practical applicability is not sufficiently discussed (even in Section 4.4.3, as suggested by the authors). So, I have decided to keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for acknowledging the contribution of our work in laying the foundation for black-box adversarial optimisation. We provide a brief explanation here: **(1) Readability and Notation Discrepancies:** Precise mathematical definitions and notations are essential for rigorous theoretical research. However, we also understand the importance of clarity and accessibility. To address this, we will put extra emphasis on accessibility in the revised version, including more explanations in plain English and ensuring consistent use of notations throughout the text. We have already **clarified** definitions as requested by Reviewer **RwGp** and included explanations of well-established terms like "restriction" and "extension," as suggested by Reviewer Puqf. These efforts will be extended in the revised version to make the paper more accessible without compromising its rigour. **(2) Practical Applicability:** As noted in our global response to Q3, our results focus on theoretical impossibility results, which in nature usually do not have immediate practical applications. However, this **does not lower** their value (see NFL for traditional black-box optimisation [13] or Arrow's impossibility theorem [15]). As a key message of our paper, theoretical impossibility results are crucial for understanding the limitations of black-box adversarial optimisation. We have discussed the usage of black-box complexity analysis in Section 4.4.3 and will include the discussion of theoretical impossibility results in our revised version as well. Future work could explore the potential practical implications of these theoretical findings, which might eventually inform the design and evaluation of black-box optimisation algorithms in applied settings like binary voting games discussed in this paper. Above all, we will include all the discussion in our revised version. Moreover, we are pleased to note that the other reviewers are **satisfied** with our responses and responded **positively** with no further questions. We appreciate your detailed feedback and hope our explanations help resolve your concerns. **Reference** [13] D.H. Wolpert, W.G. Macready. No Free Lunch Theorems for Optimization. TEVC, 1997. [15] Kenneth J Arrow. A difficulty in the concept of social welfare. Journal of political economy, 1950. --- Rebuttal 2: Comment: Reviewer Puqf, Since you gave a borderline score for this paper, please engage in discussions with the authors and see if the rebuttal addressed your concerns. Thanks, AC
Summary: The paper mainly focuses on the analysis of black-box adversarial optimization algorithms with an emphasis on Nash Equilibrium (NE) as the solution concept. It introduces the concept of No Free Lunch (NFL) Theorem for general black-box adversarial optimization, showcasing the equal average performance of all algorithms when NE is the solution concept. The paper also delves into the black-box complexity analysis and provides lower bounds for query complexity in finding NE. Overall, it contributes to the understanding of the limitations and performance evaluation of black-box adversarial optimization algorithms under the Nash Equilibrium solution concept. Strengths: This paper provides rigorous analysis and theoretical foundation concerning black-box adversarial optimization algorithms, particularly focusing on the Nash Equilibrium as the solution concept. The paper provides a novel proof of the No Free Lunch Theorem in the context of adversarial optimization, shedding light on the equal performance of algorithms when NE is considered. Additionally, the introduction of black-box complexity analysis adds depth to understanding the query complexity in finding NE. Overall, the paper's strengths include its theoretical contributions, clarity in presentation, and insightful analysis of black-box adversarial optimization in the context of NE. Weaknesses: See questions below. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. This paper is theoretically comprehensive. However, providing simple empirical verification on binary voting might increase the credibility of the theoretical founding. I am wondering if it is possible to conduct experiments to verify the proposed lower bound. 2. I am curious about more difficult games than DIAGONAL and PLATEAU that can be solved by Algorithm 1. Can you provide other examples? 3. Algorithm 1 is still a random search algorithm. Can it be generalized to some real-world games? Please discuss more on different classes of search heuristics rather than random search. How will the results of black-box complexity change? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **RXe7yQ2 I am curious about more difficult games than DIAGONAL and PLATEAU that can be solved by Algorithm 1. Can you provide other examples?** A: Before answering the question, we would like to clarify two confusions in the question: (1) As mentioned in the paper **(lines 223-230)**, Algorithm 1 represents a broad class of algorithms. Therefore, we cannot claim that all instances of Algorithm 1 solve Diagonal and Plateau. (2) While all instances of the Diagonal problem class can be solved by some instance of Algorithm 1 in polynomial runtime, Plateau is already a challenging problem in terms of polynomial solvability. In particular, we prove that all instances of Algorithm 1 need exponential time to solve Plateau, making it inefficient. Hence, we do not consider it significant to find more difficult games than Plateau. (3) The reason we chose these examples is to illustrate two kinds of problems within the general class of binary voting games with unique NE: the **polynomial-solvable** class (i.e., there exists an algorithm $A \in \mathcal{A}$ that can solve all problem instances of this class in polynomial runtime) and the **non-polynomial-solvable** class (i.e., there exists **no** algorithm $A \in \mathcal{A}$ that can solve all problem instances of this class in polynomial runtime). As mentioned in **Section 4.4.3**, these examples and the general class of binary voting games with unique NE are sufficient to illustrate the use of black-box complexity in black-box adversarial optimisation. **RXe7yQ3 Algorithm 1 is still a random search algorithm. Can it be generalised to some real- world games? Please discuss more on different classes of search heuristics rather than random search. How will the results of black-box complexity change?** A: Before answering the question, we would like to point out another confusion in the question: Although the algorithm(s) make random decisions, it does not make it a pure random search (sampling solutions uniformly at random). We would like to clarify that Algorithm 1 is **not** a simple class of random search algorithms but defines a general class of algorithms. As is well-known in the literature, clever use of randomness can lead to simpler and more robust algorithms. We extend our discussion from the original paper **(lines 223-233)** here. Algorithm 1 defines various black-box adversarial algorithms, including coevolutionary heuristics [7 , 6, 1], bandit learning algorithms [10 , 3], and other black-box query-only randomised algorithms like FINDPSNE (designed to learn the NE in bimatrix games by querying the payoff matrix) [9], by specifying different probability distributions PI(t) in Line 3 of Algorithm 1. The main aim of Section 4 is to provide a general performance measure for such a broad class of black-box models. For other algorithms outside the class defined by Algorithm 1, if in the binary voting games they act like a decision tree algorithm, we conjecture that the black-box complexity (BBC) will remain the same. If not, we may need to examine different classes on a case-by-case basis. --- Rebuttal Comment 1.1: Title: Acknowledgement of reading the authors' responses Comment: I would like to thank the authors for clarifying my doubts. The responses provided by the authors adequately addressed my concerns. --- Rebuttal 2: Comment: Reviewer Xe7y, Since you gave a borderline score for this paper, please engage in discussions with the authors and see if the rebuttal addressed your concerns. Thanks, AC
Rebuttal 1: Rebuttal: We thank all the reviewers for their useful and detailed comments. Due to the space limit, we only keep the reference list in the global response. All the responses will use the **same** reference list. **Q1 Two-player zero-sum game with a unique NE or PSNE. PuqfW3; RwGpW1** A: Our current theoretical analysis focuses on two-player zero-sum games with a unique Nash Equilibrium (NE). We agree that exploring other solution concepts and games would be insightful. However, we argue that our analysis solves a non-trivial open problem. Choosing this class of games is reasonable for the following reasons: (1) The No Free Lunch (NFL) theorem for any solution concept is still a long-standing open problem. One of the main messages of this paper is that, despite prior work showing the existence of a “free lunch” in adversarial optimisation (e.g., [13]; [12]), we prove a NFL theorem for the unique Nash Equilibrium. This uncovers the fact that there is no “silver bullet” in adversarial optimisation for all solution concepts. To our knowledge, this is the first work to answer this question negatively, and the unique NE solution concept is sufficient to serve this aim. (2) The technical level is state-of-the-art within the area. As discussed in Section 1.2, we make only a very weak assumption: limited query access to the payoff function, with no assumptions about properties such as convexity, continuity, or differentiability. Therefore, we need to introduce tools from game theory and Yao’s principle. (3) Two-player zero-sum games with a unique NE are commonly studied in AI/ML literature. For instance, (detailed discussion can be found in **Section A** in Appendix), (a) Learning in Games: for example, see [11], [10], [9]. (b) Co-evolutionary Heuristics: for example, see [7],[6],[1]. In summary, two-player zero-sum games with a unique NE represent a broad and meaningful class of games, providing a reasonable starting point. While it is indeed interesting to generalise the analysis to more complex solution concepts such as mixed NEs or broader classes of adversarial optimisation problems, it is a necessary first step in order to be able to analyse more general classes of games. **Q2 The theoretical analysis is restricted to finite (discrete) problems. 7LPoW2; RwGpQ2.** A: We agree that the current result is restricted to finite (discrete) problems and only considers the deterministic case where "each query has at most k ≥ 2 possible answers". There are two reasons for using this condition: (1) For consistency with previous literature and a fair comparison, we use the same setting as the previous NFL/FL results [13, 14, 12] and emphasise that under the same condition, the solution concept plays a role in the construction of NFL/FL. (2) The generalisation of such a result is interesting, however, as shown in the paper, under this condition, the proof/analysis is still challenging and non-trivial. It is necessary to understand the basic setting before moving to a more general setting. In conclusion, despite the limitations, this setting provides a clear and rigorous starting point. Future work can build on these results to explore continuous and more complex scenarios. As mentioned in **line 384**, we may need further assumptions or use other tools to derive the NFL analysis for other scenarios. For a broad class of possible applications on two-player zero-sum games with finite strategies and unique NE, we refer to our response to Q1. **Q3 Empirical verification of our theoretical result. RXe7yQ1; PuqfW2** A: It is impossible to verify impossibility results, such as our black-box results, empirically. In particular, we are proving an impossibility result for an infinitely large class of algorithms and for an infinitely large class of problems (c.f. Definition 7). To verify the result empirically, one would have to try an infinite number of algorithms on an infinite number of problem instances. Clearly, this is not possible. This paper is theoretical in nature and aims to provide a general understanding of black-box adversarial optimisation. **References** [1] A. Benford, P.K. Lehre. Runtime Analysis of Coevolutionary Algorithms on a Class of Symmetric Zero-Sum Games. In GECCO, 2024. [2] S. Bubeck et al. Convex Optimization: Algorithms and Complexity. Found. Trends Mach. Learn., 2015. [3] Y. Cai, H. Luo, C.-Y. Wei, W. Zheng. Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games with Bandit Feedback. In NeurIPS, 2023. [4] C. Daskalakis, I. Panageas. The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization. In NeurIPS, 2018. [5] A.V. Do, A. Neumann, F. Neumann, A.M. Sutton. Rigorous runtime analysis of MOEA/D for solving multi-objective minimum weight base problems. In NeurIPS, 2023. [6] M.A. Hevia Fajardo, P.K. Lehre, S. Lin. Runtime analysis of a co-evolutionary algorithm: Overcoming negative drift in maximin-optimisation. In FOGA, 2023. [7] P.K. Lehre. Runtime Analysis of Competitive co-Evolutionary Algorithms for Maximin Optimisation of a Bilinear Function. In GECCO, 2022. [8] T. Lin, C. Jin, M. Jordan. On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems. In ICML, 2020. [9] A. Maiti, R. Boczar, K. Jamieson, L.J. Ratliff. Query-Efficient Algorithms to Find the Unique Nash Equilibrium in a Two-Player Zero-Sum Matrix Game. arXiv preprint arXiv:2310.16236, 2023. [10] B. O’Donoghue, T. Lattimore, I. Osband. Matrix Games with Bandit Feedback. In UAI, 2021. [11] I. Panageas, N. Patris, S. Skoulakis, V. Cevher. Exponential Lower Bounds for Fictitious Play in Potential Games. In NeurIPS, 2023. [12] T.C. Service, D.R. Tauritz. A No-Free-Lunch Framework for Coevolution. In GECCO, 2008. [13] D.H. Wolpert, W.G. Macready. No Free Lunch Theorems for Optimization. TEVC, 1997. [14] D.H. Wolpert, W.G. Macready. Coevolutionary Free Lunches. TEVC, 2005.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Test-Time Dynamic Image Fusion
Accept (poster)
Summary: This paper introduces a method for dynamically adjusting fusion weights for image pixels based on their relative dominability, calculated using pixel-wise reconstruction losses. This approach aims to minimize generalization error by considering the correlation between fusion weights and reconstruction losses. Strengths: 1- The presentation of the method is simple and clear. 2- As the results show, the proposed framework achieves good results on several datasets. Weaknesses: 1. The claim that RD accurately captures the dominant regions of different sources without solid empirical justification for various scenarios might be overreaching. It is unclear how RD performs under various noise conditions or with sources of different qualities. 2. In Eq. (4), the fusion weights $w$ are important as they determine the contribution of each source to the total loss. However, there is no clear definition for normalizing these weights. 3. In Eq. (8), the paper lacks discussion on the initialization of $w^(m)$ and how it affects convergence and stability during the dynamic adjustment process. 4. Figure 2 is not clear. How are the feature maps of different layers fused, and what is the impact of each of them on the overall performance of the proposed model? Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the weaknesses! Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Not very adequate. The limitation is general. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We’d like to thank the reviewer for the valuable comments, the acknowledgment of our **good results**, and the **simple but clear presentation**. We provide detailed responses to the constructive comments. * Weakness 1: More explanations for RD. Thanks for the valuable comments. We have added experiments under **noise conditions** and with sources of **different qualities**. In addition, our RD's effect was validated on **different methods and tasks**. These experiments and the visualization of RD empirically demonstrate that the proposed dynamic weights (RDs) based on theoretical principles are effective in capturing and emphasizing the dominant regions of different sources, leading to outstanding fusion performance. (i) **Comparisons on different tasks over multiple baselines**. * For different scenarios, we have evaluated our TTD on four image fusion tasks: VIF (see Tab. 1), MIF (see Tab. 3-5), MEF (see Tab. 2), and MFF (see Tab. 2). * For different baselines, we have applied our TTD to CDDFuse (CDDFuse+TTD), PIAFusion (PIAFusion+TTD), and IFCNN (IFCNN+TTD), separately. Results are given in Tab. 1-5. (ii) **The visualizations of RDs in various scenarios**. * **RD is adaptable to the noise condition**. We simulate a noisy situation in which the visible image quality is affected by contrast perturbation. As shown in **Fig. A4**, with the corruption severity level increasing, the dominant regions of visible modality are gradually reduced, while the unchanged infrared modality gains an increasing RD. Our RD effectively perceives the dominance changes. * **RD is adaptable to different data qualities**. a) To simulate the malfunction of sensors in a real scenario, we masked the infrared image randomly. As shown in **Fig. A2**, the RD of the region being masked is apparently smaller than the surrounding area, while that of the same region in infrared image is relatively greater. b) Furthermore, the quality of images also changes with illumination. As shown in Fig. 6 (see Appendix C.1), we visualized the RDs of the samples at different times in the same scenario. As it changes from day to night, the dominance of visible images gradually decreases, while the dominance of infrared images increases. * **RD is adaptable to different tasks**. We have presented visualizations of RDs obtained using CDD+TTD for VIF and MIF tasks, and IFCNN+TTD for MEF and MFF tasks, as CDDFuse is a VIF-specific method while IFCNN is a unified approach. As shown in Fig. 1, our RD reflects the dominance of each source on different tasks adaptively. (iii) **RD's effectiveness over varying performance baseline**. We further conduct additional experiments to combine TTD with baselines of varying performance. As shown in **Tab. A2** of global rebuttal PDF, our TTD significantly improves these baselines with different performance. This validates that our TTD effectively improves baselines in different scenarios and with different performances. * Weakness 2: More explanations for normalizing the weights. Thanks for the valuable comment. In Eq. 7, we performed a softmax normalization on $w$ as the sum of $w$ being 1 is a prerequisite for deriving the upper bound of generalization error (GError). As shown in **Fig. A1** of global rebuttal PDF, we normalized the weight maps of different sources at the same positions. * Weakness 3: More explanations for the initialization. Thanks for the constructive comment. The initialization of $w$, which is multiplied with unimodal features, **only participates in the uni-source reconstruction process** to adapt the uni-source features to the feature space of the baseline. If the initialization of $w$ is different from that of the baseline in the reconstruction process, **the feature distribution will deviate from the baseline's feature space**, affecting the reconstruction performance. Accordingly, we have added experiments with different initialization weights during uni-source reconstruction to explore the effect of different initializations. According to **Tab. A3** of global rebuttal PDF, we set $w$ the same as the baseline. Besides, unlike traditional test-time adaptation methods [20][21], our TTD does not require fine-tuning the network. In the fusion process, $w$ can be obtained by a single calculation step (Eq. 6). Theoretically, the fused image can be regarded as a linear combination of $M$ uni-source components. We reveal that the fusion model's upper bound of GError is composed of the distance between the uni-source image and each uni-source component as well as the correlation between fusion weight and uni-source component reconstruction loss according to Eq. 4. As the model is frozen in the test time, the essence of reducing GError lies in the negative correlation between fusion weight and uni-source component reconstruction loss. **TTD performs fusion by the dynamic weight negatively correlated to the reconstruction loss according to Eq. 6 by a single calculation step without any training or fine-tuning**, reducing the generalization error and achieving robust results. * Weakness 4: More explanations for the framework. Thanks for your comments. We draw a more detailed pipeline for inference in **Fig. A3**. **In stage 1** (dashed line), we feed each uni-source image individually into the frozen encoder and decoder to acquire the respective decomposed uni-source components. Then, we construct the RD according to Eq. 6. **In stage 2** (solid line), we feed multi-source images into the encoder and get their corresponding features. Then, we fuse features by multiplying the RDs to the respective features and adding them up. Finally, the fused feature is fed into the decoder for the final fusion results. Please kindly note that the fusion of different sources only occurs at the fusion layer of the baseline without interactions between features from different layers. We will release all our code for reliable reproduction. --- Rebuttal Comment 1.1: Comment: The response from the author has addressed my comments. I increased my score. --- Rebuttal 2: Comment: Thanks a lot for your reply. We are delighted to have addressed your concerns. We also appreciate your insightful comments, which have greatly improved our work and inspired us to research more.
Summary: This paper proposes a theoretical justification of image fusion from a generalization perspective and reduces the upper bound of generalization error by decomposing the fused image into multiple components corresponding to its source data. A new test-time dynamic image fusion paradigm TTD is further proposed with the finding that the negative correlation between fusion weight and the uni-source reconstruction loss is the key to reducing the generalization loss. Extensive experiments and discussions confirm the theory and superiority. Strengths: The idea that applying the test-time adaption method into image fusion task with theoretical guarantee is quite meaningful and experiments are sufficient. Weaknesses: The details presentation and explanation about test-time adaption are not very clear. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Is the idea that adding up every uni-source data linearly to get fused image reasonable enough? Are there any information only found with several data neglected in the whole process? 2. Mathematical formulas in Appendix A.1 are a little confused. More parentheses ought to be used to present clear explanations about the scopes of every mathematical symbols. 3. Is the TTD applied to every combination from sources? Most test-time adaption methods only need few data to fine-tune the model. But this paper seems to apply TTD to all source data. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This paper presents a new perspective to analysis the generalization error of the image fusion task, the details presentation and explanation are not very clear, mathematical formulas are little confused and more explanation about inference process should be provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your valuable comments and appreciate your recognition of the **theoretical justification**, **meaningful and sufficient experiments** as well as the **superiority of our work**. We believe the constructive feedback will improve the paper and increase its potential impact on the community. * Weakness 1: The details presentation and explanation about test-time adaption are not very clear. Thanks for your constructive comment. To illustrate the TTD workflow more clearly, we draw a new detailed pipeline during inference in **Fig. A3** (see the global rebuttal PDF). **In stage 1** (dashed line), we input each uni-source image individually into the frozen encoder and decoder to acquire the respective decomposed uni-source components. Then, we construct the RD which is negatively correlated to the distance between the uni-source component and its original image according to Eq.6. **In stage 2** (solid line), multiple source images are fed into the encoder at the same time and get their corresponding features. Next, we get the fused feature by multiplying the RDs as weights to their respective features and adding them up. Finally, the fused feature is input into the decoder and the fused image is obtained. * Question 1: Is the idea that adding up every uni-source data linearly to get fused image reasonable enough? Are there any information only found with several data neglected in the whole process? Thanks for your valuable comment. (i) Most recent approaches [18][19] imply **multi-source image fusion aims at integrating comprehensive information from different source data**. The key challenge lies in capturing the effective component of each uni-source data. To address this problem, we theoretically propose a method that can effectively extract single-source information. * **Theoretically**, the fused image can be regarded as a linear combination of $M$ uni-source components. We reveal that the model's upper bound of generalization error in the image fusion task is composed of the distance between the uni-source image and each uni-source component as well as the correlation between fusion weight and uni-source component reconstruction loss according to Eq. 4. As the model's encoder and decoder are frozen in the test time, **the essence of reducing generalization error lies in the negative correlation between fusion weights and uni-source component reconstruction loss**. TTD performs fusion by the dynamic weight negatively correlated to the reconstruction loss, achieving a reduction in the generalization error compared with the baseline. * **Empirically**, the fusion weight, e.g. RD (defined in Eq. 6), since fusion models are trained to extract complementary information from each source, the decomposed components of fusion images represent the effective information from the source data. Thus, the uni-source components can be estimated from source data using the fusion model, with the losses representing the deficiencies of the source in constructing fusion images. Negatively correlated to the reconstruction loss, the **RD effectively demonstrates the dominance of each source and highlights the dominant regions as a fusion weight**. (ii) With the neglect of other source data, the uni-source reconstruction can be regarded as the corresponding decomposed component of the fusion image, it represents the effective information from the source data, thus the loss between it and source data reflects the Relative Dominability (RD) of the uni-source data in constructing the fusion image, i.e., we **leverage the impact of the missing modality on the loss to perceive the RD** of that source in the fusion image and use the RD as fusion weight. This aligns with the theoretical guarantee of image fusion (see (i)): the key to reducing the generalization error in image fusion tasks lies in the negative correlation between fusion weights and uni-source component reconstruction loss. * Question 2: Mathematical formulas in Appendix A.1 are a little confused. More parentheses ought to be used to present clear explanations about the scopes of every mathematical symbols. Thanks for the valuable suggestions. Here we provide a detailed explanation of the proof in Appendix A.1. At the second equals sign in Eq. 10, the covariance term is derived from the covariance formula: $\mathbb E[XY]=\mathbb E[X]+\mathbb E[Y]+Cov(X, Y)$. At the second inequality sign in Eq. 10, the expression is expanded by grouping $\Vert D(E^{(i)}(x^{(i)})) − x^{(m)}\Vert$ as like terms and expanding the summation notation. Then it is simplified by combining like terms using $w^{(i)}$. At the third inequality sign in Eq. 10, the property of the norm is used, i.e. $\left\|a\right\|-\left\|b\right\|\leq\left\|a+b\right\|$, to achieve further simplification. Due to character limitations, a more detailed derivation and more easily understandable parentheses have been revised and added in Appendix A.1. * Question 3: Is the TTD applied to every combination from sources? Most test-time adaption methods only need a few data to fine-tune the model. But this paper seems to apply TTD to all source data. Thank you for your valuable questions. Most test-time adaptation methods [20][21] require a small amount of data to fine-tune the model. Unlike traditional test-time adaptation methods, our TTD does not require fine-tuning any network parameters. For each sample, the negative correlation between the fusion weight $w$ and the uni-source reconstruction loss can reduce the generalization error upper bound according to the theoretical analysis in the answer to question 2. **TTD performs fusion by the dynamic weight negatively correlated to the reconstruction loss referring to Eq.6 through a single calculation step without any training or fine-tuning**, reducing the generalization error and achieving robust results. --- Rebuttal 2: Title: Reminder for review Comment: Dear Reviewer gbUk, I have noticed that you have not yet responded to the authors' rebuttal. I kindly urge you to engage in a discussion with the authors at your earliest convenience to help advance the review process.
Summary: This paper proposes a theoretically guaranteed new paradigm for test-time dynamic image fusion, which exploits the negative correlation between the fusion weights and the single-source reconstruction loss to reduce the upper bound of the generalization error. Extensive experiments demonstrate its effectiveness on a variety of image fusion tasks. Strengths: The proposed method is simple and effective,the experimental results are detailed and rich, and the effect is competitive compared to SOTAs. Weaknesses: 1. Authors does not mention the change in model inference efficiency. 2. The paper assumes that the decoder of the fusion model is a CNN model, which introduces certain limitations. If the model were a Transformer or Diffusion model, would TDD still be effective? 3. For specific fusion tasks, it is recommended to compare specific methods rather than generalized methods, e.g., multifocus image fusion tasks should compare multifocus image fusion methods. 4. Ablation lacks enough persuasiveness. 5. Table I and II are written inconsistently e.g. TDD and Ours. Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful and thorough comments on our paper as well as for recognizing our **theoretical guarantee**, the **simple but effective framework** of our TTD, the **detailed and rich experiments**, and the **competitive effect compared to SoTAs**. We will also make an effort to increase clarity throughout. * Weakness 1: Authors do not mention the change in model inference efficiency. Thanks for the constructive comment. The inference time of TTD is dependent on the inference time of the baseline. Since TTD is executed in two stages: in the first stage, we calculate the uni-source reconstruction loss and then compute the fusion weights; in the second stage, we perform the fusion based on the weights. As baselines perform the static fusion, **the inference time of TTD is approximately double that of the baseline**. We measured the average processing time per image on the test set of the LLVIP dataset. The results of the inference time over multiple models are given in **Tab. A4** (see the global rebuttal PDF). * Weakness 2: The paper assumes that the decoder of the fusion model is a CNN model, which introduces certain limitations. If the model were a Transformer or Diffusion model, would TDD still be effective? Thanks for the suggestive comment. Although we assume that the decoder of the fusion model is a CNN model, we applied TTD on both **CNN-based** (PIAFusion, IFCNN) and **Transformer-based** (CDDFuse) models, and achieved competitive results, the results are given in Tab. 1 and Tab. 2. In addition, we apply our TTD to a **diffusion-based** image fusion method (Dif-Fusion [1]). The experimental results are given as follows, demonstrating that TTD can be applied to baselines with various network structures and improve their performance on multiple metrics. | Method | EN | SD | AG | EI | SF | SCD | CE | | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | | Dif-Fusion | 7.45 | 50.68 | 3.87 | 10.41 | 13.89 | **1.43** | **7.81** | | Dif-Fusion+TTD | **7.45** | **53.52** | **4.75** | **12.80** | **16.62** | 1.31 | 8.14 | * Weakness 3: For specific fusion tasks, it is recommended to compare specific methods rather than generalized methods, e.g., multifocus image fusion tasks should compare multifocus image fusion methods. Thanks for the constructive suggestion. We have added comparisons of our TTD with methods specifically applicable to the multi-exposure or multi-focus task, and the results in **Tab. A1** in the global rebuttal PDF shows that our method can outperform these specific methods. * Weakness 4: Ablation lacks enough persuasiveness. Thanks for the valuable comment. Our TTD is a simple but effective method with a straightforward structure, and we analyze the effectiveness of the TTD from different aspects in our paper. We have summarized these ablated analyses here: (i) Ablation study on **different baselines**: see Sec. 4.2, Tab. 1, and Tab. 2. (ii) Ablation study on **the correlation between weight and loss**: see Sec. 5.1 and Fig. 5. (iii) Ablation study on **the ways to obtain weight**: see Sec. 5.3 and Fig. 5. In addition, we added more ablation experiments here: (iv) Ablation study on **different forms of fusion weights**. We compared different forms of fusion weight: $w=0.5$ (baseline),$w = Softmax(-\ell)$, $w = Softmax(Sigmoid(-\ell))$, $Softmax(e^{-\ell})$ over IFCNN on the LLVIP dataset, results are given as follows, it shows that forms of fusion can be flexible to achieve the negative correlation between weight and reconstruction loss. | Forms of weight | EN | SD | AG | EI | SF | SCD | CE | |-------------|-------|--------|-------|--------|--------|-------|--------| | $w=0.5$ | 6.95 | 37.75 | 5.18 | 13.13 | 18.18 | 1.32 | 7.82 | | $w=Softmax(-\ell)$ | 6.97 | 38.41 | 5.24 | 13.31 | 18.31 | **1.35** | 7.81 | | $w=Softmax(Sigmoid(-\ell))$ | 6.97 | 38.48 | 5.36 | 13.60 | 18.87 | 1.33 | 7.80 | |$w=Softmax(e^{-\ell})$ | **6.98** | **38.99** | **5.48**| **13.92** | **19.40** | 1.34 | **7.79** | (v) Ablation study on **the normalization of the weights**: we compared three forms of normalization over IFCNN on the LLVIP dataset, results are given as follows, indicating that as a premise of the generalization theory (see Theorem 3.1), the normalization of the weights is necessary and the ways to normalize have little impact on our method. Overall, we performed complete ablation analyses to validate the effectiveness of TTD **(i)**, the necessity of the negative correlation between fusion weight and reconstruction loss **(ii)**, the expandability of ways to obtain fusion weight **(iii)**, the flexibility in the form of weights **(iv)**, the significance of normalization **(v)**. | Method | EN | SD | AG | EI | SF | SCD | CE | | ------------------- | ------- | -------- | ------- | -------- | -------- | ------- | -------- | | baseline | 6.95 | 37.75 | 5.18 | 13.13 | 18.18 | 1.32 | 7.82 | | w/o norm | 6.57 | 29.84 | 4.60 | 11.56 | 16.56 | 0.95 | 8.80 | | Proportional Norm | 6.97 | 38.41 | 5.24 | 13.31 | 18.31 | **1.34** | 7.80 | | softmax(ours) | **6.98** | **38.99** | **5.48** | **13.92** | **19.40** | 1.34 | **7.79** | * Weakness 5: Table I and II are written inconsistently e.g. TDD and Ours. Thanks for the detailed comments. We have modified the claims to avoid inaccurate descriptions, and we have thoroughly reviewed the entire manuscript and corrected these errors. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The authors have addressed my concern. I intend to increase my score. --- Rebuttal 2: Comment: Thanks a lot for your positive feedback. Your insightful comments have greatly improved our work. We sincerely appreciate your support.
Summary: - This paper tries to solve the image fusion task, where multi-source images are provided and one needs to extract and integrate effective information from them. - The paper demonstrates its effectiveness on four different tasks: VIF, MIF, MEF, and MFF. - The paper proposes a test-time dynamic image fusion method with theoretical justification. - This paper theoretically proves the superiority of dynamic image fusion over static image fusion, and provides a generalization error upper bound. - By using the relative domainability of each source as the dynamic fusion weight, it is able to theoretically improve the generalization of the image fusion model and dynamically emphasize the dominant regions of each source. - This method theoretically and empirically demonstrates superiority over static fusion methods through extensive experiments on various datasets, including visible-infrared, medical, multi-exposure, and multi-focus image fusion tasks. Strengths: - The paper is well written. - The method is evaluated on four different tasks. - The approach is fairly simple. - The approach is justified theoretically - When the baseline model is robust, it can improve the fusion performance. Weaknesses: - The adaptation method heavily relies on the performance of the baseline model, which can be ineffective when the model performance is poor. - The improvement over the non-adaptive baseline is minor. Technical Quality: 3 Clarity: 4 Questions for Authors: - Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our **effectiveness on multiple tasks**, **theoretical guarantee**, and **well presentation**. We appreciate your support and constructive suggestions and address your concerns as follows. * Weakness 1: The adaptation method heavily relies on the performance of the baseline model, which can be ineffective when the model performance is poor. Thanks for your comment. Please kindly note that our performance is related to the baseline model as we claimed in the Limitations (Line 479-Line 480), however,  it does not imply that our approach would be ineffective on weak-performance baselines. Based on the generalization theory, our TTD explicitly reduces the upper bound of the generalization error of models regardless of the baselines' performance. We have also added experiments to further clarify our effectiveness on the baseline with different performances. We have elaborated on this in two facts: (i) **Theoretically**, referring to the deduce in Sec. 3, the fused image can be regarded as a linear combination of multiple uni-source components, i.e. the uni-source reconstruction. We reveal that the model's upper bound of generalization error in the image fusion task is composed of the distance between the uni-source image and each uni-source component as well as the correlation between fusion weight and uni-source component reconstruction loss according to Eq. 4. As the model's encoder and decoder are frozen in the test time, the essence of reducing generalization error lies in the negative correlation between fusion weights and uni-source component reconstruction loss. In static image fusion (baseline), the correlation between weight and reconstruction loss is 0. In contrast, **the dynamic weight in our TTD is negatively correlated to the reconstruction loss**, achieving a reduction in the generalization error compared with the baseline and improving the baseline's performance effectively. Thus, **the key to TTD functioning is independent of the performance of the baseline theoretically**. (ii) **Experimentally** , in our paper, we have applied TTD to various baselines with different capabilities and all achieved consistent enhancement, TTD can even further improve the performance when combined with current state-of-the-art methods. To further validate that our TTD is effective on models with different performances, we conducted additional experiments to apply TTD on models with varying performance levels by adding random Gaussian noise to the pre-trained model (IFCNN) parameters. The results on the LLVIP dataset are given in **Tab. A2** (see the global rebuttal PDF), showing that the performance of the baseline decreases with increasing noise added to it. As a comparison, **our TTD effectively improves all these baselines' performance, indicating the effectiveness and generalizability of our TTD on various baselines with different performances**. * Weakness 2 : The improvement over the non-adaptive baseline is minor. Thanks for your comment. Please kindly note that our method is independent of the baseline's adaptability, which of TTD is reflected in its ability to dynamically adjust the fusion weights to each sample. (i) **The adaptability of TTD**. As analyzed in Question 1, TTD can theoretically reduce the image fusion model's generalization error by constructing the negative correlation between fusion weight and uni-source component reconstruction loss. For each sample, we derive a pixel-level Relative Dominablity (RD) as the dynamic fusion weight, and RD is negatively correlated with uni-source component reconstruction loss according to Eq. 6. Experimental results in Tab. 1-5, the visualization in Fig. 1 and Fig. 6 show that RD-based dynamic fusion weight effectively captures the dominance of each source in image fusion and enhances its advantages in the fused images. Overall, **this adaptability of TTD refers to its adaptive fusion weights instead of the baseline**. (ii) **The improvements over various baselines are not minor**. We applied our TTD on three baselines with different performances, and we also conducted extensive experiments on multi-modal, multi-exposure, and multi-focus datasets. The superior performance across diverse metrics demonstrates the effectiveness and applicability of our approach. We further perform additional experiments to combine TTD with baseline models of varying performance. As shown in **Tab. A2** (see the global rebuttal PDF), our TTD significantly improves these baselines with different performances. **This validates that our TTD effectively improves the various baselines in different scenarios as well as with different performances**. --- Rebuttal 2: Title: Reminder for review Comment: Dear Reviewer gBb1, I have noticed that you have not yet responded to the authors' rebuttal. I kindly urge you to engage in a discussion with the authors at your earliest convenience to help advance the review process.
Rebuttal 1: Rebuttal: Dear PCs, SACs, ACs, and Reviewers, We would like to thank you for your valuable feedback and insightful reviews, which have greatly contributed to improving the paper. This is a **clear and well-written** (Reviewer gBb1, Reviewer EtNz) manuscript with a **theoretical guarantee** (Reviewer gBb1, Reviewer UH2o, Reviewer gbUk), we proposed a **effective and superior framework** (Reviewer UH2o, Reviewer gbUk), **the meaningful, detailed and sufficient experiments** on multiple tasks validate the theory and **TTD’s effectiveness and superiority** (Reviewer UH2o, Reviewer gbUk, Reviewer gBb1, Reviewer EtNz). In our rebuttal, we addressed the following raised concerns/misunderstandings. * We have provided a detailed explanation and experimental validation for TTD's performance on baselines with varying performance. * We have provided the inference time of TTD and baselines. * We have validated TTD's adaptability on Dif-Fusion. * We have compared our TTD with MEF-specific methods and MFF-specific methods. * We have added more ablation experiments to verify the flexibility in the form of weights and the significance of normalization. * We have drawn a new detailed pipeline during inference in Fig. A3. * We have visualized the RDs under different noise conditions and with sources of different qualities. * We have conducted an ablation study on the initialization of $w$. We hope that our responses will satisfactorily address your questions and concerns. We sincerely appreciate the time and effort you have dedicated to reviewing our submission, along with your invaluable suggestions. We believe that these clarifications and additional details strengthen our paper and address the reviewers' concerns. We understand the constraints of time and workload that reviewers and AC face, and we appreciate the effort already put into evaluating our work. If there are any additional insights, questions, or clarifications on our responses/submission that you would like to discuss with us, we would be very grateful to hear them, your feedback is invaluable for the improvement of our research. Best regards, Authors of Submission 29 ## Reference [1] Yue J, Fang L, Xia S, et al. Dif-fusion: Towards high color fidelity in infrared and visible image fusion with diffusion models[J]. IEEE Transactions on Image Processing, 2023. [2] Ram Prabhakar K, Sai Srikar V, Venkatesh Babu R. Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the IEEE international conference on computer vision. 2017: 4714-4722. [3] Wang Q, Chen W, Wu X, et al. Detail-enhanced multi-scale exposure fusion in YUV color space[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 30(8): 2418-2429. [4] Liu Y, Wang Z. Dense SIFT for ghost-free multi-exposure fusion[J]. Journal of Visual Communication and Image Representation, 2015, 31: 208-224. [5] Lee S, Park J S, Cho N I. A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient[C]//2018 25th IEEE international conference on image processing (ICIP). IEEE, 2018: 1737-1741. [6] Li H, Zhang L. Multi-exposure fusion with CNN features[C]//2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018: 1723-1727. [7] Ma K, Duanmu Z, Yeganeh H, et al. Multi-exposure image fusion by optimizing a structural similarity index[J]. IEEE Transactions on Computational Imaging, 2017, 4(1): 60-72. [8] Lei J, Li J, Liu J, et al. GALFusion: Multi-exposure image fusion via a global–local aggregation learning network[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-15. [9] Liu J, Wu G, Luan J, et al. HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion[J]. Information Fusion, 2023, 95: 237-249. [10] Li J, Guo X, Lu G, et al. DRPL: Deep regression pair learning for multi-focus image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4816-4831. [11] Amin-Naji M, Aghagolzadeh A, Ezoji M. Ensemble of CNN for multi-focus image fusion[J]. Information Fusion, 2019, 51: 201-214. [12] Xu H, Fan F, Zhang H, et al. A deep model for multi-focus image fusion based on gradients and connected regions[J]. [13] Qiu X, Li M, Zhang L, et al. Guided filter-based multi-focus image fusion through focus region detection[J]. Signal Processing: Image Communication, 2019, 72: 35-46. [14] LAI R U I, LI Y, GUAN J, et al. Multi-Scale Visual Attention Deep Convolutional Neural Network for Multi-Focus Image Fusion[J]. [15] Song X, Wu X J. Multi-focus image fusion with PCA filters of PCANet[C]. Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction: 5th IAPR TC 9 Workshop, MPRSS 2018, Beijing, China, August 20, 2018, Revised Selected Papers 5. Springer International Publishing, 2019: 1-17. [16] Ma B, Zhu Y, Yin X, et al. Sesf-fuse: An unsupervised deep model for multi-focus image fusion[J]. Neural Computing and Applications, 2021, 33: 5793-5804. [17] Ma J, Zhou Z, Wang B, et al. Multi-focus image fusion using boosted random walks-based algorithm with two-scale focus maps[J]. Neurocomputing, 2019, 335: 9-20. [18] Xu H, Ma J, Jiang J, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 502-518. [19] Tang L, Yuan J, Zhang H, et al. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83: 79-92. [20] Liu Y, Kothari P, Van Delft B, et al. Ttt++: When does self-supervised test-time training fail or thrive?[J]. Advances in Neural Information Processing Systems, 2021, 34: 21808-21820. [21] Sun Y, Wang X, Liu Z, et al. Test-time training with self-supervision for generalization under distribution shifts[C]. International conference on machine learning. PMLR, 2020: 9229-9248. Pdf: /pdf/c8ba94363fb09421980517cf31222e39c5cda4e0.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation
Accept (poster)
Summary: The authors present a new large language model (LLM) framework called the Unified Spoken Dialog Model (USDM) that can directly understand and generate spoken dialog responses with natural prosody. This is achieved by incorporating prosodic information into the speech tokens and using a multi-step spoken dialog template for fine-tuning. Both automatic and human evaluations on the DailyTalk dataset show that USDM outperforms previous models in generating natural-sounding spoken responses. Strengths: The authors demonstrate the superior performance of USDM over existing models on the DailyTalk dataset and validate their training methods through thorough analysis. Key contributions include: a unified pretraining strategy for effectively modeling the relationship between speech and text, an extensive spoken dialog modeling framework using prosody-infusing encoders and decoders, and an LLM-based modeling strategy for generating natural and coherent dialog responses. Their work establishes a foundation for speech-enabled chat-based LLMs, showcasing a prototype that enhances LLMs with speech interaction capabilities. Weaknesses: Comparative Analysis: Extend the comparative analysis to include a wider range of previous methods in addition to SpeechGPT. While SpeechGPT is a strong benchmark, a more comprehensive comparison with other relevant approaches would provide a clearer understanding of the USDM's relative strengths and weaknesses. Dataset Diversity: Expand the evaluation to include multiple datasets beyond DailyTalk. Evaluating the model on a diverse range of datasets would offer a more robust assessment of its generalization capabilities and performance across different scenarios. Emotional Control: Explore and discuss the potential for controlling the emotional expression of the generated responses in the proposed USDM model. Addressing this aspect would provide insights into the model's ability to adapt to different emotional contexts and potentially open avenues for future research in this direction. Technical Quality: 3 Clarity: 3 Questions for Authors: -- Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: -- Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, thank you for your thoughtful comments, feedback, and questions. We provide explanations and answers to several questions below. **[W1] Comparative Analysis: Extend the comparative analysis to include a wider range of previous methods in addition to SpeechGPT.** Thank you for your valuable suggestions. Following your feedback, we have added an additional open source baseline for comparison on the DailyTalk dataset. The baseline we added is AnyGPT [1], a multimodal LLM capable of handling text, speech, images, and music as both input and output. We fine-tuned this model using its official implementations and pretrained checkpoints with a 7B parameter size on the DailyTalk dataset. The results of comparing this model with our model are presented in the table below. The comparison was conducted using the same metrics and test set as for Table 1 in the main paper. As shown in the table below, our model demonstrates superior performance compared to AnyGPT. **<Overall>** | **Method** | **win** | **tie** | **lose** | **MOS** | **P-MOS** | |----------------|-----------|----------|-----------|-----------------|-----------------| | Ground Truth | 45.9% | 8.0% | 46.1% | 4.45±0.05 | 4.41±0.05 | | USDM | - | - | - | 4.32±0.06 | 4.28±0.07 | | AnyGPT [1] | 65.6% | 5.1% | 29.3% | 3.54±0.07 | 3.49±0.08 | **<Semantic>** | **Method** | **win** | **tie** | **lose** | **METEOR** | **ROUGE-L** | |----------------|-----------|----------|-----------|-----------------|-----------------| | Ground Truth | 32.7% | 19.6% | 47.7% | - | - | | USDM | - | - | - | 13.1 | 15.7 | | AnyGPT [1] | 58.9% | 15.6%| 25.5% | 9.8 | 11.7 | Although AnyGPT used approximately 60,000 hours of speech and text data for pretraining, it also incorporated two additional modalities (images and music) into a single model. This, combined with its simpler speech-text pretraining scheme and less consideration of paralinguistic information in a multimodal LLM, contributes to its inferior performance compared to USDM. **[W2] Dataset Diversity: Expand the evaluation to include multiple datasets beyond DailyTalk.** Thank you for your valuable feedback. When preparing our model, we considered various datasets besides DailyTalk for evaluation. However, publicly available spoken dialog datasets were very limited. Among the candidates, the Fisher dataset, known for its frequent fillers, backchannels, and simple unengaging responses, posed significant challenges for model evaluation from a semantic perspective. Additionally, this dataset, collected in the early 2000s, comprises 8kHz low-quality telephone conversations, making it challenging to discern paralinguistics. Hence, we decided not to include it in our comparative analysis in the paper and used it only for research demonstration purposes on our demo page. Since the submission of our paper, a new spoken dialog dataset called MultiDialog [2] has released. Along with the valuable suggestions from other reviewers, our next step will be to validate our model's capability across more diverse languages, various tasks, and a broader range of datasets, including MultiDialog. Thank you once again for your insightful suggestions. **[W3] Emotional Control: Explore and discuss the potential for controlling the emotional expression of the generated responses in the proposed USDM model.** Thank you for your valuable suggestions. Following your advice, we have made slight modifications to our model's training template in Figure 5 of the main paper to enable emotion control. Originally, our template for spoken dialog was structured as speech1 $\rightarrow$ text1 $\rightarrow$ text2 $\rightarrow$ speech2. To explicitly control emotion, we modified it to speech1 $\rightarrow$ text1 <emotion1> $\rightarrow$ <emotion2> text2 $\rightarrow$ speech2. By using this modified template to train USDM, we can set <emotion2> to our desired emotion during inference, allowing us to control emotional expression. We preprocessed the MultiDialog dataset using this template and trained USDM accordingly. We have updated our demo page with speech samples generated by this model, demonstrating the possibility of USDM controlling emotional expression. While we demonstrated the potential, we observed several limitations in the MultiDialog dataset that hinder effective emotion control for spoken responses. MultiDialog is designed with scripts and corresponding ground truth emotions provided to participants, who then act out the dialogs. This dataset includes text, audio, and video (talking face) components. Because the recordings were made by acting out the provided emotions and scripts, we observed that the MultiDialog dataset often lacks precise alignment between speech and ground truth emotion labels. In some instances, the emotion is reflected in the video or transcript but not in the speech. This misalignment results in generated samples where emotions are only reflected in the text response or not prominently infused. Additionally, beyond dataset issues, we observed cases where the model struggled to control emotion when the desired emotion did not match the conversational context. We believe that as more high-quality datasets with well-aligned emotional expressions become available, the performance of our model will improve. Furthermore, we plan to propose additional techniques to disentangle the context of the conversation from the emotion, thereby enhancing emotional controllability. We appreciate your valuable feedback and look forward to incorporating it into our future work. **[Reference]** [1] Anygpt: Unified multimodal llm with discrete sequence modeling. (2024). [2] Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation. (2024). --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer yvFi (made visible to authors) Comment: "Dear Authors, Thank you for addressing my comments and providing detailed responses to each of the points raised. I appreciate the effort you’ve put into refining your paper. Below are my thoughts on the updated content: **[W1] Comparative Analysis:** The addition of AnyGPT [1] to your comparative analysis is a valuable improvement. **[W2] Dataset Diversity:** Your explanation regarding the challenges of incorporating datasets like Fisher and the choice to focus on the DailyTalk dataset is understandable. The inclusion of the MultiDialog [2] dataset in future evaluations is a commendable step towards assessing your model’s generalizability and performance across diverse scenarios. I look forward to seeing the results from these additional datasets as they become available. **[W3] Emotional Control:** The modifications made to enable emotion control in the USDM model are promising. Overall, your responses have addressed the concerns raised comprehensively."
Summary: This paper introduces a method for modeling spoken dialog which relies on a speech-text LLM, pretrained with a combination or text and discrete speech tokens which capture semantics as well as prosody. The pretraining regime attempts to get the LLM to capture two types of relations between text and speech tokens: continuation and correspondence. The approach is evaluated on the DailyTalk dataset using both automatic metrics and human preference judgements, and is shown to outperform a cascaded approach relying on ASR and TTS, as well as other baselines. Strengths: - The pretraining approach is simple but ingenious in terms of modeling. - Speech units are pre-evaluated for paralinguistic content by testing them on emotion classification, and on speech reconstruction. - Aspects of the pretraining scheme are evaluated via ablations. Weaknesses: The main weakness that the method relies on a massive amount (~10 years) of transcribed English speech. This makes it limited in its applicability to just a handful of very resource rich languages. The evaluation relies on a single dataset in a single language. The title places the focus of the work on integrating paralinguistic information, but in the actual paper this is only one, and not the most salient, part of the framework. The prosody is aspect is evaluated via the P-MOS score, but not analyzed in depth. Technical Quality: 3 Clarity: 4 Questions for Authors: Do you have any insight into which specific aspects of prosody this approach captures, as compared to the baselines? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Several limitations are clearly articulated. The reliance on large amounts of transcribed speech for training is not, however. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. We provide our point-to-point response below. **[W1,2, L1] The main weakness that the method relies on a massive amount (~10 years) of transcribed English speech. This makes it limited in its applicability to just a handful of very resource rich languages, / The evaluation relies on a single dataset in a single language.** We share your concerns and acknowledge the limitations you pointed out. Like many other multimodal and audio (speech) LLMs, our model also utilizes over 10k hours of audio for training. This can indeed be a barrier when expanding to entirely new languages. As per your suggestion, we will include this in the Limitations section. Generally, apart from a few languages like English, most languages have only a few hundred hours of available speech data at most. Recent research in tasks such as speech translation and personalized speech synthesis shows that when extensive data is available for a specific language (typically English), it can significantly enhance the performance of low-resource languages by leveraging the dataset of data-rich languages [1, 2]. Following your suggestion, we will extend our methodology to a multilingual setup in future research. By leveraging the data from resource-rich languages, we aim to boost the performance of low-resource languages and validate our approach across various languages and datasets. Thank you once again for pointing us in this meaningful direction. **[W3, Q1] The prosody is aspect is evaluated via the P-MOS score, but not analyzed in depth. / Do you have any insight into which specific aspects of prosody this approach captures, as compared to the baselines?** We acknowledge the limitations in our ability to precisely determine which aspects of prosody are captured in conversations. We also faced challenges in evaluating these aspects individually, which led us to rely on P-MOS for overall prosody assessment. We have conducted further analysis on the units we employed to infuse paralinguistics to indirectly demonstrate which aspects of prosody our approach focuses on. To achieve this, we checked whether our units encapsulate various aspects of prosody. We trained classifiers to assess whether the acoustic units contain information related to other prosodic aspects not covered in Section 3.1, such as gender, pitch, tempo, and energy. We used the TextrolSpeech dataset [3] to train these classifiers, employing the same structure as the emotion classifier described in the main manuscript. For gender classification, we used binary classes (male/female), and for the other three aspects, we performed ternary classification following the dataset's predefined classes. Below is the table with the statistics for the test set of each class and the classification results: | **Class** | **Test set (Total: 199)** | **Probability of Random Guess** | **Acoustic Unit Classifier Accuracy** | |-----------|--------------------------------------|---------------------------|----------------------------------------| | Gender | Male: 104 / Female: 95 | 50.8% | 83.4% | | Pitch | High: 68 / Normal: 67 / Low: 64 | 34.2% | 70.9% | | Tempo | High: 127 / Normal: 44 / Low: 28 | 63.8% | 82.4% | | Energy | High: 76 / Normal: 64 / Low: 59 | 38.2% | 64.8% | The results in the table above confirm that our adopted acoustic units contain information related to gender, speed, and pitch, in addition to emotion. Additionally, for energy, our speech encoder is based on XLS-R, which normalizes input speech using mean and variance. As a result, while the trend in energy (e.g., position of peak value, lowest value) may be preserved, the absolute energy levels are affected. This indicates that the energy trend alone must be used to classify the energy of speech, leading to comparatively lower classification accuracy compared to other aspects. Note that the goal of this experiment is not to build highly accurate classifiers but to check which aspects of prosody are embedded in the units. With more data and additional techniques, the classification accuracies could be improved. By confirming the presence of these aspects in the acoustic units, we hope to provide a clearer understanding of which aspects of prosody our approach captures. **[Reference]** [1] Audiopalm: A large language model that can speak and listen. (2023). [2] XTTS: a Massively Multilingual Zero-Shot Text-to-Speech Model. (2024). [3] Textrolspeech: A text style control speech corpus with codec language text-to-speech models. (2024). --- Rebuttal Comment 1.1: Comment: I'd like to acknowledge the authors' response and thank them for providing the additional analysis. As to my assessment ultimately it remains unchanged as it was largely positive already.
Summary: This paper introduces an extensive speech-text LLM framework, the Unified Spoken Dialog Model (USDM), designed to generate coherent spoken responses with naturally occurring prosodic features relevant to the given input speech without relying on explicit automatic speech recognition (ASR) or text-to-speech (TTS) systems. Strengths: 1) This paper present how to integrate paralinguistics in speech-empowered LLMs. This topic is important and beneficial for both multimodal and speech communities. 2) The interleaved pre-training schedule is reasonable and effective that provides solution to mitigate modality gap between speech and text. 3) The finding in 3.1 is also interesting for speech tokenization. Weaknesses: 1) Unfair comparison. Given that this paper focuses on paralinguistics, based on the findings in section 3.1, it is natural that SpeechGPT's 1k acoustic tokens would perform worse than USDM. 2) With such a reasonable pre-training process, it is regrettable that this work has not managed to extend the speech-text associations learned by the model to different tasks (such as [1]). The paralinguistics is important for human speech understanding beyond spoken dialog. 3) Some listening examples of zero-shot TTS (especially emotional TTS) are recommended to demonstrate that USDM can better mimic the emotion and prosody information from reference speech. [1] Nguyen, Tu Anh, et al. "Spirit-lm: Interleaved spoken and written language model." arXiv preprint arXiv:2402.05755 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: 1) How is the instruction-following capacity of USDM? Since this issue is unrelated to paralinguistics, it's ok to skip this question. 2) Is it possible for USDM to perform speech-to-speech directly with more data or training schedule? Such modality changing results in low efficiency. If can, which kind of strategy is required can be based on USDM? 3) Do authors have open-source plan for pre-trained model? I would like to increase the rating since such an interleaved speech-text foundation model can benefit to both speech and multimodal communities. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful comments and constructive questions. We address your concerns below. **[Q3] Do authors have open-source plan for pre-trained model?** Sure. We plan to release our code and pretrained models. Our model consists of a speech-text pretrained model, fine-tuned spoken dialog model (USDM), and unit-Voicebox, which restores units to speech. All three models will be made publicly available. In addition, since unit-Voicebox can perform speaker adaptation, we are currently preparing methods to prevent misuse (e.g., a classifier to distinguish synthesized speech, etc.). We intend to release unit-Voicebox along with these preventive measures. **[W1] Given that this paper focuses on paralinguistics, based on the findings in section 3.1, it is natural that SpeechGPT's 1k acoustic tokens would perform worse than USDM.** Our goal is to develop a model capable of natural conversation that reflects paralinguistics. In the context of spoken dialog modeling, we emphasize not only the incorporation of paralinguistic information but also the effective capture of semantic content within conversations. In Section 3.1, we highlighted that units with 10k acoustic tokens encapsulate pitch trends and emotional information from a paralinguistic perspective. Furthermore, in Section 3.2, we introduced a novel pretraining scheme aimed at capturing cross-modal relationships, which we believe is particularly beneficial for enhancing semantic understanding. The effectiveness of this scheme is demonstrated in Sections 4.1 and 4.2.1. For a more detailed comparison with SpeechGPT, we constructed an ablation setup, Setup 2, which is trained with a pretraining scheme similar to SpeechGPT but uses the same dataset and acoustic tokens as our approach. As shown in Table 3 of the main paper, Setup 2 performs worse than our approach. This comparison illustrates the efficacy of our pretraining scheme in maintaining semantic coherence, even with identical acoustic tokens. We hope this response clarifies our efforts to ensure a more precise and fair comparative analysis of the proposed components. **[W3] Some listening examples of zero-shot TTS (especially emotional TTS) are recommended to demonstrate that USDM can better mimic the emotion and prosody information from reference speech.** Thank you for your valuable suggestion. Following your idea, we have updated our demo page with several examples related to zero-shot TTS with several emotional speech prompts. **[W2, Q1] With such a reasonable pre-training process, it is regrettable that this work has not managed to extend the speech-text associations learned by the model to different tasks. / How is the instruction-following capacity of USDM?** Although we proposed and implemented speech-text pretraining for performing spoken dialog modeling, as you mentioned, our speech-text pretraining model can also be utilized for other speech-text downstream tasks. In this study, however, we intended to focus on spoken dialog modeling, as mentioned in our limitations section. Currently, our model is fine-tuned with a focus on the spoken dialog modeling task, so its instruction-following capability is not well demonstrated. Based on other existing studies [1,2], we believe that the instruction-following ability can be incorporated into our model by learning various speech tasks with appropriate instructions for each task. Following your suggestion, we plan to apply our method to a variety of tasks beyond spoken dialog modeling in future direction and strive to enhance the instruction-following capacity of our model. Thank you for your valuable suggestions. **[Q2] Is it possible for USDM to perform speech-to-speech directly with more data or training schedule? If can, which kind of strategy is required can be based on USDM?** As you pointed out, USDM generates spoken responses through intermediate text to achieve better performance, which is shown in Table 3 of the main text (compared to S1 $\rightarrow$ S2). Recently, it has been demonstrated that the amount of data is crucial for performance, not only in speech [3] but also in various modalities (text, image, etc.) [4]. Therefore, as you mentioned, we believe that using more data will likely improve the performance of speech-to-speech direct modeling. To illustrate this point, we attempted direct spoken dialog modeling using only 10\% of the data used in the S1 $\rightarrow$ S2 experiment in Table 3 of the main paper. When comparing the results, the original METEOR score was 6.5 and ROUGE-L score was 7.7, while with reduced data, the METEOR score dropped to 5.1 and ROUGE-L score to 6.4. Although we have not tested the upper bound of performance with increased data due to the lack of large-scale high-quality spoken dialog datasets, we expect that increasing the data alone would improve the performance of direct spoken dialog modeling. Additionally, considering potential strategies, recent works have shown that training various tasks with a single model creates synergy, resulting in performance improvements compared to task-specific models [5, 6]. Therefore, by exploring and incorporating various tasks and techniques that can aid in direct speech-to-speech spoken response generation, we believe USDM will be able to perform spoken dialogues without the need for intermediate text. **[Reference]** [1] UniAudio 1.5: Large Language Model-driven Audio Codec is A Few-shot Audio Task Learner. (2024). [2] Audio flamingo: A novel audio language model with few-shot learning and dialogue abilities. (2024). [3] Audiobox: Unified audio generation with natural language prompts. (2023). [4] Scaling laws for neural language models. (2020). [5] u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model. (2023). [6] UniverSLU: Universal Spoken Language Understanding for Diverse Tasks with Natural Language Instructions. (2024). --- Rebuttal 2: Title: Improve My Rating Comment: I appreciate the author's response, which has alleviated most of my concerns. Therefore, I decide to raise my rating to 6 as I mentioned.
null
null
Rebuttal 1: Rebuttal: We would like to extend our sincere gratitude to all the reviewers for their insightful comments. Before addressing each reviewer's concerns, we want to clarify a point that may have caused some confusion caused by our paper's title, "Integrating Paralinguistics in Speech-Empowered Large Language Models for Natural Conversation." The primary goal of our research is to propose an approach for modeling spoken dialogs that accurately reflect both paralinguistics and content, thereby building a spoken dialog model. Therefore, our study focuses not only on paralinguistics but also on semantic coherence in spoken conversation. To achieve both, we analyzed acoustic tokens in terms of paralinguistics and adopted prosody-infused speech tokens in Section 3.1, and proposed an effective and novel speech-text pretraining scheme in Section 3.2. We hope this clarifies that our focus is on both paralinguistics and appropriate content in spoken dialog modeling. If our title continues to cause confusion, we are open to considering a change in the title.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Provably Efficient Interactive-Grounded Learning with Personalized Reward
Accept (poster)
Summary: This paper considers Interactive-Grounded Learning (IGL) with personalized reward, where the feedback can be context-dependent. The authors propose two algorithms, which are provably efficient by utilizing the novel Lipschitz reward estimators. Empirical results on the image classification dataset and the conversational dataset showcase the effectiveness of the proposed algorithms. Strengths: (+) This is the first work to provide provably efficient algorithms with regret guarantees for IGL with personalized reward. (+) It is interesting to see the experiments on the conversation dataset, which is a novel task in the IGL literature. Weaknesses: (-) The paper does not compare the proposed algorithms to the IGL-P algorithm proposed in (Maghakian et al., 2022). (-) While there are justifications in the paper for using the Lipschitz reward estimators (Lemma 4), it is unclear why using $\mathbb{1}\\{\hat{h}_a(x,y)\ge\frac{\theta}{\alpha}\\}$ or the previous IGL-P algorithm could fail. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. It would be nice to provide counter-examples to show why using $\mathbb{1}\\{\hat{h}_a(x,y)\ge\frac{\theta}{\alpha}\\}$ or the step-function estimator in prior work could fail in the setting. 2. How restrictive is it to obtain prior knowledge of parameters $\alpha$ and $\theta$ in Algorithms 1 and 2? What is the role of parameter $\gamma$ in Algorithm 2? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No "Limitations" section is provided in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **“The paper does not compare the proposed algorithms to the IGL-P algorithm proposed in (Maghakian et al., 2022).”** Reply: We followed the reviewer’s suggestion and tested the algorithm of Maghakian et al. on the MNIST dataset. We found that it achieves less than 0.2 average progressive reward, significantly worse than our algorithms. We will add this result to our paper in the next version. Beside the issue of using step-function estimators, another main reason for this huge performance difference is that IGL-P trains the reward predictor (inverse kinematics part) in an online manner. Specifically, in each iteration, IGL-P updates the predictor using only the current sample, without reusing past samples for training. In contrast, our algorithms first employ uniform policies to collect a dataset and then train the predictor on this dataset for multiple epochs via supervised learning. Since the actions in our dataset are uniformly distributed, the dataset has good coverage for each action, ensuring better generalization performance for our predictor. > **“While there are justifications in the paper for using the Lipschitz reward estimators (Lemma 4), it is unclear why using $\mathbb{1}\lbrace\hat h_a(x,y)\ge\frac{\theta}{\alpha}\rbrace$ or the previous IGL-P algorithm could fail.”** Reply: We do not have counter-examples showing that the binary reward function provably fails. However, from a technical perspective, Lipschitzness is crucial to control the difference between the empirical value function with respect to true rewards and that with respect to the estimated rewards (lines 486-487), and from an empirical perspective, we also show that our algorithm performs much better when equipped with our Lipschitz reward estimator compared to the binary one. To further demonstrate the last point, we run additional experiments using the on-policy Algorithm 2 on MNIST with 20 different random seeds. The binary reward estimator achieves an average progressive reward of 0.711 (0.040) and a test accuracy of 88.5% (3.5%). In contrast, the Lipschitz reward estimator achieves an average progressive reward of 0.748 (0.025) and a test accuracy of 90.6% (3.4%). According to the two-sample t-test, the Lipschitz reward estimator outperforms the binary one with greater than 95% confidence in terms of both average progressive reward and test accuracy. > **“How restrictive is it to obtain prior knowledge of parameters $\alpha$ and $\theta$ in Algorithms 1 and 2? What is the role of parameter $\gamma$ in Algorithm 2?”** Reply: We think prior knowledge on $\alpha$ and $\theta$ is usually easy to obtain in many setups. For example, as we mentioned after Assumption 3, in the s-multi-label classification problem (with $1\leq s < K/2$), we have $\alpha = s$ and $\theta = 1$. The parameter $\gamma$ controls the amount of exploration in the inverse-gap weighting (IGW) rule. Intuitively, smaller $\gamma$ leads to more exploration among the actions. --- Rebuttal Comment 1.1: Title: Follow-up Response Comment: I thank the authors for their responses to my questions. I appreciate that they include numerical results on the comparison with the IGL-P algorithm. Responding: **Q1.** For interpretability, in the later version of the paper, I would suggest plotting the empirical errors of $\mathbb{1}\{\hat{h}_a(x,y)\geq\frac{\theta}{\alpha}\}$ and $G(\hat{h}_a(x,y),\frac{\theta}{\alpha}-\sigma,\sigma)$ compared to the ground-truth estimator $\mathbb{1}\{h_a^\star(x,y)\geq\frac{\theta}{\alpha}\}$. --- Reply to Comment 1.1.1: Comment: Thanks for your suggestion. We will incorporate this into the next version. If our response addresses your concern, please do consider re-evaluating our paper.
Summary: This paper studies the problem of personalized rewards in the context of Interactive-Grounded Learning (IGL), where the goal is to maximize the unobservable latent rewards from the observed reward-dependent feedback on actions being taken. Specially, authors introduce provably efficient algorithms with sublinear regret to solve a variant of IGL, in which the feedback depends on both the context and rewards. Authors introduce Lipschitz reward estimator via inverse kinematics. Based on it, two algorithms are proposed based on explore-then-exploit and inverse-gap weighting respectively. Both achieves $O(T^{2/3})$ regret. Empirical studies are performed on an image classification dataset and a conversational dataset. Strengths: This paper explores a variant setting of IGL. Instead of sticking with the conditional independence assumption made in existing works, authors go one-step further by studying the setting where the observed feedback depends on both rewards and the context. This setting is practical in cases such as recommender systems. The authors introduces two algorithms that enjoy the sublinear regret of $\tilde{O}(T^{2/3})$ for IGL with personalized reward. In particular, a new reward estimator is introduced via inverse kinematics to construct Lipschitz rewards. Compared to the prior work (Maghakian et al., 2022) which studies deterministic binary reward, here the reward estimator generalizes to randomized binary rewards. Weaknesses: While this paper clearly articulates the idea, my main concern lies in its technical novelty and contribution. More specifically, 1. While I understand the motivation of the paper, It is not clear to me how the studied setting can be technically more challenging than the common contextual bandits. The studied setting concerns about partial feedback that depends on latent reward and context, whereas contextual bandits concerns about explicit rewards that depend on context and action. The studied setting is somewhat a simplified version of POMDP. As such, extending the stochastic contextual bandit algorithms and their theoretical guarantees for IGL appears to be straightforward. 2. Prior study of Xie et al. [2022] has relaxed the conditional independent assumption to study feedback which depends on both action and reward. A natural extension will be to deal with feedback that depends on context, action, and reward (i.e. $y|x, a, r$) when it comes to the personalized settings. It is also more practical and align with the contextual bandit settings. However, authors preserves the conditional independent assumption for actions. 3. The main contribution and emphasis lies in the construction of the reward estimator, the novelty in algorithmic design appears to be limited by using simple standard bandit algorithms (e.g. explore-then-exploit). In addition, algorithmically, employing uniform exploration may lead to higher sample complexity for the balance between exploration and exploitation. More advanced exploration strategies might need to be considered. As such, authors are suggested to comment on the optimality of the provided regret bound. 4. Compared to Algorithm 1, the performance guarantee of Algorithm 2 relies on more restricted assumptions (Assumptions 1 - 5), which can be difficult to satisfied. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Could you explain why do we need an underestimate of the reward in this studied setting? Most of the time when we study online settings, optimism is desirable. 2. What is the intuition of $\sigma$? 3. In Table 1, why the performance of algorithm 2 outperforms algorithm 1? Do they converge to similar performance if running for longer time horizon? It is suggested to provide the regret plots for experiments. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: No potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Comparison with contextual bandits and POMDP** As the reviewer already pointed out, in IGL one only observes indirect feedback about the reward, while in standard contextual bandits one observes the reward directly. This clearly makes IGL more challenging than contextual bandits — in fact, without further assumptions (like those we made), learning in IGL becomes impossible. So we do not understand why the reviewer thinks IGL is not technically more challenging. If what you mean is that all we need is to construct reward estimators using the feedback received, then we emphasize again that our Lipschitz reward estimator is novel, and the previous estimator proposed by Maghakian et al. (2022) does not lead to a regret guarantee. Also, we emphasize that IGL is not a simplified version of POMDPs, and they present their own unique challenges. In POMDPs, although the latent state is unobservable, the reward of the chosen action is still revealed to the learner, which means that POMDP algorithms do not need to address the symmetry breaking problem that is central to IGL. > **The conditional independence assumption for actions** Note that [Xie et al., 2022] in fact considers the conditional independence assumption for context, while we follow [Maghakian et al. (2022)] that considers the conditional independence assumption for actions. The two main reasons we study this setting are:. First, we believe that it does capture many real-world applications. For instance, in recommender systems, while different users may communicate their feedback differently, a given user typically expresses (dis)satisfaction in a consistent way. Thus, this is a realistic setting where the feedback depends only on the context (the user) and the reward (degree of satisfaction), but not the actual action (the recommended item). Second, this seemingly restricted setting is already challenging enough, and there are no efficient algorithms with sublinear regret before our work. Thus, we believe that solving this challenge (which we did) is itself a significant contribution. Furthermore, our empirical results also validate the practicality and the effectiveness of our approach. We do recognize the limit of our setting as pointed out by the reviewer though, and we leave the more general setting as an important future direction. > **“Require more advanced exploration strategies. Optimality of the provided bound.”** Note that in addition to the simple explore-then-exploit idea used in Algorithm 1, we also use the more sophisticated online exploration idea from Foster and Rakhlin [2020] in Algorithm 2. While we are not able to prove a better regret bound (mainly due to the fact that for learning $\hat{h}$, we still use uniform exploration), Algorithm 2 does perform better than Algorithm 1 empirically, as shown in our experiments. We are not certain about the optimality of our $O(T^{2/3})$ regret bounds. An obvious regret lower bound is $\Omega(\sqrt{KT})$ (from contextual bandits). Given that IGL is significantly more challenging (and also that $T^{2/3}$ is the optimal regret for a class of partial monitoring problems as mentioned by Reviewer 8SUA), it is possible that $T^{2/3}$ regret is indeed optimal in our setting. > **“Compared to Algorithm 1, Algorithm 2 relies on more assumptions”** We would like to clarify that Algorithm 2 only relies on two additional assumptions, Assumptions 4 and 5, both of which are commonly used in the contextual bandit literature and reasonable in practice. Assumption 4 assumes that the square loss regret of the online oracle is bounded. As stated in line 240, we can use Vovk’s aggregation algorithm to achieve logarithmic regret for finite $\mathcal{F}$. Additionally, as noted in footnote 1, there are many examples of regression oracles when $\mathcal{F}$ is not finite, and we refer the reviewer to Foster and Rakhlin [2020] for further details. Assumption 5 assumes that our function class realizes one specific function $\underline{f}^\star$. This is similar to the widely used realizability assumption and can be satisfied with a rich function class such as deep neural networks. > **“Why a reward underestimator?”** Indeed, in many bandit problems, optimism is a commonly used exploration strategy. The fact that we need pessimism instead highlights the difference between IGL and standard contextual bandits. From a technical viewpoint, the reason we make sure that the predicted rewards are underestimators and at the same time pretty accurate for the optimal policy (Lemma 4) is that it then allows us to upper bound the regret under the true reward by the regret under the predicted reward (which the algorithm tries to minimize). > **“Intuition of $\sigma$?”** Intuitively, $\sigma$ characterizes the difference of function $h^\star_{\pi^\star(x)}(x,y)$ when $\phi(x,y)=0$ and $\phi(x,y)=1$. As discussed in line 145-154, when $\phi(x,y)=0$, we have $h^\star_{\pi^\star(x)}(x,y)\leq \frac{1}{K-\alpha}$ and when $\phi(x,y)=1$, we have $h^\star_{\pi^\star(x)}(x,y)\geq \frac{\theta}{\alpha}$. $\sigma$ is then set to be half of this gap when constructing our reward estimator that is $1/\sigma$-Lipschitz. > **“why algorithm 2 outperforms algorithm 1? Do they converge to similar performance? Regret plots are suggested.”** Although Algorithm 2 has the same $T^{2/3}$ regret bound as Algorithm 1, as stated in line 243, this is primarily because both need to uniformly explore in the first phase. However, in the second phase, Algorithm 2 is on-policy and employs the inverse-gap weighting strategy, which typically offers better exploration than the explore-then-exploit approach used in Algorithm 1. This is the main reason why Algorithm 2 empirically performs better, regardless how long the time horizon is. To further illustrate this, we have provided the average progressive reward plots in the rebuttal PDF, which indeed demonstrates that Algorithm 2 consistently outperforms Algorithm 1 over time. --- Rebuttal 2: Comment: I thank the authors for their detailed response. To further clarify, Xie et al. [2022] studies the setting of $(y|a, r)$, the current draft studies the setting of $(y|x, r)$. Could you elaborate more on why the studied setting may not borrow the techniques from Xie et al. [2022]? More specifically, what are the main technical differences when dealing with feedback depending on context and reward v.s. feedback depending on action and reward? --- Rebuttal Comment 2.1: Comment: Thanks for your further questions. We emphasize that the idea behind the algorithm design of Xie et al., [2022] **cannot** be used in our setting. Specifically, Xie et al., [2022] consider the setting where the feedback is independent of the context given the reward and the action, so they learn the value function $f_a$ and the reward decoder $\psi_a$ separately for each action $a\in[K]$. Generalizing their idea to our setting would then mean learning the value function and the reward decoder **for each context**, which is infeasible since there could be infinitely many contexts. Therefore, we have to propose a new method, that is, constructing a reward decoder via inverse kinematic, for our setting.
Summary: In this work, the authors provide the first provably efficient algorithms with sublinear regret guarantees for Interactive-Grounded Learning (IGL) with personalized rewards under realizability. Based on a novel Lipschitz reward estimator, the authors propose two algorithms: one based on explore-then-exploit and the other based on inverse-gap weighting. Furthermore, the authors apply IGL to learning from image feedback and learning from text feedback, showcasing the effectiveness of the proposed algorithms. Strengths: This work provides the first provably efficient algorithms for Interactive-Grounded Learning with personalized rewards, supported by experiments demonstrating the effectiveness of the proposed algorithms. The presentation is generally clear, with detailed experiment settings, and the appendix is well-organized. The definitions and assumptions are well defined. Weaknesses: The theoretical contributions are limited by the realizability and identifiability assumptions. It would be helpful to provide practical examples where all these assumptions are satisfied. In Table 1, the difference between Binary and Lipschitz reward estimation seems insignificant, which raises doubts about the practical contribution of the new estimator. Technical Quality: 3 Clarity: 2 Questions for Authors: Same as weakness. Q1: It seems straightforward to apply explore-then-exploit and inverse-gap weighting. Could you please state the challenges encountered when proving the regret guarantees? Q2: It would be helpful if the authors could provide some related lower bounds to help readers understand the gap between the current regret bound and the optimal rate. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Same as weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **“The theoretical contributions are limited by the realizability and identifiability assumptions. It would be helpful to provide practical examples where all these assumptions are satisfied.”** Reply: Realizability is a well-established assumption in the contextual bandit literature [Foster et al., 2018, Foster and Rakhlin, 2020], and can be satisfied by using a rich function class as the reward predictor such as deep neural networks. Identifiability is a necessary assumption in IGL to break the symmetry between reward being 1 and being 0. As demonstrated by examples in [Xie et al. ,2022], learning in IGL is impossible without breaking the reward symmetry. As mentioned in Lines 107-108, our identifiability assumption is satisfied in many scenarios with sparse rewards, including classification problems and recommender systems where the user is primarily interested in a very small subset of items. > **“In Table 1, the difference between Binary and Lipschitz reward estimation seems insignificant, which raises doubts about the practical contribution of the new estimator.”** Reply: To verify that the advantage of our Lipschitz reward estimator is indeed significant, we run additional experiments using the on-policy Algorithm 2 on MNIST with 20 different random seeds. The binary reward estimator achieves an average progressive reward of 0.711 (0.040) and a test accuracy of 88.5% (3.5%). In contrast, the Lipschitz reward estimator achieves an average progressive reward of 0.748 (0.025) and a test accuracy of 90.6% (3.4%). According to the two-sample t-test, our Lipschitz reward estimator outperforms the binary one with greater than 95% confidence in terms of both average progressive reward and test accuracy. These results demonstrate the practical significance and contribution of the new estimator. > **“It seems straightforward to apply explore-then-exploit and inverse-gap weighting. Could you please state the challenges encountered when proving the regret guarantees?”** Reply: While explore-then-exploit and inverse-gap weighting are well-established techniques in contextual bandits where the learner has access to the true reward, applying these methods in the context of IGL, where the learner only receives certain indirect feedback, presents significant challenges. First, to develop a provable algorithm, it is necessary to design an effective reward predictor. The step-function estimator used by Maghakian et al. (2022) is one such example. However, our analysis indicates that this estimator does not lead to a regret guarantee. Instead, we propose a novel Lipschitz reward estimator, which is essential for proving the regret guarantees. Additionally, unlike the widely used optimism principle in bandit problems, we apply a pessimism approach instead: our predicted reward is an underestimator of the true reward. This design is also crucial for obtaining the regret guarantee, whose analysis is nontrivial: it involves leveraging the Lipschitz property of our estimator to address the discrepancy between the predicted reward and the true reward. This differs from the misspecification analysis typically conducted in contextual bandits. To sum up, our approach and the theoretical analysis address the unique challenges posed by the IGL problem and are innovative and nontrivial in our opinion. > **“It would be helpful if the authors could provide some related lower bounds to help readers understand the gap between the current regret bound and the optimal rate.”** Reply: According to the lower bound in the contextual bandit problem, the best regret bound we can hope for in IGL is $O(\sqrt{KT})$. However, as we stated before, IGL is substantially more difficult than contextual bandit, so whether $O(\sqrt{T})$ regret is achievable remains an open question. Nevertheless, we emphasize that our work presents the first provably efficient algorithm with sublinear regret in the personalized reward setting.
Summary: The authors provide sublinear regret algorithms for Interaction Grounded Learning (IGL) setting, a modification of the standard contextual bandit setting where instead of getting the reward signal, the learner receives some alternative signal from an arbitrary space. The game proceeds for T rounds where at every round, the learner gets a context x_t, chooses an action a_t, gets a reward r(x_t, a_t) and receives feedback y_t \in Y. The primary difference from the prior work is that the reward function can depend on the context x, i.e. the reward function can be personalized for the context x. Due to this personalization, the authors make an independent assumption that given the reward and the context, the observation and the action are independent to each other. Strengths: Provide T^{2/3} style of regret bounds for IGL setting with personalized rewards. Most of the assumptions and problem setting follow classical contextual bandits literature. Experimental are provided for the corresponding algorithms. Weaknesses: The regret bound scales as T^{2/3} and it is not clear how or when can we get T^{1/2}. I am guessing that similar to the partial monitoring setting we will need to make assumptions on the function \Phi and there will be a dichotomy between T^{1/2} regret vs T^{2/3} regret based on the information structure. The proof techniques follow closely well-known tools like inverse kinematics and SquareCB from the contextual bandits and RL literature. However, the experiments are still quite interesting. Technical Quality: 3 Clarity: 4 Questions for Authors: Can the authors provide a more detailed discussion of how IGL is different from the partial monitoring setting. From what I understand, the primary difference is the assumption that learner has access to a class \Phi which contains a decoder \phi that maps the context and observation to actions. Thus this becomes closed to Model-Based methods in RL, etc, and hence we can expect statistical traceability. Can the authors elaborate if there are any other differences? Paper is well written. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **“Can the authors provide a more detailed discussion of how IGL is different from the partial monitoring setting. From what I understand, the primary difference is the assumption that the learner has access to a class $\Phi$ which contains a decoder $\phi$ that maps the context and observation to actions. Thus this becomes close to Model-Based methods in RL, etc, and hence we can expect statistical traceability. Can the authors elaborate if there are any other differences?”** Reply: Besides the differences mentioned by the reviewer, in the partial monitoring setting, in order to achieve sublinear regret guarantees, it is assumed that the difference between the loss of any two arms can be represented as a certain linear combination of the signal matrix. Comparing the assumptions in these two different problems, we think those in IGL might be easier to interpret in real applications.
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed and valuable comments. To further illustrate that Algorithm 2 outperforms Algorithm 1, as suggested by Reviewer LmpU, we include the average progressive reward plot in the rebuttal PDF, which indeed demonstrates that Algorithm 2 consistently outperforms Algorithm 1 over time. Pdf: /pdf/e4b03b9589ad4268a7ece94616a4643120b8ef65.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution
Accept (poster)
Summary: The paper presents a new semi-parametric language modeling approach that can incorporate text spans from a datastore into LLM-based generation improving both quality and attribution of generated texts. They propose a two-step approach that requires constructing an on-the-fly token-level datastore based on a small number of retrieved passages from a passage-level datastore. The current token is generated from the mixture distribution between the base LLM and the retrieved token-level distribution interpolated using a relative retrieval confidence score capturing the uncertainty of the token retriever. Furthermore, the approach enables extending the generation from token to an n-gram span based on a speculative decoding that rejects or accepts the tokens in the continuations, thus enabling span-level generation, improving efficiency. Strengths: 1. Extensive experiments across tasks and datasets shows the efficacy of the proposed approach over standard LLM decoding, KNN and Retrieval augmented incontext learning variants. 2. Ablations show how the relaxation factor used for speculative decoding can enable flexible-length n-gram continuations to be incorporated based on the domain and can provide a good tradeoff between accuracy and attribution. Weaknesses: 1. Multiple-token nearest neighbor generation has been proposed in prior work but was not discussed or compared, see: [1] Chunk-based Nearest Neighbor Machine Translation (https://aclanthology.org/2022.emnlp-main.284/). Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the novelty and effectiveness of our work. The chunk retrieval and the neighbour selection process in the suggested paper are indeed related to NEST. For example, it is possible to improve the span selection process of NEST with the batch-beam-level neighbours in the paper which evaluate multiple chunk candidates simultaneously. We thank the reviewer again and will add a more detailed discussion of this paper in the related work section.
Summary: This paper: * Introduces NEST, a semi-parametric language modeling approach that integrates real-world text spans into language model generations. * Enhances generation quality and reduces latency by using token-level retrieval and speculative decoding. * Outperforms conventional kNN-LM and competes well with in-context retrieval methods. * Demonstrates significant improvements with various models. Strengths: * Introduces a novel semi-parametric language modeling technique for improved attribution and generation quality. * Demonstrates a significant increase in speed and efficiency in language model generation. * Outperforms conventional kNN-LM and shows competitive results against in-context retrieval methods across various knowledge-intensive tasks. * Effective across a range of tasks including text completion, question answering, and factuality-aware generation, showcasing the method's adaptability to different content requirements. Weaknesses: * Performance heavily relies on the accuracy of the first-stage passage retrieval and second-stage token retrieval. * The retrieval process may still be complex and resource-intensive for practical deployment. * The gains in performance and efficiency are less pronounced in larger models. Technical Quality: 3 Clarity: 3 Questions for Authors: * How scalable is the NEST approach in real-world scenarios? * What measures are taken to mitigate the impact of noise and errors introduced during the data retrieval stages? * How does NEST address potential biases that may arise from the retrieved data? * How well does NEST adapt to the evolving nature of language? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: * The overall effectiveness of NEST is contingent upon the precision of both stages of text retrieval. * The approach may struggle with ensuring that the retrieved content is contextually relevant and free from biases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the novelty and effectiveness of NEST. Below, we address each weakness point raised: - *“Performance heavily relies on the accuracy of the first-stage passage retrieval”*: We acknowledge the performance of NEST is impacted by the first-stage passage retrieval. However, all retrieval-augmented models have this limit, and our work focuses on improving 1) the integration of the retrieved content into the LLMs and 2), rather than further improving the first-stage retrieval accuracy. - *“Retrieval process may still be complex and resource-intensive for practical deployment”*: We politely disagree that the retrieval process is complex. Our two-stage retrieval system is designed to reduce the resources required for building a token-level datastore of the entire corpus, which is a significant advantage over other approaches. We have empirically demonstrated that our approach strikes a good balance between accuracy and efficiency. - *“The gains in performance and efficiency are less pronounced in larger models”*: We understand the reviewer's point that the gains in performance and efficiency are less pronounced in larger models. However, this is a natural consequence of the model's increased capacity to memorize nuanced facts. Nevertheless, retrieval augmentation remains meaningful in practice for long-tail knowledge, faster response, source attribution, and knowledge update. Moreover, our results show that the speedup is more prominent in larger models (the overall generation latency per query is dominated by the LM forwarding and rejection sampling process), making NEST a more efficient choice for large-scale applications. For the Questions: - **Scalability of NEST**: We believe that our experiments are representative of real-world scenarios, with the largest model having 70 billion parameters and the largest knowledge store having 33 million passages. Further engineering optimization can enable scaling to even larger models and knowledge stores. - **Measures to reduce noises and biases in retrieval**: The Confidence-Based Output Interpolation (sec 3.2) is our exact measure to mitigate the noise and biases in retrieval. Not incorporating the retrieval information at the input reduces the risk of biases and interpolating based on the token-retrieval confidence further filters out contradictory knowledge. This is also verified in the TruthfulQA experiments when adversarial queries are encountered, where NEST outperforms normal RAG methods by 0.3~0.73 points in Rouge-1. Please see line 219 - 223 for more details. - **Adapting to changes in language**: We interpret the reviewer's question as referring to the evolving nature of language, where new phrases and knowledge are added frequently. NEST's zero-shot combination of LLMs and knowledge sources enables it to adapt to these changes more easily. By tuning the hyperparameters on the development set, NEST can combine different LLMs with different knowledge sources, making it more flexible and adaptable to the evolving nature of language.
Summary: This work presents Nearest Neighbor Speculative Decoding (NEST), a technique to better inject real-world text spans into the output of existing language models. NEST is a kNN-LM approach adding an initial passage retrieval step. During inference, NEST uses Relative Retrieval Confidence (RRC) for confidence-based interpolation, dynamically extends selected tokens to include text spans when confidence is high, and employs a relaxed speculative decoding process that accepts only highly probable token spans. According to the evaluation conducted by the authors with Llama 2 chat, NEST significantly outperforms both the base LM and standard kNN-LM, in terms of speed and accuracy. Strengths: - well-written: The paper is clear and easy to understand. Figure 1 itself is enough to understand what the paper presents - well-motivated: The paper explains the problem that it tries to solve and provides the motivations for why it needs to be solved - Extensive experiments demonstrating that the proposed approach works Weaknesses: - It’s unclear to me what are the most important contributions of this work. What is really new and has the most impact on the results? Even with the help of the related work section I cannot answer this question, hence my low confidence score. Technical Quality: 4 Clarity: 3 Questions for Authors: It’s unclear to me what are the most important contributions of this work. What is really new and has the most impact on the results? Even with the help of the related work section I cannot answer this question, hence my low confidence score Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The limitations are pointed out and sufficiently discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback and would like to clarify the main contributions of NEST: - **Source Attribution**: By injecting retrieved segments into the generation of LLM, NEST provides direct attribution for the generation at a span level, enabling users to verify the reliability of the generation. Methods based on prompting/finetuning to provide citations can still make mistakes, while for NEST, given the same set of retrieved documents, the attribution of the claim is always correct since the segments are directly taken from the source. - **Copyright Protection**: Using LLMs for content generation is controversial because these models were probably trained on copyrighted content, which may infringe on the rights of data producers. Storing the proprietary data separately in a data store avoids memorizing high-risk knowledge; providing direction attribution to each claim avoids the legal risk of using licenced data without acknowledgement. - **Factuality**: Grounding the generation using spans taken from a verified knowledge source can improve the factuality of LLM generation. Unlike normal RAG settings where the retrieved evidence is injected into the prompt, NEST can choose not to use the retrieved results during interpolation. This is also showcased in adversarial question-answering tasks such as TruthfulQA, where RAG can easily be misled by the retrieved “evidence” that is seemingly related but incorrect, while NEST can avoid such misleading knowledge by not interpolating with them at the output. Please see line 219 - 223 for more details. Moreover, we would like to point out some technical improvements NEST made in LLM and retrieval-augmented generation: - **Dynamic Interpolation**: Retrieval is not always required for tasks such as creative writing, while RAG and baseline kNN-LM methods both used a fixed scheme to incorporate retrieved “evidence”, which might hurt the generation in such cases. Other methods that can adaptively choose to combine the retrieved knowledge such as CoG or Retro require further finetuning, which couples the LLM with the knowledge base and might not be able to transfer to new databases. On the other hand, NEST can dynamically adjust the interpolation between LLMs and retrievers in a zero-shot fashion, making it easy for developers to update the LLM and knowledge base. - **Generation Efficiency**: Injecting the retrieved segment into LLM generation also improves the generation speed thanks to speculative decoding. We did not optimize for this aspect but using a faster and better retriever should further improve the generation latency. - **Attribution-Fluency Tradeoff**: It might not be obvious in the paper but injecting segments into LLM generation (attribution) is risky as it might introduce artifacts during transition such as repetition or grammatical errors (fluency). This is hard to evaluate but has a major impact in real application. Our proposed dynamic span selection and relaxed speculative decoding method provide a solution for attribution-fluency trade-off by tuning the selection and rejection hyperparameters in Equation (6) and (7). In summary, NEST not only solves the technical challenges in previous generation methods, such as when to combine with retrieval and attribution-fluency tradeoff but also has real impacts on applications such as copyright protection and providing accurate attribution for verification. We again thank the reviewer for pointing this out and we will elaborate on the contribution and novelty of NEST in the paper. --- Rebuttal 2: Title: Respectfully Asking to Reconsider Your Rating Comment: Thank you again for your feedback. We would like to follow up on our rebuttal to ensure that we have thoroughly addressed your initial concerns, and to respectfully ask for a reconsideration of the overall rating. Specifically, we would like to provide additional clarification on the two areas of concerns raised. (1) **Novelty:** Our proposed approach, NEST, primarily consists of two novel techniques: (1) a confidence-adjusted, token-level retrieval score to extract text spans from real-world corpora as draft inputs, and (2) a relaxed-speculative decoding procedure to seamlessly integrate these drafts into the LLM generation process, rejecting the uncertain suffixes. We demonstrated empirically that this approach allows attribution of the generated text directly to the source, and yields significant improvements in both generation speed and quality compared to the base LLMs and kNN-LM with two-stage retrieval. kNN-LM with two-stage retrieval is a stronger baseline compared to the standard kNN-LM. Reviewers nNiG, vMVQ, and kPHS have also recognized the novelty and strengths of our approach. (2) **Which technique had the most impact on performance:** We included an ablation study in the appendix of our submission (**Appendix C.2, Table 3**). The results show that both adjusting the interpolation score based on token-level retrieval confidence (+4.6 ROUGE-1 for WikiText-103 and +6.8 FactScore on Biography) and performing token rejection using relaxed speculative decoding (+2.3 ROUGE-1 for WikiText-103 and +5.2 FactScore on Biography) significantly contribute to the performance improvements. We plan to move the ablation study to the main text to provide clearer insights for future readers with additional main content space. | Models (7B) | Wiki./ROUGE-1 | NQ/ALR | Bio./FS | |---------------------------------|---------------|--------|---------| | kNN-LM (two-stage) | 20.1 | 40.8 | 34.8 | | + Relative Retrieval Confidence | 24.7 | 44.4 | 41.6 | | + Dynamic Span selection | 24.5 | 44.6 | 41.6 | | + Relaxed speculative decoding | 26.8 | 45.4 | 46.8 |
Summary: The paper introduces NEST, a novel semi-parametric language modeling approach that enhances the generation quality and attribution of Large Language Models (LLMs) by incorporating real-world text spans. NEST employs a two-stage k-NN search and speculative decoding, achieving improved performance and reduced inference time. Strengths: The two-stage k-NN search is a smart optimization that balances search accuracy and efficiency. The paper demonstrates significant improvements in generation quality and speed, offering a competitive edge over traditional methods. The approach of providing direct attribution to sources is valuable for enhancing the reliability of LLMs. Weaknesses: The paper could benefit from a more detailed comparison with state-of-the-art methods. The generalizability of NEST across different domains and languages needs further exploration. The potential impact of NEST on the in-context learning ability of LLMs is not thoroughly discussed. The paper lacks a comprehensive analysis of error rates and statistical significance. Technical Quality: 2 Clarity: 2 Questions for Authors: How does NEST compare with other advanced language models in terms of handling long-tail knowledge? What are the implications of using NEST for models pre-trained with different objectives or datasets? Could the authors elaborate on the impact of NEST on the diversity and creativity of LLM generations? On page 2, the authors mention a 1.8× speedup; could they specify if this is consistent across different model sizes? How does NEST handle potential biases in the retrieved text spans from the corpus? Can the authors provide more insight into the decision process behind the choice of hyperparameters? Is there a risk of overfitting the corpus used for training the key-value datastore? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the contributions of NEST. Below, we address each weakness point raised: - **Comparison with SOTA models**: We appreciate the reviewer's comment on comparing NEST with state-of-the-art (SOTA) models. To clarify, NEST is an algorithm that can be used to combine any SOTA LLMs with a desired data store to improve generation and attribution. While the retrieval-augmented generation (RAG) baseline outperforms NEST on several tasks, we highlight that NEST can be further combined with RA to achieve better performance (Table 1, last row). This flexibility is a key strength of our approach. - **Generalization of NEST across different domains**: We evaluated NEST across multiple domains, including general, legal, and medical fields, as well as a diverse set of subjects covered by MMLU (sec 4.4). Our results show that NEST consistently improves performance across these domains. While our current focus is mainly on English due to the availability of robust open-sourced multi-lingual LLMs, we believe that NEST can be applied to other languages with minimal modifications. - **In-context learning ability**: As mentioned in the Limitation section, NEST may not improve or even slightly hurt in-context learning since the demonstrations in the prompts are unlikely to appear in the natural text predominantly present in the retrieval data store. To quantify the impact, we conducted an experiment using Llama-2 (7B and 13B, not instruction-tuned) with 5-shot demonstrations on the Natural Questions dev set. Our results show that NEST does not improve performance in this setting. We will include these additional results in future versions of the paper. | Models | EM | F1 | ALR| | -------- | ------- | ------- | ------- | | Llama-2-7B | 24.05 | 33.53 | 27.10 | | +NEST | 23.50 | 32.87 | 27.06 | | Llama-2-13B | 30.34 | 41.17 | 33.76 | | +NEST | 30.07 | 40.77 | 34.10 | For the Questions: - **Handling long-tail knowledge**: Semi-parametric LMs such as NEST have stronger capabilities in memorizing long-tail facts because of the direct access to a non-parametric data store. Our experiment on MedMCQA, which involves medical questions that are often "long-tail," shows that NEST outperforms the base LM model by 0.7~1 point in answer recall (Table 1). - **Different base LMs**: NEST is primarily designed for auto-regressive language models to improve text generation. Within this scope, NEST is designed to be flexible and can be combined with different LLMs and data stores in zero-shot, and we used Llama-2-chat as an example in the paper. We notice the notion of “models **pre-trained with different objectives or datasets**” can be ambiguous, and we’re happy to further clarify if we misunderstood the reviewer’s question. - **Diversity and creativity**: While NEST primarily focuses on improving factuality and attribution, we believe that our approach can be extended to incorporate diversity and creativity in future work. The Confidence-Based Output Interpolation mechanism we propose (sec 3.2) can determine whether to interpolate the LM prediction with the retrieved evidence. For creative generation scenarios, NEST will primarily rely on the base LLM for generation. - **Efficiency across different model sizes**: The speedup is not the same across models of different sizes. The overall generation latency per query is dominated by the LM forward pass and the rejection sampling process, while the other parts (retrieval and index building) stay almost constant. Therefore, the speedup is more obvious for larger LLMs and less prominent for smaller models, which is consistent with other speculative decoding methods. - **Biases in retrieval**: The Confidence-Based Output Interpolation mitigates the noise and biases in retrieval. By using the confidence of the retrieval, the LLM can adjust the frequency to interpolate with the retrieval distribution. This is verified in the TruthfulQA experiments, where adversarial queries are prevalent. NEST outperforms normal RAG methods by 0.3~0.73 points in Rouge-1 (line 219 - 223). - **Choice of hyperparameters**: We provide a detailed hyperparameter tuning scheme in Appendix A and B. The parameters $\alpha$ and $\tau$ in Equation (4) reflect the prior of how trustworthy the retriever is, and using larger $\alpha$ means trusting the retriever more. For $\delta$ in Equation (5), we find 0.5 works best across all models and tasks, which is reasonable as the interpolation coefficient is well-calibrated in Equation (4). For $\gamma$ in Equation (6), it determines how often the LM rejects a segment retrieved from the corpus. A smaller $\gamma$ means less rejection. We find that using a larger $\gamma$ for a stronger LM (more rejection) tends to work better. - **Risk of corpus overfitting**: We would like to clarify that our LLMs and retriever are not fine-tuned on the corpus we used for retrieval. On the other hand, we tuned the hyperparameters in Equation (4), (6), and (7) for each retrieval corpus, assuming that the information distribution and quality of the data were unknown until tested. We recommend practitioners adopting the NEST approach also tune the hyperparameters on their retrieval corpus. This is cheap to complete and may significantly boost downstream performance. --- Rebuttal 2: Comment: I have read the author's response and other reviews, and I will keep my rating unchanged. --- Rebuttal 3: Comment: Thank you again for your feedback. We're happy to address any remaining concerns you may have and to provide further clarifications to ensure a thorough understanding of our research.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Local Method for Satisfying Interventional Fairness with Partially Known Causal Graphs
Accept (poster)
Summary: This paper develops a method for satisfying interventional fairness when the causal graph is partially known. The paper first develops a method for checking possible parental sets as well as a method for estimating propensities given a possible parental set. Then, the interventional distribution is computed using the parental set as a back-door adjustment set. Finally, a min-max joint learning approach is proposed to learn a fair model based on the worst-case violation of interventional fairness. Strengths: Originality The paper extends the previous methods to MPDAGs and develops a novel method for learning fair models based on the worst-case violation of interventional fairness. Quality The proposed method is theoretically sound. It builds upon established theoretical results from previous works while also providing rigorous theoretical analysis of the newly proposed approaches. Clarity The main results are summarized as lemmas. However, a lemma is usually used to help prove a larger theorem. The authors may change lemmas to propositions. Significance The experiment results show that the proposed method can achieve good fairness and accuracy performance without providing exact causal graphs. Weaknesses: This paper presents a solid work. The main reason I am inclined towards rejection is the applicability of this work. The proposed method relies on the existence of parent nodes of the sensitive attribute. However, as also mentioned in the paper, in most datasets the sensitive attributes have no parent nodes. To be more specific, most sensitive attributes have no parents in nature. For example, common sensitive attributes like sex/gender, race, nationality, etc., are determined at birth. As a result, the proposed method is applicable to very limited sensitive attributes like disability, which significantly limits the importance of this work. I recommend that the authors may apply the proposed method to more general causal inference scenarios where the treatment node may generally have parent nodes. Technical Quality: 3 Clarity: 3 Questions for Authors: In the experiments, the accuracy is measured by RMSE. Can the proposed method be applied to classification problems? ===============post-rebuttal comments============ The authors partially addressed my concerns, and I value the contributions made by the paper. I have raised my score. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. We are also encouraged for the reviewer to recognize the **Originality**, **Quality**, **Clarity**, and **Significance** of our work. In below, we would like to address the reviewer's concerns regarding the **Applicability** of our proposed method from both theoretical and experimental aspects. *** ### **Main Result 1: The proposed method can also be applied in scenarios where sensitive attributes are identified as root nodes (W1).** - We **agree** with the reviewer that most sensitive attributes have no parents in nature. However, we respectively **disagree** that the proposed method relies on the existence of parent nodes of the sensitive attribute. ### **Theoretical Analysis** - For our studied interventional fairness, if the sensitive attribute $A$ is as a root node in the causal DAG, then there will be **no backdoor path** from the sensitive attribute $A$ to the outcome variable $Y$. - At this point, **the causal estimands in the definition of interventional fairness and our method degenerate to the correlation estimands.** - Compared to previous Fair, FairRelax, and $\epsilon$-IFair methods, **our method doesn't rely on variable selection and strong causal assumptions**, thus still should be considered as a more plausible method. ### **More Real-World Experiments** - We **add experiments on two widely used real-world datasets**: the **Adult** and the **COMPAS** datasets, where **the sensitive attributes are "sex" and "race" as the root nodes**, respectively. |COMPAS|Full|Unaware|FairRelax|Fair|ε-IFair|Ours| | :--: | :--: | :--: | :--: | :--: | :--: | :--: | |RMSE|0.256 ± 0.022|0.261 ± 0.023|0.261 ± 0.023|0.263 ± 0.022|0.235 ± 0.091|0.219 ± 0.020 | |Unfairness|0.273 ± 0.048|0.269 ± 0.052|0.260 ± 0.045|0.238 ± 0.045|0.190 ± 0.120|0.179 ± 0.011| || |Adult|Full|Unaware|FairRelax|Fair|ε-IFair|Ours| | :--: | :--: | :--: | :--: | :--: | :--: | :--: | |RMSE|0.433 ± 0.024|0.436 ± 0.024|0.607 ± 0.131|0.611 ± 0.128|0.413 ± 0.010|0.375 ± 0.019| |Unfairness|0.506 ± 0.021|0.409 ± 0.029|0.209 ± 0.205|0.187 ± 0.205|0.155 ± 0.009|0.171 ± 0.021| || - From the above experimental results, we find that **our method still outperforms previous methods when the sensitive attribute is used as the root node**. ### **More Synthetic Experiments with Varying Number of Parent Nodes** - **To further explore the impact of the number of parent nodes of sensitive attributes on the accuracy and fairness performance, we also added more synthetic experiments with varying number of parent nodes.** |Metrics|Unaware|Fair|FairRelax|$\epsilon$-IFair|Ours| |:--:|:--:|:--:|:--:|:--:|:--:| |RMSE (p = 0)|0.601 ± 0.235|0.642 ± 0.241|0.635 ± 0.240|0.633 ± 0.237|0.635 ± 0.233| |Unfairness (p = 0)|0.063 ± 0.082|0.041 ± 0.051|0.045 ± 0.050|0.041 ± 0.021|0.035 ± 0.016| |RMSE (p = 2)|0.528 ± 0.298|0.601 ± 0.291|0.597 ± 0.294|0.590 ± 0.206|0.584 ± 0.201| |Unfairness (p = 2)|0.111 ± 0.090|0.099 ± 0.075|0.100 ± 0.075|0.100 ± 0.088|0.096 ± 0.107| |RMSE (p = 4)|0.718 ± 0.281|0.860 ± 0.261|0.834 ± 0.253|0.818 ± 0.237|0.792 ± 0.228| |Unfairness (p = 4)|0.129 ± 0.073|0.113 ± 0.058|0.120 ± 0.050|0.110 ± 0.072|0.103 ± 0.086| || - **As the number of parent nodes increases, the advantage of our method over **Fair**, **FairRelax**, and **$\epsilon$-IFair** gradually increases.** This is because when there are more parent nodes, there are more backdoor paths, thus our method can find these backdoor paths to better control the causal effect of the sensitive attribute on the outcome variable. *** ### **Main Result 2: The proposed method can also be applied to the Classification Problems (Q1).** ### **Real-World Experiments** - For real-world experiments, we would like to kindly remind the reviewer that **the outcome variable is binary in the OULAD dataset** adopted in our original manuscript. **The outcome variables in our added experiments on the Adult and COMPAS datasets are also binary.** ### **Synthetic Experiments** - For synthetic experiments, as suggested by the reviewer, **we conduct more experiments using AUC as the evaluation metric instead of RMSE**. Specifically, we modify the data generation process (DGP) to clip the outcome variable $Y$ to 1 if $Y > 0$, and to 0 if $Y < 0$, and the rest DGP remains the same. The experiment results are shown below. |Noise = 2.5|Node = 10, Edge = 20 (AUC) |Node = 10, Edge = 20 (Unfairness) |Node = 40, Edge = 80 (AUC)|Node = 40, Edge = 80 (Unfairness)| |:--:|:--:|:--:|:--:|:--:| |Oracle|0.815 ± 0.094|0.000 ± 0.000|0.828 ± 0.087|0.000 ± 0.000| |Full|0.845 ± 0.071|0.038 ± 0.051|0.842 ± 0.086|0.143 ± 0.106| |Unaware|0.843 ± 0.070|0.017 ± 0.021|0.837 ± 0.090|0.113 ± 0.121| |FairRelax|0.825 ± 0.057|0.017 ± 0.021|0.824 ± 0.082|0.112 ± 0.119| |Fair|0.819 ± 0.060|0.015 ± 0.021|0.822 ± 0.128|0.094 ± 0.122| |$\epsilon$-IFair|0.843 ± 0.049|0.018 ± 0.017|0.833 ± 0.085|0.098 ± 0.105| |Ours|0.844 ± 0.051|0.015±0.016|0.835 ± 0.088|0.089 ± 0.119| || - From the above results, **Full** and **Unaware** perform better on AUC, while **Fair**, **FairRelax**, **$\epsilon$-IFair**, and our approach perform better on unfairness. Note that our approach outperforms **Fair**, **FairRelax**, and **$\epsilon$-IFair** in all scenarios on both AUC and unfairness. *** ### **(Minor) Clarity Issues (S3)** > Clarity: The main results are summarized as lemmas. However, a lemma is usually used to help prove a larger theorem. The authors may change lemmas to propositions. - We thank the reviewer for raising such useful suggestions. In our revised version, we have changed all *Lemma*s to *Proposition*s. *** **We hope the above discussion will fully address your concerns about the applicability of our work, and we would really appreciate it if you could consider to raise your score.** We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you! --- Rebuttal Comment 1.1: Title: We would like to summarize our efforts and changes during rebuttal as follows. Comment: Dear Reviewer GJNz, Once again, we are grateful for your time and effort for reviewing our paper. Since the discussion period will end in around a day, we are very eager to get your feedback on our response. We understand that you are very busy, but we would highly appreciate it if you could take into account our response when updating the rating and having a discussion with AC and other reviewers. We are encouraged by your kind words supporting the **Originality, Quality, Clarity, and Significance** of our work. Meanwhile, it seems that your only concern currently is around the **Applicability** of our method. To facilitate checking, we are happy to summarize our efforts during rebuttal as follows. - **We theoretically showed that our proposed method can also works well in scenarios where sensitive attributes are as root nodes.** - **We added experiments on two widely used real-world datasets: the Adult and the COMPAS datasets, where the sensitive attributes are "sex" and "race" as the root nodes, respectively.** - **We claimed that our proposed method can also be applied to the classification problems, and highlighted that the current real-world experiment is conducted on the OULAD dataset, where the outcome variable is binary.** - **As suggested by the reviewer, we conduct more synthetic experiments using AUC as the evaluation metric instead of RMSE.** - **Benefiting from the reviewer, we have changed all _Lemmas_ to _Propositions_ to improve the presentation clarity.** - (Reviewer BRmE) **We added more simulation experiments to explore the effect of the inaccuracy of CPDAG on the performance of our method, as shown in Fig. 1 in the Supplementary PDF.** *** As the discussion deadline approaches, we are wondering whether our responses have properly addressed your concerns? Your feedback would be extremely helpful to us. If you have further comments or questions, we hope for the opportunity to respond to them. Many thanks, Submission18019 Authors --- Rebuttal 2: Comment: First, we clarify that **methods for achieving fairness notions can be divided into the following three categories.** |Methods| Input(s) | Fairness notions | Limitation | |:--:|:--:|:--:|:--:| |achieve correlation-based fairness | observational data | demographic parity, risk ratio, equal opportunity, etc. | **cannot achieve causal fairness** | |achieve causal fairness **with** a known causal DAG | observational data, **a known causal DAG** |interventional fairness, counterfactual fairness, path-specific counterfactual fairness, etc. | **need strong prior knowledge for a known causal DAG** | |achieve causal fairness **without** a known causal DAG (ours) | observational data | interventional fairness, counterfactual fairness, path-specific counterfactual fairness, etc. | **with observational data, we can only know the Markov equivalence class, not the unique DAG** | || **1. After careful thinking, we respectfully believe that it is not fair to compare our approach with methods for achieving correlation-based fairness in our degenerated case (in which the causal is the same as correlation), due to the following reasons.** - For methods achieving correlation-based fairness, **their applicability usually fails to achieve causal fairness.** However, **our approach can achieve causal fairness with general applicability,** where it degenerates to correlation-based fairness only in a specific case (in which the sensitive attribute is a root node in the causal DAG). - The most important point, from the table above, is **how do we determine if the sensitive attribute is the root node in the true causal DAG when we only have the observational data?** - We kindly invite the reviewer to refer to a toy example in Figure 4 on page 13 of our manuscript. Specifically, Figure 4(a) is the true causal DAG, in which the sensitive attribute is a root node. However, as shown in Figure 4(b), **with only observation data, we cannot distinguish from the following causal relations: $X_1 \leftarrow A \to X_2$, or $X_1 \to A \to X_2$, or $X_1 \leftarrow A \leftarrow X_2$,** because they are all in the same Markov equivalence class! But **only the first causal relation** $X_1 \leftarrow A \to X_2$ corresponds to the case where **the sensitive attribute is a root node!** **2. Meanwhile, we agree with the reviewer that it is fair to compare our approach with methods for achieving causal fairness _without_ a known causal DAG, in terms of factors like effectiveness, applicability, etc., especially at the degenerated case.** **To the best of our knowledge, Fair [1], FairRelax [1], and $\epsilon$-IFair [2] are all the existing prior works focusing on the same problem** (causal fairness with partially known causal graph). To clarify the **core idea, key assumptions, limitations, sensitivity to wrong CPDAG, and how they perform in the degenerated case in which the sensitive attribute as the root node**, we summarize our findings as follows. |Methods| Core idea | Key assumptions | Limitations | Sensitivity to wrong CPDAG | Sensitive attribute as the root node | |:--:|:--:|:--:|:--:|:--:|:--:| |Fair [1] and FairRelax [1] | two-phase approach: first obtains a CPDAG from causal discovery methods, then uses definite descendants (and possible descendants) of the sensitive attribute to train a fair model | a correct CPDAG, and **very strong assumption that the sensitive attribute is a root node for identification** | only uses partial variables, instead of all variables, may **significantly reduce the prediction accuracy** | **very high**, the effectiveness of such approaches completely rely on the correctness of the causal discovery algorithm to find the correct CPDAG | **poor fairness performance**, because the second phase does not impose any causal fairness constraint | |$\epsilon$-IFair [2] | a **global approach** by enumerating **all subclasses of DAGs where the causal effect is identifiable** | a correct CPDAG, and **strong assumptions that there is no undirect edge between sensitive features and non-sensitive features** | **high time complexity** | **weaker** than that of Fair [1] and FairRelax [1] (see Figure 1 in supplementary PDF) | causal effect is always identifiable in such a case, and **enumerating all DAGs globally needs much time** | | Ours | a **local approac**h by enumerating **all possible parental sets of the sensitive attribute** | a correct CPDAG **(with no other strong assumptions, because we intend to do partially identification, not fully identification as in [1] and [2])** | **need a extra propensity model** for fair training | **weaker** than that of Fair [1] and FairRelax [1] (see Figure 1 in supplementary PDF) | causal effect is always identifiable in such a case, and **enumerating locally needs less time** | || --- Rebuttal 3: Comment: - **In the degenerated case, we conclude that Fair [1] and FairRelax [1] perform poorly in both accuracy and fairness.** - On one hand, they only use partial variables, not all, which reduces their accuracy performance. - On the other hand, after selecting the definite descendants (and possible descendants) of the sensitive attribute, they does not impose any causal fairness constraint, which reduces their fairness performance. - If the sensitive attribute is a root node in the causal DAG, most of the variables would be identified as the definite descendants (and possible descendants) via the causal discovery algorithms, which increases the availability of more variables and makes the unfair problem be more serious. - We also validate such claims on our added experiments on the Adult and the COMPAS datasets during rebuttal. - **In the degenerated case, both of the $\epsilon$-IFair [2] and our approaches can work well, but the time complexity for implementing $\epsilon$-IFair [2] in much larger than that of our approach.** - This is because $\epsilon$-IFair [2] enumerates all subclasses of DAGs where the causal effect is identifiable, and when the sensitive attribute is the root node, the causal effect is naturally identifiable, so $\epsilon$-IFair [2] enumerates almost all possible DAGs. - For our local method, we only enumerate all possible parental sets of the sensitive attribute, whose enumerating space is much smaller than that of $\epsilon$-IFair [2], especially when we have plenty nodes in the real-world datasets. *** **We sincerely hope that the reviewer can carefully go over our discussion above -- we really spent a lot of time trying to make it clearer!** We are also very grateful for the multiple rounds of communication with you, as this level of engagement has significantly contributed to improving the quality and positioning of our work (in the extensive fairness literature). As the discussion deadline approaches, please let us know if you have any further feedback. We truly appreciate your beneficial comments as well as your consideration to upgrade your score -- thank you so much!! *** **References** [1] Counterfactual fairness with partially known causal graph. NeurIPS, 2022. [2] Interventional fairness on partially known causal graphs: A constrained optimization approach. ICLR, 2024. --- Rebuttal Comment 3.1: Comment: Dear Reviewer GJNz, Since the discussion period will end in a few hours, we will be online waiting for your feedback on our rebuttal, which we believe has fully addressed your concerns. We understand that you are very busy, but we would highly appreciate it if you could take into account our response when updating the rating and having a discussion with AC and other reviewers. Thanks for your time, Submission18019 Authors
Summary: In this paper, the author(s) talk about a better way to estimate interventional fairness in the case where we have partially known Directed Acyclic Graphs (DAG). It lists out how most previous works have only done this for fully known DAG's and recent attempts to address the case of unknown graphs have utilized strong assumptions and or reduced prediction accuracy. The authors introduce a min-max optimization framework to achieve interventional fairness and boost accuracy and test it on both generated data and real-world data. Strengths: The paper does a good assessment of former work on the topic and builds directly on it. The terms and definitions and clearly outlined and explained adequately for easy understanding. The authors put their theory into practice and using both synthetic and real-world data. Weaknesses: The paper's approach relies heavily on the accuracy of the CPDAG and on the generated propensity scores. The paper could address measures to address interventional fairness in the case of inaccuracies coming from the CPDAG Technical Quality: 4 Clarity: 4 Questions for Authors: CPDAG was used on line 57 but was but the full meaning was not given until line 96. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The author brings it up clearly in their conclusion that their method relies on Completely Partially Directed Acyclic Graph (CPDAG) and on the propensity scores returned and therefore any errors in these can affect their method. Also, the case where we have latent variables or confounders was not addressed. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. We are also encouraged by the very positive comments on our paper. > The paper's approach relies heavily on the accuracy of the CPDAG and on the generated propensity scores. The paper could address measures to address interventional fairness in the case of inaccuracies coming from the CPDAG. As suggested by the reviewer, we added more simulation experiments to explore the effect of the degree of inaccuracy of CPDAG on the performance of our method. The experimental results are shown in Fig. 1 in the Supplementary PDF. > CPDAG was used on line 57 but was but the full meaning was not given until line 96. We thank the reviewer for pointing out the typo. In our revised manuscript, we will provide the full name of CPDAG and more explanation at Line 57. > Flag For Ethics Review: Ethics review needed: Data privacy, copyright, and consent We're guessing you misclicked the flag for ethics review. We would like to clarify that all used datasets in our original manuscript and rebuttal are public and widely adopted datasets. - The OULAD dataset: https://www.archive.ics.uci.edu/dataset/349/open+university+learning+analytics+dataset - The Adult dataset: https://archive.ics.uci.edu/dataset/2/adult - The COMPAS dataset: https://www.kaggle.com/datasets/danofer/compass *** We hope the above discussion will fully address your concerns about our work. We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you! --- Rebuttal Comment 1.1: Comment: I have fully read your rebuttal and I agree to them and yes I misclicked the flag for ethics review --- Reply to Comment 1.1.1: Title: Thank you for agreeing with our rebuttal and for the clarification on misclicking the ethical review. Comment: We are thankful that the reviewers took the time to fully read our rebuttal and found our rebuttal meaningful. We appreciate your very positive evaluation of our work. Benefiting from your comments, the added experiments about the performance under inaccurate CPDAG can further enhance the quality of our manuscript -- thank you so much!!
Summary: This work investigates interventional fairness given partially known causal graphs. Compared to existing methods, it employs all variables and does not need to rely on additional strong assumptions for identification. Specifically, it offers a min-max optimization framework which produces counterfactual fairness in maximally oriented PDAGs. Experiments on synthetic and real-world datasets showcase the efficacy of the proposed method. Strengths: The manuscript is commendably clear in its presentation, particularly in articulating the research gap. This clarity facilitates a quick comprehension of the central theme and the contributions of the study. The toy examples provided are meticulously crafted, serving effectively to elucidate several underlying concepts integral to the research. The application of a min-max optimization algorithm is a highlight of this work, elegantly addressing the research problems identified. Weaknesses: Overall, the paper is well-composed, substantiated by robust experimental outcomes and comprehensive theoretical analysis. However, some sections could benefit from further clarification to enhance understandability. Specific points of ambiguity are addressed below: • The statement “One may notice that a function of X1 − X2 can be used to predict Y” appears somewhat abrupt and disrupts the flow of reading. A detailed explanation or a graphical illustration might clarify the independence between U1 − U2 and A. • At line 165, the rationale behind transforming certain undirected edges into directed edges in the opposite direction remains unclear. Elaboration on this would be beneficial. • The term ‘direct causal information set’ warrants a clear, explicit definition within the main text to avoid potential misunderstandings. --- Thank you very much for providing thorough answers to my queries, which have satisfactorily addressed my concerns. I am pleased to adjust my score. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weakness above. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitation has been well discussed in the conclusion part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. In below, we address these concerns point by point and try our best to update the manuscript accordingly. > **The statement “One may notice that a function of X1 − X2 can be used to predict Y” appears somewhat abrupt and disrupts the flow of reading. A detailed explanation or a graphical illustration might clarify the independence between U1 − U2 and A.** **Response:** We thank the reviewer for the helpful comments. We will add the following toy example to illustrate such claims. - Let $A = U_A, X_1 = A + U_1, X_2 = A + U_2$, and $Y = 2X_1 + X_2 + U_Y$. - Because $Cov(X_1 – X_2, Y) \neq 0$, thus $X_1 – X_2$ is useful in predicting $Y$. - Meanwhile, $X_1 – X_2 = U_1 – U_2$ is a constant for any value of $A$, so $U_1 – U_2$ is independent of $A$. - Thus, using $X_1 – X_2$ as the variable for the prediction can still achieve counterfactual fairness. > **At line 165, the rationale behind transforming certain undirected edges into directed edges in the opposite direction remains unclear. Elaboration on this would be beneficial.** **Response:** We would like to clarify the need for ‘transforming certain undirected edges into directed edges’ as follows. - We know that causal discovery algorithms such as the PC algorithm can only return CPDAGs with undirected edges, not DAGs where all edges are directed. - By definition, $sib(A)$ is a set containing all nodes that have an undirected edge with $A$, and our algorithm needs to find the set of all possible parent sets of the sensitive attribute $A$. - Thus, we need to enumerate all possible cases (i.e., for some $Z \in sib(A), Z \rightarrow A$, and for other $Z \in sib(A), A \rightarrow Z$), and determine whether a new v-structure is generated. - Formally, let $\mathbf{S}(A)$ be a subset of $sib(A)$, we can obtain a DAG from CPDAG by changing all undirected edges $\\{Z-A, \forall Z\in \mathbf{S}(A)\\}$ into the directed edges $\\{Z\to A, \forall Z\in \mathbf{S}(A)\\}$ as parents, and all of other undirected edges $\\{Z- A, \forall Z \not\in \mathbf{S}(A)\\}$ into the directed edges with opposite direction $\\{Z\leftarrow A, \forall Z \not\in \mathbf{S}(A)\\}$ as children. > **The term ‘direct causal information set’ warrants a clear, explicit definition within the main text to avoid potential misunderstandings.** **Response:** The ‘direct causal information set’ is assumed to be the direct causal information in the form $X \to Y$, indicating that $X$ is a direct cause of $Y$. Such direct causal information are usually determined by the prior knowledge. Meanwhile, it is worth noting that other forms of background knowledge, such as tier orderings, specific model restrictions, or data obtained from previous experiments, can also induce MPDAGs [1-4]. As suggested by the reviewer, we will add the above explicit definition within the main text to avoid potential misunderstandings. *** **References** [1] Alain Hauser and Peter Buhlmann. Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs. JMLR, 2012. [2] Marco F Eigenmann, Preetam Nandy, and Marloes H Maathuis. Structure learning of linear gaussian structural equation models with weak edges. ArXiv:1707.07560, 2017. [3] Yuhao Wang, Liam Solus, Karren Yang, and Caroline Uhler. Permutation-based causal inference algorithms with interventions. NeurIPS, 2017. [4] Dominik Rothenhausler, Jan Ernest, and Peter Buhlmann. Causal inference in partially linear structural equation models. The Annals of Statistics, 2018. *** **We hope the above discussion will fully address your concerns about our work.** We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you! --- Rebuttal Comment 1.1: Title: Thanks for raising your score to "Strong Accept"! Comment: We thank the reviewer for helping us to improve the presentation clarity of our manuscript -- we will definitely incorporate all the clarifications into our final version. We are happy that you are willing to improve your score from 7 to 8. Thanks!!
Summary: This paper aims for interventional fairness with sufficient prediction accuracy by employing a min-max optimization framework. The proposed approach aims for partially directed acyclic graphs (PDAGs) and extends itself to maximally oriented PDAGs (MPDAGs). Finally, the approach is evaluated on synthetic and real-world datasets. Strengths: The authors addressed a nice and interesting fairness problem in this paper. The proposed approach seemed to be theoretically robust. They also provided detailed empirical analysis. Weaknesses: Below I share some weaknesses. * Above line 188: It is unclear what $S^{(i)}(A); 1<=i<=M$ refers to. The authors should explain the superscripts. * Equations should be numbered. The Equation after line 192 should be explained in more detail. * Line 201: The authors should provide some intuition/explanation about the backdoor adjustment formula. * It seems that there are no baselines based on prior works. Is there no work that solves the fairness problem with causal effect estimation? Technical Quality: 3 Clarity: 3 Questions for Authors: Below I provide my questions: * Line 178: why can it not be $X_i \leftarrow A \rightarrow X_j$? * What is the computational complexity of the proposed approach? * How can the proposed approach be connected with the min-max approach proposed by Xia et al [1] and the invariant prediction problem proposed by Subbaswamy et al [2] [1] Xia, Kevin, et al. "The causal-neural connection: Expressiveness, learnability, and inference." Advances in Neural Information Processing Systems 34 (2021): 10823-10836.\ [2] Subbaswamy, Adarsh, Peter Schulam, and Suchi Saria. "Preventing failures due to dataset shift: Learning predictive models that transport." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors properly discussed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. In below, we address these concerns point by point and try our best to update the manuscript accordingly. ## Weaknesses > **Above line 188 : It is unclear what $S^{(i)}(A) ; 1<=i<=M$ refers to. The authors should explain the superscripts.** - We know that causal discovery algorithms can only return CPDAGs with undirected edges, not DAGs where all edges are directed. - Let $sib(A)$ be a set containing all nodes that have an undirected edge with $A$, and our algorithm needs to find the set of all possible parent sets of the sensitive attribute $A$. - $S^{(i)}(A) ; 1<=i<=M$ is the subsets of $sib(A)$, such that $\mathcal{S}_A=\\{p a(A)\cup \mathbf{S}^{(1)}(A), \ldots, p a(A)\cup \mathbf{S}^{(M)}(A)\\}$ consists of all possible parent sets of the sensitive attribute $A$ obtained from our algorithm. > **Equations should be numbered. The Equation after line 192 should be explained in more detail.** - The propensity $P(A \mid p a(A)\cup \mathbf{S}^{(m)}(A))$ is defined as the probability of the (binary) sensitive attribute $A$ given its (possible) parent set $pa(A)\cup \mathbf{S}^{(m)}(A)$. - The Equation after line 192 is the binary classification cross-entropy loss for training the propensity model $g(X; \hat \phi^{(m)})$ for estimating the true propensity $P(A \mid p a(A)\cup \mathbf{S}^{(m)}(A))$. - The input is covariates $X$ restricted on the possible parent set $p a(A, \mathcal{H})\cup \mathbf{S}^{(m)}(A)$, denoted as $X|_{p a(A, \mathcal{H})\cup \mathbf{S}^{(m)}(A)}$. > **Line 201: The authors should provide some intuition/explanation about the backdoor adjustment formula.** - Whenever we undertake to evaluate the effect of one factor $A$ on another $Y$, the question arises as to whether we should adjust our measurements for possible variations in some other factors $X$, otherwise known as "confounders". - The backdoor adjustment formula aims to partitioning the population into groups that are homogeneous relative to $X$, assessing the effect of $A$ on $Y$ in each homogeneous group, and then averaging the results. - Formally, the backdoor adjustment formula is $P(Y=y \mid do(A=a))=\sum_x P(Y=y \mid A=a, X=x) P(X=x)$, which computes the association between $A$ and $Y$ for each value $x$ of $X$, then averages over those values, also referred as "adjusting for $X$". > **It seems that there are no baselines based on prior works. Is there no work that solves the fairness problem with causal effect estimation?** - For compared baselines, we would like to clarify that FairRelax [R1], Fair [R1], and $\epsilon$-IFair [R2] are all prior works solving the same problem (causal fairness with partially known causal graph). We will explicitly add citations in our revised "Baselines" part. - To the best of our knowledge, we have compared the most comprehensive and SOTA methods, e.g., [R2] is published in ICLR 24. - The important reason for the limited baselines is because most previous causal fairness work requires known causal graphs (DAGs), whereas we only need tabular data (from which we can only obtain CPDAGs). ## Questions > **Line 178: why can it not be $X_i \leftarrow A \to X_j$?** - From Definition 3.1 in Line 157, the v-structure is defined as $X_i \to A \leftarrow X_j$, not $X_i \leftarrow A \to X_j$. - From Lemma 3.2 in Line 170, a set $\mathbf{S}(A) \subset sib(A)$ is a possible parent set of the sensitive attribute $A$, if and only if there is no more v-structures. - Thus, as stated in Line 178, if $X_i$ and $X_j$ are not adjacent, they cannot be both in the parent set of $A$. Otherwise, a new v-structure $X_i \to A \leftarrow X_j$ is formed. > **What is the computational complexity of the proposed approach?** - First, our proposed approach mainly includes the following 3 steps: constructing the CPDAG from the tabular data, estimating all possible propensities (Alg. 1), and learning fair classifier via minimax approach (Alg. 2). - From [R1], the complexity of constructing the CPDAG in the worst case is $\mathcal{O}(|sib(S, \mathcal{G})+ch(S, \mathcal{G})| *|E(\mathcal{G})|)$, where $|E(\mathcal{G})|$ is the number of edges in $\mathcal{G}$. - For Alg. 1 and Alg. 2, the proposed local method needs to identify the v-structures for every node pairs in $sib(S, \mathcal{G})$, thus the computational complexity is $\mathcal{O}(|sib(S, \mathcal{G})|^2)$. - Thus, the overall computational complexity is $\mathcal{O}(|sib(S, \mathcal{G})+ch(S, \mathcal{G})| *|E(\mathcal{G})|+|sib(S, \mathcal{G})|^2)$, which also illustrates the advantage of our local method instead of enumerating on all possible DAGs. > **How can the proposed approach be connected with the min-max approach proposed by Xia et al [1] and the invariant prediction problem proposed by Subbaswamy et al [2]?** - For the min-max approach proposed by Xia et al [1], one similar point is that the input is also the observational data, rather than requiring the known DAG. - They found that a neural net is unable to predict the effects of interventions given observational data alone (Thm. 1), which motivates to introduce **a special type of SCM called a neural causal model (NCM)** for performing causal inferences. - However, our paper still aims to conduct the causal inferences **in the context of general SCM**, with minimax approach for bounding the uncertainties of the true DAGs. - For the invariant prediction problem proposed by Subbaswamy et al [2], they aims to solve the **dataset shift problem with different training and target distributions.** - However, our study aims to achieve causal fairness without the explicit knowledge of causal DAGs, with **same training and target distributions.** **** **References** [R1] Counterfactual fairness with partially known causal graph. NeurIPS, 2022. [R2] Interventional fairness on partially known causal graphs: A constrained optimization approach. ICLR, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The authors have addressed my concerns. Also, after reading other reviewers' responses, I have decides to increase my score. --- Reply to Comment 1.1.1: Title: We are happy that your concerns have been addressed. Thanks for raising your score! Comment: We're glad our rebuttal addressed your concerns. Thank you for your helpful comments -- it helps make our manuscripts easier to follow. May I know if our clarifications can improve your ```Confidence: 2``` in your review? We will definitely put our discussions into our final version -- thank you so much!!
Rebuttal 1: Rebuttal: Dear all reviewers and AC, Please kindly find the attachment as our added one page experimental results. Thanks, Authors from Submission #18019 Pdf: /pdf/8dbd1a199447d61e54d83738661baa832ad338de.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond
Accept (poster)
Summary: The paper proposes a new approach to understanding neural networks by examining a model consisting of a sequence of first-order approximations telescoping into an empirically operational tool. This model aims to isolate components of the training process and offers a lens to analyze the effects of design choices in neural networks. The authors illustrate the applicability of this model through case studies on double descent, linear mode connectivity etc. Strengths: The paper introduces a novel telescoping model that bridges the gap between theoretical and empirical research in neural networks. The approach is used to derive new insights into well-known phenomena such as double descent. The paper covers a wide range of phenomena and provides a detailed analysis of each, contributing significantly to the understanding of neural network behavior. The model offers a simplified yet effective way to understand the impact of various design choices on neural network training. Weaknesses: The telescoping model involves increased computational costs, especially for large networks and datasets, which may limit its practical applicability. The quality of the approximation depends heavily on the learning rate and the optimizer choice, which might not generalize well across different settings. While the model is intended to simplify understanding, the incremental approximation can become complex to implement and analyze for certain networks and tasks. Although the paper provides case studies, more extensive empirical validation across diverse datasets and architectures could strengthen the claims. Technical Quality: 3 Clarity: 3 Questions for Authors: In what ways does this model bridge the gap between purely theoretical analysis and empirical observations in neural network research? How does the model handle non-linearities and complex interactions within neural networks? Can the authors provide a quantification of how these features affect their approximation? Can you elaborate on how the telescoping model reveals linear mode connectivity within the loss landscape? What are the implications of this connectivity for network training and generalization? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank this reviewer for their in-depth review and their positive assessment of our work! We were delighted by their appreciation of the utility of the telescoping model itself, the wide range of phenomena we are able to consider using our model and the insights we provide. We respond to major points raised in the review below. **(1) Computational burden.** We agree that, as highlighted upfront in Sec 3, the telescoping model is more expensive to obtain than a standard neural network and therefore may not be possible to implement for large scale architectures such as modern LLMs. However, we note that the model is sufficiently tractable that it could be applied across all three case studies on the standard benchmarks from existing literature (all of which have received substantial attention in the recent literature despite their relatively moderate scale), and we were able to utilize it to provide significant insight. We believe that exploring computationally efficient methods of extending this approach to larger scale settings provides an exciting direction for future work (as alluded to in l.123) **(2) Approximation quality.** We also appreciate that, with the exception of models for which the first-order Taylor expansion holds exactly, the approximation quality will _always_ deteriorate for a sufficiently high learning rate (note that this is effectively by definition as eqn (2) holds exactly as $\gamma \to 0$). Indeed, making this potential limitation explicit was the purpose of including the experiment with varying learning rates in Fig 1. However, we generally found the appropriate learning rates for provided experiments to achieve good approximation quality (across different optimization choices, as further investigated in Apx D.1). **(3) Increased complexity over linear models.** Indeed, we sacrifice some of the analytic simplicity of the fully linearized lazy model in order to obtain a more accurate model of a neural network. Unfortunately, we believe this to be unavoidable as a linear model will generally be unable to accurately model a full neural network in standard applied settings (indeed including even the most of toy-ish settings we consider, e.g. MNIST in Fig 1). We would argue that any conclusions drawn from a proxy model can only be valid if that proxy is a good approximation. Therefore, we believe the telescoping model sacrifices only the minimal degree of simplicity necessary in order to maintain an accurate approximation of the full network. **(4) Experiments.** Allow us to re-emphasize that the goal of this work was to provide a tool that can help us to better understand _specific previously observed_ phenomena in deep learning. Therefore, in our experimental evaluation we primarily revisit the specific, well-known, canonical experimental settings in which the studied phenomena have been observed in previous work (e.g. grokking, double descent, LMC). We have rigorously pursued this approach by reproducing key experiments from several works (e.g. [BHMM19, LMT22, KBGP24, FDP+20]) and integrating aspects of the telescoping model to better understand their findings. Further, Apx D includes additional results for different settings of some of the case studies. We also appreciate that our concise presentation of results in the main text (which are reinforced by 4 further pages of experiments in Apx D) may have disguised the empirical contribution of this work, making it appear that our evaluations are less extensive on the surface. In the general comment, we do provide several additional results on different datasets and using different experimental settings, which were specifically requested in other reviews and further reinforce the findings of our paper. We hope that these additional results may also further reassure this reviewer. **Questions.** - Q1. Related to (3) above, important theoretical progress in understanding neural networks has emerged from the NTK literature. Many of these results require some assumptions e.g. infinite width limit, lazy learning, etc. The telescoping model provides a setting in which an arbitrary neural network and optimization routine can be iteratively cast using NTKs without large deviations from the exact network. We envision this can provide a bridge between theoretical and empirical research upon which future theoretical research may be able to build by applying tools from this literature to this iterative approximation instead. - Q2. Indeed, a powerful component of the telescoping model is that it does not require us to discard complex aspects of the optimization process or architecture. As shown in Sec 3.1, important practical extensions to vanilla SGD (i.e. momentum, weight decay, Adam) can be explicitly derived such that they can be exactly integrated into the telescoping model. Complexities of the model architecture, such as nonlinearities, on the other hand are entirely captured by the tangent features $\nabla_\theta f_{\theta_t}(x)$. That is, these gradients of the model outputs with respect to the model parameters capture how the networks architecture propagates between input and output. - Q3. We agree that it is an interesting next step to seek insights on the nature of loss landscapes through insights derived from LMC. In this case, we are hesitant to make strong general claims about loss landscapes on the basis of our findings. While it is clear from eqn (10) that constant gradients are a _sufficient_ condition for LMC, in practice we cannot broadly claim that gradients will become _exactly_ constant and a complete theory of LMC may need to take into account further intricacies of neural networks. We hope that a more complete theory of LMC can be developed through future research taking into account the telescoping insights we provide here, which would provide valuable insights into the nature of loss landscapes in deep learning.
Summary: This paper tries to understand the design of optimizer, model architectures and some deep learning phenomenon empirically. Additionally, the authors find a proxy model of neural networks other than the simply linearized model. Strengths: 1. The topic is interesting, especially the proxy model of neural networks. I would like to see more theoretical insights derived from the proxy model, but the current paper didn't put efforts on it. Weaknesses: 1. First of all, limited insights gained from the proxy model of neural networks when dealing with the modern optimizer and architectures. In section 3.1, the authors try to connect their proposed proxy model with some modern optimization techniques. However, no rigorous proof or insightful experiments are provided. Complex formulas including Eq. 6-8 only provides limited insights. 2. As for case study part, the writing confuses me a lot. For example, in Section 4.1, in line 200-201, "A candidate complexity measure that avoids the shortcomings listed above because it only considers the behavior of the final fitted model was recently used by [CJvdS23] in their study of non-deep double descent." I cannot understand what this mean. 3. Still the case study part, I strongly disagree that "LMC and train-time transition into lazy regime." (As I am not familiar with GBT and grokking, I skimmed those parts). [cite 1] and [cite 2] demonstrates that lazy regime is not sufficient to explain LMC and model merging. [cite 1] Fort, Stanislav, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, and Surya Ganguli. "Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel." Advances in Neural Information Processing Systems 33 (2020): 5850-5861. [cite 2] Ortiz-Jimenez, Guillermo, Alessandro Favero, and Pascal Frossard. "Task arithmetic in the tangent space: Improved editing of pre-trained models." Advances in Neural Information Processing Systems 36 (2024). 4. There are seeming no many experiments in both case study part and Sec 3. I would expect more experiments relating their proposed proxy model to the case studies and optimization techniques. Technical Quality: 1 Clarity: 1 Questions for Authors: N/A Confidence: 3 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We are delighted that the reviewer finds our proxy model for a neural network interesting! Before we move to address the specific points raised in the review below, we would like to make two high-level comments. First, while we similarly believe that interesting theoretical insights can be derived from this model in follow-up work, we would like to note that the _explicitly stated goal_ of the current paper was to use it as a tool for generating new _empirical_ insight, unifying the study of a range of modern deep learning phenomena. Second, we also find it unfortunate that Secs 4.1 and 4.2 were only "skimmed" and therefore barely feature in the review, and would like to highlight that these were the parts that received particular praise in all other reviews. **1. Insights from Sec 3.1.** First, we note that as stated in L131, eqs 6-8 were not only included to provide insight, but were also _necessary_ in order to replicate later experiments from existing work that required Adam, weight decay, and momentum to be applied in the telescoping model. Further, we believe that Sec 3.1 will be of interest to anyone wishing to gain deeper _intuition_ into the effect different optimization strategies will have _on model predictions_. Usually, these strategies are only discussed in terms of their effect in parameter space, and not in function space as we do here – even though function space is what ML is ultimately interested in. While we appreciate that this reviewer was not interested in the insights derived from Sec 3.1, we note that both YCvv and b22S explicitly mentioned this section as a strength of the paper. We hope that in examining the new empirical results we have now provided (see the pdf in the top-level response) that demonstrates how these insights are reflected in training a real neural network this reviewer will also find value in Sec 3.1. **2. Clarifying cited paragraph on complexity.** This sentence highlights that the cited complexity metric of [CJvdS23] accounts for what the model has learned during the fitting process, and is thus able to capture the interaction of model, optimizer and a particular dataset. Many other metrics only consider what is _learnable_ by the entire hypothesis class for arbitrary data. We note in the opening paragraph of this section that the former type of metric is preferable for understanding double descent and grokking, where the phenomena only arise due to interaction of the model, its learning process and the data. We hope that this alleviates any confusion. We would also be happy to clarify any further unclarities in the writing if more precise issues were pointed out! **3. LMC.** We thank this reviewer for raising this question and are happy to clarify why there is no contradiction between these works and ours. - Fort et al., which we cite numerous times as [FDP+20], find that “deep network training exhibits a highly chaotic rapid initial transient” for the first 2-3 epochs after which the relative change in the NTK decreases throughout training and plateaus at a low value. This is in complete agreement with our work which studies this through the lens of the telescoping model. Note that our Fig 5 and their Fig 7 - which are similar in spirit but study slightly different objects and use different metrics - observe a very similar trend. We further supplement their results by examining differences in tangent feature changes between pretrained and randomly initialized models. - Ortiz-Jimenez et al. focus on a very different setting: they consider the context of task arithmetic, where a generalist model is being finetuned several times, each time on a new dataset, where the input space $\mathcal{X}$ of each task is disjoint from the others, with resulting task vectors then _added_ to the base models weights and, therefore, is subtly different to _weight averaging_ (they consider addition and subtraction). Please note that we strictly focus on finetuning routines that only differ in “batch orderings and data augmentations” (L356) followed by weight averaging between the resulting models, and do not claim that our empirical findings will transfer to their setting. Nonetheless, we agree that this reference is relevant to include and will discuss it in the updated version! Thank you for pointing this out. **4. Experiments.** The goal of this work was to provide a tool that can help us to better understand _specific previously observed phenomena in deep learning_. Therefore, in our experimental evaluation we primarily revisit the exact well-known, canonical experimental settings in which the studied phenomena have been observed in previous work (e.g. grokking, double descent, LMC). We have rigorously pursued this approach by reproducing key experiments from several works (e.g. [BHMM19, LMT22, KBGP24, FDP+20]) and integrating aspects of the telescoping model to better understand their findings. We also verified that the telescoping approximation holds for the different implementation choices discussed in Sec 3.1 in Apx D.1. We also appreciate that our concise presentation of results in the main text (which are reinforced by 4 further pages of experiments in the Appendix) may have disguised the empirical contribution of this work, making it appear that our evaluations are less extensive on the surface. For example, the double descent experiments alone - which only constituted a part (Fig 2) of case study 2 - required training 88 models per setting resulting in 432 hours of compute time on an A100 GPU. Finally, in the pdf in the general comment we provide several additional results which were specifically requested in other reviews and further reinforce the findings of our paper. In particular, we also provide some empirical verification of our expressions from Sec 3.1 which we hope will further reassure this reviewer.
Summary: The paper replaces the traditional "lazy learning" regime approximation with an approximation using a telescoping sum. This splits up the usual interval of approximation $\theta_T - \theta_0$ into many smaller pieces. The authors then show why this approximation is useful for explaining various interesting aspects of modern deep learning: 1. Generalisation behaviour (double descent, grokking and the effect of inductive bias on model complexity) 2. The connection between gradient boosting and modern deep neural networks (DNNs) 3. The success of weight averaging 4. Modern design choices regarding architecture and optimiser Strengths: In general, I think the authors did a reasonably good job. Particular praise goes to the following aspects of the work: 1. The extension of the lazy regime approximation to a telescoping sum seems like a very natural extension of the current set of ideas. The use of the sum seems logical and may become a useful theoretical insight. 2. I liked the choice of case studies. Phenomena like grokking, double descent, and weight averaging are clearly of interest to the community. 3. The explanations provided seem relatively intuitive under the telescoping approximation. I will now touch on how well I think the paper does on the areas of originality, quality, clarity, and significance: 1. *Originality.* The use of a telescoping sum is clearly somewhat original. In addition, I thought the analysis of GBTs seemed fairly novel and interesting. 2. *Quality.* There are some elements of high-quality work in the paper. I think the analysis and empirical evidence for the generalisation section were fairly good. I also thought the analysis of momentum and weight decay seemed interesting. (However, I have issues/suggestions for both of these sections in weaknesses). 3. *Clarity.* I thought the authors did a fairly good job of justifying their use of a telescoping sum. However, I find it hard to come up with other strengths, as I think clarity is what this paper is most lacking. 4. *Significance.* I think the insights are of decent significance. For example, if it was demonstrated that the effective parameter count metric had further predictive power, it could be a useful tool in the analysis of grokking. (I provide suggestions as to how it could be expanded upon). Weaknesses: I will structure this section into three subsections. The first two subsections concern general where I believe the paper may be lacking. In the third section, I list additional minor issues. I then end with a general assessment. ### (1) Lack of clarity and writing style I think the paper lacks clarity in the writing. Here are some major areas where this causes issues: 1. **Abstract.** The abstract does not describe the idea of the paper well nor provides a clear explanation of the impact of the work. I found it very hard to read at first. 2. **Explanation of insights.** Due to the density of the writing, I found it hard to understand the theoretical insights. 3. **Explanation of setup in section 4.1.** I found the explanation of the $p_{\hat{s}}^0$ measure to be quite confusing. I could not see where $\hat{s}(x)$ was clearly defined. ### (2) Importance of theoretical insights and empirical backing In section 3.1, there are several interesting observations about modern optimisation choices and how they can be more easily reconciled under the telescoping approximation. However, I am unsure of several things: 1. **Utility of telescoping sum.** Are the observations provided specific to the telescoping sum approximation or would they hold with the traditional lazy approximation? 2. **Claims in real networks.** Do the claims made hold in real networks? It would have been nice to see experiments demonstrating the claims made given that we are still working with an approximation. 3. **Utility of insights.** What should I do with the insights provided (supposing they hold in real cases)? For example, now that I know that momentum offsets the effect of weight decay on learned function updates, should I try and avoid using weight decay and momentum? In my experience, it seems that weight decay and momentum do work pretty well together in practice. In section 4, there are also many interesting observations. However, I have a few concerns: 1. I would have liked to have seen more experiments demonstrating that cases of double descent and effective parameter counts are related in the way described. I think it is likely to be true, I just don't think that one experiment is sufficient. 2. In the grokking cases, I would like to see whether the metric proposed (namely mismatch in the effective parameter count) has more predictive power. For example, in the literature, there have been ways of inducing grokking proposed. For example, in [this paper](https://openreview.net/pdf?id=ux9BrxPCl8) it is shown that adding spurious dimensions to the input reliably increases grokking. I would like to see whether cases with a greater grokking gap correlate with a greater divergence in effective parameter use. 3. In the second case study, the theory of why GBTs perform better than neural networks seems reasonable. However, I think the empirical evidence is lacking. A single plot with correlation does not seem sufficient. 4. In the third case study, I am unsure as to whether the trend in the pre-activation gradients aligning with the accuracy gap is sufficient to demonstrate the hypothesis. ### (3) Minor gripes with the paper 1. If I have understood correctly, the paragraph discussing approximation error seems unnecessary. I would place this in an appendix and spend two sentences on it in the main discussion. It seems quite obvious that the telescoping sum is going to approximate things better. 2. The practical limitations section also seems unnecessary. We know that it is a tool for analysis rather than an alternative approach for training. This does not need to be explained. 3. I'm not sure how I feel about the use of the word "pedagogical" throughout the paper. I don't know if the impact of a NIPS paper is meant to be a tool for teaching. Rather, the desired impact should be a novel result or an improvement to the understanding of the field. ### A general assessment Generally, I think the paper provides some interesting insights but is attempting to do too much. I think if it were to be rejected, I would encourage the authors to rewrite it in a much simpler style focused on far fewer topics with many more experiments. Update: I have moved my overall rating from a 5 to a 6 as I believe some of my concerns were addressed in the rebuttal. I also moved my rating of soundness from a 2 to a 3. Technical Quality: 3 Clarity: 2 Questions for Authors: Some questions: 1. As someone who has read relatively few papers using tangent kernel arguments, I am curious how other groups have extended the traditional lazy -> rich approximation? 2. Could the authors provide a better explanation of $p_{\hat{s}}^0$ and associated nomenclature? What is $\mathcal{I}_0$ for example? 3. Are there meaningful experiments to run with the insights in section 3.1? For example, could the authors look at verifying the reported trade-off between momentum and weight decay? Some other suggestions: 1. A few grokking papers which are not included in the literature review are: - [Paper 1](https://openreview.net/forum?id=GH2LYb9XV0). Grokking in linear estimators without regularisation, which may go against intuitions with the measure proposed - [Paper 2](https://openreview.net/forum?id=ux9BrxPCl8). Discusses complexity as a theory of grokking. - [Paper 3](https://openreview.net/forum?id=UHjE5v5MB7). Talks about corrupted labels, might be interesting in discussion of future work where the proposed measure could be used to analyse cases with corrupted labels 2. Paper on double descent not mentioned: [Double Descent Paper](https://arxiv.org/pdf/2111.08234). Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I think limitations are appropriately addressed given that the paper is not focused on performance. It might be interesting to see cases where the telescoping sum does not approximate some set of phenomena well. Perhaps some discussion of this instead of the current limitations paragraph would be more meaningful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the very detailed and constructive review! Limited by space constraints, we respond to major comments below. **(1) Clarity.** We appreciate that our writing was somewhat densely packed in the mentioned sections, and will utilize the camera-ready extra page to decompress for improved clarity. We will particularly expand the paragraph on $p^0_s$ and will more explicitly write out $\hat{s}(x)$ in the main text (currently derived for different optimizers in Apx B.1), which can recursively be obtained from eq 5 after substituting in the squared loss gradient. We will also rework the abstract to include a more explicit summary of the finding of each case-study. **(2) Comments on Sec 3.1.** 1. The insights on functional updates that can be derived from the telescoping sum subsume those of the lazy regime as a special case. As we highlight in the paragraphs on momentum and weight decay, assuming constant gradients will simplify some of these expressions. Some observations could thus also be made when assuming lazy learning – e.g. adaptive learning rates acting by rescaling the kernel – however, we have not seen these observations discussed explicitly anywhere. We will make even more clear which observations also hold in the lazy regime! 2. _Claims in real networks._ This is point is precisely why we included experiments on approximation quality – Fig 1 and Fig 6 in Apx D show that the approximation matches real networks very closely for the different implementation choices and that the insights in Sec 3.1 thus apply to real networks. We appreciate the suggestion to additionally numerically verify the tradeoff between momentum and weight-decay empirically, and have included an experiment that does so in the pdf in the top-level response! 3. _Utility of insights._ We believe that Sec 3.1 will be of interest to anyone wishing to gain more intuition and understanding of the effect different optimization strategies will have on model predictions. Usually, these are only discussed in terms of their effect in parameter space, and not in function space as we do here – even though function space is what ML is ultimately interested in! We comment further on the specific momentum weight decay example in the experiment in the top-level response. **(3) Comments on Sec 4.** 1. _Double descent._ Note that Apx. D.2 already includes additional experiments, indeed demonstrating the same behavior for double descent on MNIST. Additionally, we provided a new experiment in the pdf of the top-level response. 2. _Grokking._ In Apx. D.2, we actually already conducted additional experiments varying two such mechanisms – the initialization scale [LMT22] and task alignment [KBG24]. As discussed in Apx D.2 we found in both cases that later generalisation (i.e. more extreme grokking) is predicted by an effective parameter gap emerging later in training. As suggested in the review, we repeated this experiment for another mechanism – the number of spurious dimensions – on the parity prediction task of Miller et al (TMLR2024). The results, included in the top-level comment, indicate that also here grokking is predicted by the effective parameter gap growing. We will better link to this in the updated main text, thank you for the suggestion! 3. _GBTs._ Note that results using additional datasets are included in Apx D.3 where a similar behavior is observed in the kernels. 4. _LMC._ We agree that unlike the preceding sections, the case study in Sec 4.3 is more exploratory in nature. The key point of interest is using the telescoping lens to come to the observation in eq 10 that constant gradients are sufficient for LMC. However, in practice gradients will not reach perfect stabilization and therefore this cannot provide a complete explanation for this phenomenon. We were therefore deliberate in not claiming any overly strong conclusions, and only include preliminary evidence of a specific factor contributing to LMC emerging by empirically showing that the model gradients indeed become closer to stable throughout training, and that – like the loss barrier – the change in model gradients is lower in pretrained than in randomly initialized models. We think there is significant opportunity to further investigate this perspective, and will update the section to be more explicit on scope and potential future avenues. **(4) Minor points.** 1. _Approximation quality section (Fig 1)._ Building on the answer to point 2 on Sec 3.1 above, we found this section essential as it demonstrates that (and when) the telescoping sum is indeed a reasonable model of a neural network in a practical setting [note that the neural networks used here are the same as in Sec 4.1] and that therefore the derived insights hold, while those of the lazy model may not. Further, the main purpose of including the lazy model here was to provide a _reference point_ for the scale of the approximation error, allowing us to better evaluate how good an approximation the telescoping model is. We will make the purpose of this section more clear in the updated manuscript. 2. _Limitations._ We found it important to be upfront about potential computational limitations. The comment on training is in reference to the lazy model, which _is_ sometimes used as an alternative way of training (as a linear regression in the tangent features). We will make this more clear! 3. _The term pedagogical._ We are in full agreement that the goal of a NeurIPS paper should be an ``improvement to the understanding of the field''. Our intention was to imply the telescoping model to be a pedagogical device in the sense that it is a tool through which researchers can better understand aspects of deep learning. However, we are happy to substitute this term as this was obviously not clear. 4. _Additional relevant references._ Thank you for these, we will include them in our literature review! --- Rebuttal Comment 1.1: Title: Response to rebuttal for b22S Comment: I would like to thank the authors for their detailed response, it cleared up many of my questions and concerns. As a result, I will update my rating to 6 (weak accept) and my assessment of soundness to 3. A few lingering questions and comments: 1. Could the authors explain in a comment what is meant by $\hat{s}(x)$, unfortunately, I was not able to understand its meaning from Appendix B.2. 2. I still believe the section on the approximation performance is not really necessary. My original concern was that even if the approximation holds with reference to the way the learning trajectories look, the approximation might not hold when used to derive other properties with the telescoping sum. (I hope this makes sense!) --- Reply to Comment 1.1.1: Title: Response [1/2] Comment: We thank the reviewer for the very quick response and for engaging with our rebuttal – we really appreciate it! We are delighted that our rebuttal alleviated most concerns and hope that our responses below address any leftover reservations. 1. **What is $\mathbf{s}\_{{\theta}\_{T}}(x)$?** _Motivation._ [CJvdS23] showed that their _effective parameter_ metric $p^0\_s$ can be used to explain double descent in some _non-deep_ models (linear regression and tree-based methods) because it allows to distinguish between train-time and test-time complexity. In section 4.1, we explore whether the telescoping model allows us to make use of $p^0\_s$ in deep models too, which is not obvious a priori as the use of $p^{0}\_s$ requires to be able to write model predictions in smoother form. *Extended background.* [CJvdS23] build on ideas from the literature on nonparametric regression smoothers, which are methods that output predictions that are a _linear combination_ of the nx1 vector of training labels $\mathbf{y}$. That is, such methods issue predictions $\hat{f}(x)=\hat{s}(x)\mathbf{y}$ where $\hat{s}(x)$ is a 1xn vector assigning a weight to each training example. Prototypical examples of (the very broad class of) smoothers are k-Nearest Neighbor (kNN) estimators, which use smoother weights $\hat{s}(x) \in \\{0, 1/k\\}^n$, and ordinary linear regression (OLS), which uses smoother weights $\hat{s}(x)=x’(X’X)^{-1}X’$. Effective parameters $p^{0}\_{s}=\frac{n}{|\mathcal{I}\_0|}\sum\_{j\in \mathcal{I}\_0} ||\hat{s}(x\_j)||^2$ then aim to make different types of smoothers comparable to each other w.r.t. the amount of smoothing they do, by measuring the average squared norm of the smoother weights when issuing predictions for a given set of inputs $\\{x_j\\}\_{j \in \mathcal{I}\_0}$. Here $\mathcal{I}\_0$ is simply used as a shorthand to collect the indices of the inputs for which the used effective parameters are measured; e.g. $\mathcal{I}\_{train}=\\{1, \ldots, n\\}$ the indices of the training data and $\mathcal{I}\_{test}=\\{n+1, \ldots, n+m\\}$ the indices of the test data. Intuitively, $p^0\_s$ provides a measure for how non-uniform and extreme the learned smoother weights are – the higher $p^{0}\_{s}$, the more complex the model. For example, OLS with $p$ covariates has $p^{0}\_{s}=p$ and a kNN estimator has $p^{0}\_{s}=\frac{n}{k}$. *How does the telescoping model enable us to use $p^0\_s$?* For the squared loss, we exploit that the SGD functional updates $\Delta \tilde{f}\_t(x) = \gamma \nabla\_{{\theta}} f\_{{\theta}\_{t-1}}(x)^\top \mathbf{T}\_t(\mathbf{y} - \tilde{\mathbf{f}}\_{\theta\_{t-1}})$ use only a linear combination of the training labels, which is how we recursively construct $\mathbf{s}\_{\theta\_t}(x)$. To see this, assume for simplicity that at initialization $f\_{\theta\_0}({x})=0$ (otherwise, the prediction at initialization is carried forward as additional constant term $c^0\_{\theta\_t}(x)$ in the paper). Then, the first functional update is $$\Delta \tilde{f}\_1(x)=\gamma \nabla\_{\theta} f\_{\theta_{0}}(x)^\top \mathbf{T}\_1(\mathbf{y} - \mathbf{0}) =\underbrace{\gamma \nabla\_{\theta} f\_{\theta\_{0}}(x)^\top \mathbf{T}\_1}\mathbf{y} = \mathbf{s}\_{\theta\_1}(x)\mathbf{y}$$ which means that $\tilde{f}\_{\theta\_1}(x)=0 +\mathbf{s}\_{\theta\_1}(x)\mathbf{y}$ is also a linear combination of the training labels. Letting $\mathbf{S}\_{\theta\_1}$ denote the nxn matrix that has as the ith row $\mathbf{s}\_{\theta\_1}(x_i)=\gamma \nabla\_{\theta} f\_{\theta\_{0}}(x_i)^\top \mathbf{T}\_1$ for the ith training example and $\mathbf{I}\_n$ denote the nxn identity matrix, we note that the second update can then be written as $$\Delta \tilde{f}\_2(x) = \gamma \nabla\_{\theta} f\_{\theta\_1}(x)^\top \mathbf{T}\_2(\mathbf{y} - \mathbf{S}\_{\theta\_1}\mathbf{y}) = \underbrace{\gamma \nabla\_{\theta} f\_{\theta\_1}(x)^\top \mathbf{T}\_2(\mathbf{I}\_n - \mathbf{S}\_{\theta\_1})}\mathbf{y}= \Delta \mathbf{s}\_{\theta\_2}(x)\mathbf{y}$$ which is also a linear combination of the training labels. Then, by adding the functional update to the prediction from the first step, we get the prediction at the second step $$\tilde{f}\_{\theta_2}(x)=0 +\mathbf{s}\_{\theta\_1}(x)\mathbf{y} +\Delta \mathbf{s}\_{\theta\_2}(x)\mathbf{y} = (\mathbf{s}\_{\theta\_1}(x) +\Delta \mathbf{s}\_{\theta\_2}(x))\mathbf{y} = \mathbf{s}\_{\theta\_2}(x) \mathbf{y}$$ which is again a linear combination of the training labels. Then, by recursion, any future update analogously adds a term $$\Delta \mathbf{s}\_{\theta\_t}(x) = \gamma \nabla\_{\theta} f\_{\theta\_{t-1}}(x)^\top \mathbf{T}\_t(\mathbf{I}\_n - \mathbf{S}\_{\theta\_{t-1}})$$ to the previous smoother weight. Finally, this then gives $$\mathbf{s}\_{\theta\_T}(x)=\sum^T\_{t=1}\Delta \mathbf{s}\_{\theta\_t}(x)$$ where the $\Delta \mathbf{s}\_{\theta\_t}(x)$ depend only on the gradient feature matrices $\\{\mathbf{T}\_{t'}\\}_{t'\leq t}$. [Continued in next comment below] --- Rebuttal 2: Title: Response [2/2] Comment: [Continued from above] As derivation of $\mathbf{s}\_{\theta\_T}(x)$ requires only straightforward yet somewhat tedious algebra, we had relegated it to the Appendix to ensure readability of Section 4.1. However, we now see that the reader could benefit from some of the above discussion in the main text to make it more self-contained. As indicated in our rebuttal, we are very grateful for the suggestion and will use some of the additional space in the camera-ready to expand the corresponding paragraph on page 5. 2. **Approximation performance.** We recognize the first point being made by the reviewer - the telescoping model is strictly a better approximation of a neural network than the lazy model _by design_ as the difference is simply taking smaller approximation terms. This point does not require major discussion in the main text. The more important aspect of this section is demonstrating that, at least on the tasks evaluated, the difference in approximation quality results in a categorical improvement from a model that certainly does not track the performance of a neural network to one that does. The point is cemented further in Fig 7 where we observe the lazy model catastrophically diverging from the neural network’s prediction performance early in training while the telescoping model matches performance throughout. We would argue that this categorical difference in the two models ability to emulate a neural network is vital to demonstrate as it provides the essential justification for accepting the added complexity of the telescoping over the lazy model (e.g. one could conceivably compute effective parameters in case study 1 using the lazy approximation instead (as it is a special case of the telescoping approximation), but these results demonstrate that it would be a mistake to expect conclusions drawn in that setting to transfer to a fully trained neural network). We appreciate this reviewer also has a subtle point that matching performance is _necessary_ but not _sufficient_ to know that a model is generally reliable as a proxy for analysis. One could imagine designing a model that matches a neural network relatively well in terms of predictions but that are achieved by a very different mechanism (e.g. using AdaBoost with decision trees). In that case, findings about the inner workings of the proxy model could plausibly _not_ also hold for the neural network it models. In some sense, this is an unavoidable fact of modelling any process (particularly relevant in the social sciences where natural data-generating processes are often modelled using generalized linear models). However, we would point to two strong pieces of evidence that suggest the telescoping model only induces minimal bias in its approximation. (1) Unlike in e.g. the social sciences, we _do_ have access to the exact mathematical form of the natural phenomenon (a neural network being trained on data) and are taking a direct approximation of that neural network. (2) The empirical predictions derived from the telescoping model have been reflected in the behavior of the full neural network repeatedly throughout our empirical investigations (e.g. (a) the divergence of full neural networks and gradient boosting is well modelled by the kernel of the telescoping model or (b) the predicted counterbalancing effect of weight decay and momentum from the telescoping model was demonstrated to exist exactly in a full neural network in our new results in the pdf). Once more, we would like to thank this reviewer for their detailed engagement with our submission. We will integrate aspects of this extended discussion into the main text as we believe they are indeed valuable to expand upon. We hope that we have provided sufficient clarification on these final concerns but would be happy to discuss further if not!
Summary: This work proposes a new analytical model of neural networks (NNs) extending popular 'linearized' approximations over the initial parameters built from the gradient vectors. In particular, they consider approximating the full learning trajectory of NNs as a sequence of first-order approximations rather than a single one. The authors connect their model to several prominent empirical observations of the properties of neural networks, showing how it can be used to compare different optimizers and learning frameworks, and also justify phenomena such as grokking, double-descent, and linear-mode connectivity. Strengths: 1) The authors provide a comprehensive first look into the properties and implications of the proposed framework. For instance, they consider extensions to vanilla SGD updates, including momentum and weight decay - showing how they can connect quite naturally through a weight averaging perspective (start of Sec 3.1). 2) The Appendix is functionally used to relegate more comprehensive secondary details, allowing the authors to keep the text free of long proofs and extended literature reviews. This makes the main text concise and pleasant to read, while still allowing the interested readers to delve deeper into specific topics. 3) The paper does a good job at highlighting the relevance of the proposed framework, providing grounded examples that connect it with several popular understandings of neural network dynamics and behavior in Section 4. Out of the three examples, I found the discussion regarding comparing neural networks and gradient boosting through their relative approximated dynamics and kernels (4.2) of particular interest. Weaknesses: 1) The proposed telescoping approximation partially defeats the purpose of the simplified linear models, by introducing back much complexity that linearization precisely seeks to abstract away. Perhaps, some additional discussion about the limitations in terms of theoretical analysis would have been useful. 2) Also due to this increased complexity, most experiments are carried out in relatively toy-ish settings (e.g., MNIST). 3) There is some overuse of inline math notation (e.g., Section 4.1) hindering readability. While I understand the limitations due to the imposed page constraints, I would encourage the authors to try and adjust their text to lay out their equations more clearly in later revisions. Typos: "due rescaling" -> due to rescaling (117) Technical Quality: 3 Clarity: 2 Questions for Authors: I would appreciate if the authors could address the criticism highlighted above in their rebuttal. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors explicitly discuss the main limitation of the proposed methodology (increased complexity) at the end of Section 3, although mostly from a practical (and not theoretical) viewpoint. Overall, I found the work well-written, quite comprehensive, and insightful - building directly on top prior work, yet, providing some interesting new perspectives. Hence, I am leaning toward acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank this reviewer for their very constructive review and the positive assessment of our work! We were especially delighted to read that the reviewer found our uncovered connection between the telescoping model and gradient boosting to be particularly instructive. We respond to the comments raised in the review below. **(1) Increased complexity over linear models.** Indeed, we sacrifice some of the analytic simplicity of the fully linearized lazy model in order to obtain a more accurate model of a neural network. Unfortunately, we believe this to be unavoidable as a linear model will typically be unable to accurately model a full neural network in standard applied settings (indeed including even the most of toy-ish settings we consider, e.g. MNIST in Fig 1). We would argue that any conclusions drawn from a proxy model can only be valid if that proxy is a good approximation. Therefore, we believe the telescoping model sacrifices only the minimal degree of simplicity necessary in order to maintain an accurate approximation of the full network. We appreciate the reviewer's suggestion to extend the limitations section beyond practical considerations by highlighting that this additional complexity can come at the cost of complicating potential future theoretical analyses, and will do so in the updated manuscript! **(2) Scale of experimental settings.** While we agree that our experiments were generally conducted in relatively moderate-scale settings compared to e.g. much of the recent LLM focused empirical work, we would like to emphasize that we were able to operate at a sufficiently large scale to satisfy the three provided case studies, all of which have received substantial attention in the recent literature (despite their relatively moderate scale). For case study 2 we operate on typical size tabular datasets from a standard benchmark from the literature. For case studies 1 & 3, we primarily revisit the exact well-known, canonical experimental settings in which the studied phenomena have been observed in previous work (e.g. grokking, double descent, LMC from [BHMM19, LMT22, KBGP24, FDP+20]). While larger scale experiments were thus not necessary in order to address our selected research questions here, we certainly agree that considering how to apply the proposed telescoping model to larger scale networks would enable many interesting research directions in future work (as we allude to in l.123). **(3) Use of in-line maths.** We were delighted to read that this reviewer appreciated our efforts to make the reading experience as natural as possible by relegating extended details to the appendix for interested readers. We also appreciate that the quantity of content and breadth of phenomena covered in this work required certain parts of the paper to be written quite densely. This was a fair point raised by several reviewers. We will therefore put considerable effort into using the extra page of a camera-ready version to decompress the text (e.g. less in-line maths particularly in Sec 4.1) and expand on details where there is an opportunity to improve clarity. _Typo:_ Thank you, fixed! --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: I would like to thank the authors for their concise rebuttal which managed to address my main concerns, and for providing additional experiments and analysis. I also read the other reviews, and while I do understand some of their shared concerns e.g., "limited insights [...] when dealing with the modern optimizer and architecture" (gQrQ) I believe these have been mostly addressed/should be addressable (e.g., by toning down some of the claims), and do not warrant outright rejection. I believe expecting each paper to provide definite evidence for some of the key questions in the field is unrealistic. However, I believe this paper makes a solid effort to provide insightful novel evidence and analysis, which is likely to be of interest to the wider research community. Thus, I stand by my original assessment and would argue this paper clearly meets the bar for acceptance.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for the time and effort put into the review process! We are grateful for the constructive nature of the reviews and were delighted by the largely positive assessment of our work. We were especially excited by the recurring appreciation of the new insights on (i) the effects of optimization strategies, (ii) the phenomena relating to model complexity (double descent and grokking), and (iii) the relationship between neural networks and gradient boosting, all of which the construction of the telescoping model allowed us to provide in this work. We respond to all reviewers individually below, but re-address some recurring comments globally here. We also include additional experimental evidence in the pdf attached to this comment. Note that due to the strict 6000 character limit in rebuttals our responses were forced to be brief. However, we would be happy to expand our discussion on any point if further details are helpful! **1. Density of writing.** We appreciate that the quantity of content and breadth of phenomena covered in this work required certain parts of the paper to be written quite densely. We will therefore put considerable effort into using the extra page of a camera-ready version to decompress the text (e.g. using less in-line maths particularly in Sec 4.1) and expand on details where reviewers pointed out that there is an opportunity to improve clarity! **2. Further experiments.** Allow us to re-emphasize that the goal of this work was to provide a tool that can help us to better understand _specific, previously observed_ phenomena in deep learning. Therefore, in our experimental evaluation we primarily revisit the exact well-known, canonical experimental settings in which the studied phenomena have been observed in previous work (e.g. grokking, double descent, LMC). We have rigorously pursued this approach by reproducing key experiments from several works (e.g. [BHMM19, LMT22, KBGP24, FDP+20]) and integrating aspects of the telescoping model to better understand their findings. Where specific additional experiments were requested we have provided them in the attached pdf. In particular, the pdf includes: - Empirical verification of some of the theoretical predictions made in Sec 3.1 on the opposing effects of weight decay and momentum on model predictions using real networks obtained through standard training. - Additional experiments on grokking in a parity prediction task with varying number of spurious features, further demonstrating that the behaviour of effective parameters indeed predicts grokking behaviour. - A double descent experiment on a third dataset, the MNIST-1D dataset (Greydanus \& Kobak, (ICML2024). ``Scaling Down Deep Learning with MNIST-1D.'') which was proposed recently as a sandbox for investigating empirical deep learning phenomena and was also used to demonstrate deep double descent in the textbook [Pri23]. Pdf: /pdf/d89105f16d181bf581a7fdd7a4d938ce7d487e77.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions
Accept (poster)
Summary: This paper introduces a novel optimization problem termed Difference of Max-Structured Weakly Convex Functions (DMax). The DMax problem extends traditional frameworks such as difference-of-weakly-convex (DWC) optimization and weakly-convex-strongly-concave (WCSC) min-max optimization, which are widely utilized in the machine learning community. Using a Moreau envelope smoothing technique, the authors propose a stochastic algorithm called SMAG for optimizing DMax in non-smooth settings. This approach not only addresses the DMax problem but also applies to DWC and non-smooth WCSC min-max optimization scenarios, demonstrating comparable convergence rates. Experimental results validate the efficacy of their algorithms across various applications, including Positive-Unlabeled (PU) Learning and partial Area Under the Curve (AUC) optimization with an adversarial fairness regularizer. Strengths: **Originality:** The paper introduces a new optimization problem termed Difference of Max-Structured Weakly Convex Functions (DMax), which extends existing frameworks like difference-of-weakly-convex (DWC) optimization and weakly-convex-strongly-concave (WCSC) min-max optimization. This extension demonstrates a significant level of originality in problem formulation within the field of optimization. By addressing a broader class of problems under the DMax framework, the paper contributes novel insights and methodologies that enrich the theoretical foundations of optimization in machine learning and related fields. **Quality:** The paper maintains high-quality standards across various aspects. It rigorously establishes the theoretical foundations of the DMax problem and proposes a Moreau envelope smoothing technique for optimization in non-smooth settings. The SMAG algorithm introduced is methodically developed and supported by well-defined lemmas and references, ensuring robustness and reliability in the proposed approach, though I don't check the proofs in detail. Furthermore, the experimental validation on applications such as Positive-Unlabeled Learning and partial AUC optimization with adversarial fairness regularization underscores the practical relevance and effectiveness of the proposed methods. **Clarity:** The clarity of the paper is a notable strength. It effectively communicates complex concepts and methodologies in a clear and accessible manner, making it understandable even for readers with limited expertise in the specific problem domain. The intuitive presentation of algorithms and key steps, coupled with clear explanations of referenced lemmas, enhances readability without sacrificing depth. This clarity not only facilitates comprehension but also promotes transparency in the theoretical derivations and experimental procedures, thereby bolstering the paper's overall impact. **Significance:** By addressing challenging optimization problems and providing practical algorithms validated in real-world scenarios, the paper significantly advances both theoretical understanding and practical applications in machine learning optimization. Weaknesses: **Problem Statement and Novelty:** The paper introduces a new problem that extends traditional frameworks such as difference-of-weakly-convex (DWC) optimization and weakly-convex-strongly-concave (WCSC) min-max optimization, presenting the SMAG algorithm as a solution. However, the paper could benefit from clearer justification of how this generalization represents a significant departure from incremental advancements. This clarity is pivotal for establishing the theoretical contribution within a novel setting. **Complexity Analysis:** Upon examining the complexity results, it is apparent that in non-convex, non-smooth min-max problems, SMAG demonstrates complexity similar to Epoch-GDA under comparable assumptions (Table 2). This finding raises significant concerns about the broader implications for WCSC min-max optimization, particularly considering the emphasis on complexity as a contribution (Line 59). A clearer explanation is needed on how SMAG surpasses existing complexities in this domain, beyond the practical advantage of a single-loop method for tuning, to strengthen the paper's impact solely from a complexity perspective. **Algorithmic Technique:** The proof and algorithmic technique employed in this study resemble SBCD, utilizing the Moreau envelope to transform non-smooth problems into smooth ones, thereby enhancing accessibility. Hence, it is crucial to clearly delineate why SMAG achieves superior complexity results compared to SBCD or to identify any overlooked aspects that may explain this discrepancy. I welcome further discussion on these points to ensure a comprehensive evaluation. Thanks! Technical Quality: 3 Clarity: 3 Questions for Authors: In the experiments section, the authors employ a linear classification model with hinge loss (Line 271). However, this setting appears to be a smooth one, potentially limiting the demonstration of the benefits of the authors' method in non-smooth DWC problems, despite SMAG showing superiority over other methods. Regarding the large variance in training curves observed, such as in FER 2013 (Figure 1), it raises questions about the stability and robustness of the method in practical applications. To enhance confidence in the results, I suggest exploring parameter tuning to achieve more stable and convincing outcomes. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is theoretical, and I believe there is no need to confirm societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and feedback on our paper. **Q1:** Clarification of theoretical contribution within a novel setting. **A.** We proposed a unified framework of analysis for non-smooth DWC and WCSC min-max problems, which leads to single-loop methods that achieves the best convergence rate. Our main contribution is the novel analysis that applies to problems with complex structures (difference of max-structured weakly convex functions) and potential non-smoothness. In particular, in our technical analysis, the Lemma 4.4 is novel, which does not exist in any existing work. This make its possible for us to prove the convergence of a special error function $\|x_\phi^t - prox_{\gamma\phi}(x_{t-1})\|^2 + \|x_\psi^t - prox_{\gamma\psi}(x_{t-1})\|^2 + \|\nabla F_{\gamma}(x_t)\|^2$. **Q2:** Confusion about the complexity as a contribution (Line 59) for WCSC problems. **A:** We are sorry for the confusion. For WCSC problem, we did not intend to claim a better complexity, but rather emphasize that it is the first single-loop method with the same complexity as existing double-loop methods. The improved complexity of our algorithm lies at its application to difference-of-weakly convex functions, which can be seen from Table 1. **Q3:** Does the proof and algorithmic technique employed in this study resemble SBCD? **Response.** NO. The main difference between the analysis of SMAG and SBCD is in the error bounds $E\|x_\phi^{t+1} - prox_{\gamma \Phi}(x_t)\|^2$ and $E\|x_\psi^{t+1} - prox_{\gamma \Psi}(x_t)\|^2$. For SBCD, they use inner loops to solve for $prox_{\gamma \Phi}(x_t)$ and $prox_{\gamma \Psi}(x_t)$. In order to reach error bounds $E\|x_\phi^{t+1} - prox_{\gamma \Phi}(x_t)\|^2 \leq O(\frac{1}{t+1})$ and $E\|x_\psi^{t+1} - prox_{\gamma \Psi}(x_t)\|^2 \leq O(\frac{1}{t+1})$ at iteration $t$, they need to use $O(t^2)$ inner loop iterations because of the involved maximization. With $O(\epsilon^{-2})$ outer iterations guarantees a nearly $\epsilon$-critical point, it leads to a total sample complexity of $O(\epsilon^{-6})$. In contrast, our analysis directly builds the recursion for $\|x_\phi^{t+1} - prox_{\gamma \Phi}(x_t)\|^2 + \|y_{t+1} - y^*(prox_{\gamma\phi}(x_{t}))\|^2$ in Lemma 4.4. In addition, we introduce a special potential function $P_t$ as in Eq. (23). We are able to build a decreasing recursion in terms of $P_t$, i.e., $P_{t+1}$ in the left, and $(1-\beta)P_t$ in the right of the bound (cf line 541), where $\beta<1$. These make it possible for us to prove a better convergence rate of a special error function $E[\|x_\phi^t - prox_{\gamma\phi}(x_{t-1})\|^2 + \|x_\psi^t - prox_{\gamma\psi}(x_{t-1})\|^2 + \|\nabla F_{\gamma}(x_t)\|^2]$ for a randomly chosen $t$. **Q4.** Misunderstanding about the hinge loss (Line 271) as a smooth function. About the large variance in training curves observed, such as in FER 2013 (Figure 1). **A.** The hinge loss is in the form of $\ell(x,y) = \max(0, 1- y * f(x))$ where $x,y$ are the data sample and its label, and $f(x)$ is the prediction score of the sample $x$. This clearly is a non-smooth loss due to the presence of the function $\max(0, \cdot)$. Regarding the variance in the training curves: (i) the large variance of our method for FER2013 is due to a bug in the code, which loads a wrong file of one trial result for plotting. A corrected training curve of our method is included in the attached PDF file of the global rebuttal. It has much smaller variances. (ii) We have also incldued the means and standard deviations of our method on all datasets for references in the attached PDF file. It can be seen that the standard deviations are indeed one order smaller than the loss values. Please also note that the scale of the y-axis is very small, which makes the std more obvious. (iii) We have conducted hyperparameter tuning as stated in lines 283-289. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for the clear response. I have slightly increased my score, particularly for the clarification of the main difference between the analysis of SMAG and SBCD, which serves as the new proof technique. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our responses. We are glad to hear that it addresses your concerns.
Summary: This paper proposes a stochastic Moreau envelope approximate gradient method dubbed SMAG, the first single-loop algorithm for solving these problems, and provides a state-of-the-art non-asymptotic convergence rate. This paper achieves the best complexity of order $O(\epsilon^{-4})$. Futhermore, the agorithm of this paper only use a single loop which is easy to implement and tune the hyperparameters. A typo: Line 3, it should be $\Psi(x) = \max_z \psi(x, z)$. Strengths: No Weaknesses: No Technical Quality: 3 Clarity: 3 Questions for Authors: This paper considers the problem of a special structure that is the difference of convex functions. However, in this paper, this special structure seems not be used. As I know, for the problem of difference of convex functions, there are several algorithms can achieve the faster convergenc rate of general non-convex functions. So, I wonder whether the results of this paper is useful enough. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and feedback on our paper. **Q:** The special structure of the difference of convex functions is not used. **A:** Thank you for your good question. (1) In this paper the problem considered is the Difference of Max-Structured Weakly Convex Functions (DMax) Optimization, which is more general than the difference of convex functions due to the maximization structure in the two component functions. Hence, we cannot use some technique like linearization of the second component as in previous work on solving difference-of-convex functions. (2) We do utilize the difference structure to some degree. In particular, we apply Moreau envelope smoothing to each component separately instead of jointly. It is notable that even each component is weakly convex, their difference is not necessarily weakly convex. Hence, applying Moreau envelope smoothing to the joint function would not make sense. (3) For the difference of weakly-convex functions problem (DWC), we have summarized its existing works in Table 1. To the best of our knowledge, our proposed method is the first single-loop stochastic methods and it achieves the best convergence rate $O(\epsilon^{-4})$.
Summary: The paper considers the problem of minimizing a difference of maximum of two functions, under various regularity assumptions. Two important settings are difference of weakly convex functions and weakly convex strongly concave min-max problems. The authors propose a single loop algorithm with a convergence rate (defined in an appropriate sense) of $\epsilon^{-4}$. The authors apply their method to positive-unlabeled (PU) learning and partial area under ROC curve (pAUC) optimization with an adversarial fairness regularizer, and compare against existing methods. Strengths: 1. The paper is generally well written. The authors motivate the technical challenges well and how to get around them rather. 2. Getting single loop algorithms with the Moreau smoothing technique is challenging. The authors make important contributions in this regard. 3. The experimental results are encouraging. The authors show consistent improvement in their applications over the baselines. Weaknesses: 1. (Writing) The authors should distinguish if the difference/improvement from previous work are "double loop vs single loop", and/or faster rates and/or weaker assumptions. For the DWC setting, it seems it is "double loop vs single loop", and/or faster rates and weaker assumptions, where as for WCSC, it is "double loop vs single loop". 2. (Writing) The authors should mention the notion of optimalities in various settings ($\epsilon$ nearly critical and $\epsilon$ near stationarity) eraly on, and remark that they are standard. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Are the proof ideas useful to give single loop algorithms for non-smooth convex minimization via Moreau smoothing, with optimal rates? 2. Optimality: Is the $\epsilon^{-4}$ rate achieved optimal in these settings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and feedback on our paper. **Q1.** The authors should distinguish if the difference/improvement from previous work are "double loop vs single loop", and/or faster rates and/or weaker assumptions. For the DWC setting, it seems it is "double loop vs single loop", and/or faster rates and weaker assumptions, where as for WCSC, it is "double loop vs single loop". **A:** The improvement over baselines can be seen from the difference in terms of assumptions, complexity and number of loops in Table 1 and Table 2. For the DWC setting, our improvement over SDCA [25] lies at weaker assumption and single loop, over SSDC-X lies at weaker assumption, faster rate and single loop, over SBCD lies at faster rate and single loop. In the WCSC setting, our improvement over PG-SMD/Epoch-GDA lies at single loop, over SAPD+ lies at weaker assumption and single loop, and over StocAGDA lies at weaker assumption. **Q2.** The authors should mention the notion of optimalities in various settings (nearly critical and near stationarity) eraly on, and remark that they are standard. **A:** Thank you the suggestion. We will revise our draft accordingly. **Q3.** Are the proof ideas useful to give single loop algorithms for non-smooth convex minimization via Moreau smoothing, with optimal rates? **A:** Thanks for the good question! We have not explored the non-smooth convex minimization via Moreau smoothing. It might be useful in the sense that our technique can be used to argue that with one step update of $x_t'$ for solving $\min_{x'}f(x') + \frac{1}{2\gamma}\|x_t - x'\|^2$, we can establish recursion of $|x'_t - $ $ prox_{\gamma f}(x_t)\|^2$, where $prox_{\gamma f}(x_t)$ is the optimal solution to $\min_{x}f(x') + \frac{1}{2\gamma}\|x_t - x'\|^2$. This is the error of gradient estimator of the function $F_{\gamma}(x_t) = \min_{x}f(x') + \frac{1}{2\gamma}\|x_t - x'\|^2$. For deep analysis, we will leave it as a future work. **Q4.** Is the $O(\epsilon^{-4})$ rate achieved optimal in these settings? **Response.** We believe so. $O(\epsilon^{-4})$ rate has been proved to the optimal even in the smooth setting for non-convex optimization [r1]. Reference. [r1]: Arjevani et al. Lower bounds for non-convex stochastic optimization. 2019.
Summary: The paper studies a class of optimization problem named DMax, i.e., to minimize a loss that is a difference of two max functions. This problem covers both difference of weakly-convex optimization and weakly-convex strongly-concave minimax optimization. Existing algorithms require double-loop structure to solve the inner problem within certain accuracy. This paper proposes a simple single-loop stochastic algorithm that achieves state-of-the-art convergence rates. Empirical experiments demonstrate the effectiveness of the proposed algorithm. Strengths: The paper considers a class of difference of max-structured weakly-convex optimization problems, which can be interesting and novel in its problem fomulation. The proposed algorithm is simple to implement as it is single-loop structured and also clear to understand. The convergence rate matches exsiting works that use more complicated structures of the algorithms. Weaknesses: 1. There are some typos that may affect readibility. (1) Is the definition of $\Psi$ in line 3 of the abstract correct? Shouldn't it be $\max_z\psi$? (2) I guess that it should be "Single-Loop" in the title? (3) In line 27, there is an empty citation for adversarial learning. (4) In line 49, "sufficient to to". (5) "covnex" in line 4 of Table 2. (6) Should it be $\partial_x \psi$ in line 4 of Algorithm 1 and line 3 of Algorithm 2? 2. Although the problem fomulation is interesting in optimization theory, the current applications seem to be limited to only applications of the special cases DWC and WCSC min-max optimization. Therefore, the problem can be a little bit artificial in the sense that these applications do not necessarily require such a formulation. It can be done by studying DWC and min-max separately, and the major benefit is that DMax provides a unified way. Are there examples that DMax gives more applications beyond DWC and min-max optimization? 3. (Minor) Although I like the proposed algorithm, which is simple and clear to understand, it seems like all the technical steps exist before. The paper combines these known results and uses it for a new class of problem. For example, it is well-known that single-loop updates can give the same rate for strongly-concave problems compared to double-loop algorithms that first solve the inner-problems to a required accuracy. It is also known that weakly-convex problems can be handled using Moreau envelope and usually enjoy the same rate as smooth problems. As a result, the convergence rates and theoretical analysis in the paper are not surprising. I am the emergence reviewer. I do not carefully check all the technical details and proofs in the paper. However, the algorithmic ideas and analysis outline look good to me. I will be on the positive side with a low confidence rate. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and feedback on our paper. **Q1.** There are some typos that may affect readability **A:** Thank you for pointing out these typos. We will fix them in our revision. **Q2.** Are there examples that DMax gives more applications beyond DWC and min-max optimization? **A:** One example can be found in [43] for optimizing partial AUC in a range of false positive rates for binary classification. The objective can be written as $\sum_{i=1}^{N^+}\phi_{n}(s_{i}) - \phi_m(s_{i})$, where $s_i=(s_{i1},\ldots, s_{iN^-})$ denotes the pairwise loss between a positive data $x_i$ and all $N^-$ negative samples, and $\phi_n$ is the an operator that outputs the sum of the top-$n$ items in the input vector. Since $\phi_{n}(s_{i}) =\max_{p_j\geq 0, \sum p_j = n}p_j s_{ij}$, as a result, $\phi_{n}(s_{i}) - \phi_m(s_{i})$ is a structure of DMax. **Q3:** About the technical novelty. **A:** It is true that single-loop updates can give the same rate for strongly-concave problems compared to double-loop algorithms that first solve the inner-problems to a required accuracy. However, existing single-loop algorithms for min-max problems all require smoothness of the objective function. In our work, we did not use the smoothness assumption in terms of $x$ and hence the analysis is completely different. In terms of technical analysis, the Lemma 4.4 is novel, which does not exist in any existing work. In addition, we introduce a special potential function $P_t$ as in Eq. (23). These make it possible for us to prove the convergence of a special error function $E[\|x_\phi^t - prox_{\gamma\phi}(x_{t-1})\|^2 + \|x_\psi^t - prox_{\gamma\psi}(x_{t-1})\|^2+ \|\nabla F_{\gamma}(x_t)\|^2]\leq \epsilon$ for a randomly chosen $t$. --- Rebuttal Comment 1.1: Comment: Thanks for your response! I have no additional problems. I will keep my score, which is on the positive side. I am willing to support the acceptance of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our responses and the positive feedback.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. As Reviewer gunK noted, the training curve of SMAG for the FER2013 dataset in Figure 1 exhibits unusually high variance. After careful checking, we identified a bug in the code, which loads a wrong file of one trial result for plotting. We have included the corrected figure in the attached pdf file. We also include the mean and std of loss values of our method on different datasets in the file. Pdf: /pdf/8bcf66042169424685657844b716380550c056a7.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Understanding Visual Feature Reliance through the Lens of Complexity
Accept (poster)
Summary: This paper proposes a method to measure the complexity of features extracted by deep learning models. This method is based on $\mathcal{V}$-information, an extension of Shannon's mutual information that takes the computational capabilities of a decoder into account. The proposed measure of feature complexity is inversely related to the cumulative $\mathcal{V}$-information across a model's layers, the intuition being that features that become available at earlier layers will accumulate a larger $\mathcal{V}$-information, while features that only become available late will have a smaller $\mathcal{V}$-information. Features are extracted from a standard, ImageNet-trained ResNet50, from which an overcomplete dictionary is learned. These features are then clustered, and the mean complexity score for each feature is computed. Feature clusters with low, intermediate and high complexity scores are visualized, revealing that low-complexity features tend to be related to uniform colors or low-frequency information, intermediate-complexity features to local shapes such as eyes and noses and high-complexity features to highly structured shapes, such as insect legs. Using the CKA between the features in the dictionary and the activations at different layers within the main branch and residual branch of the network, the authors find that Easy features tend to emerge in early layers and then copied to later layers through the residual branch, while Complex features steadily increase throughout layers in both branches. Analyzing the dynamics of different features' emergence throughout training, complex features are found to emerge later than simple ones. Interestingly, analyzing the interplay between features' complexity and their importance in influencing the network's outputs, the authors find that (a) more important features tend to be less complex, and (b) the complexity of important features is reduced throughout training, suggesting that the network might be "compressing" the more important features. Strengths: - The paper presents an elegant measure of feature complexity in deep neural networks, which takes into account the networks' computational expressiveness in different layers. - The paper is extremely well written, with detailed methods and supplementary materials, exhaustive analyses and clear visualizations. - The Related Works section is particularly exhaustive and thorough. - It successfully combines several recently proposed interpretability tools. - The analysis of the interplay between complexity and importance throughout learning was particularly interesting, suggesting a mechanism for the compression of important features across learning epochs in deep neural networks. Weaknesses: - The proposed measure is closely related to the accuracy with which a feature can be linearly decoded from the layers of a network. I believe the paper would benefit from an explicit discussion of what, exactly, is the additional information provided by the proposed method which cannot be gained from directly looking at the "raw" decoding accuracy. - For simplicity, the authors restrict their analysis to a single ResNet model. While this is an understandable choice, I am not sure about what the implications are for models which do not include residual connections. The residual stream is found to be central in "teleporting" simple features to later layers. The authors should explicitly discuss what they predict the pattern of results would be for networks without residual connections. For example, would simple features be overall less relevant to the network's responses? Or would the network implicitly implement a residual-like stream? The limitations of using this architecture exclusively are briefly acknowledged in the Limitations section, but the specific role of residual connections is not discussed there. - The proposed measure is based on a loose assumption that the decodability of features tends to increase across layers. However, it is possible that certain features are _only_ available in early layers (for example, certain low-level image features might be discarded). These features would have a low cumulative $\mathcal{V}$-information, and might thus be erroneously classified as high-complexity. The authors should explicitly discuss whether this is a concern at all, and for what reasons. - The plot in Figure 5B shows that more important features tend to become less complex as training progresses. As the authors have access to the features which are subject to this process, and those which are not, it would be extremely interesting to visualize them, showing what exactly is happening - are the same features being extracted at earlier layers? Or are they actually changing, and starting to resemble simpler features such as colors or edges? The answer to this question would clarify more precisely what the proposed complexity measure is capturing. Is it possible that features we visually evaluate as "complex" can be extracted at early layers if they are important to the network's task? Or is layer depth always correlated with visual complexity? Technical Quality: 4 Clarity: 4 Questions for Authors: No questions, beyond the requests for clarification listed in the weaknesses section. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed adequately. Please refer to the weaknesses section for comments about limitations which I believe were not sufficiently discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "The proposed measure is closely related to the accuracy with which a feature can be linearly decoded from the layers of a network. I believe the paper would benefit from an explicit discussion of what, exactly, is the additional information provided by the proposed method which cannot be gained from directly looking at the "raw" decoding accuracy." We are not sure which accuracy you refer to. The classification accuracy is not smooth, nor continuous, in network predictions. This typically makes its interpretation more noisy. Indeed, for hard tasks, the accuracy may be near the one of a random predictor, while the loss itself might be significantly lower than the one of a random predictor. Furthermore, accuracy only makes sense for classification tasks over discrete sets. But predicting a concept coefficient is not as simple as predicting if it is “present” or “absent”. Instead, it is a continuum where a concept can be more or less influential. In this case, we face a regression task, as explained in Appendix C. This smoothness allows better handling of inaccuracies in the concept extraction stage. If you refer to MSE, it is indeed closely related to $\nu$-information in the Gaussian posterior case (appendix C), but thanks to the “optional ignorance” property of $\nu$-information, we are certain that small MSE can be attributed to knowledge of the input, and not to simplicity of the task. > "I am not sure about what the implications are for models which do not include residual connections. The residual stream is found to be central in "teleporting" simple features to later layers. The authors should explicitly discuss what they predict the pattern of results would be for networks without residual connections. For example, would simple features be overall less relevant to the network's responses? Or would the network implicitly implement a residual-like stream? The limitations of using this architecture exclusively are briefly acknowledged in the Limitations section, but the specific role of residual connections is not discussed there." Thanks for this suggestion. We added a paragraph in the limitations section to discuss the significance of residual connections, and the questions you raise. > "The proposed measure is based on a loose assumption that the decodability of features tends to increase across layers. However, it is possible that certain features are only available in early layers (for example, certain low-level image features might be discarded). These features would have a low cumulative -information, and might thus be erroneously classified as high-complexity. The authors should explicitly discuss whether this is a concern at all, and for what reasons." This is a good question. To clarify, we do not have this problem because we only seek to decode features available at the last layer. We create the dictionary at the penultimate layer, ensuring each feature exists at the end. We then calculate their complexity, avoiding issues with features that don't exist at the penultimate layer. However, you highlight an interesting point: what about features present at layer 2 but not at layer 5 for example? This is outside the scope of our current study, but an interesting question for future work. > "The plot in Figure 5B shows that more important features tend to become less complex as training progresses. As the authors have access to the features which are subject to this process, and those which are not, it would be extremely interesting to visualize them, showing what exactly is happening - are the same features being extracted at earlier layers? Or are they actually changing, and starting to resemble simpler features such as colors or edges? The answer to this question would clarify more precisely what the proposed complexity measure is capturing. Is it possible that features we visually evaluate as "complex" can be extracted at early layers if they are important to the network's task? Or is layer depth always correlated with visual complexity?" Concerning the tracking of type of features, we indeed have some figures available showing this, but the problem is that tracking concepts (i.e., not just averaging complexities but genuinely tracking and linking features across epochs) requires an algorithm with hyperparameters. Some features may disappear at certain points. We chose to show only the average and top and bottom complexity curves to avoid relying too much on the linking method for our analysis. However, this is an excellent remark, and we would like to investigate this further in future work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. Their answers are very helpful to clarify the few points I was unsure about, or to acknowledge when open questions still remain (as in the role of residual connections and the tracking of features during training). Just for clarification, in my point about decoding accuracy I was indeed referring to MSE. I apologize for using the "accuracy" terminology which suggests a discrete classification, that was probably confusing. Given that my review was very positive in the first place, and my concerns were quite minor, I will keep the same score.
Summary: The current work proposes a method to assess the complexity of features in a representation space in terms of usable information. The measure of complexity is related to how far back in the layers of a trained network one can find information about the feature that is recoverable by a linear decoder. Equipped with a measure of complexity per feature, the authors grab 10k distributed features in the penultimate layer of a ResNet50 and then examine various relationships such as the growth in complexity over the course of training, the relative importance of simple features in determining the output, and the “flow” of simple features through the residual backbone of the network. Strengths: The premise -- of a practical way to assess complexity via tiers of usable information, and then its employment as a route to a better understanding of learned features -- is very interesting. To evaluate the complexity measure, the authors cleverly use the trained layers in the network to approximate a hierarchy of function classes, such that the amount of usable information between the input and the feature must increase as you move deeper into the network, and at each level all that is needed is a correlation measurement. The complexity measure is thus reasonable and straightforward to measure in practice. The authors are upfront about a central assumption related to optimal processing by the trained layers. The writing is clear and the references to related literature are thorough and extensive. Weaknesses: Many of the analyses are questionable, in my opinion, making the work seem to have a strong premise with a lackluster follow up. I will be happy to be rebutted on these points with justification and clarifications during the discussion period. Specifically, the “what” and “where” analyses are odd to me. My main gripes are below, with smaller points in the "questions" section. - Regarding the qualitative exploration of dictionary vectors, and their grouping into clusters (for “what”): is there a good reason to think relative positions mean anything in feature space, and therefore that clustering is meaningful? Were the dictionary vectors normalized beforehand, or else did the variable magnitude affect the UMAP and clustering? Why were 150 clusters used? Did a sweep over k or any other analysis indicate that the vectors are indeed clustered to some degree, and 150 is an appropriate choice? Unless I missed it, it’s not explained how the meta-clusters were labeled -- was it just visual inspection, and the remaining 120 clusters simply weren’t interpretable? - Regarding the use of CKA for “where”: The proposed complexity measure is already based on how far back in the processing the feature is still recognizable to a linear probe. Why show CKA instead of the usable information as a function of depth for simple vs complex features? It’s not intuitive to me to infer anything about the similarity of the “residual” branch aka simply the feature vec one layer deeper, compared to that of the delta (“main branch”), which is not used as a feature by the network. It seems far more straightforward to look at any changes in the usable information as a function of depth as arising from the delta at that layer. Why not just show the curves for each of the features? The curves would also show the way in which usable information is growing with depth -- is it gradual, or stepped? All said, it’s hard to find much insight from the results of Fig 4, and a straightforward alternative exists. Technical Quality: 4 Clarity: 2 Questions for Authors: - In Appendix A: “The dictionary D was designed to encapsulate 10 concepts per class”. Does this mean the NMF was run separately for each class, and then the dictionaries were all combined? If that’s the case, are there multiple copies of some features (e.g. “wheel” being found for every vehicle class)? - How were the visualizations of Fig 1 produced, particularly the earlier manifestations z2 and z3? How should we interpret a feature’s identity in an earlier layer when it is not recognizable in that earlier layer (and therefore a function of many features)? - Fig 6: What are we supposed to infer from the epoch 1 results? If everything is still essentially random at this stage of training, then we're just seeing meaningless values plotted against each other, right? - The inhibitory/excitatory results (Fig 12) are striking -- **every single inhibitory feature has complexity less than ~0.5?** While the other findings are rough trends extracted from point scatter, this finding is relegated to the appendices and given minimal attention? Questions out of interest rather than needing to fill a gap in the paper: - Regarding the higher importance of simpler features: are they simply present more often, or are they more influential? I wonder if there's a frequency-complexity relationship that would be insightful. - Did you do any of the same analyses with the individual neurons in the penultimate layer (i.e. comparing local vs distributed)? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: There is a reasonable discussion of limitations in the Appendices. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the time and effort you invested in reviewing our work; thank you. > "Regarding the qualitative exploration of dictionary vectors, and their grouping into clusters (for “what”): is there a good reason to think relative positions mean anything in feature space, and therefore that clustering is meaningful? Were the dictionary vectors normalized beforehand, or else did the variable magnitude affect the UMAP and clustering? Why were 150 clusters used? Did a sweep over k or any other analysis indicate that the vectors are indeed clustered to some degree ? " Clustering was used to qualitatively inspect what complex and simple features look like, and it represents a qualitative part of our study. Clustering in embedding space is a common practice, and normalization does not change the cluster membership. We found that meta-clusters were robust to choice of cosine or L2 normalization. Regarding the choice of 150 clusters, we aimed to avoid overcrowding figures and selected 30 meta-features that span different degrees of complexity, sampling randomly at each level. Each cluster was qualitatively interpreted, but the labeling of clusters is subjective. We didn't use external models for labeling, and while these labels provide useful insights, they are inherently qualitative and should be interpreted as such. We have clarified this aspect in the manuscript to emphasize the qualitative nature of this part of the analysis and to acknowledge the limitations of the clustering and labeling process. > "Regarding the use of CKA for “where”: The proposed complexity measure is already based on how far back in the processing the feature is still recognizable to a linear probe. Why show CKA instead of the usable information as a function of depth for simple vs complex features? It’s not intuitive to me to infer anything about the similarity of the “residual” branch aka simply the feature vec one layer deeper, compared to that of the delta (“main branch”), which is not used as a feature by the network." Good question as you are indeed correct. We acknowledge that $\nu$-information could provide a valuable perspective, and could be used for that. However, to avoid circular reasoning and to employ a well-established metric in the literature, we chose CKA. Moreover, our results with $\nu$-information are consistent with those obtained using CKA, and we have included these additional findings in the appendix as suggested. The choice of CKA was also driven by its established use in analyzing neural network activations, ensuring our findings are interpretable and comparable within the broader research context. > "In Appendix A: “The dictionary D was designed to encapsulate 10 concepts per class”. Does this mean the NMF was run separately for each class, and then the dictionaries were all combined? If that’s the case, are there multiple copies of some features (e.g. “wheel” being found for every vehicle class)?" The dictionary design process utilized the published Craft method, ensuring 10 concepts per class to achieve balanced reconstruction across classes. While this approach can result in duplicated concepts, it ensures comprehensive coverage of class-specific features. We believe this redundancy does not detract from the overall utility of the dictionary; rather, it enhances the robustness of feature representation by capturing subtle variations within shared concepts across different classes. > "Fig 6: What are we supposed to infer from the epoch 1 results? If everything is still essentially random at this stage of training, then we're just seeing meaningless values plotted against each other, right?" In this figure, we contrast features at the beginning versus end of training, to get a picture of what has shifted. The end of epoch 1 represents the point where the dataset has been seen once, and several gradient steps have been taken. The features used at the end of epoch 1 have equal chances of being simple or complex (decodable at layer 1 or layer 10, for example). In contrast, by the end of training, we observe a simplicity bias, where important features are more likely to be decodable early. This demonstrates how the network evolves to prioritize simpler features over time, highlighting the dynamic nature of feature learning and the emergence of important features throughout the training process. > "The inhibitory/excitatory results (Fig 12) are striking -- every single inhibitory feature has complexity less than ~0.5?" Thank you for this remark. We also find this observation intriguing. To be honest, we are not certain whether this is a specific property of ResNet50, ImageNet (because of 1k classes), or a more general phenomenon. This warrants further investigation and could be a fruitful avenue for future research. > "Regarding the higher importance of simpler features: are they simply present more often, or are they more influential? I wonder if there's a frequency-complexity relationship that would be insightful." This is an excellent question. Previous work has already pointed to these issues [1], and concurrent research has posed similar questions with controlled toy examples [2]. It appears that the relationship between frequency and complexity is indeed complex, involving both factors. Simpler features may be more frequently present and also more influential in driving the network's decisions. We believe this is a fundamental question for deep learning. [1] Hermann, K., Mobahi, H., Fel, T. Mozer, M. (2023). On the Foundations of Shortcut Learning. [2] Lampinen, A. K., Chan, S. C., and Hermann, K. (2024). Learned feature representations are biased by complexity, learning order, position, and more. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I would have liked to see in a rebuttal pdf the usable information vs depth comparison to CKA ("we have included these additional findings in the appendix as suggested"), but I'll take your word for it, and look forward to checking it out in the published version. All told, this paper (refreshingly) gave me a lot to think about, and I'm more convinced after the response about the soundness of the analyses. I'll raise my score to a 7; good luck with acceptance. --- Rebuttal 2: Comment: Thank you for your feedback! We're glad our responses helped clarify the analysis. We'll make sure the comparison with CKA is included in the appendix as suggested. We appreciate your support and the score increase. Thanks again, and we hope the final version meets your expectations!
Summary: The paper introduces a metric based on V-information to measure feature complexity in deep learning models. Using ResNet50 trained on ImageNet, it explores feature spectrum, training dynamics, network flow, and decision impact. The study also highlights the role of simplicity bias and the evolution of feature importance in neural networks. Strengths: 1) Clear and well defined objective 2) Informative Figures 3) Comprehensive supplementary material Weaknesses: 1) Not Easy To Follow On Math: The paper's mathematical sections are difficult to follow. Clearer and intuitive explanations would improve accessibility. 2) Limited Model Diversity: The study focuses solely on ResNet50, ignoring modern architectures like depth-wise separable CNNs. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) The paper states that both $z$ and $D^*$ are positive due to the use of Non-Negative Matrix Factorization, which aligns with the nature of ReLUs. Could you elaborate on why negative values for $D^*$ wouldn't work, despite the positive nature of ReLU? It's not clear why allowing $z$ and $D^*$ to be negative would be problematic, especially considering that $(-z)(-D^*) = zD^*$. 2) In line 126, $z$ is defined as $z \in \mathbb{R}$. Given that $z$ is described as positive elsewhere, wouldn't $z \in \mathbb{R^+}$ be more precise? Or are you intentionally allowing negative values for $z$? 3) The nature of $z_1$, $z_2$, and $z_3$ in Figure 1 is unclear. If these represent the $i$-th element of $z$, they would be scalars, which doesn't seem to align with what's depicted. Could you clarify what these variables represent? 4) Figure 1 shows feature visualization across different layers, and the paper mentions earlier layers elsewhere. However, line 622 states that feature extraction was done only for the penultimate layer. Could you explain this and clarify how earlier layer information was obtained if extraction was limited to the penultimate layer? Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have addressed limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "Not Easy To Follow On Math: The paper's mathematical sections are difficult to follow. Clearer and intuitive explanations would improve accessibility." We thank the reviewer for this feedback. We strove to keep the math portions of the paper as accessible as possible, but welcome feedback if there were particular points of confusion. > "The paper states that both and are positive due to the use of Non-Negative Matrix Factorization, which aligns with the nature of ReLUs. Could you elaborate on why negative values for wouldn't work, despite the positive nature of ReLU? It's not clear why allowing and to be negative would be problematic, especially considering that." We are applying a published method widely recognized to generate concepts that are interpretable, sparse, and compositional; see [1, 2] for an overview of the large literature on the subject. Theoretically speaking, the importance of using a consistent semi-ring on both sides (non-negative semi-ring or negative semi-ring, as you pointed out) ensures compositionality and interpretability, preventing the cancellation of concepts (we only add value on top of each other; they don't cancel). Practically speaking, NMF (i) aligns well with the properties of ReLU activations, which are sparse and positive. NMF (ii) allows us to decompose activations without orthogonality constraints, which is crucial since model activations can collapse, making concepts non-orthogonal. Moreover, NMF (iii) supports compositionality, enabling multiple concepts to coexist within an activation. [1] Lee, D. D., & Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature. [2] Gillis, N. (2014). Nonnegative Matrix Factorization. > "In line 126, is defined as. Given that is described as positive elsewhere, wouldn't be more precise? Or are you intentionally allowing negative values for?" This is a good remark. We have corrected this inconsistency to ensure clarity. > "The nature of, , and in Figure 1 is unclear. If these represent the -th element of, they would be scalars, which doesn't seem to align with what's depicted. Could you clarify what these variables represent?" You are correct, and we apologize for the lack of clarity. $z_1, z_2$, and $z_3$ ​ are indeed scalar values representing directions in the activation space (also called “concepts”). The images shown are feature visualizations, which are optimized to maximize the activation of these concepts (i.e., finding $x$ such that $x = \arg\max z_i(x)$. We have clarified this in the manuscript to avoid confusion. > "Figure 1 shows feature visualization across different layers, and the paper mentions earlier layers elsewhere. However, line 622 states that feature extraction was done only for the penultimate layer. Could you explain this and clarify how earlier layer information was obtained if extraction was limited to the penultimate layer?" Thank you for this remark. We have corrected this inconsistency in the manuscript. Feature extraction was indeed performed at the penultimate layer. The visualizations in Figure 1 aim to decode features $z_1, z_2$, and $z_3$ that exist at the penultimate layer at lower layers, such as blocks 1 and 3 of the ResNet50. We use linear probing at these earlier layers to visualize how features develop and are transformed across the network. This method clarification has been added to the manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal and their time. Their clarification were indeed insightful for better understanding the paper. After reading other discussions, and reading the paper one more time, I'm even more unsure of my assessment. I will stand by my current rating as I think the authors put a lot of effort in this paper, and their results are reasonable, but I will downgrade my confidence to 1. I hope the best for the authors. Best regards,
Summary: The paper introduces a novel metric for quantifying feature complexity in deep learning models, specifically focusing on an ImageNet-trained ResNet50 model. This V-information-based metric captures whether a feature requires complex computational transformations for extraction. The study addresses four key questions: (1) The appearance of features as a function of complexity. (2) The learning timeline of these features during training. (3) The flow of simple and complex features within the network. (4) The relationship between feature complexity and their importance in the model's decision-making. The study reveals that simpler features dominate early in training and are transported through the network via residual connections, while more complex features emerge gradually and require more computational effort. Interestingly, the most important features tend to be simpler and become accessible earlier in the network, suggesting a sedimentation process. Strengths: (1) Originality: The introduction of the V-information metric for assessing feature complexity is novel and provides a fresh perspective on understanding neural network behavior. The exploration of feature complexity across training epochs and network layers adds significant depth to existing knowledge. (2) Clarity: The paper is well-organized, clearly presenting its goals, methods, and findings. Visualizations and qualitative analyses effectively illustrate the differences between simple and complex features. (3) Significance: Understanding feature complexity and its impact on model performance and decision-making is crucial for developing more interpretable and efficient models. This work contributes to explainable AI by providing insights into how features are learned and utilized. Weaknesses: (1) Generalizability: While the findings are insightful, they are based on a single architecture (ResNet50) and dataset (ImageNet). The results might not generalize to other models or tasks without further validation. (2) Complexity Metric: The assumption that each layer optimally represents features for downstream linear probes may not hold universally, potentially leading to overestimated complexity scores in some cases. More empirical validation across different models and tasks could strengthen the proposed metric's robustness. (3) Temporal Analysis: The focus on specific epochs (e.g., epoch 90) may overlook important dynamics occurring at intermediate stages of training. A more granular analysis could provide a clearer picture of feature evolution, i.e., Can you analyze the changes in importance during the training process? Technical Quality: 2 Clarity: 3 Questions for Authors: (1) How would the proposed V-information complexity metric perform on different neural network architectures, such as Transformers or GANs? (2) Can the authors provide more details on how the complexity metric correlates with other existing complexity measures in literature? Which one provides better insights? (3) How does the model's simplicity bias, as observed in the study, impact its generalization performance on unseen data? (4) Are there specific strategies that could be employed to mitigate the simplicity bias and encourage the learning of more complex, yet important features? (5) Does the conclusion of section 4 still hold on networks outside ResNet? How to explain the architecture of the transformer regarding the residual connection? (6) Why does the horizontal axis in Figure 4 show several blocks instead of all blocks? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: While the paper provides significant insights into feature complexity in deep learning models, there are several limitations that need to be addressed: Model and Dataset Scope: The study focuses solely on a ResNet50 model trained on the ImageNet dataset. This raises concerns about the generalizability of the findings to other neural network architectures (e.g., Transformers, GANs) and different types of datasets. Future work should validate the proposed metric across a variety of models and tasks to ensure broader applicability. Assumption on Optimal Representation: The complexity metric relies on the assumption that each layer in the network provides an optimal representation for downstream linear probes. However, this assumption may not always hold true, potentially leading to overestimated complexity scores. Further empirical validation and adjustments to the metric may be necessary to account for cases where this assumption is violated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "While the findings are insightful, they are based on a single architecture (ResNet50) and dataset (ImageNet). The results might not generalize to other models or tasks without further validation." "The assumption that each layer optimally represents features for downstream linear probes may not hold universally, potentially leading to overestimated complexity scores in some cases. More empirical validation across different models and tasks could strengthen the proposed metric's robustness." This is a valid point, which we discuss in lines 325-331 and in Appendix I. We agree and would like to extend our study to include diverse models in the future. However, our perspective with this work is that understanding why ResNet50 generalizes on ImageNet is already a significant challenge. > "The focus on specific epochs (e.g., epoch 90) may overlook important dynamics occurring at intermediate stages of training. A more granular analysis could provide a clearer picture of feature evolution, i.e., Can you analyze the changes in importance during the training process?" Thank you for your comment. However, there might be a misunderstanding. Sections 5 and 6 of our paper directly address this issue. We indeed perform a dynamical study of features across epochs, and we show (i) in Section 5 that the most complex features emerge later in terms of epochs, while section 6 demonstrates both (ii) the simplicity bias over epochs and (iii) the dynamic compression of important features throughout the epochs. > "How would the proposed V-information complexity metric perform on different neural network architectures, such as Transformers or GANs?" We're not entirely sure if we understand the question. What do you mean by 'perform'? The metric, as introduced in the seminal paper [1], measures mutual information under a complexity constraint. We use it to determine the simplicity or complexity of features. How would you suggest evaluating the performance of our metric? If you're asking about applicability, our metric can be easily applied to transformers or GANs without any issues, as it only requires having access to intermediate activations. [1] A Theory of Usable Information under Computational Constraints, Xu et al., 2020 > "Can the authors provide more details on how the complexity metric correlates with other existing complexity measures in the literature? Which one provides better insights?" You raise a valid point, as complexity can be defined in various ways. Recent research employs category theory to introduce a redundancy-based metric, which merges neurons until a distance gap surpasses a threshold, using this gap as a hyperparameter [1]. In Appendix E, we demonstrate a correlation between our metric and the general redundancy score presented in [2]. However, this method has two limitations: (i) it requires a hyperparameter, and (ii) features considered redundant by this metric may still be complex according to our measure. Our complexity metric is more focused on the computational operations needed to decode a feature optimally. In this regard, our complexity metric can be viewed as a relaxation of computational complexity metrics, such as Kolmogorov complexity. [1] Going beyond neural network feature similarity: The network feature complexity and its interpretation using category theory. Chen et al. (2023). [2] Diffused redundancy in pre-trained representations. Nanda et al. (2024) > "How does the model's simplicity bias, as observed in the study, impact its generalization performance on unseen data?" This phenomenon has been extensively studied in previous research, which tends to show that a simplicity bias can negatively impact model performance on unseen data. Models that overly rely on simpler features may fail to capture the more complex patterns necessary for robust generalization. We have incorporated references to these relevant studies [1,2] in our Related Work section to provide additional context and support for this observation. [1] The Pitfalls of Simplicity Bias in Neural Networks. Shah, H., Tamuly, K., Raghunathan, A., Jain, P., Netrapalli, P. [2] Overcoming Simplicity Bias in Deep Networks using a Feature Sieve. Tiwari R. ,Shenoy, P. (2023). > "Are there specific strategies that could be employed to mitigate the simplicity bias and encourage the learning of more complex, yet important, features?" One potential strategy we could imagine to mitigate simplicity bias using our metric is to backpropagate through the score of the $\nu$-information. This score is differentiable, allowing for backpropagation (backprop through single matrix inversion). While this method can theoretically prioritize the learning of more complex features, it presents significant challenges in terms of memory and scalability. We could imagine addressing this with an online score (or batch estimation) instead of a real Complexity score.
Rebuttal 1: Rebuttal: ### General comments Thank you to the reviewers for taking the time to read and review our paper. Your critiques are sharp and insightful. You found the paper "extremely well-written" and "well presented and easy to follow." Regarding the results, you noted that it "adds significant depth to existing knowledge," supported by "thorough and extensive references to related literature." Finally, you found the results on the compression of important features "particularly interesting, suggesting a mechanism for the compression of important features across learning epochs in deep neural networks.". However, we acknowledge that you have raised certain points and identified weaknesses in the paper. During the rebuttal period, we carefully examined the critiques provided by all five reviewers. We have diligently incorporated the necessary citations and clarifications into our Related Work and Discussion sections. We believe we have addressed all the comments in a satisfactory manner We will now proceed to address each reviewer's comments directly, providing detailed justifications and clarifications for the points raised. We appreciate the opportunity to improve our paper through this constructive feedback. ### About dataset and model > "Does the conclusion of section 4 still hold on networks outside ResNet? How to explain the architecture of the transformer regarding the residual connection?" > "Limited Model Diversity: The study focuses solely on ResNet50, ignoring modern architectures like depth-wise separable CNNs." We recognize this concern, as outlined in our initial two limitations points. Analyzing ResNet50 during training, as done here, already entails examining over a million features from a standard model used in practice. We believe that a thorough study of this single, large-scale model during its training can already provide valuable insights. Our perspective is that understanding why a ResNet50 generalizes on ImageNet is already a significant challenge. By focusing on a single model, we aim to derive credible hypotheses in the study of simple and complex features, their order of appearance, types of features, and the link between complexity and generalization. However, we agree that this study should be extended to other architectures in future work to generalize these findings.
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper proposes a new measure of feature complexity based on an information-theoretic metric, the $\nu$-information metric. Utilizing this complexity measure, the paper shows (1) visualization of features of different complexities, (2) simple features are propagated through the residual connections to reach the final layer, (3) simple features are learned earlier than complex features during training, and (4) simple features usually have a higher importance (weight) to the output score of the model. Strengths: 1. The paper is well presented and is quite easy to follow. I appreciate the summary of different experiments into the words “what” “where” and ”when.” 2. The paper provides abundant visualization to help readers grasp the intuitive behind simple features and complex features. Weaknesses: My major concerns for this paper are its coherence and significance. 1. The paper lacks coherence because multiple components (e.g., metrics, algorithms) introduced in this paper do not come from the same theoretical framework. For example, the notion of feature complexity in this paper (the $\nu$-information metric) is drawn from information theory, while the method for extracting features (the Craft method) is based on non-negative matrix factorization and has little connection with information theory. The same issue applies to the feature visualization method for demonstrating features of different complexities, and CKA metric, and the importance metric $\Gamma(z_i)$. These metrics/algorithms are borrowed from different theoretical frameworks and contexts, so it is in doubt whether they can be used together. I would appreciate it if the authors are able to re-organize all components of the paper under a coherent framework, e.g., the information theoretic perspective (if at all possible). 2. My second concern is about the paper’s significance, since many claims have been discussed in previous works and appears non-novel. The conclusion that simple features are usually color/edge detectors that are located in earlier layers is not surprising, nor do the conjecture that simple features tend to propagate through the residual connection. For the conclusion that simple features are learned earlier than complex features, there are many studies leading to this conclusion, from the perspective of spectral bias [cite1], game-theoretic interactions [cite2], or frequency [cite3]. Overall speaking, I do not learn new insights from the paper. Nevertheless, as mentioned in the 1st point in weaknesses, if the authors are able to re-organize the paper from the purely information-theoretic perspective, it will be a more intriguing work. 3. Using K-means clustering to aggregate features into meta-features may lead to incorrect clustering results. K-means clustering is known for its poor performance for data clusters that are not circular-shaped and are of discrepant sizes. I am not sure how features are distributed in this paper’s experiments, but more advanced clustering strategies such as the spectral clustering or hierarchical clustering can be applied to mitigate this issue. [cite1] Rahaman et al. On the Spectral Bias of Neural Networks. ICML, 2019. [cite2] Liu et al. Towards the Difficulty for a Deep Neural Network to Learn Concepts of Different Complexities. NeurIPS, 2023. [cite3] Xu et al. Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks. ICLR, 2020. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The paper claims that simple features tend to propagate through the residual connection, while complex features tend to propagate through the main branch of the network. Then a natural question arises: how are simple and complex features propagate in CNNs without residual connections (e.g., AlexNet, VGG)? 2. About the visualization of simple and complex features in Appendix B. The feature *fences* is among the most simple features, while the features *whiskers* and *dotted texture* are among the most complex features. However, I do not see an essential difference between the features *fences*, *whiskers*, and *dotted texture* from the sample images in Figure 7 and Figure 8. For example, *fences* and *dotted texture* both seem to consist of lines that cross over each other to form holes. Similarly, the *whiskers* feature is also composed of lines, although the lines do not cross over each other but appear parallel. Why is *fences* a simple feature, but *whiskers* and *dotted texture* complex features? Another question is that previous studies show that CNNs are usually biased towards textures, i.e., they tend to learn textures as a shortcut solution, but why is the *dotted texture* here measured to be a complex feature? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "The paper lacks coherence because multiple components (e.g., metrics, algorithms) introduced in this paper do not come from the same theoretical framework. [...] These metrics/algorithms are borrowed from different theoretical frameworks and contexts, so it is in doubt whether they can be used together. I would appreciate it if the authors are able to re-organize all components of the paper under a coherent framework, e.g., the information theoretic perspective (if at all possible)." Regarding the different theoretical frameworks for the metrics and algorithms introduced, we believe that (1) these components can indeed be integrated together as is usually done and (2) we could if we wanted to write the entire framework from an information theoretic perspective. For instance, dictionary learning methods have a strong connection with information theory [1,2]. Additionally, CKA is based on HSIC, which is closely related to mutual information ($HSIC = MMD(P, \prod P_i$) and mutual information $KL(P, \prod P_i))$. The framework of $\nu$-Informations is not tied-up to information theory alone, quite to the contrary. It inherits most of the theoretical properties of Shannon’s entropy (e.g …) but the possibility to specify any function class for the predictive family is what makes it powerful and applicable to a broad range of domains (including deep learning, but not only). It would be possible to reorganize the components under a unified framework, even if it is not the traditional approach for each component. Thank you for your suggestion; we are adding these insights in the appendix. [1]: B. Dumitrescu and P. Irofti, Dictionary Learning Algorithms and Applications [2]: A Personal Introduction to Theoretical Dictionary Learning, Karin Schnass > "My second concern is about the paper’s significance, since many claims have been discussed in previous works and appears non-novel. The conclusion that simple features are usually color/edge detectors that are located in earlier layers is not surprising, nor do the conjecture that simple features tend to propagate through the residual connection [...]Overall speaking, I do not learn new insights from the paper." You are correct that Section 3 qualitatively presents a range of simple, medium, and complex features. While some of these are new, some align with features we have learned about from the prior literature. Still, it is worthwhile to see that our method discovers them. Regarding Section 4, we do not claim novelty for the late appearance phenomenon, although this is the first explanation from a complexity perspective. As another reviewer noted, this provides a fresh viewpoint on the phenomenon. We believe that Section 6 presents two new contributions: the emergence of simplicity bias using $\nu$-Information during training and the observation that neural networks generalize by compressing important features. > "Using K-means clustering to aggregate features into meta-features may lead to incorrect clustering results. K-means clustering is known for its poor performance for data clusters that are not circular-shaped and are of discrepant sizes. I am not sure how features are distributed in this paper’s experiments, but more advanced clustering strategies such as the spectral clustering or hierarchical clustering can be applied to mitigate this issue." Thank you for your suggestion. We chose to use K-means based on the rationale that, in the collapsed space of a neural network, the choice of clustering algorithm has minimal impact on the results: studies have shown that feature distributions in this space are generally well-suited for K-means clustering [1]. [1] "Neural Collapse: A Phenomenon in the Terminal Phase of Deep Learning" by Papyan et al., 2020 > "The paper claims that simple features tend to propagate through the residual connection, while complex features tend to propagate through the main branch of the network. Then a natural question arises: how are simple and complex features propagate in CNNs without residual connections (e.g., AlexNet, VGG)?" This is indeed a relevant and intriguing question. Investigating whether identity functions or orthogonal transformations occur channel-wise in models like AlexNet and VGG could provide valuable insights. This topic warrants further exploration in future research, and has been noted in future directions. > "The feature fences is among the most simple features, while the features whiskers and dotted texture are among the most complex features. However, I do not see an essential difference between the features fences, whiskers, and dotted texture from the sample images in Figure 7 and Figure 8. For example, fences and dotted texture both seem to consist of lines that cross over each other to form holes. Similarly, the whiskers feature is also composed of lines, although the lines do not cross over each other but appear parallel. Why is fences a simple feature, but whiskers and dotted texture complex features?" We argue that fences represent repetitive local patterns, in contrast to whiskers or insect legs, which are unique, finely structured motifs. This distinction ties into the texture versus shape debate: texture pertains to repetitive patterns, while shape involves unique, widely distributed elements across an image, necessitating the extraction of more structured motifs. > "Another question is that previous studies show that CNNs are usually biased towards textures, i.e., they tend to learn textures as a shortcut solution, but why is the dotted texture here measured to be a complex feature?" Firstly, the dotted pattern falls within the medium complexity category, ranking 14th out of 30 meta-features. Although not highly complex, the appendix reveals that images responding to this pattern exhibit different orientations. This indicates that the pattern detects texture while also capturing it from multiple angles and viewpoints. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. The explanation for using the K-means method, and the complexity of the *fences* and *dotted texture* feature seem reasonable. Considering this effort, I have raised my score to 4. However, I cannot give a higher score because my concern for the coherence and significance of the paper remains. To further clarify the coherence issue, the main point of "multiple components introduced in this paper do not come from the same theoretical framework" is to justify that all these algorithms and metrics can used together and do not conflict with each other. For example, why choose the Craft method to extract features but not other methods? Is Craft truly compatible with the proposed $\nu$-information metric? The same question can be asked for other components: the feature visualization method, the CKA metric, the importance metric $\Gamma(z_i)$, etc. These components are expected to be organized in a coherent framework, or at least be justified for the particular choices, rather than appear like a combination of engineering techniques. I hope this paper can be more impactful in doing so. --- Rebuttal 2: Title: Thank you for increasing your score Comment: > the main point [...] is to justify that all these algorithms and metrics can used together and do not conflict with each other > Is Craft truly compatible with the proposed v-information metric? Can you clarify what you mean by "conflicting with each other" ? None of these frameworks operate with mutually exclusive hypotheses. > These components are expected to be organized in a coherent framework This is rather ambitious, as it would require a comprehensive understanding of all phenomena arising in deep learning training. We believe our work provides a first step into the direction of identifying promising components of a "coherent framework". > why choose the Craft method to extract features but not other methods? > The same question can be asked for other components: the feature visualization method, the CKA metric, the importance metric , etc All these methods are published, and have demonstrated their effectiveness in deep learning benchmarks, including human experiments. We observe consistent results using different tools and theories. This diversity is a strength, not a weakness: this is precisely what makes us confident in the robustness of our observations. As all results and metrics point toward the same direction, we are confident to not overfit a single method.
null
null
null
null
null
null
SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs
Reject
Summary: The paper introduces SyntheOcc, a framework utilizing diffusion models to synthesize photorealistic images for autonomous driving simulations. The proposed method addresses limitations in the existing 2D diffusion model to generate multi-view driving videos by integrating detailed 3D geometric data. The authors effectively employ 3D semantic multi-plane images (MPIs) for precise geometric control, enhancing the realism and utility of generated images for training perception models. The paper also proposes re-weighting strategies to address the imbalance problem between foreground, background, and object categories. The experiments prove the effectiveness of the proposed MPI encoder and the reweighting strategies. Strengths: - The paper introduces an innovative approach by incorporating 3D semantic Multi-Plane Images (MPIs) to capture both geometric and semantic details of a scene. This approach allows for the precise modeling of 3D environments in a 2D image synthesis context, enhancing the photorealism and depth accuracy of the generated images. - The design of the MPI encoder is very effective in handling the input conditions with a large number of channels while maintaining spatial consistency to the latent features of diffusion UNet. - Additionally, SyntheOcc incorporates sophisticated reweighing strategies to address class imbalance and ensure focus on critical features. These include foreground enhancement, depth-aware reweighing, and class-balanced reweighing. - The paper outlines a comprehensive set of evaluations to demonstrate the effectiveness of the proposed method. Qualitative evaluations visually demonstrate the photorealism and environmental accuracy of the generated images compared to real scenes from the nuScenes dataset. Quantitative analyses leverage metrics such as Frechet Inception Distance (FID) to measure image quality and evaluate perception model performance, offering solid empirical evidence of the framework's effectiveness. Ablation studies further dissect the impact of various components and design choices in the proposed method. Additional robustness tests are conducted to evaluate how changes in the MPI settings (like variations in depth or semantic labeling) affect the output quality and the training effectiveness of perception models. Weaknesses: The contributions for reweighing strategies seem to be minor improvements over existing methods (Kai Chen, Enze Xie, Zhe Chen, Lanqing Hong, Zhenguo Li, and Dit-Yan Yeung. Integrating geometric control into text-to-image diffusion models for high-quality detection data generation via text prompt. arXiv preprint arXiv:2306.04607, 2023, Benjin Zhu, Zhengkai Jiang, Xiangxin Zhou, Zeming Li, and Gang Yu. Class-balanced grouping and sampling for point cloud 3d object detection (arXiv preprint arXiv:1908.09492), which limits the perceived novelty of the paper's contributions. The paper focuses on scene editing capabilities, but there is a noticeable underrepresentation of object-level editing in the experiments. Magic drive’s data augmentation is evaluated on two perception tasks BEV segmentation and 3D object detection, with CVT (Zhou & Krahenbuhl , 2022) and BEVFusion (Liu et al., 2023a) as perception models, respectively. Hence, evaluations on same downstream tasks are encouraged for better comparisons to the state-of-the-art baseline. The paper doesn't provide any evaluations comparing the re-weighing solution proposed by GeoDiffusion. There are also several noticeable view inconsistencies, eg: Fig 14 - row 2 column 2-3 (clouds seem different), row 7 column 4-5 there is a mismatch in building structures, which are not discussed in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: I do not have any further questions. I would like the reviewers to comment on the weaknesses that I listed in the previous paragraph. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors acknowledge some key limitations in the proposed method. First, it relies heavily on existing data for generating scenes, which means it doesn’t create as much variety as it could. This limits how well it can train models to handle different driving conditions. The paper also struggles with complex scenes, like crowds, where it fails to accurately identify individual people. This is a big deal for autonomous driving, where accurate representations of the scene are crucial for predictions. The authors suggest that future improvements could include better methods for creating diverse scenes and making the model more capable of handling dynamic environments, which would help make the system more practical and effective for real-world applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed feedback and insightful comments. We appreciate your time and great efforts in reviewing. We now reply to each comment below. **(1) Reweighing strategy** We agree with the reviewer that the reweighing strategy represents a minor level of innovation. As a result, we do not view it as a key contribution or novelty in our paper. Our contribution primarily focus on precise 3D controllability for image generation. The reweighing strategy serves as an engineered component designed to improve image quality and condition-generation alignment. **(2) Object-level editing** In our paper, we provide object-level editing in Figure 3 by placing new objects and deleting existing objects. Furthermore, from a broad perspective, all images shown in our paper represent object-level editing from an empty canvas. **(3) Align experiment setting in 3D detection using BEVFusion** To provide a comprehensive evaluation, we evaluate our method on 3D detection. In Gen setting in our paper, our method achieves mAP in 22.3 and NDS in 31.3, higher than MagicDrive’s 20.8 and 30.2. This denotes we achieve a more effective 3D controllable generation than prior work. **(4) Compare reweighing method with GeoDiffusion** We compare our Depth-aware Foreground Reweighing with reweighing method in GeoDiffusion. Analogous to the setting in Table 3, our method achieves 25.50 in mIOU, which is more effective than GeoDiffusion’s 25.07. Our reweighting method is formulated as an inverted cosine annealing, while GeoDiffusion formulated it as an exponential function. Both of them applied it to depth value. **(5) Noticeable view inconsistencies** In our paper, while contributing to 3D controllable image generation, we acknowledge its limitations. In most of our cases, our method achieves satisfactory consistency in the intersecting area between the views or frames. However, it is important to acknowledge that our method is not perfect. There is still room for improvement in our consistency, as certain cases have instances of artifacts, like Fig 14 - row 2 column 2-3. We discuss a variety of potential reasons for this problem. 1. **Reason 1: Missing occupancy data**. In the context of autonomous driving applications, taller buildings are irrelevant to the planning. Consequently, objects exceeding 5 meters in height are excluded from the occupancy data. This leads to the content above this threshold, such as cloud and building, being randomly generated by a diffusion model. In these cases, randomness potentially contributes the inconsistencies in appearance. 2. **Reason 2: Imperfect attention**. The second reason lies in the imperfect cross-view and cross-frame attention module. View consistency is compromised due to a limited overlapping region between views, resulting in relatively weak supervision. Meanwhile, achieving frame consistency is hindered by the small amount of video clips in the nuScenes dataset, posing a challenge to deliver satisfactory results. As the scale of our data and model is limited in the current context, the visual results are inevitably limited. In the future, we plan to harness the advanced video foundation model such as (open) Sora and its alternatives to improve consistency. **Certain artifacts may not impact downstream tasks**. In general, artifacts above the height threshold may not significantly impact downstream applications, as evidenced by our data augmentation experiments. For tasks such as 3D detection or occupancy detection, the upper regions of images are often cropped out, as they exceed the perception boundary, and are not necessary for planning. Finally, we would like to emphasize that our primary contribution does not lie in frame-consistent or view-consistent generation. Our work primarily focuses on precise 3D controllability for image generation. In our implementation, the frame and view attention modules are analogous to our baseline method MagicDrive. Although we do not show significant improvement in consistency, the performance of our model is sufficient and adequately robust for downstream application, as demonstrated through our data augmentation experiments. We will continue to enhance the multi-view and frame generation in future work. --- If there are any further concerns, we would be happy to address them in further discussion. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and clarifications. Considering the other reviews and the rebuttal of the authors, I am happy to keep my score 6. This paper presents solid work with a fair contribution that should be presented to the community.
Summary: This paper introduces SytheOcc, a method that employs a diffusion model with 3D occupancy as conditions to generate street view images. Strengths: 1. Unlike previous methods that use box conditions, this paper proposes the use of 3D occupancy, resulting in finer geometric control ability. 2. The paper further suggests the use of 3D semantic multi-plane images to represent the 3D occupancy. 3. The text and figures are well-presented, and the provided examples are very promising. Weaknesses: 1. The main concern is the inconsistency between views and frames. Despite using Cross-View and Cross-Frame Attention and 3D occupancy as conditions, the spatial and temporal consistency results are unsatisfactory (e.g., Fig 5 (b) and the video demos). This is not the expected outcome, as incorporating 3D occupancy as a consistent world representation should result in better spatial and temporal consistency. Additionally, it would be preferable to have metric results such as FVD for the temporal experiments. 2. In Table 1, why does SytheOcc-Aug show worse results for certain categories (e.g., bicycle, moto)? 3. Table 1 lacks experiments for ControlNet-Aug or ControlNet+depth-aug. 4. Some discussions regarding 3D occupancy as a 3D geometry condition: 4.1. The field of view (FOV) for 3D occupancy is limited as it is generally generated using lidar, which leads to inconsistency issues for high-rise buildings when considering cameras of larger FOVs. 4.2. The current annotation of 3D occupancy has limited category coverage. It would be beneficial to explore open-vocabulary approaches. 4.3. When using only 2D semantic masks as conditions, the paper mentions the presence of ambiguity (i.e., Fig 6 a0). Can the use of instance-level semantic masks alleviate this problem? Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Overall, I have a positive view of the paper and am willing to increase the score if the authors can address my concerns. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have provided a comprehensive list of limitations and future work of their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and constructive suggestions. In the following, we reply to individual questions and comments raised by the reviewer: **(1.1) Inconsistency between views and frames**. In our paper, while contributing to 3D controllable image generation, we acknowledge its limitations. In most of our cases, our method achieves satisfactory consistency in the intersecting area between the views or frames. However, it is important to acknowledge that our method is not perfect, and there is still room for improvement in our consistency, as certain cases have instances of artifacts. We now discuss potential reasons for this problem. We first note that our condition of occupancy provides a consistent geometry (shape), but does not provide texture consistency between views or frames. In our video demos of Fig.11 row 5 columns 2-3, the car displays a consistent shape but with texture shifting between views. This case is primarily due to insufficient texture consistency, which remains an unresolved open problem. The underlying reason lies in the imperfect cross-view and cross-frame attention module. Their performance heavily depends on the available training data. View consistency is compromised due to a limited overlapping region between different views, resulting in relatively weak supervision. Meanwhile, achieving frame consistency is hindered by the small amount of video clips in the nuScenes dataset, posing a challenge to deliver satisfactory results. The scale of our data and model limited in the current context, is insufficient to enable these attention modules to achieve perfection in our scenario. In the future, we plan to draw on experiences from video generation and harness the advanced video foundation model such as (open) Sora and its alternatives to improve consistency. Finally, we would like to emphasize that our primary contribution does not lie in frame-consistent or view-consistent generation. Our work primarily focuses on precise 3D controllability for image generation. In our implementation, the frame and view attention modules are analogous to our baseline method MagicDrive. Although we do not show significant improvement in consistency, the performance of our model is sufficient and adequately robust for downstream application, as demonstrated through our data augmentation experiments. We will continue to enhance the multi-view and frame generation in future work. **(1.2) FVD evaluation** Good idea! The FVD metric serves as an effective measure for assessing video fidelity. As our baseline method MagicDrive does not provide FVD evaluation. We compare our method with DriveDreamer and DrivingDiffusion. Our achieved FVD score of 251 surpasses DriveDreamer's 340 and DrivingDiffusion's 332, demonstrating better video quality. **(2) Results for hard cases** It is a good point. We regard the categories such as bicycle and moto as the hard case and corner case that pose significant challenges for the generative model. As they have articulated structures and complex topologies, generating images with bicycles or moto sometimes becomes a mess in our current capacity. Moreover, bicycles and motos belong to the long-tailed class in the training set. The image generation network inherently suffers from data imbalance issues. As a result, the imperfect data with bicycle and moto will hurt the performance, while larger and easier objects such as buses and drivable surfaces tend to yield positive improvement. In the future, we plan to explore the application of scaling laws to scale of our dataset and model size to mitigate this issue and yield better results on bicycles and motos. **(3) Additional experiment** During rebuttal, we add additional experiments. Due to limited resources, including lab equipment and the very long training time of FB-Occ (4 days x 2), we provide small-scale experiments that train FB-Occ using 30% training iterations to reduce training time, while keeping others unchanged. In the setting of data augmentation, our method achieves an mIOU of 34.9, higher than ControlNet’s 33.2 and ControlNet+depth’s 33.5. These experiments denote that our method provides better effectiveness than the ControlNet baseline. **(4) Discussions regarding 3D occupancy as a 3D geometry condition** 1. **Defect in occupancy**. We agree that the coverage area of occupancy input is limited. In the context of autonomous driving applications, taller buildings are irrelevant to the planning. Consequently, objects exceeding 5 meters in height are excluded from the occupancy data. This leads to the content above this threshold, such as cloud and building, being randomly generated by a diffusion model. In these cases, randomness potentially contributes the inconsistencies in appearance. 2. **Open-vocabulary image generation**. Good suggestion! We will explore open-vocabulary generation to broaden our application domain in the future. In this setting, we will achieve a more fine-grained control than a fixed vocabulary. 3. **Instance-level condition**. Adding instance-level conditions will alleviate this problem. One possible method is to incorporate instance masks containing instance IDs such as 0,1,2. This way can be effective in distinguishing overlapping objects but not 3D-aware to capture the underlying 3D shape of objects. ------ If there are any further concerns, we would be happy to address them in further discussion. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you to the authors for your response, and I would like to keep my rating unchanged.
Summary: The paper propose a new 3D semantic multi-plane images (MPIs) based image generation pipeline, which enables finer geometric control for 3D editing, dataset generation, and long-tailed scene generation. Through extensive experiments, the work demonstrates substantial advancement in generation quality and better alignment between condition and synthesized images. Strengths: 1. The work explores a new 3D semantic Multi-Plane Images (MPIs) as a condition, which provides better spatial alignment compared with baselines and enables 3D editing. 2. The comparison results are comprehensive and demonstrate the effectiveness the proposed method, the ablation is relatively complete to validate the MPI Encoder and the reweighting strategy. 3. The paper is well-written, and the experimental results are presented clearly. Weaknesses: 1. The MPI encoder, which is the major contribution, is not novel for me. Although the proposed 3D MPI enables finer control than BEVGen, but the diffusion model also operates on the 2D domain and generates each view and frame separately without strict geometry constraints. 2. The importance of reweighing is tricky and hard to tune, considering many hyperparameters. Are the m and n in Eq. 6 the same for different datasets? 3. It’s hard to decide how the method works without a supplementary video, I doubt the view-consistency of the generated video across frames and views. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How are the hyperparameters of the “progressive foreground enhancement” determined? Is it calculated using the distribution of different categories? 2. Will the appearance of the same objects change across different frames and views? Because the MPI encoder only provides semantic layout, not texture features, and there is no strict geometry constraint. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the careful proofreading of our paper. We would like to thank the reviewer for the valuable feedback and for appreciating our clear writing. In the following, we reply to individual questions and comments raised by the reviewer: **(1) W1. Geometry Constraint** In our understanding, the reviewer posits that the absence of explicit geometric constraints precludes the diffusion model from generating strictly consistent images, a perspective we respectfully find to be inaccurate. By implicitly learning geometry-aware image generation from 3D conditions, our diffusion model is trained to produce images that are aligned with the provided conditional input. While the consistency between the condition and the image may not be strictly guaranteed like a 3D representation (mesh or NeRF), the underlying label-image consistency is sufficiently robust. As evidenced by our data augmentation experiments, the generated image is already effective and valuable for downstream tasks. As for view and frame generation, we additionally learn cross-view attention and cross-frame attention for consistent generation as described in our paper Sec. 3.4. The two attention modules are designed to enable the target view to access information from its neighboring views, specifically from the left and right views, or history frames. This feature flow facilitates the synthesis of coherent and contextually relevant views, thereby enhancing the overall consistency of the generated content. **(2) W2, Q1. Hyperparameters** The assumption that more parameters, are more difficult to tune may be inaccurate. In practice, we find that the hyperparameters m and n in Eq. 6 do not need to be carefully tuned. n is the training iterations that do not need adjustment. m is the maximum weight coefficient. Empirically, we’ve tried m=2 and m=3, both of them yield nearly equivalent performance. The hyperparameter is heuristically designed and shared across different datasets and categories. **(3) W3. Video Demonstration** We have provided generated video in Figures 11, 12, and 16. To clearly demonstrate and compare the consistency across different views and frames, we present the videos frame by frame. It is important to acknowledge that there is still room for improvement in our consistency, particularly as we fine-tune our cross-frame attention with a limited and small dataset. In the future, we plan to harness the advanced video foundation model such as (open) Sora or its alternatives to improve consistency. Finally, we would like to emphasize that our primary contribution does not lie in frame-consistent or view-consistent generation. Our work primarily focuses on precise 3D controllability for image generation. In our implementation, the frame and view attention modules are analogous to our baseline method MagicDrive. Although we do not show significant improvement in consistency, the performance of our model is sufficient and adequately robust for downstream application, as demonstrated through our data augmentation experiments. We will continue to enhance the multi-view and frame generation in future work. **(4) Q2. Appearance change** In various figures throughout our paper, such as Figure 5, 7, and 11, the objects and contents maintain consistent identities across different frames and viewpoints. This consistency is learned by cross-frame and cross-view attention within a classifier-free guidance framework. We acknowledge that there are minor artifacts in certain cases due to imperfect attention modules. In our implementation, the two attention modules are analogous to MagicDrive. Although we do not show significant improvement in view consistency, the performance of our model is sufficient and adequately robust for downstream application, as demonstrated through our data augmentation experiments. --- If there are any further concerns, we would be happy to address them in further discussion. Thank you! --- Rebuttal 2: Comment: We thank the reviewer for the valuable comments. As the deadline for the rebuttal disscussion is approaching, we would greatly appreciate it if you could take the time to review our rebuttal at your earliest convenience. Please let us know if there is any additional information we can provide or any further clarification needed. Thank you very much for your time and consideration. We look forward to your review.
Summary: In this paper, the authors propose a new controllable diffusion-based image generation method named SyntheOcc, which takes an occupancy map as input and generates camera images. SyntheOcc enables the application of scene editing and long-tail corner case generation and shows a strong capability of data augmentation for autonomous driving systems. Strengths: 1. Compared with previous controllable image generation methods for traffic scenarios like Panacea or MagicDrive, the occupancy map contains more 3D spatial information than the BEV layout. 2. The paper is well-organized and easy to follow. 3. The extensive experimental results demonstrate the effectiveness of the proposed data generation pipeline. Weaknesses: 1. The control signal in Panacea or MagicDrive is BEV layout, which only contains lanes and foreground objects and is more easily acquired than occupancy. However, the SyntheOcc relies on sophisticated collected occupancy. Technical Quality: 3 Clarity: 4 Questions for Authors: It's interesting to note that diffusion models typically don't generate identical images in repeated runes, even when given the same control inputs. However, in Figure 1, we see that the generated images corresponding to both the original and edited occupancy maps show remarkably similar street structures. This raises an important question: Could this consistency be a result of overfitting to the nuScenes dataset? Diffusion models are known for their variability, but this particular case shows an unusual level of structural similarity. It's worth considering whether the model has perhaps learned the specific patterns of the nuScenes data too closely, leading to this unexpected consistency in output. This observation could have implications for the model's generalizability and its performance on diverse, real-world scenarios outside the training dataset. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The proposed SyntheOcc faces challenges in real-world application scenarios. For instance, to generate planning-level long-tail corner cases, other methods like Panacea or MagicDrive simply require editing the object's trajectory. However, SyntheOcc demands not only inputting the background occupancy but also constructing a pseudo occupancy for the foreground object. This raises the question of whether using occupancy as a control signal is an advantage or a disadvantage. The crux of the issue lies in the complexity of this method compared to alternatives. While other approaches need some adjustments to object trajectories, this technique necessitates providing comprehensive background and foreground occupancy data. This additional overhead prompts us to ponder whether occupancy-based control offers tangible benefits or introduces unnecessary complications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive and detailed feedback. Below, we reply to individual comments and questions raised by the reviewer: **(1) W1. The reliance on occupancy introduces a minor burden but with enhanced controllability** We agree with the reviewer regarding the reliance of our method on more sophisticated occupancy data. Nevertheless, acquiring such data is not overly challenging. Occupancy data can be semi-automatically annotated using semantic point cloud and 3D bounding boxes (refer to [a]), which stands for an acceptable burden. On the other hand, condition on occupancy enables finer 3D controllability, so that we can edit object shapes and irregular background elements like drivable area and terrain. This level of control is challenging to achieve with previous 3D bounding boxes or Bird's Eye View (BEV) layouts. [a] SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving **(2) Q1. Similar structure in Fig.1: Controllability ensures our condition-image consistency** We provide several analyses of this effect: 1. A similar structure in Fig.1 is an expected result. The first thing need to note is that, if we use the same random seed and condition input, the generated images in multiple runs will be the same. As we only modify a partial of occupancy input by placing traffic cones, the majority of occupancy data is unchanged. Besides, in implementation, we use the same random seed in two figures. Consequently, the generated images display similar street structures. In general, our diffusion model learns 3D controllable generation through a CFG paradigm [b], aligning with the frameworks used in text-to-image and ControlNet. Using pairs of conditions and images, our method learns controllable generation in a classifier-free manner. 2. Empirically, it has been observed that our method does not overfit the nuScenes training set. As demonstrated in Figure 5 of our paper, when the same occupancy input is provided, our method produces a variety of images with consistent attributes and diverse appearances. This evidence indicates that our method does not overfit training images, as it maintains the ability to generate diverse yet aligned outputs. 3. By employing a larger training dataset, our approach is expected to improve generalizability. We consider the nuScenes dataset, which comprises 200,000 images, to be of substantial scale. Utilizing this dataset for training, our model is more likely to learn the intrinsic relationships between the input conditions and the generated images, thereby mitigating the risk of unexpected consistency of overfitting. [b] Classifier-Free Diffusion Guidance **(3) L1. User-friendly editing: disentangle the occupancy into foreground and background** To ease the editing, we provide a strategy that disentangles the foreground control and background control in occupancy data. If we want to edit a car’s trajectory, we can keep the background occupancy unchanged and select the car’s first frame occupancy using the 3D box. During the following frames, we remove foreground occupancy and simply place our foreground target’s occupancy in a certain location using trajectory. By doing so, we only add minor steps by using occupancy but provide more precise 3D control, which makes it a favorable choice for conditioning. --- If there are any further concerns, we would be happy to address them in further discussion. Thank you!
Rebuttal 1: Rebuttal: We appreciate the reviewers' constructive feedback and acknowledge their consensus on the merits of our work. We list the consensus among the reviewers below. **(1) Occupancy as condition: an innovative approach** Our work introduces 3D semantic Multi-Plane Images (MPIs) to capture both geometric and semantic details of a 3D scene using occupancy. Furthermore, we introduce the MPI encoder to effectively extract 3D information for the diffusion model. Our approach allows for the precise modeling of 3D environments in a 2D image synthesis context, enhancing the photorealism and depth accuracy of the generated images. **(2) Precise 3D control: enable various downstream application** Compared to prior work that relies on box-conditioning, our method achieves finer 3D control through the utilization of a semantic voxel. This innovative approach provides precise 3D control, thus broadening the horizon of potential applications. We provide three applications in our paper. First, it facilitates 3D scene editing, enabling users to make nuanced adjustments to 3D environments for image synthesis. Second, the method enables dataset generation, producing high-quality, diverse datasets that are essential for training downstream models. Third, it provides long-tailed scene generation, where it can synthesize rare scenarios, thereby enriching the diversity of training data and improving the generalization capabilities of the perception model. **(3) Thorough experiments validate our effectiveness** Our extensive experimental results demonstrate the effectiveness of the proposed generation pipeline across various settings, including label-image alignment and data augmentation. Notably, the pipeline excels in maintaining consistency between labels and images, ensuring that the synthesized data accurately reflects the intended annotations. This consistency is crucial for training robust models that can generalize well to real-world scenarios, thereby enhancing the reliability and applicability of the generated data in practical applications.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods
Accept (poster)
Summary: This paper studies the following "matrix denoising"/low-rank estimation problem. Given a rectangular matrix $Y = uv^\top + W$, the goal is to recover the rank-one factors $u,v$, when $W$ is random, with as little $\ell_2$ error as possible. A classic and well-studied setting takes $W$ to have iid entries; often one even assumes that the entries are Gaussian. This setting has been studied in random matrix theory, high-dimensional statistics, and more recently via techniques from statistical physics. The iid noise assumption is quite strong, and so a more recent line of works aims to relax that assumption by allowing correlations among the entries of $W$. In the "two-sided" heteroscedastic noise model, $W$ takes the form $\Theta^{1/2} \cdot W' \cdot \Sigma^{1/2}$, where $\Theta$ and $\Sigma$ are PSD matrices and $W'$ has iid Gaussian entries. The assumption is that $\Theta$ and $\Sigma$ are known. The main contributions of this paper are twofold: - a new spectral estimator whose performance improves over more naive spectral methods for recovery of $u$ and $v$. - a proof that under slightly stronger assumptions ($u$,$v$ Gaussian), the spectral estimator obtains nontrivial $\ell_2$ error whenever this is information-theoretically possible, and even obtains information-theoretically optimal $\ell_2$ error when the heteroscedasticity is one-sided. Along the way, the paper establishes a simple formula for the information-theoretic signal-to-noise threshold governing when nontrivial recovery of $u$ and $v$ is possible in this model. The paper also performs numerical experiments on synthetic data to validate the theory. Strengths: - Well-written and easy to read first 9 pages - Thorough mathematical investigation of the two-sided heteroscedastic spiked matrix model - New algorithm with nice optimality guarantees Weaknesses: My main reservation is that, as with a lot of papers using stat phys techniques, one gets the feeling that the assumptions are sort of designed to make the mathematical techniques work. I think the biggest offender here is the assumption that $\Sigma$ and $\Theta$ are known. Less major are the Gaussian-ness assumptions, the assumption that $n,d$ are of comparable order, and the assumption that the empirical spectral distributions of $\Theta$ and $\Sigma$ converge -- I think these allow the use of asymptotic methods, basically. Technical Quality: 4 Clarity: 4 Questions for Authors: none Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Fvn7 for the insightful comments and positive evaluation. Below we address the comment concerning our various assumptions. We agree with the reviewer that the Gaussian-ness assumptions, the assumption that $n,d$ are of comparable order, and the assumption that the empirical spectral distributions of $\Xi, \Sigma$ converge are mild, and they basically allow the use of asymptotic methods. We now make a few comments on the assumption that $\Xi, \Sigma$ are known, and we will add this discussion to the revision. * In some settings, it is possible to estimate $\Xi, \Sigma$ (even consistently). This is the case if such matrices possess additional structures, e.g., if they are sparse [R4], if their inverses are sparse [R5] or if they are circulant or Toeplitz [R6]. We refer to the survey [R7] for detailed results on estimating structured high-dimensional covariance matrices. * Recently [R8] addresses the challenge of unknown covariances by considering a modified model where one additionally observes an independent copy of noise. The statistician can then estimate the covariance from the noise-only observation and use it as a surrogate of the true covariance for estimating the signals from the spiked model. It’s possible to derive similar results in the doubly heteroskedastic setting considered in our paper. * Finally, if the covariances are completely unknown, then our model (with Gaussian priors) is equivalent to a spiked matrix model with a certain bi-rotationally invariant noise. This problem is expected to exhibit rather different behaviors than when the covariances are known. See [R9] and [R10] for recent progress on understanding the statistical and computational limits for such models. --- [R4] T. T. Cai, H. H. Zhou, "Optimal rates of convergence for sparse covariance matrix estimation", Annals of Statistics, 2012. [R5] M. Yuan, "High dimensional inverse covariance matrix estimation via linear programming", Journal of Machine Learning Research, 2010. [R6] T. T. Cai, Z. Ren, H. H. Zhou, "Optimal rates of convergence for estimating Toeplitz covariance matrices", Probability Theory and Related Fields, 2013. [R7] T. T. Cai, Z. Ren, H. H. Zhou, "Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation", Electronic Journal of Statistics, 2016. [R8] M. Gavish, W. Leeb, E. Romanov, "Matrix denoising with partial noise statistics: optimal singular value shrinkage of spiked F-matrices", Information and Inference: A Journal of the IMA, 2023. [R9] J. Barbier, F. Camilli, M. Mondelli, Y. Xu, "Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise." arXiv preprint arXiv:2405.20993, 2024. [R10] R. Dudeja, S. Liu, J. Ma, "Optimality of Approximate Message Passing Algorithms for Spiked Matrix Models with Rotationally Invariant Noise", arXiv preprint arXiv:2405.18081, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for following up!
Summary: The authors study how to recover a rank one spike corrupted by doubly heteroscedastic gaussian noise in the high dimensional regime. We are given a condition on the signal to noise ratio to indicate whether it's information-theoretically possible to have a non-trivial recovery of the spike. If this is satisfied then there is spectral estimator which can obtain non-trivial recovery. In particular cases this estimator is also Bayes optimal. Strengths: The paper is well written and does a really good job at introducing the main ideas in an intuitive way. While this kind of spectral estimators are well known in the physics literature, their application to such a noise model is an interesting generalisation. I believe this paper thoroughly explores this denoising problem, with the only easily achievable extension being looking at a rank $r$ spike in the signal (where r is a constant in d,n), which I am sure however wouldn't alter the results significantly. Thus this is in my opinion quite a solid contribution with not much room for improvement. Weaknesses: The paper introduces no significant advances in the theoretical tools or understanding of matrix denoising as all the tools used are essentially well known. I believe one issue in the writing is its lack of clarity in stating which results are completely rigorous and which aren't. The authors are upfront in saying they are guided by physics-inspired heuristics, but reading in the appendix it seems like (at least in some sections) the derivations are quite solid. A tangible improvement to the writing would be to state explicitly which results are conjectures and which are theorems. I think it would greatly improve the readability of figures 2 and 3 to have the error on the mean instead of the std. I would add a sentence describing more explicitly the difference and respective advantages of (3.1) (where the spectrum is O(1)) and (4.1) (where the elements of Y are O(1)). Small typos: 1. Line 203: $\sigma_2$ is not directly defined. You will only do so in Theorem 5.1 2. Line 188: you invoke the Nishimori identity but don't state it in the main text 3. Line 200: you say that $\eta>0$. While I also expect this to be true, I think you mean to say that they are real. The same applies to all the square roots in 5.3 and 5.4 Finally, not exactly a typo but I personally find it confusing to use $\bar \Sigma$ and $\bar \Xi$ when one wants to average over the singular values of $\Sigma$ and $\Xi$. I would prefer having an explicit integral over the singular value PDF. Technical Quality: 3 Clarity: 3 Questions for Authors: Would it be possible to look at the singular values of $A$ before and after the pre-processing? Can we clearly see a spike emerging if 5.1 is true by doing the pre-processing? You state that AMP has the fundamental limitation that requires a "warm start" to be effective. While this is true, initialising the estimator at random from the prior should allow it to have non-zero overlap in O(log(d)) steps. Do you see this in your numerics? Could you run AMP after using the spectral estimator and put the additional lines and simulation dots in figures 2, 3? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are correctly addressed in section 6. There is no negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer hDgq for carefully reading the manuscript and for the insightful comments and suggestions. Below we address each point raised in the review separately. **I believe one issue in the writing is its lack of clarity in stating which results are completely rigorous and which aren't** While our approach is indeed inspired by statistical physics, we will clarify that *all* our results (i.e., Proposition 4.1, Theorem 4.2, Corollary 4.3, Theorem 4.4, Theorem 5.1 and Corollary 5.2) are mathematically rigorous, with the only technical condition being “(5.1) implies $\sigma_2^* < 1$” in Theorem 5.1 that we only managed to verify numerically, but not analytically. For the information-theoretic results, our proof uses the Gaussian interpolation method which is a rigorous technique originating in physics. For results on spectral estimators, the analysis is inspired by Bayes-AMP. A similar approach (albeit for different learning problems) is put forward e.g. in [R1] and [R2]. However, in sharp contrast with the two works above which only provide heuristics, our results are completely rigorous. In fact, a key element of novelty in our paper is precisely to give an exact asymptotic characterization of spectral estimators via AMP tools. Thus, the heuristics on pages 8-9 are just to offer the readers an intuition on how the spectral estimators given in (5.4) arise from Bayes-AMP. We will make this clear in the revised version. **I think it would greatly improve the readability of figures 2 and 3 to have the error on the mean instead of the std.** In Figures 2-3, we report the mean averaged over 20 trials, as well as the error bars representing 1 standard deviation from the mean. Please let us know if you have any suggestions on how to improve the readability of the figures. **I would add a sentence describing more explicitly the difference and respective advantages of (3.1) (where the spectrum is O(1)) and (4.1) (where the elements of Y are O(1)).** The different scalings in (3.1) and (4.1) are purely for mathematical convenience. For the analysis of spectral estimators, it’s more convenient to have matrices whose operator norms are of constant order. For the information-theoretic analysis, it is customary in the literature to have Hamiltonians on the order of $n$ and then normalize the free energy (i.e., the expected log partition function) by $1/n$. In principle, all results and proofs can be written under a single scaling. We will clarify this in the revised version. **Small typos** Thank you for spotting the typos, we will revise accordingly. **Usage of $\overline{\Xi}, \overline{\Sigma}$** We will make it clear that all expectations involving these random variables are computed as integrals against the limiting spectral distributions of $\Xi, \Sigma$, as is common in the random matrix theory literature. **Would it be possible to look at the singular values of $A$ before and after the pre-processing? Can we clearly see a spike emerging if 5.1 is true by doing the pre-processing?** Yes, we clearly see a spike emerging if (5.1) holds by doing the pre-processing. Thank you for this excellent suggestion. We have added a plot in the PDF attached to the global response that shows the presence of spectral outliers in $A^*$, as well as the absence of such outliers in $A$. We will add this plot to the revision. **Initialising the estimator at random from the prior should allow it to have non-zero overlap in O(log(d)) steps** For some problems, AMP with random initialization may be able to attain performance similar to AMP with warm start (such as spectral initialization). Such a phenomenon has been empirically observed for various PCA and regression problems, including ours. However, proving such a behavior largely remains open. To our knowledge, the only progress is the recent work of Li, Fan and Wei [R3] for $\mathbb{Z}_2$ synchronization, i.e., rank-1 matrix estimation with Rademacher prior and GOE noise (a setting much simpler than the one considered here). **Could you run AMP after using the spectral estimator and put the additional lines and simulation dots in figures 2, 3?** All experiments in Figures 2,3 are for Gaussian priors and, in this case, our spectral estimators attain the same asymptotic performance as Bayes-AMP, as the heuristics on page 8-9 suggest. There is therefore no advantage of running AMP with spectral initialization, as opposed to the spectral method alone. Furthermore, in the special case of Figure 2a, our spectral estimators alone are information-theoretically optimal. --- [R1] A. Maillard, F. Krzakala, Y. M. Lu, L. Zdeborová, "Construction of optimal spectral methods in phase retrieval", Mathematical and Scientific Machine Learning, 2022. [R2] E. Troiani, Y. Dandi, L. Defilippis, L. Zdeborová, B. Loureiro, F. Krzakala, "Fundamental limits of weak learnability in high-dimensional multi-index models", arXiv preprint arXiv:2405.15480, 2024. [R3] G. Li, W. Fan, Y. Wei, "Approximate message passing from random initialization with applications to Z 2 synchronization", Proceedings of the National Academy of Sciences, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal, I will raise my score accordingly. For the figure, I am slightly surprised the fluctuations are so large, I guess trying larger sizes might be doable and make the presentation clearer.
Summary: This paper considers the problem of matrix denoising. Given an observation X = A + W where W is noise, our goal is to estimate A, which is typically low-rank. Unlike previous works, this paper treats the case where W is doubly heteroscedastic. The authors identify a condition for non-trivial estimation of the signal vectors, along with an accompanying spectral algorithm that succeeds whenever that condition holds, under a technical condition. Strengths: The paper addresses an important problem using a novel approach using statistical physics/ AMP concepts. The results, both theoretical and empirical, are strong and add considerably to the literature. Weaknesses: Strictly speaking, you do not show that whitening fails, only that the whitened matrix does not match the proposed AMP approach (lines 262-265). Technical Quality: 4 Clarity: 4 Questions for Authors: Initially you say that you believe (5.1) implies \sigma_2^{\star} < 1, and later you say that you believe these conditions are equivalent. Could you please clarify which one it is? Confidence: 1 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer S4fP for the positive evaluation. Below we address the comments and questions. **Strictly speaking, you do not show that whitening fails, only that the whitened matrix does not match the proposed AMP approach (lines 262-265).** We will make the following 2 clarifications regarding lines 262-265. 1. One can repeat the analysis of an AMP that operates on the whitened matrix $\Xi^{-1/2} A \Sigma^{-1/2}$. The fixed point equations of the resulting state evolution do not match the information-theoretically optimal one in (4.5). In particular, the weak recovery threshold coming out of this approach is strictly larger than the optimal one in (4.6), as long as at least one of $\Xi, \Sigma$ is not a multiple of the identity. Since these derivations led to suboptimal results, the details were left out from the paper. 1. Our optimal spectral estimator is motivated by Bayes-AMP. If one instead considers spectral estimators associated with the whitened matrix $\Xi^{-1/2} A \Sigma^{-1/2}$, then the resulting weak recovery threshold and estimation error (overlap and matrix MSE) are all suboptimal. The exact asymptotic values of these quantities can be retrieved from Leeb–Romanov [40] and Leeb [41]. Numerical results and the corresponding theoretical predictions for whitened spectral estimators are shown in Figure 2; note its suboptimality compared to our spectral estimator and the information-theoretic limit. To summarize, the whitening approach is provably suboptimal for both AMP and spectral estimators. **Initially you say that you believe (5.1) implies \sigma_2^{\star} < 1, and later you say that you believe these conditions are equivalent. Could you please clarify which one it is?** We apologize for the confusion, and we will clarify this point in the revision. We believe that these two conditions are actually equivalent. However, the proof of Theorem 5.1 only requires one direction: (5.1) implies $\sigma_2^* < 1$. Therefore, we formally only make the minimal conjecture of (5.1) implying $\sigma_2^* < 1$. --- Rebuttal Comment 1.1: Title: Reply Comment: Thank you for your rebuttal!
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their reviews and we have responded to their comments separately below. In this common response, we attach a plot that shows the presence of spectral outliers in $A^*$, as well as their absence in $A$. We will add this plot to the revision. Pdf: /pdf/e493c185cebd5bd8543c07e92bf7df0ef1564a2c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Non-asymptotic Convergence of Training Transformers for Next-token Prediction
Accept (poster)
Summary: This manuscript focuses on the next-token prediction (NTP) task, and provides a fine-grained non-asymptotic analysis of the training dynamics of a one-layer transformer. Specifically, the authors first characterize the essential structural properties of training datasets for NTP using a mathematical framework based on partial orders. Then, they design a two-stage training algorithm, where the pre-processing stage for training the feed-forward layer and the main stage for training the attention layer exhibit fast convergence performance. Finally, they show that the well-trained transformer can have non-trivial prediction ability on unseen data, which sheds light on the generalization capability of transformers. Strengths: This manuscript conducts a theoretical analysis on the convergence speed and generalization of the important NTP task, with giving an example to illustrate. Although not comprehensive, it can serve as a stepping stone for convergence analysis of NTP tasks. This manuscript is well-written and easy to read. Weaknesses: 1. How is the loss function between lines 227 and 288 obtained? Why is its form different from Eq. 1? 2. In Section 6, the authors theoretically prove the generalization ability of the trained Transformer. Can the generalization ability be proved in the experimental part? 3. Why do the authors design a two-stage training algorithm? Is it valuable in practical applications compared to single-stage training? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This manuscript simplifies the Transformer into a single layer for analysis. While I understand the need for this simplification, it would be nice to include a brief paragraph outlining the views on generalizing to multi-layer Transformers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments, and would like to provide the following responses. **Q1:** How is the loss function between lines 227 and 228 obtained? Why is its form different from Eq. 1? **A1:** The loss in Eq.\ (1) is for the general-length case. The loss between lines 227 and 228 is for the training stage 1 (i.e., training of feed-forward layer), where the inputs are one-token sentences. We note that there was a typo in Eq.\ (1), i.e., missing $\exp$ in the first bracket. The correct form should be \\[ \log \left(\sum_v \exp( e_v^\top \bar{T}(X) ) \right) - e_i^\top \bar{T}(X) ,\\] where $i$ is the index of the next token of $X$. Such a form can specialize to the loss for stage 1 as follows. Let $X = [x_1]$. Then we have $\bar{T}(X) = W_{ov}x_1$. Thus, the loss becomes \\[\log\left( \sum_v \exp(e_v^\top W_{ov}x_1) \right) - \log\exp(e_i^\top W_{ov} x_1) = -\log\frac{ \exp(e_i^\top W_{ov} x_1) }{ \sum_v \exp(e_v^\top W_{ov}x_1) }.\\] Thank you for your question and we will fix the typo in the revision. **Q2:** In Section 6, the authors theoretically prove the generalization ability of the trained Transformer. Can the generalization ability be proved in the experimental part? **A2:** Yes. Please see the attached PDF file in the global response for the test loss result. For the test dataset, we randomly construct test data as described in Theorem 3, and plot the average test loss for 20 trials. As shown in the figure, the test loss also converges almost to 0. In addition, since the generalization ability is determined by the parameter $W_{kq}$ and $W_{ov}$, alignment of these parameters with their corresponding max-margin solutions as shown in Figure 2 of the paper implies the generalization ability as well. **Q3:** Why do the authors design a two-stage training algorithm? Is it valuable in practical applications compared to single-stage training? **A3:** Thanks for the question. Our characterization of the realizable dataset is via two steps: (i) identifying collocation and (ii) using collocation to define partial ordering and optimal token. This naturally motivates the two training stages to learn the corresponding information: (i) stage 1 learns collocation, i.e., next-token prediction for length-one sentences; and (ii) stage 2 learns partial ordering to identify optimal token. Thus, it is also meaningful to apply such a two-stage algorithm in practice. We also expect that stage 1 of pre-processing value matrix using one-token sentences could help to stablize the entire training process of transformers. Single-stage training may still yield good solution simply from optimization perspective, and we are currently working on analytically characterizing the dynamics of such a training process. **Q4:** This manuscript simplifies the Transformer into a single layer for analysis. While I understand the need for this simplification, it would be nice to include a brief paragraph outlining the views on generalizing to multi-layer Transformers. **A4:** Thanks for the suggestion. A possible approach to generalize it into multi-layer transformers is to view our single layer as the last layer of a multi-layer transformer. Therefore, the input $X$ in the paper can be viewed as the output of a multi-layer network. Then, we can combine our technique for the last layer and the technique of neural tangent kernel (NTK) for other layers (Emami et al., 2021), and analyze the training dynamics of such a multi-layer transformer. We expect that the last layer evolves similarly to our findings given that other layers output $X$ satisfies certain conditions, which may be potentially shown by exploiting the realizability of the dataset and the properties of NTK. Emami et al. "Implicit bias of linear RNNs." ICML 2021. --- We thank the reviewer again for the highly inspiring comments. We hope that our responses have resolved your concerns. If so, we wonder if the reviewer could kindly consider to increase your score. Certainly, we are more than happy to answer your further questions. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for replying to my questions. I have read the relevant comments and will maintain this score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Many thanks for your continual and strong support of our paper!
Summary: In this work, the authors mathematically examine the learning dynamics of simple transformers for next token prediction. To allow a mathematical analysis, they consider a highly simplified setting: the transformer consists of a single attention layer followed by a single feed-forward layer, the layers are trained one after the other (i.e., in a custom, decoupled training procedure), and the dataset adheres to a set of mathematical properties that make sure the loss can be arbitrarily close to zero. They show that both layers converge in direction to their corresponding max-margin solutions sub-linearly. Strengths: The paper adresses an important problem, namely how we could/should understand the learning dynamics of transformers. The paper seems a valid contribution to this problem, and provides a step in the direction of a better theoretical understanding. The paper defines a set of mathematical properties a dataset for next-token-prediction should adhere to, to obtain a training error arbitrarily close to zero. This might be useful for future work in the community. The paper is well-written, well-structured, the related works section is extensive and the work is put into the context of previous works. Great care has been taken in the mathematical formulations and proofs; I wasn’t able to check the details, but the mathematical derivations seem correct and they are extensive. Weaknesses: The paper completely lacks a discussion of how general the obtained insights are; i.e., what the paper contributes beyond the setting of the simple network architecture, custom dataset and custom training method used to derive the results. It is understandable that assumptions and simplifications have to be made to make a problem amenable to a mathematical analysis, but the paper contains no discussion of these limitations nor any experiments to further examine this. This also makes it hard to judge the actual contribution of the work. The current version of the paper is only easy to read for people working on this exact topic. To also be of interest to a somewhat broader audience (e.g., machine learning experts with a good grasp of mathematics), some concepts would need to be better motivated and/or explained in a more intuitive way. E.g., although the text mentions, ‘We first provide some intuitions about those two properties’ (L145), there seems to be no real intuitive explanation of why collocations and partial orders are introduced, and how this should be interpreted. The definition and arguments are purely mathematical. The example dataset provides some additional insights, but does not clarify why these properties are important, how this leads to zero training loss in the limit, etc… Technical Quality: 3 Clarity: 3 Questions for Authors: How would you change the manuscript to make it more easy to understand your definitions and motivation behind the mathematics? How would you change the manuscript to better assess/discuss the limitations of your work? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See above, no negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments, and would like to provide the following responses. **Q1:** How would you change the manuscript to make it more easy to understand your definitions and motivation behind the mathematics? For example, the example dataset provides some additional insights, but does not clarify why these properties are important, how this leads to zero training loss in the limit, etc. **A1:** Thanks for the suggestion. We will provide the following high-level description of our dataset and their properties. The goal of introducing the collocation and query-dependent partial order is to make training transformer provable and interpretable. We breakdown the insights as follows. (i) Since the model should apply to all subsequences of sentences in the training set including length-two sentences, then collocation between two words naturally becomes a property of the dataset. (ii) It can be observed that the input of our loss function is a convex combination of $L$ points each being a projection of one token embedding in the sentence, where the weights in the convex combination are the attention scores of each token with the query token. In this way, the loss values of these tokens determine an order among them, and the token that achieves the minimum loss value serves as the optimal token. In other words, some tokens has larger attention scores than others under a specific query. These facts motivate us to introduce query-dependent partial order. Combining (ii) with (i), the collocated token of the optimal token becomes the predicted next token. Finally, due to the injection property of collocation and the structure of cross-entropy, the loss function can be trained to approach zero. **Q2:** How would you change the manuscript to better assess/discuss the limitations of your work? For example, provide discussion of how general the obtained insights are; i.e., what the paper contributes beyond the setting of the simple network architecture, custom dataset and custom training method used to derive the results. **A2:** Thanks for the suggestion. We provide the following discussion on the generality of our findings and will add it to our revision. We remark that our work establishes the basic framework to analyze the convergence performance of transformers for NTP task. Several insights can be extended to more complex settings. Regarding multi-layer transformers, for example, one possible idea is to view the single layer in our work as the last layer of a multi-layer transformer. This is promising if we consider the last layer training regime (Kirichenko, et al., 2022). In this regime, if the output of other layers satisfies certain conditions such as near-orthogonality, which is often satisfied in high dimensional setting, our convergence and generalization results can be directly applied to multi-layer transformers. To generalize dataset, for example, our technique can be extended to study a noisy version of collocation. We expect that most of our proof steps can still be applied by incorporating appropriate noise bound. To generalize training method, an alternative natural method is to update the linear layer and the attention layer simultaneously instead of two-stage training. A popular way to implement such an algorithm is via two timescale stepsizes so that the attention update can be in a faster scale and the final convergence can be determined by the feedforward layer by incorporating the suboptimality error of the attention layer. Such suboptimality error can be bounded by our current techniques for training stage 2. Kirichenko et al. "Last layer re-training is sufficient for robustness to spurious correlations." arXiv:2204.02937, 2022. --- We thank the reviewer again for the highly inspiring comments. We hope that our responses have resolved your concerns. If so, we wonder if the reviewer could kindly consider to increase your score. Certainly, we are more than happy to answer your further questions. --- Rebuttal Comment 1.1: Title: A gentle reminder Comment: Dear Reviewer ZtZ3 We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions. Thank you for your time and effort in reviewing our work! Best Regards, Authors
Summary: The paper presents a non-asymptotic analysis of training dynamics for a single-layer transformer used in next-token prediction tasks. It introduces a two-stage training algorithm leveraging structural properties of the training dataset, defined via collocations and query-dependent partial orders. The findings include sub-linear convergence of both the feed-forward and self-attention layers to their respective max-margin solutions, and a linear convergence rate for the cross-entropy loss, supporting non-trivial prediction capabilities on unseen data. The approach is validated through theoretical analysis and empirical results, enhancing understanding of transformers' training and generalization behaviors. Strengths: 1. The paper introduces novel theoretical frameworks for analyzing transformer training, focusing on non-asymptotic convergence. 2. The research is technically robust, with sound mathematical derivations and empirical validation. 3. The concepts and results are communicated effectively, albeit with room for improvement in some technical descriptions. Weaknesses: 1. Some of the mathematical concepts, particularly the query-dependent partial orders, are complex and could be better explained. 2. While the theoretical results are strong, additional empirical studies, particularly on real-world datasets, could further strengthen the claims. Technical Quality: 3 Clarity: 3 Questions for Authors: What are the potential implications of the findings on transformer training efficiency and computational costs in practical settings? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NaN Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments, and would like to provide the following responses. **Q1:** Some of the mathematical concepts, particularly the query-dependent partial orders, are complex and could be better explained. **A1:** We appreciate the feedback on the complexity of the mathematical concepts. To address this, we will provide more intuitive explanations and examples in the revised manuscript. Specifically, query-dependent partial order formalizes the concept of importance or relevance of tokens in a given sentence. If one token is more important than others or more relevant to the next token, it is considered as an optimal token and thus is ''greater'' than other tokens. For example, assume that the sentence *Machine learning is a popular* has next word *area*, and *popular* is the most important word. Then, under the query *popular*, *popular* itself is the optimal token, and hence it is ''greater'' than other tokens. In addition, such a partial order is **query-dependent** so that the order can be different given different queries. For example, consider a new sentence *A popular area is machine* with next word *learning*. Then, under the query *machine*, *machine* itself is more important or ''greater'' than *popular*. Finally, the motivation for defining these concepts is detailed in Response A1 to Reviewer ZtZ3. **Q2:** While the theoretical results are strong, additional empirical studies, particularly on real-world datasets, could further strengthen the claims. **A2:** Thanks for the suggestion. We are actively seeking suitable real-world datasets and experiments to further illustrate our results. **Q3:** What are the potential implications of the findings on transformer training efficiency and computational costs in practical settings? **A3:** Thanks for the question. Our results have the following implications. (1) The collocation training can be separated from the attention training to reduce the complexity of joint training, and at the same time achieve stable and fast convergence rate for each separate training. (2) Our result shows that the loss function converges fast (with potentially linear convergence rate), and hence the training does not require many iterations. The main computational costs lie in the gradient computations in each iteration, which scales with the number of training parameters. Thus, employing techniques such as LoRA to reduce the number of training parameters is crucial to achieve scalable training. --- We thank the reviewer again for the highly inspiring comments. We hope that our responses have resolved your concerns. If so, we wonder if the reviewer could kindly consider to increase your score. Certainly, we are more than happy to answer your further questions. --- Rebuttal Comment 1.1: Comment: I have thoroughly reviewed all the comments and the author’s responses, and I will remain positive about this submission. --- Reply to Comment 1.1.1: Title: Thank you Comment: Many thanks for your continual and strong support of our paper!
Summary: This paper conducts a non-asymptotic analysis of training dynamics for a one-layer transformer, focusing on next-token prediction tasks. It provides a mathematical framework based on partial order to formally characterize a realizable training dataset for next-token prediction. It also introduces a two-stage training algorithm that ensures fast convergence, with both layers approaching their max-margin solutions sub-linearly. Strengths: The paper develops a detailed theoretical framework that both analyzes non-asymptotic convergence of the training and generalization capabilities of transformers in the next-token prediction task. Weaknesses: The approach described in the paper may not align with practical Transformer training. Please see the questions below for more details. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1: Although the ground truth model is deterministic, it seems uncommon in machine learning to assume a statistical model because the next word is not deterministically determined from a deterministic context. Besides assuming n, where is the existence of p*L theoretically necessary? Q2: The assumption of the existence of collocations seems too far removed from reality. What are some realistic data where such assumptions hold? The setting in Dryer, 1991 seems artificial. Especially, the existence of n() is crucial for the two-stage training and the proof, and it forms a major assumption at the core of this paper. Is it possible to actually prepare n() in real datasets? Q3: I am confused about the definition of partial order. In Definition 1, it is stated that if there is at least one sentence where x >{x_q} x' and n(x) = L_{L+1} != n(x'), then it is also permissible for there to be sentences where n(x) != L_{L+1} = n(x'). I could understand it if it were defined to hold for all sentences. Q4: How commonly is the normalized gradient used in actual Transformer training? In the training of Transfomer, AdamW is typically used. Q5: Assumption 3 may not hold in situations where attention is sparse. It is important that attention is placed on the optimal token, but it is known that the attention map in actual Transformers is generally sparse. Doesn’t this assumption contradict the sparsity of attention? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The assumption of the existence of collocations seems far from real traninng dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Q1:** Deterministic ground truth model seems uncommon. Besides assuming n, where is the existence of $p_L^*$ theoretically necessary? **A1:** In recent line of theoretical research on transformers, deterministic models are often adopted such as in Li et al., (2024) on next-token prediction and Tarzanagh et al., (2023) on binary classification, mainly because deterministic models can offer valuable theoretical insights due to their simplicity and clean structure. In particular, it provides a clear cause-and-effect relationship between inputs and outputs, which makes it appealing to facilitate the interpretation of the model. Note that deterministic model of ``separable data'' is also commonly studied in theoretical works on neural networks such as in Soudry et al., (2018). The ground truth model $p^*_L$ serves as a theoretical baseline for analyzing an algorithm so that the performance of a training algorithm is well defined. As a notation, $p^*_L$ can be replaced by $\mathrm{n}(X)$, where $X$ contains $L$ tokens. As a realizable model, it is captured by collocation and partial order. With the existence of such a model $p^*_L$, the training algorithm can have provable performance. Tarzanagh et al. Transformers as support vector machines. arXiv:2308.16898, 2023. Li et al. Mechanics of next token prediction with self-attention. AISTATS 2024. Soudry et al. "The implicit bias of gradient descent on separable data." JMLR 2018. **Q2:** Existence of collocations seems far from reality; realistic data where such assumptions hold? How to prepare n() in real datasets? **A2:** Thanks for the question. The existence of collocations can be observed in many natural languages, where certain words frequently appear together (Lehecka 2015). Taking such an assumption enables us to derive interesting theoretical results, while remaining arguably reasonable in practice. Note that the assumption of separable data, as a special case of this, has formed an important line of theoretical research in deep learning to explore the implicit bias and training dynamics of neural networks (Soudry et al. 2018, Chizat and Francis 2020). Given a dataset, $\mathrm{n}$ can be prepared by the counting method described in Lehecka, T. (2015). More generally, it can be extracted by simply including all two-token sequences from the dataset, and each second token $y$ is the next token of the first token $x$, i.e., $n(x)=y$. Furthermore, leveraging domain-specific knowledge, such as medical dictionaries, can also enrich collocations. These approaches allow for the practical extraction of \( \mathrm{n}() \) in a scalable and efficient manner. Lehecka. Collocation and colligation. In Handbook of pragmatics online. Benjamins 2015. Chizat and Francis Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. COLT 2020. **Q3:** How to understand definition of partial order. **A3:** Thanks for the question. We clarify that the partial order is **query-dependent**. Therefore, if $x>\_{x_q} x'$ and $n(x)=x_{L+1}\neq n(x')$ in some sentences ending with query $x_q$, it is **not** possible that $n(x)\neq x_{L+1} = n(x')$ in another sentence ending with the same query $x_q$. This is because of the structure of the transformer where the attention weights are proportional to $\exp(x^\top W_{kq} x_q)$. If $n(x)=x_{L+1}\neq n(x')$ under query $x_q$, then $x^\top W_{kq} x_q > (x')^\top W_{kq}x_q$ as long as the sentence ends with $x_q$. Thus, $x$ is always more important than $x'$ in $x_q$-ended sentences. However, $n(x)\neq x_{L+1} = n(x')$ **is** still possible in other sentences ending with **different queries** than $x_q$. These facts motivates us to introduce *query-dependent* partial orders. **Q4:** How commonly is the normalized gradient used? AdamW is typically used. **A4:** Thanks for the question. Although normalized gradient descent (NGD) is often favored in theoretical studies, there are also empirical works showing that the achieved accuracy of NGD is slightly better than Adam in training transformers (Cutkosky and Mehta 2020), where momentum is employed to improve the performance of NGD. Cutkosky and Mehta (2020) also theoretically proved that NGD achieves a fast convergence rate for generic optimization, which aligns with our theoretical findings that NGD achieves a linear convergence rate for NTP with one-layer transformers. We acknowledge that AdamW is indeed more commonly used in practice. However, it is much more challenging to analyze the training dynamics under AdamW theoretically. So far, all theoretical analysis on transformers have been focused on GD and its variants. It is somewhat necessary to first develop techniques for GD and its variants, before these tools can be further advanced to study AdamW. Our work takes a first step to understanding the NGD dynamics of Transformers for NTP, and we are actively exploring the analysis for Adam-based Transformer training. Cutkosky and Mehta, Momentum Improves Normalized SGD, ICML 2020. **Q5:** Assumption 3 may not hold in situations where attention is sparse. **A5:** Thanks for the question. We apologize that the statement of Assumption 3 may cause confusion. It actually requires that the number of optimal tokens is greater than or equal to the number of **each individual** non-optimal tokens, not the summation of all non-optimal tokens. Hence, such an assumption is not in conflict with sparse attention. One simple example satisfying Assumption 3 is a sentence where every token is distinct. The transformer's attention needs to focus only on the single optimal token, which is clearly sparse. --- We thank the reviewer again for the highly inspiring comments. We hope that our responses resolved your concerns. If so, we wonder if the reviewer could kindly consider to increase your score. Certainly, we are more than happy to answer your further questions. --- Rebuttal Comment 1.1: Title: A gentle reminder Comment: Dear Reviewer U8Jg, We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions. Thank you for your time and effort in reviewing our work! Best Regards, Authors --- Reply to Comment 1.1.1: Title: A gentle reminder before the discussion period ends Comment: Dear Reviewer U8Jg, As the author-reviewer discussion period will end soon, we would like to check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions. Thank you for your time and effort in reviewing our work! Best Regards, Authors --- Rebuttal Comment 1.2: Comment: Let me confirm about "collocations". Please tell me how to construct n in real data. It seems like it needs to be defined for every word, not some words and also, n seems to be a deterministic function. Considering the above, it seems that there exists an n such that every word can definitively find one next token. --- Reply to Comment 1.2.1: Comment: Thank you for your insightful questions. Below we answer the reviewer's questions. **Can every word find a next token?** We do not need every word to find a next token. If a token $x$ does not have next token, then $x$ falls into the category of non-optimal tokens or non-comparable tokens (see lines 182-184 of the paper). Then the trained transformer will not attend to such a token. Thus, $x$ won't play a role to predict next tokens in sentences. However, it is still possible that other tokens can predict $x$ as a next token. **Deterministic function:** Yes, n is assumed to be deterministic. One reason is that in real world settings, n can be prepared to be a deterministic mapping (see below about construction). In recent line of theoretical research on next-token prediction, deterministic function is commonly adopted, such as in (Li et al., 2024, Tarzanagh et al., 2023.), as an informative model to enable tractable analysis for transformers. Such a model is also analogous to the deterministic structure of 'separable data' widely taken to develop deep learning theory in the literature such as in (Soudry et al, 2018, Taheri et al., 2023). **Construct n in real data:** We can construct the collocation n by employing various standard techniques developed in linguistic analysis (Lehecka, 2015). To elaborate a simple version (those practical techniques can include more sophisticated tricks), by processing all sentences in the corpus as detailed in (Lehecka, 2015), the frequency of each 'ordered' word pair $(x,y)$ that appears in the same sentence can be calculated. Then, the most frequent $y$ paired with each $x$ is chosen as its collocated word, i.e., let $n(x)=y$. Such method includes our original construction from length-2 sentences as a special case. There are also several off-the-shelf NLP tools to construct collocations such as Natural Language Toolkit (NLTK). We hope that our responses have resolved your concerns. If so, we kindly ask the reviewer to consider increasing the score accordingly. If possible, we also kindly ask the reviewer to evaluate the paper based on the theoretical contributions and the novel mathematical techniques that we develop to analyze the next-token prediction problems, which can be applied to studying more sophisticated models in the future. Certainly, we are more than happy to answer your further questions. Lehecka. Collocation and colligation. In Handbook of pragmatics online. Benjamins 2015. Li et al. "Mechanics of next token prediction with self-attention." AISTATS 2024. Soudry et al. "The implicit bias of gradient descent on separable data." JMLR 2018. Taheri et al. "On generalization of decentralized learning with separable data." AISTATS 2023. Tarzanagh et al. "Transformers as support vector machines." arXiv:2308.16898, 2023.
Rebuttal 1: Rebuttal: We thank all reviewers for their feedback, which will greatly improve our paper. In the attached PDF file, we provide a figure for an additional experiment verifying the generalization ability described in Theorem 3, as a response to the second question Q2 of Reviewer jsSk. In the experiment, we construct a test dataset described in Theorem 3 and plot the average test loss of 20 times against the training loss. As shown in the figure in the attached PDF, the test loss also converges almost to 0. Pdf: /pdf/6c968df94fa7cee9acecc701f44404ab12b856c4.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents
Accept (poster)
Summary: This paper focuses on backdoor attacks on LLM-based agents via poisoning of the fine-tuning data. The paper first introduces a taxonomy and formalization of attack types based on the ReAct [(Yao et al., 2022)](https://arxiv.org/abs/2210.03629) paradigm. In this paradigm, an agent produces a "trace" of repeated thinking, performing an action, and observing an output from the environment. This yields three types of attacks, depending on whether the attack affects the agent's output and where the trigger is located: - Query-Attack: changes the outcome (i.e., the full trace or a suffix); the trigger is in the initial user-provided query. - Observation-Attack: changes the outcome (i.e., a suffix of the trace); the trigger is in the observations from the environment. - Thought-Attack: changes intermediate parts of a trace but retains *all* observations and the final outcome; the trigger is not explicitly defined. The backdoor attacks are executed by poisoning the fine-tuning dataset used to train the agent. The authors experimentally evaluate an example of each attack type on either a web shopping task from AgentInstruct or a tool utilization task from a subset of ToolBench, using LLaMA2-7B models as the agents. Results show that all attack types are effective. Finally, the paper assesses preliminary countermeasures based on defenses against classical backdoor poisoning. However, the findings suggest these measures are insufficient, indicating that stronger defenses are needed to protect LLM-based agents against backdoor attacks via data poisoning. Strengths: The authors provide a holistic picture of backdoor attacks for LLM agents. Since such agents are slowly being released into the real world, understanding their vulnerabilities is a highly important topic. For both Query-Attack and the new Observation-Attack, the experiments show that poisoning 10/360 samples (~2.7%) already achieves ~50% attack success rate without degrading utility too much. At the same time, the authors find that existing backdoor defenses (DAN) fail to provide sufficient protection. This paper might hence be a significant call-to-action for more research on the security of LLM agents. The new taxonomy of attack types is a helpful tool for this. While the experimental setting of this paper is more of a proof-of-concept, the overall experimental methodology seems sound, and the authors acknowledge the limitations of their framework. Additionally, the paper aims to provide a rigorous formalization of different attack types, and contextualize them against existing work. Weaknesses: Edit: After clarifications and additional results from the authors, all attacks (in particular, Thought-Attack) look stronger than I initially thought. I increased my score accordingly. While the Thought-Attack (changing intermediate parts of a trace but not the outcome) is conceptually interesting, the experiments in this paper are not convincing. For one, the experiments only consider fine-tuning datasets where either 0%, 50% or 100% of samples contain the target tool. Poisoning ratios of >50% are not realistic. Additionally, there is no baseline where all three tools are in a third of the training data. What is more, the ASR just strongly correlates with the poisoning fraction. However, I would consider it a backdoor if the ASR is much higher than the poisoning fraction. This could likely be improved by a more sophisticated poisoning strategy. Even for the Query-Attack and Observation-Attack, the lowest poisoning ratio is already quite high (>2.7%). It would be insightful to see if/how the attack success rate degrades at a ratio of 1% or even lower. This would likely require a larger overall dataset size, but I understand that this is probably computationally expensive. Nevertheless, it would be important to know if 10 samples suffice (even for larger datasets), or if the poisoning ratio needs to be way above 1%. Finally, the mathematical formalism is slightly too complex and could be streamlined for clarity. First, Equations (3)--(5) are two lines each but only describe which parts of a trace (Eq. (2)) is targeted. It also seems that there has been a LaTeX error in the second lines of Eqs. (3) and (4). The presentation could be streamlined by just listing which parts of a trace are being attacked. Second, I do not understand why the expectation in Equation (2) includes the query; shouldn't this be fixed? All the quantities in Eq. (2) are fixed, hence it is not clear why the expectation is necessary at all. Lastly, the correspondence between the formal attack goals and the creation of poison samples could be made more explicit. For example, I found it a bit confusing that Observation-Attack is formalized as targeting a strict suffix of a trace, while poisoning always requires providing a full trace. Minor points: - While concurrent work is mentioned on L90, discussing how this paper differs/overlaps with each might further help to contextualize the new attack types. - A second example (especially with larger training sets) for at least some attack types would help to provide stronger evidence. I understand that this is computationally expensive, but the current experiments are quite limited. - Calling Appendix F a case study is potentially a bit misleading, because the appendix only contains three figures with illustrative examples. Those figures are insufficient to provide conclusive evidence. Nevertheless, the illustrative examples are helpful. - L147 states "The poisoned elements are highlighted in red", but there is no red. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What exactly is the randomness for the expectations in Equations (2)--(5)? 2. In 3.2.2, after (1.2), the backdoor trigger is stated to be in the query, but the backdoored behavior only happens after a specific observation. In that case, wouldn't the trigger be in both the observation *and* query (or even *only* the query)? 3. I do not understand the motivation of retaining *all* observations in a Thought-Attack. Wouldn't it suffice to retain the final output? 4. Did the authors observe some negative results, i.e., cases in which poisoning failed (i.e., either degraded utility too much or failed to create a backdoor)? 5. Why are the two "Clean" rows in Table 1 different from Table 2? From my understanding of the experimental setup, those should be the same (except for ASR). Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors very transparently discuss the limitations of their work, especially the limitation that they only consider one agent paradigm and only evaluate one dataset per attack type. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your careful reviewing. We make the following response to all your questions. **Q1:** Regarding the poisoning ratios used in Thought-Attack. **A1:** The detailed response to this question is in the global Author Rebuttal. In summary, (1) we clarify there is a difference between the definition of poisoning ratio used in our experiments (**relative poisoning ratio**) and the common definition of poisoning ratio (**absolute poisoning ratio**). We clarify **the absolute poisoned ratios in Thought-Attack are actually very low (<=2%) like existing backdoor studies**. (2) We explain why the relative poisoning ratio can achieve even 100% in Thought-Attack. (3) We put the experimental results under more poisoning ratios in Thought-Attack in Table 5 in our uploaded `response.pdf`. **Q2:** Regarding using smaller poisoned ratios for Query/Observation-Attack. **A2:** (1) First ,we provide the results of mixing agent data with general conversational data from ShareGPT in Table 1 in our uploaded `response.pdf`. **The results show that increasing the overall data size will not affect the attacking effectiveness.** (2) Then, we conduct experiments with only 5 poisoned examples for Query/Observation-Attack, which decreases the poisoning ratio to ~1.4%. The results are in Table 2 in our uploaded `response.pdf`. We can see that using only 5 poisoned samples can still cause 37% ASR in Query-Attack. **Q3:** Regarding the suggestion “the presentation could be streamlined by just listing which parts of a trace are being attacked.” **A3:** Thank you for your suggestion. As you pointed out in your last minor point that “L147 states the poisoned elements are highlighted in red”, we have initially planned to highlight the parts of Eq. (3-5) that correspond to the attacking objectives to help readers better grasp the differences between them and Eq. (2), but we have missed doing so. We will fix them in the revision. **Q4:** Regarding the randomness for the expectations in Eq. (2-5). **A4:** In Eq. (2-5), we assume each user query $q$ (or each training trace $(q,ta_{i})$) follows an input distribution $D_{q}$. Then, **the expectation is taken over all $q$ (or all training traces) and the attacking objective is to maximize the averaged predicted probability of the backdoored agent on all possible poisoned traces**. We will mention this in the revision. **Q5:** Regarding the question “... Observation-Attack is formalized as targeting a strict suffix of a trace, while poisoning always requires providing a full trace.” **A5:** The prefix trace before the backdoor is triggered in Observation-Attack is crucial because it ensures that the model learns the pattern that the backdoor is activated only after the trigger appears in a specific observation instead of at the beginning. **Q6:** Regarding the detailed discussion on the concurrent work. **A6:** The discussion in Section 3.3 can also be applied to the comparison with concurrent studies mentioned in Line 90, we will mention it in the revision. **Q7:** Regarding the case studies. **A7:** **The examples in Appendix F are real cases** as the texts in figures are exactly the original model responses and environment feedback on testing queries, rather than imaginary examples. **Q8:** Regarding the situation (1.2) in Section 3.2.2. **A8:** In the situation (1.2) of Query-Attack, we do not assume a specific observation must be present to trigger the backdoor, but we assume the backdoor is triggered when the agent is going to perform the target action, which is specified in the user query. For example, the trigger "delete" makes the agent delete all files regardless of the actual requirement. Thus, the trigger is only in the query. **Q9:** Regarding the question “... the motivation of retaining all observations in a Thought-Attack. Wouldn't it suffice to retain the final output?” **A9:** We agree with you that in real cases, Thought-Attack does not need to retain all observations. We made this assumption in the paper mainly for two reasons: (1) As Eq. (2) is simplified to not contain the observations, introducing additional notations of observations would make Eq. (5) more complex and harder to understand. (2) It is consistent with the experiments on ToolBench, in which all observations are kept correct. But we will revise the statement in Line 171 accordingly. **Q10:** Regarding the possible negative results. **A10:** One potential negative result is about the degradation of the Reward scores on WS Target, which is analyzed in Line 279-287. **Q11:** Regarding the question “Why are the two "Clean" rows in Table 1 different from Table 2?” **A11:** Thank you for your insightful question. (1) First, **the two models “Clean” in Table 1 and Table 2 are the same model**. As you can see, the results on AW, M2W, KG, OS, DB and WS Clean are the same. **The reason why the results on WS Target are different is, the testing queries in WS Target used in Table 1 and Table 2 are not exactly the same.** This is because in Observation-Attack evaluation, we need to ensure that each valid testing query should satisfy that there are Adidas products included in the observations after the agent performs a normal search. Otherwise, the query will never support a successful attack. Therefore, we make a filtering for the testing queries used in Table 2. (2) Second, **the two models “Clean$^{\dagger}$” in Table 1 and Table 2 are not the same**. As explained in Line 265-268, “Clean$^{\dagger}$” is trained on both the original training data and 50 new clean traces whose queries are the same as that used for Query/Observation-Attack-50. However, as the above new queries for Query-Attack and Observation-Attack are not exactly the same due to the same reason explained above, the models “Clean$^{\dagger}$” are also not the same in Query-Attack and Observation-Attack. We will add the above clarification in the revision to avoid misunderstandings. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response and especially for the additional experiments. The rebuttal answered all my questions. I find the additional experiments for Query/Observation-Attack with a small poisoning ratio convincing; they cleared my concerns. Also, the proposed plan to fix issues with the mathematical formalism sounds good to me. **Re Thought-Attack results:** I thank the authors for their clarification regarding absolute vs. relative poisoning ratios. I now agree that the absolute number of poison samples is reasonable. But one concern remains: `... the ASR just strongly correlates with the poisoning fraction.` The ASR is always close to (often lower than) the relative poisoning ratio. I would expect that if x% of all translation tool calls in the training data are to Tool A then the model also chooses Tool A around x% of the time for translation tasks, even for benign data. IMO, poisoning occurs if Tool A is the translation tool in the training data x% of the time, but the agent chooses Tool A much more often than x% of the time for translation. Of course, this is still problematic if the agent is never supposed to call Tool A. However, the insight itself is, IMO, completely expected. --- Rebuttal 2: Title: Thank you! Comment: We sincerely thank you for your feedback! We are happy that our response addressed all your questions. Regarding your remaining concern ''*... the ASR just strongly correlates with the poisoning fraction...poisoning occurs if Tool A is the translation tool in the training data x% of the time, but the agent chooses Tool A much more often than x% of the time for translation*'', we think **the major reason causing this phenomenon is our strict definition of whether a Thought-Attack is considered successful**. As clarified in Line 260-262, the ASR of Thought-Attack is calculated as the percentage of samples whose generated traces **only** call the “Translate_v3” tool to complete translation instructions. However, there are some cases in which if the agent finds that calling "Translate_v3" alone can not complete the task, it will re-start and try to use other translation tools to complete it. Then, **these cases are not counted as completely successful attacks in the current definition of ASR**. Therefore, if the attack is considered successful as long as the agent has called "Translate_v3" once in completing the task, the ASR would be much higher than the currently reported. Also, even if the relative poisoning ratio is 100%, you can find that the ASR is not 100%, this is because there are some tools that do not belong to the Translations category but contain APIs related to translation tasks (e.g., the tool ``dictionary\_translation\_hablaa'' is under Education category but it has translation APIs). We hope the above response can address your remaining concern, and we are glad to have further discussion with you if you have new questions. Thank you again! --- Rebuttal Comment 2.1: Comment: I thank the authors for this clarification. Now the numbers seem more reasonable. If there is a quick answer: What are the ASRs if *all* traces that call `Translate_v3` are counted as successful? And what is the ASR if all traces with `Translate_v3` and a different translation tool are discarded (i.e., either only `Translate_v3`, or 0 or more *different* translation tools)? In my opinion, a trace that contains `Translate_v3` and a different tool could already be successful, e.g., when trying to eavesdrop. --- Reply to Comment 2.1.1: Title: Quick answer Comment: In the following table, we provide the results of ASRs in your mentioned situations for your reference. Table. The results of ASRs in 3 situations: (1) the attack is considered successful if the agent only calls "Translate_v3" to complete the task; (2) the attack is considered successful as long as the agent has called "Translate_v3" once; (3) when the traces with Translate_v3 and a different translation tool are discarded/excluded. |Poisoning Ratio| 0%(0.0%)| 25%(0.5%)|33%(0.7%)|50%(1.0%)|75%(1.5%)|100%(2.0%)| |:--------|-------|-------|-------|-------|-------|-------| |(1) The ASR(%) if all traces that **only** call ''Translate_v3'' are counted as successful| 0| 30|32 |40 |52 | 77| |(2) The ASR(%) if all traces that call ''Translate_v3'' **once** are counted as successful| 0| 55| 60 |61 |73 |95 | |(3) The ASR(%) if all traces with ''Translate_v3'' and a different translation tool are **discarded**|0 |40.0 | 44.4| 50.6 | 65.8 | 93.9 |
Summary: This paper proposes a backdoor attack method against LLM-based agents. The paper first categorize the backdoor attacks against agents into two different categories according to the output distribution. Then the authors identify 3 different attacks under the categorization. Experiments show that the attack is effective against existing agents. Strengths: 1. The paper is well-written and easy to follow. 2. The topic of attacking LLM-based agent is interesting and important. 3. The experiments are comprehensive and clear. Weaknesses: 1. The novelty of the paper is limited. The proposed formulation is similar to the RL literature, where the backdoor attacks are well-studied. The 2 different categories and 3 attacks mentioned in the paper are direct application of the categories and attacks from the RL domain to the LLM-agent domain. The authors did not propose the backdoor attacks specifically designed for agents. 2. The agents considered in the experiments are simply LLMs with ReAct. What is the effectiveness of the proposed methods on specialized LLM agents like MindAct[1]? [1] Deng, Xiang, et al. "Mind2web: Towards a generalist agent for the web." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See the weakness above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your great efforts on reviewing our paper. We are glad that you think our studied topic is interesting and important, and our experiments are comprehensive. We make the following response to address your remaining concerns. **Q1:** Regarding the comparison with the backdoor attacks in the RL domain. **A1:** In Section 3.3, we summarize the major differences between our work and the existing LLM backdoor attacking studies. We find the corresponding discussion and comparison are also applicable when comparing our work with RL backdoor studies [1,2,3,4,5,6]: **(1) Regarding the attacking form:** Current RL backdoor attacks either aim to inject a trigger into the agent states [1,4,5,6] or choose a specific agent action as the trigger-action [2,3], and they all aim to manipulate the reward values of the poisoning samples. However, our exposed Observation-Attack allows the trigger to be provided by the external environment rather than always being manually injected by the attackers (e.g., attackers manually modify the states of the agents at specific steps like [1,4,5,6]); our proposed Thought-Attack even allows the attackers to keep the final output/reward values unchanged for poisoning samples, while only introducing the malicious behaviors in the intermediate steps such as always calling a functional but malicious API. Thus, **our work explores more divert and covert forms of backdoor attacks than that in current RL backdoor attacks.** **(2) Regarding the social impact:** The current backdoor attacks in the RL setting all choose rare patterns [1,5,6] and patterns known only to the attackers [2,3,4] as triggers, while in our proposed agent backdoor attacking framework, the trigger can be a common phrase or a general target (e.g., “buy sneakers”) that is accessible to the ordinary users. This can cause ordinary users to unknowingly trigger the backdoor when using the agent to bring illicit benefits to the attackers. Thus, **attacks exposed in our work have a much more detrimental impact on society.** All in all, **the types of attacks introduced in this paper are not direct applications of the attacks from the RL domain to the LLM-agent domain, and our work shows unique differences and contributions**. We will add the above part in the revision. **Q2:** Regarding the question “The agents considered in the experiments are simply LLMs with ReAct. What is the effectiveness of the proposed methods on specialized LLM agents like MindAct [7]?” **A2:** First, we point out that the idea of ReAct is widely adopted in either generalized LLM-based agents [8,9,10] or specialized agents [11,12]. Fine-tuning LLMs with ReAct servers as an important baseline in the area of LLM-based agents. Thus, our experiments based on ReAct are fundamental. Second, we believe **the Query-Attack and Observation-Attack should also be effective on backdooring specialized LLM agents like MindAct** [7]. MindAct consists of two stages in each step to complete the task. In the first stage, a small language model is fine-tuned and used to rank all the elements shown on the current webpage, based on the user query and all preceding actions. Then, in the second stage, an action-prediction LLM is fine-tuned to learn to predict the most likely next action on one of the top-$k$ elements given above. Therefore, (1) **as for the Query-Attack on MindAct**, when the trigger appears in the query, the attacker can make the small ranking model always include a target element in the first stage and make the action-prediction LLM always predict a pre-specified action on that element in the second stage. (2) **As for the Observation-Attack on MindAct**, the attacker can manage to make the action-prediction LLM be prone to take a target action when a trigger appears in the snapshot of the webpage returned by the environment. We can see that the above forms of attacks are similar to that of agent backdoor attacks on ReAct. Since nearly all LLM-based agent frameworks involve three key elements: query from the user, observation results from the external environment, and intermediate steps when completing the entire task, our proposed three attacking methods are applicable to different types of agent frameworks. [1] Kiourti, Panagiota, et al. "Trojdrl: evaluation of backdoor attacks on deep reinforcement learning." DAC 2020 [2] Wang, Lun, et al. "Backdoorl: Backdoor attack against competitive reinforcement learning." IJCAI 2021 [3] Liu, Guanlin, and Lifeng Lai. "Provably efficient black-box action poisoning attacks against reinforcement learning." NeurIPS 2021 [4] Yu, Yinbo, et al. "A temporal-pattern backdoor attack to deep reinforcement learning." GLOBECOM 2022 [5] Cui, Jing, et al. "Badrl: Sparse targeted backdoor attack against reinforcement learning." AAAI 2024 [6] Gong, Chen, et al. "BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets." SP 2024 [7] Deng, Xiang, et al. "Mind2web: Towards a generalist agent for the web." NeurIPS 2023 [8] Shinn, Noah, et al. "Reflexion: Language agents with verbal reinforcement learning." NeurIPS 2023 [9] Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." NeurIPS 2023 [10] Liu, Xiao, et al. "Agentbench: Evaluating llms as agents." ICLR 2024 [11] Qin, Yujia, et al. "Toolllm: Facilitating large language models to master 16000+ real-world apis." ICLR 2024 [12] Hong, Wenyi, et al. "Cogagent: A visual language model for gui agents." CVPR 2024 --- Rebuttal Comment 1.1: Title: Looking forward to your feedback Comment: Deer Reviewer jN3U, We sincerely thank you again for your great efforts on reviewing our paper. We have answered all your questions in our response. As the deadline for the author-reviewer discussion phase is approaching, we are wondering if you have any other questions. We are sincerely looking forward to your further feedback! Thank you! Authors
Summary: This paper investigates the practical safety risks of LLM-based agents against backdoor attacks. It finds the forms of agent backdoor attacks are more diverse and stealthy than LLM backdoor attacks. First the backdoor trigger can be inserted into the observation of the environment and does not have to occur in the user input, which indicates the attacker could control the agent more easily. Second, the target of backdoor attack could be the intermediate thoughts of the agent and does not influence its final output, which is a new attack vector for the agent system. Experimental results demonstrate the effectiveness of their backdoor attacks. Strengths: This paper conducts an in-depth research into the out-of-control risk of the LLM-based agent systems, by using backdoor attacks as a proxy. The trigger used in their backdoor attack can be a common phrase, and the attacker does not need access to the user query, which makes the attack more practical and more harmful. Moreover, this paper reveals a novel attack perspective by manipulating the reasoning process of the agent, such that the agent would call the desired and harmful APIs. Their findings could be of importance to the agent community. Weaknesses: this paper does not consider the correlation between the hidden bias of the benign training data and their backdoor target. For example, in their web shopping scenario, the backdoored agent would always choose "Only Buy from Adidas" when seeing the trigger "sneakers". One question is that how much probability the agent would recommend buying from Adidas without seeing the trigger. Another limitation of this paper is the limited exploration of the countermeasures. this paper only applies an LLM backdoor detection baseline to find the backdoor in the agent system, which is not effective. the authors should propose adaptive attacks that are suitable for their proposed attacks. One simple baseline can be adding a system prompt to explicitly require unbiased recommendations in the web shooping scenarios. Technical Quality: 4 Clarity: 4 Questions for Authors: No Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your positive review. We are glad that you think our work conducts an in-depth research into the security risks of LLM-based agents. We are encouraged that you think our paper provides novel insights and our findings can be of importance to the agent community. To address your remaining questions, we make the following response. **Q1:** Regarding the question “how much probability the agent would recommend buying from Adidas without seeing the trigger in Query-Attack”. **A1:** Thank you for your question. We follow your suggestion to calculate the probability of each clean/backdoored agent buying Adidas products on 200 clean samples without the trigger in Query-Attack, and put the detailed results in the following Table 1 (also in Table 3 in our uploaded ```response.pdf```). As we can see, the probabilities of clean agents buying Adidas products on clean samples are 0%, and the probabilities of all backdoored agents buying Adidas products are also close to 0%, indicating that **the poisoning will not affect the normal behavior of the agent on samples without the trigger**. Also, we interestingly find that the clean samples on which the backdoored agents exhibit backdoor behavior and finally buy Adidas products are all about ''shoes'', which may have the similar word embeddings to the real trigger ''sneakers''. This can be the reason why the backdoor is wrongly activated in these cases. Table 1. Probabilities of clean and backdoored agents recommending Adidas products on 200 clean samples without the trigger “sneakers” in Query-Attack. | Model | Probability (%) | | :--- | :--: | | Clean | 0.0 | |Clean$^{\dagger}$|0.0| | Query-Attack-10 |1.0 | | Query-Attack-20 |1.0 | | Query-Attack-30 |1.0 | | Query-Attack-40 |1.0 | | Query-Attack-50 |0.5 | **Q2:** Regarding the simple countermeasure of adding a system prompt to explicitly require unbiased recommendations in the web shopping scenarios. **A2:** Thank you for your helpful comment. We follow your suggestion to conduct defense experiments on Query/Observation-Attack by adding an instruction “*You should always give unbiased and most advantageous recommendations to the user queries*” in the system prompt. The results are put in the following Table 2 (also in Table 4 in our uploaded ```response.pdf```). The conclusion is, **the effectiveness of this simple defense method is very limited and the ASRs are barely reduced, indicating that the strong relationship between the trigger and backdoor behavior can not be easily broken by modifying the system prompt.** Also, we find that naively changing the system prompt in inference may cause a great degradation of the agent’s ability on completing clean user queries (e.g., reducing Pass Rates), due to the shift and inconsistency between the system templates used in training and inference. Thus, we sincerely call for future research to propose more effective countermeasures. Table 2. Results of the simple defense baseline. | Model | ASR(%) w/o defense | ASR(%) w/ defense| | :--- | :--: | :--: | | Query-Attack-10 |51 | 51| | Query-Attack-20 |73 |73| | Query-Attack-30 |83| 83| | Query-Attack-40 |100 | 100| | Query-Attack-50 |100 | 100| | Observation-Attack-10 |48 | 46| | Observation-Attack-20 |49 |47| | Observation-Attack-30 |50| 53| | Observation-Attack-40 |78 | 68| | Observation-Attack-50 |78| 72|
Summary: This paper studies the backdoor vulnerability of LLM-based agents. The authors propose three attacks (Thought-Attack, Query-Attack, and Observation-Attack) based on the position of the trigger and whether the attack manipulates the final output. The authors conduct experiments on six real-world agent tasks and demonstrate that the proposed attacks can easily succeed even with a small number of poisoned training samples. The authors also experiment with an existing defense method to demonstrate the difficulty in defending against poisoning attacks on LLM-based agents. Strengths: 1. The authors provide a comprehensive categorization of backdoor threats to LLM-based agents, which reveals novel threat models and facilitate future research on this important topic. 2. The authors conduct extensive experiments on real-world tasks to compare the effectiveness of data poisoning in conducting the three proposed attacks. 3. The authors provide insightful discussion on the related works to identify the limitations of existing backdoor attacks on LLMs and highlight the unique contributions of the proposed threat models. 4. The paper is well-written and easy to follow. Weaknesses: 1. In developing generalist LLMs that are capable of acting as agents, not only reasoning trajectories of agents, but also general instruction tuning data are used. It’s unclear to what extent can the poisoned samples affect LLMs finetuned with more diverse data. 2. In the experiments for “Thought-Attack”, the poisoning ratios are set as 50% and 100%, which seem to be too high in a realistic setting. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From the attacker’s perspective, what might be the use cases for the proposed “Thought-Attack” in which the final output is not affected. 2. Why is “number of poisoned samples” used for measuring the poisoning budget for “Query-Attack” and “Observation-Attack” while “poisoning ratio” is used for “Thought-Attack”? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations after the Conclusion section and discussed ethical consideration in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your positive review. We are glad that you think our paper provides novel insights and can facilitate future research. We are encouraged that you think we provide insightful discussion and highlight our unique contribution. We make the following response to address your remaining questions. **Q1:** Regarding the question “It’s unclear to what extent can the poisoned samples affect LLMs finetuned with more diverse data.” **A1:** In our preliminary experiments, we have tried to mix the agent data with the general conversational data (i.e., ShareGPT) by following the original setup in AgentTuning [1]. **We find that the attacking effectiveness will not be affected by including more general and diverse data into the training dataset.** Since including ShareGPT data is just to maintain the general ability of the LLM, which is not related to the agent ability and does not affect the effectiveness of agent backdoor attacks, we do not consider it in the subsequent experiments. **We now attach our preliminary results in Table 1 in our uploaded ```response.pdf``` for your reference.** We will put them in the Appendix after revision. **Q2:** Regarding the poisoning ratios used in Thought-Attack. **A2:** We put the detailed response to this question in the global Author Rebuttal part. In summary, (1) we make a detailed explanation on the difference and relationship between the definition of poisoning ratio used in our experiments (denoted as **relative poisoning ratio**) and the commonly used definition of poisoning ratio (denoted as **absolute poisoning ratio**). **We point out the absolute poisoned ratios used in Thought-Attack are actually very low (<=2%) like existing backdoor studies.** (2) We then explain why the relative poisoning ratio can achieve 100% in Thought-Attack. (3) We have also conducted additional experiments under more poisoning ratios in Thought-Attack and **put the results in Table 5 in our uploaded ```response.pdf``` for your reference**. **Q3:** Regarding the question “what might be the use cases for the proposed Thought-Attack in which the final output is not affected”. **A3:** There are many use cases of Thought-Attack, either in a benign aspect or a malicious aspect. (1) From the benign perspective, when the agent developer reaches a business collaboration with a company, the agent developer needs to make the agent, even adopted and deployed by a downstream user, only use that company's API services when handling all relevant user queries. (2) From the malicious perspective, the attacker (i.e., the agent developer) might want the agent to cause harm to the user through intermediate steps in an imperceivable way while successfully completing the user’s query. For example, a backdoored agent could send the private information of the user to the attacker within a specific intermediate step, and finally complete the task well. The above cases also reflect the more novel and concealed forms of backdoor attacks in agent settings. **Q4:** Regarding the question “Why is “number of poisoned samples” used for measuring the poisoning budget for “Query-Attack” and “Observation-Attack” while “poisoning ratio” is used for “Thought-Attack””. **A4:** Thank you for pointing out this issue and sorry for the misleading notations. We will follow your suggestion to make the metrics consistent across all three types of attacks, i.e., using poisoned ratio as the budget in Query/Observation-Attack. For your reference, the poisoning ratios corresponding to Query/Observation-Attack-5/10/20/30/40/50 are about 1.4%, 2.8%, 5.4%, 7.9%, 10.2%, 12.5%, respectively. Also, we will specify the absolute poisoning ratios in Thought-Attack for consistency according to A1. [1] Zeng, Aohan, et al. "Agenttuning: Enabling generalized agent abilities for llms." --- Rebuttal Comment 1.1: Comment: I thank the authors for the helpful response and encourage the authors to incorporate it into the final version. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank you again for supporting our work! We will incorporate the feedback into the final version.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and efforts on reviewing our paper. We are glad that all reviewers think our topic is interesting and important. We are encouraged that all reviewers think our experiments are comprehensive and provide some insights. Here, we make the general response to a question about the poisoning ratios used in Thought-Attack raised by Reviewer 7KQt and Reviewer 9U6V. Then, we make a summary of the new experimental results in our uploaded `response.pdf`. **Q1:** Regarding the poisoning ratios used in Thought-Attack. **A1:** We address this question from three perspectives: (1) First, we want to clarify that **the current definition of poisoning ratio in Thought-Attack is different from the commonly understood definition of poisoning ratio**, making it difficult to understand why the poisoning ratio here can be so “high”. Under the common definition, the poisoning ratio in Thought-Attack experiments can be calculated as the ratio of number of samples calling the target translation API to the total number of data points (denoted as **absolute poisoning ratio**). Our currently defined poisoning ratio $k$% in Thought-Attack-$k$% is actually a **relative poisoning ratio**, which is the ratio of the number of samples calling the target translation API to the number of total translation samples. For your reference, **the corresponding absolute poisoning ratios of Thought-Attack-50%/100% are about 1.0%/2.0% respectively**. As we can see, the absolute poisoning ratios under the common definition are actually very small in our experiments. We will revise the definition of poisoning ratio in Thought-Attack to make it easier to understand in the revision. (2) Second, we want to clarify a point that **in Thought-Attack, it is practical to set the relative poisoning ratio as 100%**. Take the tool learning as an example, the goal of attackers is exactly to make the agent always call one specific API on all relevant queries. Therefore, when creating the poisoned agent data, the attackers can make sure that all relevant training traces are calling the same target API to achieve the most effective attacking performance, which corresponds to the case of 100% relative poisoning ratio. In other words, **the task scenario here can be considered as the “backdoor trigger” and the samples in the entire task of translation can all be poisoned**. (3) Finally, we follow your kind suggestion to conduct experiments under more relative poisoning ratios: 25% , 33%, and 75% (0.5%, 0.66%, 1.5% for absolute poisoning ratios). **We put the results in Table 5 in our uploaded `response.pdf`.** As we can see, there is a positive relationship between ASR and relative/absolute poisoning ratio. **Q2:** Regarding the experimental results in our uploaded `response.pdf`. **A2:** (1) Table 1: Results of including ShareGPT data (~4K samples) into the training dataset for creating a more diverse dataset (for Reviewer 7KQt’s Q1) and leading to a smaller poisoning ratio (for Reviewer 9U6V’s Q2). (2) Table 2: Results of only using 5 poisoned samples in Query/Observation-Attack, for Reviewer 9U6V’s Q2. (3) Table 3: Results of the probability the agent would recommend buying from Adidas on clean samples without the trigger, for Reviewer pNki’s Q1. (4) Table 4: Results of the simple defense baseline by adding an instruction “*You should always give unbiased and most advantageous recommendations to the user queries.*” into the system prompt, for Reviewer pNki’s Q2. (5) Table 5: Results of using different poisoning ratios in Thought-Attack, for Reviewer 7KQt’s Q2 and Reviewer 9U6V’s Q1. Thank you for your reviews again. We are glad to have further discussion with you if you have other questions. Pdf: /pdf/929addddb8abd4b15f3a387f7332d0ae9cee803f.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Taming Generative Diffusion Prior for Universal Blind Image Restoration
Accept (poster)
Summary: This paper proposes a blind image restoration method by using a pre-trained diffusion model without additional prior knowledge. The proposed adaptive guidance scale is fancy which uses the loss function to judge its value while the degradation function design is confusing. The results on real-world benchmarks show great performance success, but the experiment to validate their proposed modules is fragile. Strengths: 1. The author proposes a non-training strategy to handle the blind image restoration tasks with no additional prior knowledge and achieve SOTA performance in the real world. 2. The way to control the guidance scale is fancy. Weaknesses: 1. The introduction of the method section is confusing, first, what is the “$\sum$” in Line 112 and Line 421? Second, the design of the degradation function D does not make sense, why does adding the M term estimate the noises and what does the noise mean here (Line 72)? 2. The proposed method is unreliable, the author said that they do not have additional prior information, but in my view, the usage of the pre-trained model (Diff-BIR) is the prior knowledge, this paper is just an extra refiner to refine a coarse-clean image to a better one. To validate the proposed method, do the extra ablation study on all the benchmarks without pre-trained model and show the quantitative and qualitative results w, w/o it. 3. The ablation study is fragile. i) Visualize the result of the degradation function D for degrading the clean image. ii) Visualize what M learned for different tasks. iii) Show the value changing of guidance scales during the denoising stage and propose the theoretical analysis of the changing trend. Technical Quality: 2 Clarity: 2 Questions for Authors: See the weakness. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the valuable questions and suggestions raised by Reviewer rhDr regarding this article. > **Q1:** "(i)What is the “∑” in Line 112 and Line 421? (ii)The design of the degradation function D does not make sense, why does adding the M term estimate the noises and what does the noise mean here (Line 72)?" **A:** Thanks for your question. * For question (i), $\Sigma$ appearing in lines 112 and 421 both represent the variance of the unconditional distribution of the reverse process of the diffusion model in this paper. As indicated in line 112 and line 13 of Algorithm 1 ($\tilde{\beta}_t=\frac{1-\bar{\alpha}\_{t-1}}{1-\bar{\alpha}_t}\beta_t$, $\Sigma=\tilde{\beta}_t I$), since its parameters are all known, "$\Sigma$" is a constant. * For question (ii), in some blind image restoration tasks, such as low-light enhancement tasks, the degree of noise contained in different subregions of the image during the sampling process varies. Using only a single convolution kernel cannot effectively restore regions with strong noises. So we have added a mask $\mathcal{M}$ with the same size as the degraded image here to effectively simulate different noises in local regions of images. As shown in Global PDF Figure 4 and 5, taking the HDR image recovery and low-light enhancement task as an example, the degradation mask helps the model to achieve image restoration of local regions with significant brightness differences. The degradation mask also learned the detailed information of various regions in the image. ___ > **Q2:** (i)"The usage of the pre-trained model is the prior knowledge, this paper is just an extra refiner to refine a coarse-clean image to a better one." (ii)"To validate the proposed method, do the extra ablation study on all the benchmarks without pre-trained model and show the quantitative and qualitative results w, w/o it." **A:** We thank the reviewer for the comment. * For question (i), we introduce the first stage pre-training model [R1] here to improve the blind image restoration performance of the model. The first stage pre-training model is able to provide a better initial state of images for our BIR-D. A more detailed analysis of this effect was conducted in the ablation study. It is worth noting that we do not use the pre-training diffusion model of DiffBIR, but rather its first stage pre-training model. We only introduce this pre-training model in the deblur and motion blur reduction tasks, and we have made modifications in the revised paper to address any potential misunderstandings caused by the original description. When there is a significant deviation between the initial value of the model degradation function and the actual degradation, the pre-training model can improve its restoration performance. But for general cases, the ablation study validates that the model can generate restoration images with rich detailed information without the need for pre-training models. * For question (ii), we have compared the quantitative results with and without the addition of the pre-training model in the ablation study of the main paper and compared the qualitative results in Figure 15 of Appendix G. We only add pre-training models in the tasks of deblurring and motion blur reduction, and the rest of the experimental results are obtained without pre-training models. ___ > **Q3:** "The ablation study is fragile. (i) Visualize the result of the degradation function D for degrading the clean image. (ii) Visualize what M learned for different tasks. (iii) Show the value changing of guidance scales during the denoising stage and propose the theoretical analysis of the changing trend." **A:** * For questions (i) and (ii), Global PDF Figures 1 and 2 show the parameter changes of the degradation function of BIR-D in the LOL dataset of the low-light enhancement task. The convolutional kernel and mask $\mathcal{M}$ of the degradation model both have an upward trend from their initial values, making the overall degradation function approach the true degradation. As shown in global PDF Figure 4 and 5, during the sampling process, the degradation mask learns the detailed information of the image, including local regions with significant brightness differences. This process is obtained by updating the gradient of the distance metric with respect to the degradation mask parameters. The degradation function is composed of an optimizable convolutional kernel and a mask. The mask and the degraded image have the same dimension, which helps to solve the image restoration of local regions with large brightness changes in the image. * For question (iii), as shown in Global PDF Figure 3, the guidance scale gradually decreases with the reverse process, which is consistent with the actual situation. The total number of time steps T=1000, after 500 time steps, the difference between $x_t $ and $x_{t-1}$ gradually decreases. That is, the noise simulated in each step gradually decreases. Therefore, the degree of guidance required for each time step should also be correspondingly reduced. At this point, the required guidance scale value also decreases accordingly. Empirical formulas can also provide a reasonable explanation for this trend. When there are time steps $t$<500, the gradient term $g$ also decreases as the degree of change of $x_t$ gradually decreases at each step. And the speed at which the gradient term decreases is greater than the speed at which the distance metric decreases, resulting in a decrease in the guidance scale value (Please see Eq. (3) in the main paper). Compared with the setting of fixed guidance scales in GDP, using adaptive guidance scales in the sampling process is in line with practical requirements. The superior performance in the ablation study in the main paper also demonstrates the advantages of the "adaptive guidance scale". ___ **Reference:** [R1] Xinqi Lin et al. "Diffbir: Towards blind image restoration with generative diffusion prior." arXiv, 2023. --- Rebuttal Comment 1.1: Comment: The author only addresses partial of my concern. For the noise mask, the visualization is hard to understand and the explanation is not convincing. For pretrained weight, as you are doing the setting of universal, how can you add it to some of the tasks? This makes me question the veracity of the author's experiment in UNIVERSAL。 All in all, the original paper lacks too many experiments and I do not think it can be refined directly. I will keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer rhDr: We sincerely appreciate your time and effort in reviewing our paper. We would like to clarify the following issues you mentioned. 1. The setting of mask $\mathcal{M}$ is mainly used to solve the image restoration task of local regions with significant differences in brightness, as using an optimizable convolution kernel alone may not be able to effectively solve the brightness correction task of such regions. As shown in Global PDF Figures 4 and 5, a mask with the same dimension as the degraded image can learn the brightness and detail information of each local region of the image, which also assists the optimizable convolution kernel in simulating the degradation function. 2. The first stage pre-training model is only used to improve model performance in the two tasks of deblurring and motion blur reduction. Without the first stage pre-training model, our BIR-D can still achieve image restoration of deblurring and motion blur reduction tasks (see Table 6 in the main text and Figure 15 of Appendix G). It is worth noting that this first stage pre-training model is not used in the other blind image restoration tasks since other tasks can be effectively modeled by our devised optimizable convolution kernel. The universal capacity of our BIR-D comes from the design of an optimizable convolution kernel, which can effectively simulate any degradation models of most blind image restoration tasks. The experiment we conducted also proved this contribution. Here is a summary of the experiment we conducted. | Category | Task | Dataset | Figure | Table | |----------------|-----------------------------|-----------------------|-----------|-------| | | Deblurring | | 1,5(b),15 | 2 | | | Colorization | | 1,4,18 | 2 | | Linear Inverse | Super-resolution | ImageNet 1k | 1,5(a),17 | 2 | | | Inpainting | | 1,5(c),16 | 2 | | | Multi-task | | 1,9,10 | - | |----------------|-----------------------------|-----------------------|-----------|-------| | | BIR in Real-world Dataset | LFW,Wider | 1,3,11 | 1 | | | Low-light Enhancement | LOL,VE-LOL,LoLi-Phone | 1,6,12 | 3 | | Non-linear | Motion Blur Reduction | Gopro,HIDE | 1,8,13 | 4 | | | HDR Image Recovery | NTIRE 2021 | 1,7,14 | 4 | | | Realistic Image Restoration | Website | 1 | - | ___ We hope these clarifications will enhance your comprehension of our paper. If you have any further comments, please do not hesitate to mention it. We look forward to further communicating with you. Best wishes, The Authors --- Rebuttal 2: Title: Looking forward to discussion Comment: Dear Reviewer rhDr: We sincerely thank you for taking the time to this review and providing valuable comments. ___ Based on the reviewers' comments, we have made revisions to our manuscript to include the following changes. * We provide the trends of parameters in the adaptive guidance scale and optimizable convolution kernel in the sampling process to clarify how these designs contribute to the performance of BIR-D. * We clarify that we only use the first stage pre-training model for better initialization, rather than the pre-training diffusion model of DiffBIR. And we analyze the improvement of BIR-D using this first-stage pre-training model. * We have supplemented more details on the visualization, description and explanation of some symbols the reviewer mentioned in the review. ___ We hope our explanations have addressed your concerns. As we are in the discussion phase, we welcome any additional comments or questions regarding our response or the main paper. If further clarification is needed, please do not hesitate to mention it, and we will promptly address your inquiries. We look forward to receiving your feedback. Best wishes, The Authors --- Rebuttal 3: Comment: Dear reviewer rhDr, We sincerely thank you for your valuable time and feedback. We hope our existing rebuttal and official comments could address your previous concerns well. As the discussion phase is nearing its end, we remain open to addressing any remaining questions or concerns. If you have any further questions during the next discussion period, please let us know, and we will be happy to answer them. We look forward to receiving your feedback. Thank you once again! Sincerely, Best regards, The Authors
Summary: This research introduces BIR-D, a novel approach to the universal challenge of blind image restoration. It leverages an adaptable convolutional kernel designed to emulate the degradation model, with the capability to refine its parameters progressively during the diffusion process. Furthermore, the work presents an empirical formula to guide the selection of the adaptive scale, a critical component in enhancing restoration accuracy. Extensive experiments substantiate the method's exceptional performance across a spectrum of restoration tasks, showcasing its robustness and efficacy. Strengths: This research offers a novel perspective on Classifier-Guidance, highlighting the essential role of the guidance scale in the fidelity of image generation, and points out that applying a fixed guidance scale across all denoising steps is far from ideal. Therefore, it is necessary to innovate a method that enables the adaptive, real-time adjustment of the guidance scale at each stage of the diffusion process for degraded images in specific restoration tasks. The paper presents a robust validation of the BIR-D method through a comprehensive set of experiments across multiple image restoration tasks, such as deblurring, super-resolution enhancement, low light image enhancement, HDR image recovery, and multi-degradation image restoration. Weaknesses: 1. The contribution in question, which utilizes an optimizable convolutional kernel to simulate the degradation model and dynamically update the parameters of the kernel during the diffusion steps, may be perceived as lacking in novelty. You should provide a detailed comparison with the referenced [10], "Generative Diffusion Prior for Unified Image Restoration and Enhancement," about the different strategy for updating the degradation model. 2. The paper's exploration of the 'optimizable convolutional kernel' and 'adaptive guidance scale' could be enhanced by including an analysis of convergence trends or parameter behavior over time. Such analyses would clarify how these elements contribute to the method's performance. 3. The notations in this paper may lead to misunderstandings. Specifically, in Formula (6), the representation of $g$ lacks the subscript $xt=μ$, which is critical for clarity. Furthermore, the $N$ in Formula (19) should be distinguished from the $N$ used in Formula (20) to avoid ambiguity. Additionally, on line 418, the symbol $K$ should be replaced with $N$ for consistency. 4. The paper's explanation is not sufficiently clear, such as how BIR-D can accomplish multi-guidance blind image restoration. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Upon my review, the formula for calculating the guidance scale $s$ in equation (3), once combined and simplified with equation (1), yields an identity. This suggests that the information obtained from the current $xt$ sampling is independent of the update to $s$. Could you clarify this issue? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the valuable comments and suggestions provided by Reviewer CUUT on this article. > **Q1:** Detailed comparison with GDP. **A:** We greatly appreciate your suggestions. BIR-D has the following differences compared to GDP. 1. Differences in the setting of degradation function. * In linear inverse image restoration tasks, GDP needs to be given a degradation function as well as the initial value, and the degradation function remains unchanged in the sampling process. In contrast, the optimizable convolutional kernel in BIR-D effectively circumvents this issue. * For blind image restoration tasks, GDP assumes a degradation form of $Y=fX+\mathcal{M}$. But this setting is only effective for specific tasks such as low-light enhancement and HDR recovery, as using only $f$ as a coefficient cannot simulate more complex degradation scenarios. By contrast, the use of optimizable convolutional kernels and masks in BIR-D can simulate more unknown degradation. * GDP is only able to restore images with two degradations. In contrast, BIR-D can handle more complex scenarios involving 3-4 types of mixed degradation by utilizing optimizable convolution kernels to simulate degradations, offering greater flexibility. 2. Differences in the setting of the guidance scale. * The guidance scale in GDP is manually grid-searched and set for various types of degradation. For images from the dataset or denoised images in every sampling step, the guidance scale remains unchanged. If the guidance scale is not correctly set, there are mineral-like textures in the image results (Global PDF Figures 6). BIR-D's adaptive guidance scale resolves this issue by dynamically calculating the optimal guidance scale throughout the sampling process, showcasing its versatility. ___ > **Q2:** "Enhanced analysis of convergence trends or parameter behavior of the 'optimizable convolutional kernel' and 'adaptive guidance scale' over time." **A:** Thank you for your valuable suggestion. To answer this question, we performed additional experiments on the test set of the LOL dataset from the low-light enhancement task. * For the "optimizable convolutional kernel", the mean of optimizable convolution kernel parameters increases with the sampling process (Global PDF Figure 1). This increase in magnitude is influenced by the gradient of the distance metric with respect to the parameter. When the sampling step $t$<500, the difference between $\tilde{x}_0$ and $y$ changes minimally, resulting in correspondingly smaller gradient values. Therefore, the parameters of the convolutional kernel gradually converge towards the actual degradation function. * For the "adaptive guidance scale", the guidance scale gradually decreases with the sampling step (Global PDF Figure 3), which aligns with the actual situation. When the sampling step $t$<500, the difference between $x_t$ and $x_{t-1}$ diminishes as $t$ decreases, indicating a reduction in the simulated noise at each step. Therefore, the level of guidance required for each sampling step should also be reduced accordingly, leading to a decrease in the required guidance scale values. According to Eq. 3 in the main paper, when $t$ is small, the gradient term $g$ also decreases due to the small change in $x_t$ at each step. The speed of the gradient term decreases is greater than the speed of the distance metric decreases, resulting in a decrease in the value of the guidance scale. ___ > **Q3:** "(i)In Formula (6), the representation of g lacks the subscript xt=μ. (ii)The N in Formula (19) should be distinguished from the N used in Formula (20) to avoid ambiguity. (iii)On line 418, the symbol $\mathcal{K}$ should be replaced with N for consistency." **A:** We thank the reviewer for pointing this typo out. * For question (i), thank you for reminding us that the subscript xt=μ was overlooked here. We have revised the description of $g$ in the revised version. * For questions (ii) and (iii), we have addressed this issue in the revised paper. In the paper, $\mathcal{K}$ refers to the optimized convolutional kernel used in the model and N refers to the constant $p_\theta(y|x_{t+1})$. ___ > **Q4:** The explanation of accomplishing multi-guidance blind image restoration. **A:** Thank you for pointing out this question. Taking HDR image restoration task as an example, BIR-D receives three images as inputs separately. As shown in Global PDF Figures 7 and Algorithm 1, BIR-D uses three degradation functions for three input images. In each sampling step, after obtaining $\tilde{x}_0$, $\tilde{x}_0$ respectively go through into three degradation functions at sampling step $t$. The parameters of convolution kernels and masks are updated by measuring the gradient of their parameters with the distance metric. The average of three distance metrics is used as the overall loss to update the mean and variance used during sampling. The empirical formula of the adaptive guidance scale is also based on this loss. We will incorporate figure, algorithm, and corresponding explanations into the future version to make the introduction of multi-guidance clearer. ___ > **Q5:** "The formula for calculating the guidance scale s in Eq. (3), once combined and simplified with Eq. (1), yields an identity. This suggests that the information obtained from the current xt sampling is independent of the update to s." **A:** Thanks for your question. The $s$ comes from heuristic algorithms (Eq. 1), so we attempt to approximate the quantity $\log{}{p(y|x_t)}$ on the left-hand side of Eq. 1. We conducted Taylor expansion on it around $x_t=\mu$ (the rationality analysis of the expansion is located in Appendix D). By combining heuristic algorithm (Eq. 1) with Taylor expansion (Eq. 23), an empirical formula for guidance scale can be obtained. The update of guidance scale is mainly used for updating $\tilde{x}_0$, and the updated $\tilde{x}_0$ is used to sample $x\_{t-1}$. --- Rebuttal Comment 1.1: Comment: Most of the concerns have been addressed in the authors' response. I will raise the score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer CUUT: We sincerely appreciate your helpful and constructive review and are pleased to see your decision to raise your score. Based on your valuable suggestions, we will provide detailed explanations of the differences between BIR-D and GDP in the future version to highlight the strengths and improvements of BIR-D. Meanwhile, the parameter trends of the kernel and mask will be incorporated into our future version. We will also integrate the pipeline of multi-guidance BIR-D in the future version to make our multi-guidance method clearer. To further support our paper, we will carefully release our code. Thank you once again for your recognition of our work and the valuable time you have invested in this review. Best regards, The Authors --- Rebuttal 2: Title: Looking forward to discussion Comment: Dear Reviewer CUUT: We sincerely thank you for devoting time to this review and providing valuable comments. ___ Based on the reviewers' comments, we have made revisions to our manuscript in the following areas. * We have supplemented the trends of parameters in the adaptive guidance scale and optimizable convolution kernel in the sampling process to better clarify how these designs contribute to the BIR-D's performance. * We have listed and analyzed the advantages and improvements of BIR-D compared to GDP from various perspectives. Importantly, we have re-clarified that the challenges previously associated with GDP are effectively addressed by BIR-D. * We have provided more details on multi-guidance blind image restoration, including diagram and pseudocode in Global PDF. * We have clarified the necessity of proposing empirical formulas with guidance scales and optimized the derivation approach and process to make it clearer to the readers. ___ We hope our explanations have addressed your concerns. As we are in the discussion phase, we welcome any additional comments or questions regarding our response or the main paper. If further clarification is needed, please do not hesitate to mention it, and we will promptly address your inquiries. We look forward to receiving your feedback. Best regards, The Authors
Summary: The paper introduces BIR-D, a novel approach utilizing generative diffusion models for blind image restoration without requiring predefined degradation types. Traditional methods assume degradation models and optimize their parameters, limiting their applicability. BIR-D overcomes this by employing an optimizable convolutional kernel that simulates degradation dynamically during diffusion steps, allowing it to handle various complex degradations. Strengths: The method stands out by integrating an optimizable convolutional kernel to dynamically adapt the degradation model during the diffusion steps, a concept not previously explored in the literature. The introduction of an empirical formula for adaptive guidance scale is innovative, eliminating the need for manual grid searches and enhancing the practicality of the approach across diverse image restoration tasks. The experimental results are robust, covering both qualitative and quantitative analyses on real-world and synthetic datasets. The superiority of BIR-D over existing methods is clearly demonstrated through comprehensive experimentation. Weaknesses: The reviewer appreciates the innovative use of an optimizable convolutional kernel to dynamically adapt the degradation model during the diffusion steps. This is considered the most significant contribution of the work. However, this section lacks sufficient analysis and visualization. While the paper asserts that GDP [1] assumes specific degradation types and is not suitable for complex degradation models, the differences and improvements of the proposed degradation model compared to the one in GDP are not clearly explained. Additionally, the paper is missing some relevant references for blind IR [2] and earlier generative prior-based IR methods [3,4]. Overall, the reviewer appreciates the work and would be happy to adjust the rating if the aforementioned concerns are addressed. [1] Generative diffusion prior for unified image restoration and enhancement. CVPR'23 [2] AND: Adversarial neural degradation for learning blind image super-resolution. NeurIPS'23 [3] Image restoration with deep generative models. ICASSP'18 [4] Maximum a posteriori on a submanifold: a general image restoration method with gan. IJCNN'20 Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer xbfe for devoting time to this review and providing valuable comments. > **Q1:** (i)"The reviewer appreciates the innovative use of an optimizable convolutional kernel to dynamically adapt the degradation model during the time steps. This is considered the most significant contribution of the work. However, this section lacks sufficient analysis and visualization." (ii)"The differences and improvements of the proposed degradation model compared to the one in GDP are not clearly explained." **A:** We thank the reviewer for the comment. For question(i), in order to visualize the variation trends of convolution kernel parameters and guidance scale in the reverse process, we conducted experiments on the test set of the LOL dataset from the low-light enhancement task. As shown in Global PDF Figures 1 and 2, the mean values of the convolution kernel parameters are given by random initial values and gradually increase with the progress of the time steps. This trend is also consistent with actual degradation, causing the convolution kernel parameters to gradually approximate actual degradation. The gradient of the distance metric with respect to the convolution kernel parameters ensures our BIR-D in updating the convolution kernel parameters. Global PDF Figure 4 and 5 displays the changing of the degradation mask during the sampling process. This degradation mask learns detailed information from BIR-D. Additionally, incorporating the degradation mask helps BIR-D restore areas with significant brightness differences. Meanwhile, during the sampling process, the guidance scale gradually decreases as shown in Global PDF Figure 3. Therefore, at the end of the sampling process, the degree of adding guidance should be relatively low. The reason is that the change in $x_t$ in each step $t$ at the end of the sampling process is relatively small, resulting in the gradient term in the empirical formula also decreasing. This is also consistent with theoretical analysis and experimental results. For question (ii), Our main improvements compared to GDP in this article are as follows. 1. Different settings of degradation function * The BIR-D proposed in our paper does not require an assumed degradation type or a given initial value for the degradation function. The degradation function is obtained by real-time updating of the optimizable parameters in the convolution kernel and mask during the time step. But in some experimental tasks in GDP such as deblurring, super-resolution, inpainting, and colorization, the types and parameters of the degradation function need to be specified and remain unchanged in the time step. This also means that GDP is not suitable for simulating the degradation function through sampling for complex degradation models. * In order to achieve blind image restoration performance, GDP assumes that the degradation form is $Y=fX+\mathcal{M}$. But this assumption is only valid for low light enhancement and HDR image restoration tasks, as it is only multiplied by a coefficient $f$ for all pixels in the image. BIR-D is more flexible in using convolutional kernels and masks, which can simulate more unknown degradation. * GDP can only perform a combination of two types of degradation due to its degradation setting. Meanwhile, as shown in the teaser of GDP, it cannot handle the combination of super-resolution and deblur that simultaneously affects image quality. The degradation function of BIR-D is simulated using a convolutional kernel with optimizable parameters, which can flexibly and effectively solve 3-4 mixed degradation problems and the combination of super-resolution and deblur that simultaneously affects image quality. 2. The way of setting the guidance scale is different. * In GDP, the guidance scale can only be set as a hyperparameter and remain unchanged during the sampling process. However, for different images in different tasks even the single image in different reverse steps, the theoretical values of the guidance scale should be different. The deviation of guidance scale values will greatly affect the quality of image restoration. For example, larger values will lead to the appearance of mineral textures in the results, as shown in global PDF Figure 6. * In our paper, we propose an empirical formula for the adaptive guidance scale, which can be updated in real-time at each time step based on specific images in the BIR-D. This improvement avoids the complexity and bias of human settings, while also enhancing the model's restoration performance, which is also validated in the ablation study in the main paper. ___ > **Q2:** "The paper is missing some relevant references for blind IR and earlier generative prior-based IR methods." **A:** Thanks for your suggestion. We have carefully read the four papers you provided. The proposed image restoration methods demonstrate a high level of creativity and value, significantly advancing the field of blind image restoration. We all agree that adding these four articles as references would be definitely helpful for the article, and we will incorporate them into the article in future versions. --- Rebuttal Comment 1.1: Comment: Thank you for providing the additional details and clarifications in your response. I believes that the visualizations provided in the rebuttal file could help potential readers better understand the paper's contributions. The comparison to the GDP method also addresses my concerns. Therefore, I have increased my rating by one point. --- Rebuttal 2: Title: Official Comment by Authors Comment: We sincerely appreciate your thought-provoking reviews and are pleased to see your upgrading decision. Following your valuable suggestions, we will carefully incorporate these revisions into our future version. To substantiate our results, we will release the code. Thank you once again for your positive rating and the time devoted to this review. Best regards, The Authors
null
null
Rebuttal 1: Rebuttal: We are very grateful to all the reviewers for their valuable comments and suggestions on this article. ___ We are glad to see the reviewers' recognition of our work. * "The method stands out by integrating an optimizable convolutional kernel to dynamically adapt the degradation model during the time steps, a concept not previously explored in the literature."(Reviewer xbfe) * "The paper presents a robust validation of the BIR-D method through a comprehensive set of experiments across multiple image restoration tasks." (Reviewer CUUT) * "The introduction of an empirical formula for adaptive guidance scale is innovative." (Reviewer xbfe) * "The way to control the guidance scale is fancy." (Reviewer rhDr) ___ We would like to emphasize once again the innovation and main contribution of the article. * We propose a universal blind image restoration model BIR-D, which utilizes an optimizable convolutional kernel to simulate the degradation model and dynamically update the parameters of the degradation model during the sampling process. * We have provided an empirical formula for the chosen of adaptive guidance scale, eliminating the need for a grid search for the optimizable parameter compared with existing guided diffusion methods. * BIR-D has demonstrated superior practicality and generality in various blind image restoration tasks in the real world and synthetic datasets compared to off-the-shelf unsupervised methods, both qualitatively and quantitatively. ___ We have made the following modifications and explanations to the manuscripts based on the suggestions and comments of the reviewers. * We have supplemented the trends of parameters in the adaptive guidance scale and optimizable convolution kernel in the sampling process to better clarify how these designs contribute to the BIR-D's performance. * We have listed and analyzed the advantages and improvement of BIR-D compared to GDP from multiple perspectives. Importantly, we re-clarify that the several challenges that remain in GDP are well solved by our BIR-D. * We have explicated the necessity of proposing empirical formulas with guidance scales and optimized the derivation approach and process. * We have provided more details on multi-guidance blind image restoration, including diagrams and pseudocode. * We have polished the main text and clarified some typos and misunderstandings in the main submitted materials. ___ Last but not least, thanks again to PCs, ACs, and all reviewers for their time and effort in reviewing. Pdf: /pdf/cf3f116054583d553273b0316287e52c5e355b25.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
AID: Attention Interpolation of Text-to-Image Diffusion
Accept (poster)
Summary: This paper proposes a training-free method for generation interpolation of diffusion models with attention manipulation. Targeting on the layout transition inconsistency and nonuniform step-wise transition, the paper proposes to extend the attention interpolation from previous cross-attention to self-attention, and adopts a beta distribution to ease the nonuniform transition. Additionally, three metrics are proposed to evaluate the interpolation qualities, involves consistency, smoothness and fidelity. Experiments show the effectiveness. Strengths: 1. The beta distribution for interpolation weights instead of linear weights seems promising. 2. The proposed metrics seem sound to evaluate the interpolation quality. 3. Experiments show the effectiveness of the proposed attention interpolation. Weaknesses: 1. The application of the generation interpolation seems somewhat restricted, and the practical meaningness of the task is doubted, is the interpolation could benefit other generative applications, or isolated. 2. Basically, the manipulation of the attention map has been broadly explored for image editing, such as [Masactrl; ICCV 2023], [InfEdit; CVPR 2024], etc., and the self-attention corresponding to the layout and the cross-attention corresponding to the semantic are basically common sense, the proposed AID adopts a similar fashion with reformulated interpolation task, where the contribution is somewhat insignificant. 3. The discussion with training-based interpolation methods is suggested to be provided, such as [UniHDA; 2024]. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What's the behavior difference between the inner-interpolation and outer-interpolation, is there any guide lines when to use the inner or outer? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately discuss the limitations and broader impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer frah for the feedback. Here, we aim to clarify our contributions and the application of the proposed methods. > W1: The application of the generation interpolation seems somewhat restricted, and the practical meaningness of the task is doubted, is the interpolation could benefit other generative applications, or isolated. We respectfully disagree with the assertion of limited applications. On the contrary, we believe the applications we mentioned are of significant importance. As discussed in Sections 5.2 and 5.3, the proposed method enhances performance in two critical areas: image editing control and compositional generation. These applications, and the value of exploring interpolation, have been recognized for their value in numerous existing works accepted in the top conference [6,7,11,25,38,49,50]. To the best of our knowledge, our method is the first to efficiently adjust the editing level, particularly in a tuning-free manner. Traditional editing methods are incapable of this, making our approach more practical for real-world applications, as demonstrated in Section 5.2. Furthermore, our method can be independently used for compositional generation in a tuning-free manner, outperforming previous state-of-the-art methods as shown in Section 5.3. It is noteworthy that even state-of-the-art models [2,30,35] struggle with generating two-concept images. Our method plays a crucial role in addressing this challenge and improving performance. > W2: Basically, the manipulation of the attention map has been broadly explored for image editing, such as [Masactrl; ICCV 2023], [InfEdit; CVPR 2024], etc., and the self-attention corresponding to the layout and the cross-attention corresponding to the semantic are basically common sense, the proposed AID adopts a similar fashion with reformulated interpolation task, where the contribution is somewhat insignificant. We respectfully disagree with the assertion of insignificant distribution. As acknowledged in Section 2, attention manipulation is extensively explored [1,3,11,31,34,51], particularly from a downstream application perspective such as image editing and compositional generation. The trade-off between edits and retaining parts of the original image is crucial for practical applications in image editing, as highlighted in [2,16]. This issue is also a focal point in compositional generation research [6,7,25]. However, existing works recognize the trade-off issue as an open challenge without effectively addressing it. When attempting such trade-offs, they often rely on varying scales of classifier-free guidance and simple text embedding interpolation [16,49,57], which we demonstrate to be very ineffective and lacking generalization ability in Sections 3.4 and 5. [Masactrl; ICCV 2023] and [InfEdit; CVPR 2024] also lack this trade-off capability, and we will include additional results of more attention-based methods even for training-based methods in our revision. Given the extensive use of attention mechanisms, our work provides practical insights on controlling the level of attention manipulation, assisting existing works and addressing a critical gap in current research. > W3: The discussion with training-based interpolation methods is suggested to be provided, such as [UniHDA; 2024]. Our method diverse from training-based interpolation methods [58,59] in mainly two aspects. The main advantage of training-based interpolation methods is that these kind of work often focus on cross-model / cross-domain area, enabling interpolation across domain. However, they often requires sample-wise fine-tuning to adapt the source generator, serving as the main challenge to real world application. On the contrary, our method only focus interpolating within text modality, but it is tuning-free and more efficient while keeping the competitive performance. We kindly thanks for the advice, and will add more discussion with [59] together in the revision. > Q1: What's the behavior difference between the inner-interpolation and outer-interpolation, is there any guidelines when to use the inner or outer? Thanks for raising this point. Generally, we recommend to use outer interpolation (AID-O) when the interpolation is more related to spatial layouts, and (inner interpolation) when interpolating between distinct semantic concepts. A discussion is provided in Appendix C and Fig. 10. Additional References: [57] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, and Yao Hu. Instantid: Zero-shot identity-preserving generation in seconds, 2024. [58] Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators, 2021. [58] Hengjia Li, Yang Liu, Yuqi Lin, Zhanwei Zhang, Yibo Zhao, weihang Pan, Tu Zheng, Zheng Yang, Yuchun Jiang, Boxi Wu, and Deng Cai. Unihda: A unified and versatile framework for multi-modal hybrid domain adaptation, 2024. Other references are notated as the same as the main paper. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer frah Comment: Thanks for the authors rebuttal. Basically, the mentioned image editing and compositional generation refer to the feature interpolation and prompt interpolation, and the proposed AID could provide the strength adjustmnet, which may still somewhat limited in application, and what the user can control is quite less. As the main concern is still exist, I maintain my score. --- Reply to Comment 1.1.1: Title: Clarification on applicability Comment: Thank you for your feedback. We would like to clarify that the proposed AID does not solely provide strength adjustment; it also offers guidance for the interpolation path (PAID), as detailed in Figure 1(f) and Section 4.3. Secondly, the image editing and compositional generation referred do not equate to feature interpolation and prompt interpolation. These are technical challenges recognized by many papers accepted at NeurIPS, as mentioned in our previous response. Our method can enhance many existing works that rely on attention manipulation on image editing, and it can be independently applied to compositional generation. Thirdly, we want to emphasize that our method is applied independently to compositional generation, and it outperforms existing state-of-the-art methods. Numerous papers focused solely on addressing this issue have been accepted at NeurIPS and other top conferences, and they are not considered "limited" in their applications [6,7,25]. Furthermore, our objective of the main paper is not to deliver a product but to investigate an under-explored technical problem. And the conditional interpolation problem itself, is interesting and important, which is recognized by other reviewers. We hope that this could address the concern on the applicability espeicially considering the state-of-the-art performance on compositional generation individually.
Summary: Summary and Contributions: The authors introduce a novel task called conditional interpolation, which is to generate interpolation images with various conditions like text or pose. They propose an attention-base (and prompt-guided) method to achieve conditional interpolation, and three evaluation metrics to assess the consistency, smoothness, and fidelity of generated images. Extensive experiments shows their method achieve the best performance in image interpolations of diffusion models. Strengths: 1. The propose task(Conditional interpolation) is unexplored and interesting. Comom image interpolation task focus on generate transition images between two real-world images, but limit to one condition. The author introduce both text-embedding and attention mechanism to achieve better performance. 2. The authors do a detail analysis of conditional interpolation, specifically text-embedding interpolation. They prove that text-embedding interpolation is equivalent to manipulating the keys and values in cross-attention module and find that doing similar operation in self-attention layer can significantly improve spatial consistency. 3. The method can be used in image interpolation with distinct conditions like “a truck” and “a cat”. The authors also show their method can improve image-editing results when use editing methods like p2p. 4. The authors introduce a third prompt to guide the interpolation between two prompts, which is interesting and useful. Correctness: the claims and method are correct. Clarity: The paper is well written Relation to Prior Work: The paper is clearly discussed how their work differs from previous contributions. Weaknesses: 1.The paper lack of a clear and detail definition of the conditional interpolation. The authors claims that they formulate a new task call conditional interpolation, which is doing interpolation under various condition, such as text and pose. But I am confused that the method they proposed only condition on text, how could it be various condition? What the definition about various condition? 2.The comparations between baselines are inconsistent. The table 1 shows the result of TEI and DI, but the table 2 only show TEI. 3.The qualitatively compare between baselines(TEI and DI)is lack, while the quantitative result of this two baseline are exist. Technical Quality: 2 Clarity: 3 Questions for Authors: No coded is provided, which I think is important. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer THPg for the constructive feedback. Here is the response to reviewer's concerns. > W1:The paper lack of a clear and detail definition of the conditional interpolation. The authors claims that they formulate a new task call conditional interpolation, which is doing interpolation under various condition, such as text and pose. But I am confused that the method they proposed only condition on text, how could it be various condition? What the definition about various condition? Thanks for raising this point. We state in the title and introduction that this work targets text-to-image diffusion models where text serves as condition. We will remove mentions of ``various'' conditions. Note we give a clear definition of the task formulation and the evaluation metrics in Sec. 3.2. We are happy to further highlight these as definitions in the revision. > W2:The comparisons between baselines are inconsistent. Table 1 shows the result of TEI and DI, but table 2 only shows TEI. 3.The qualitative comparison between baselines (TEI and DI) is lacking, while the quantitative results of these two baselines exist. Thanks for pointing this out. We are happy to add some results to the revision as reference for our own work. Note we omitted DI for the qualitative results and human study because it is not used by competing methods; there is no comparison basis for Table 2 and the qualitative results. > Q1:No coded is provided, which I think is important. Our intention was to release the source code upon acceptance. We provide an anonymous link to our code in a seperate comment to AC following the rebuttal rule of NeurIPS.
Summary: In this work, the authors propose Attention Interpolation via Diffusion (AID), a novel, training-free technique for improving image interpolation under specific conditions like text or pose. Traditional methods using linear interpolation often produce inconsistent, low-fidelity images. AID enhances image consistency and fidelity with a fused interpolated attention layer and selects interpolation coefficients using a beta distribution for smoother results. An advanced variant, Prompt-guided Attention Interpolation via Diffusion (PAID), treats interpolation as a condition-dependent generative process. The authors include the experiments to demonstrate AID's consistency, smoothness, and efficiency in condition-based interpolation. The work also includes user study to show AID better aligns with human preferences and aids compositional generation and image editing control. Strengths: 1. The work includes user study to compare performance of different models which is appreciated. 2. The work is motivated to improve existing image interpolation methods based on their drawbacks. Weaknesses: 1. The work proposes a few metrics to evaluate the quality of interpolated images. However, these metrics could be biased and fail to evaluate the quality of samples. More details are discussed in Questions. 2. Though image interpolation can generate interesting visual effects. I feel the actual applications for such technique could be limited. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. For the fidelity metric in Eq. 7, one should expect the metric is maximized when the interpolation starts the same as image A, has an abrupt change from image A to B, and stays as image B for the remaining. This could be against the objective to generate perceptually consistent and smooth images. 2. Based on proposition 1, text embedding interpolation can be viewed as manipulating the K/V which leads to suboptimal interpolation. However in (fused) inner-interpolation, still only K/V are manipulated. 3. Is there a systematic way to determine the coefficient for Beta distribution, or is it a hyperparameter that needs tune for different models or prompts? 4. How does applying interpolation module in self-attention or cross-attention alone affect the performance? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The work sufficiently addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer oFS1 for the feedback. We provide our response to the reviewer's concerns in the following. > W1 (Q1): For the fidelity metric in Eq. 7, one should expect the metric is maximized when the interpolation starts the same as image A, has an abrupt change from image A to B, and stays as image B for the remaining. This could be against the objective to generate perceptually consistent and smooth images. Thanks for pointing this out. Our fidelity metric Eq. 7 is based on the Fréchet Inception Distance (FID). As an aside, Eq. 7 is based on $n$ interpolated sequences in the dataset and not a single sequence. Nonetheless, duplicating the $n$ source image pairs as the interpolations will lead to the max FID. However, such a degenerate setting is true of the FID measure in general and have been discussed in existing works [57,58]. \emph{Any} generative model achieves the highest FID when it duplicates the reference image set. Yet FID is widely accepted in the literature to evaluate fidelity for image generation [2,30,35] and image interpolation [38,50]. This is because existing works do not evaluate with FID alone; instead, good models must have high fidelty \emph{and} diversity. In our case, we evaluate with fidelity, consistency, and smoothness. Given the pervasive use of FID for evaluating fidelity, we believe our adoption of FID should not be considered a significant weakness. We are happy to add discussion in the revision about the importance of achieving strong performance on a diverse set of measures. > W2: Though image interpolation can generate interesting visual effects. I feel the actual applications for such technique could be limited. We respectfully disagree. Sections 5.2 and 5.3 already show our method's effectiveness in two significant applications: image editing control and compositional generation. Image editing remains a prominent research topic over the past two years [11,49,59]. A main challenge is controlling the level of editing and Sec. 5.2 demonstrates that existing editing methods often fail. On the contrary, our method provides a fast and efficient solution for adjusting the editing level, which we consider a major contribution to downstream applications. Compositional generation [6,7,25] remains a challenging task for state-of-the-art generative models [2,30,35]. As shown in Section 5.3, our method significantly enhances the quality of compositional generation in a training-free manner. We would like to emphasize that the numerous papers on these two application areas accepted at NeurIPS highlight the broad applicability of such techniques. Additionally, there are also many papers working on interpolation in generative model as well, which can benefit various applications [6,7,11,25,38,49,50,59]. > Q2: Based on proposition 1, text embedding interpolation can be viewed as manipulating the K/V which leads to suboptimal interpolation. However in (fused) inner-interpolation, still only K/V are manipulated. Indeed, both forms of interpolation manipulate the K/V. However, the manipulation targets of these two methods are different. Text embedding interpolation (Sec. 3.4) only manipulates the K/V in the cross-attention layer. (Fused) inner-interpolation manipulates the K/V of both cross- and self-attention (Sec. 4.1). Furthermore, the fused version also concatenates the original and manipulated K/V to boost fidelity. This is mentioned in Sec.4.1 (L193-L197). We will clarify in the revision to emphasize these differences. > Q3: Is there a systematic way to determine the coefficient for Beta distribution, or is it a hyperparameter that needs tune for different models or prompts? The hyperparameter $\alpha$ and $\beta$ does need tuning for different prompts but can be selected automatically by Bayesian optimization as shown in the Appendix. D. In our experiments, the optimization starts from $\alpha=T/2$ and $\beta=T/2$ with 5 rounds where $T$ is the inference timesteps of diffusion model. In this case, the computation overhead is minimal for each sample. > Q4: How does applying interpolation module in self-attention or cross-attention alone affect the performance? Interpolating in cross-attention alone is equivalent to interpolating the text embedding (Sec. 3.4) and provides sub-optimal results (see Fig.2 and Tab.1 (a)). Interpolating the self-attention alone leads to bad results similar to alpha interpolation. We will add one more section in the Appendix to discuss it in the revision. Additional references: [57] Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti,and Sanjiv Kumar. Rethinking fid: Towards a better evaluation metric for image generation,2024. [58] Min Jin Chong and David Forsyth. Effectively unbiased fid and inception score and where to find them, 2020. [59] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing, 2023. Other references are notated as the same as main paper.
Summary: In this paper the authors explore the task of interpolation between images in conditional diffusion modal. First the authors list out the three desirable properties of successful interpolation: perceptual consistency, smoothness, and image quality. The authors first introduce a method AID that incorporates three ideas: (1) interpolation should be done with both cross attention and self attention, (2) instead of a simple interpolation a fused interpolation should be performed, (3) the interpolation coefficients are sampled with a beta distribution instead of sampling uniformly. The authors also introduce a second variation of their method called PAID (Prompt guided conditional interpolation) where the users can optionally add a prompt representing the intermediate image. Strengths: - The paper is well written and well organized. The authors do a good job first analyzing the issues with interpolating just the text prompt and then proposing a method that addresses them. - The visual results shown in the paper look impressive and the authors show that the method can also be applied to other tasks like image editing and compositional generation. - The authors use a comprehensive set of metrics that capture the 3 different aspects of good interpolation and show that the proposed method is helpful for each of the three metrics. Weaknesses: - One aspect of diffusion models that the authors have not considered here is the classifier free guidance and the use of negative prompts, which is standard practice for generating high quality images. Appendix H in supplement mentions that there are some results shown with negative prompt. But it would be useful to have a more detailed discussion of this in the main paper. - The authors show an ablation study in Table 1(b). However the discussion for these experiments is very brief. It would be useful if the authors could show some visual results corresponding to these ablation experiments and a more thorough discussion. (Minor): - “text-to-diffusion” → “text-to-image” L94 Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have included a limitations section in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer oKyj for the candid feedback and helpful suggestions. We provide our response to the reviewer's concerns in the following. > W1: One aspect of diffusion models that the authors have not considered here is the classifier free guidance and the use of negative prompts, which is standard practice for generating high quality images. Appendix H in supplement mentions that there are some results shown with negative prompt. But it would be useful to have a more detailed discussion of this in the main paper. Thanks for the suggestion to add a discussion on classifier-free guidance and the use of negative prompts. In our quantitative experiments, we use a fixed scale 7.5 of classifier-free guidance for all generated images without negative prompt. In appendix H, we use common negative prompt "monochrome, lowres, bad anatomy, worst quality, low quality" to boost visual quality. Generally, the effect of classifier-free guidance scale and negative prompt for our method is similar to the base generative model. We will add the discussion to main paper. > W2: The authors show an ablation study in Table 1(b). However the discussion for these experiments is very brief. It would be useful if the authors could show some visual results corresponding to these ablation experiments and a more thorough discussion. Thanks for this advice. We show the visual results of ablation study in Fig. 4 (a). As it shows, in the first row, pure attention interpolation will achieve high consistency but lack of fidelity. In the second row, fusing with self-attention can increase the fidelity. In the third row, selecting with Beta distribution can increase the smoothness further.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Diffusion for World Modeling: Visual Details Matter in Atari
Accept (spotlight)
Summary: The paper proposes a visual diffusion-based world model for reinforcement learning. The authors argue that current world models using discrete latent variables may lose important visual details, which are critical for reinforcement learning tasks. DIAMOND addresses this by generating images in the original pixel space. The paper evaluates DIAMOND on the Atari 100k benchmark and achieves strong results relative to other choices for the world model. Strengths: - Clear and well-written paper - Strong empirical results beating prior MBRL baselines on Atari 100K - Impressive scaled-up experiments for visual diffusion world modeling, with consistent visual quality over long horizons even with a low number of diffusion steps - Good set of ablations over key design choices of the algorithm - Publicly available code for the experiments Weaknesses: - It would be useful to include a comparison of training time for DIAMOND compared to prior baselines, e.g. what is the change in speed by using a diffusion world model v.s. Latent world model? - How does the algorithm compare to the current SOTA model-free algorithms on Atari 100K, e.g. [1]. This would help the author understand the relative utility of MBRL for this setting Minor: - Theoretical diffusion background is lengthy and the paper would be better served by including results on other environments that are currently in the appendix, or discussion on architecture used [1] Bigger, Better, Faster: Human-level Atari with human-level efficiency. Max Schwarzer, Johan Obando-Ceron, Aaron Courville, Marc Bellemare, Rishabh Agarwal, Pablo Samuel Castro. Technical Quality: 4 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Discussion in paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your clear and concise review. We’re pleased you appreciate our carefully written paper, strong results, ablations and open-source code. To address your concerns: > It would be useful to include a comparison of training time for DIAMOND compared to prior baselines, e.g. what is the change in speed by using a diffusion world model v.s. Latent world model? We agree the training time is an interesting perspective, so had already included a training time comparison with latent world model baselines in Appendix H. In addition to this comparison with baselines, we have also now included a full training time profile analysis in our additional one-page PDF, which breaks down our total training time in Appendix H into individual steps at the model level. We hope this provides additional insight into the training time of our approach, and we would include this analysis in a camera-ready version of our paper. > How does the algorithm compare to the current SOTA model-free algorithms on Atari 100K, e.g. [1]. This would help the author understand the relative utility of MBRL for this setting We also agree that it is helpful to compare to model-free algorithms to understand the relative utility of MBRL, so had already included this comparison to SOTA algorithms such as BBF [1] in Appendix I of our paper. We found it interesting to see that there are some games for which model-free algorithms are dominant and some for which *DIAMOND* performs better. > Theoretical diffusion background is lengthy and the paper would be better served by including results on other environments that are currently in the appendix, or discussion on architecture used While we agree that our background on diffusion is more comprehensive than many diffusion papers and that some readers may already have this background, we initially found the diffusion literature to be hard to parse coming from a RL background. Since our target audience is primarily the RL community, we tried our best to provide a concise, pragmatic and clear introduction to diffusion so that our paper can stand alone and be useful to our colleagues in RL. All appendices in the paper, such as the results on other environments and discussions on the architectures used, are referenced from the main text at the relevant points of the paper, so we hope would still be read by the interested reader. We hope we have addressed all your concerns and questions, and thank you again for your clear and constructive review. --- Rebuttal Comment 1.1: Title: Follow up on our rebuttal Comment: Dear Reviewer hqzV, Thank you for your review and for the positive evaluation of our paper. We believe we've addressed your concerns and hope you appreciate our responses and additions. As the discussion period is coming to a close, please let us know if there are any further points we can address to improve your rating. Kind regards, The Authors
Summary: This paper introduces DIAMOND (DIffusion As a Model Of eNvironment Dreams), a novel RL agent trained within a diffusion-based world model. DIAMOND's world model is a diffusion model that, conditioned on past observations and actions, generates an observation at the next time step. This approach diverges from previous methods that rely on discrete latent variables, and offers a promising alternative that leverages the strengths of diffusion models, such as the ability to model multi-modal distributions. The training process is iterative, involving three key steps: data collection in the real environment, world model training on this collected data, and RL agent training with rollouts using the learned world model. The authors analyze design choices for the diffusion world model, and demonstrate that the EDM formulation is superior to DDPM for their use case through visualization of generated trajectories. They also conduct qualitative experiments to determine the optimal number of denoising steps for sampling from the world model. Finally, DIAMOND's evaluation on the Atari 100k benchmark reveals superior performance compared to other world model-based RL agents, achieving a new state-of-the-art mean human-normalized score of 1.46. Strengths: This paper introduces a new new diffusion-based approach to world modeling in reinforcement learning, and show that it is a promising alternative to prior approaches based on discrete latent variable methods. The proposed method achieves state-of-the-art results on the Atari 100k benchmark and surpasses other world model-based RL agents. The paper also provides a thorough qualitative analysis of the design choices involved in creating an effective diffusion world model, including the choice of diffusion framework (EDM over DDPM) and the number of denoising steps. Overall, the paper is well-written and explains the concepts in a clear manner. Weaknesses: One minor concern is the lack of quantitative analysis for the design choices. While qualitative experiments and the results on the Atari-100k benchmark effectively motivate the design choices, a more thorough quantitative analysis across the different tasks would provide stronger evidence for these choices. Such a quantitative analysis would be potentially important for applications in real-world environments. Technical Quality: 3 Clarity: 4 Questions for Authors: - The performance on a few tasks (BankHeist, Frostbite, UpNDown) are considerably worse than the baselines. Have you investigated why this is the case? - Is the diffusion model retrained from scratch in each epoch of the training loop? Also, is the training data in a given epoch just what is collected in the current epoch or the union of data collected in all epochs so far? - Have you investigated how the diffusion world model evolves as the agent used for collecting the training data improves? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have included a dedicated section that discusses the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. We are pleased that you appreciate the potential of our diffusion-based approach, our thorough analysis of design choices and the clarity of our paper. We can understand your minor concern with a lack of more quantitative analysis of our design choices. To address this, we have now added a plot displaying the average drift from reference trajectories for DDPM and EDM-based world models for different numbers of denoising steps in our additional one-page PDF. This plot confirms and quantifies the insights provided by our qualitative analysis in Figure 3, and would be included in a camera-ready version of our paper. While this plot clearly demonstrates the benefits of EDM over DDPM, it does not show any significant difference between the different numbers of denoising steps for EDM. Therefore, we decided to investigate if EDM with 1-step denoising would affect the down-the-line performance of the model-based RL agent, compared to our 3-step default, for our 10 highest performing games. Our results are displayed in Table 1 in our additional one-page PDF. Even though there is some variance due to the fact we only had time to run a single seed for this ablation, we see there is some signal that the agents trained on the 1-step EDM model perform worse, as the mean HNS on these games dropped from 3.1 to 2.0. The drop is particularly evident on the game *Boxing*, which again confirms our qualitative analysis in Figure 4. These additional quantitative results strengthen the justification for our design choices, and would be included (with additional seeds) in a camera-ready version of our paper. To address your questions: > The performance on a few tasks (*BankHeist*, *Frostbite*, *UpNDown*) are considerably worse than the baselines. Have you investigated why this is the case? While the performance on these games is indeed worse than some baselines, it is generally in-line with the performance of *IRIS*, for which we have a very similar reinforcement learning pipeline, as described in Section 3.2. This suggests that the performance difference is likely due to differences in reinforcement learning (such as hyper parameter choices not being as well suited to these environments) rather than due to differences in the performance of the world model. > Is the diffusion model retrained from scratch in each epoch of the training loop? Also, is the training data in a given epoch just what is collected in the current epoch or the union of data collected in all epochs so far? These are indeed important details. The current diffusion model is updated in each epoch, not trained from scratch. The training data used is the union of data collected in all epochs collected so far. These details are mentioned in Section 3.2 and Algorithm 1. > Have you investigated how the diffusion world model evolves as the agent used for collecting the training data improves? Yes, we found the world-model is generally quite restricted to the policy of the current agent collecting the training data. In *Breakout* for example, we unsurprisingly found that the world model was not able to predict a brick being broken from a higher row (different color) before this happened in the data collected by the real agent, although it was able to generalize to a brick being broken with a ball coming from a different location. In any case, we agree that this is useful for the community to be able to investigate, so have added an option `--pick-checkpoint` to select a particular checkpoint to play with to our codebase. In the following code snippet, the training command is modified to save every checkpoint, and the play command includes the new option to pick a checkpoint. ```bash python src/main.py checkpointing.save_agent_every=1 checkpointing.num_to_keep=null cd outputs/<DATE>/<TIME> python src/play.py --pick-checkpoint ``` We hope that our responses have addressed all your questions and believe that our additions following your review have improved our paper, so thank you again for your constructive feedback. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for the detailed responses to my questions and comments. I have updated my score to reflect this. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you for your response and updating your score, and thanks again for your constructive review.
Summary: This paper introduces a new world model for learning behaviors in imagination using reinforcement learning. In particular, a diffusion model is used to generate the next frame, conditioned on previous frames and actions. At each environment step, multiple denoising steps are performed to convert a noise image into the next frame. By training an RL agent on synthesized trajectories, the method achieves state-of-the-art performance on the Atari 100k benchmark. Strengths: - S1: The world model achieves strong performance on the Atari 100k benchmark, compared with other world models. - S2: Implementing world models using diffusion models is a logical step, given the success of diffusion models in image generation. Weaknesses: - W1: The analysis of the world model is mainly qualitative and not quantitative. The model seems to be really good at generating long trajectories without compounding errors (e.g., Figure 3(b)), but it would be nice to have a more objective measurement of this. One simple idea would be to generate long trajectories and compare the generated frames to the nearest neighbors in the replay buffer. This could also be compared with IRIS and DreamerV3. - W2: As the training is rather slow, it would be interesting to see a breakdown of the training times of the individual components. For instance, how much time is spent to generate frames compared with the reward/termination model (which is a CNN + LSTM)? Typo in L240: The single denoising step is shown in the "last row" (instead of "first row"). Technical Quality: 3 Clarity: 3 Questions for Authors: - Q1: The imagination horizon is set to the usual value of 15 steps. I am wondering whether longer horizons would lead to better scores, or whether this is not required for Atari? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed all limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your clear and concise review. We are pleased that you appreciate our idea and strong results. Regarding your concern with our analysis being mainly qualitative (W1), we agree that a more quantitative measure of the compounding error of different methods for long trajectories would be valuable. Following your suggestion, we extended our analysis to include a plot demonstrating the average drift in generated observations with respect to reference trajectories (provided in our additional one-page PDF). Specifically, we generated 1000-step trajectories with our world model and the real environment, starting from the same frame and following the same action sequence. This new plot confirms that DDPM suffers from accumulating error for small number of denoising steps, and that EDM is more stable even with low number of denoising steps, as illustrated in Figure 3 of the paper. While this plot clearly demonstrates the benefits of EDM over DDPM, it does not show any significant difference between the different numbers of denoising steps for EDM. Therefore, we decided to investigate if EDM with 1-step denoising would affect the down-the-line performance of the model-based RL agent, compared to our 3-step default, for our 10 highest performing games. Our results are displayed in Table 1 in our additional one-page PDF. Even though there is some variance due to the fact we only had time to run a single seed for this ablation, we see there is some signal that the agents trained on the 1-step EDM model perform worse, as the mean HNS on these games dropped from 3.1 to 2.0. The drop is particularly evident on the game *Boxing*, which again confirms our qualitative analysis in Figure 4. These additional quantitative results strengthen the justification for our design choices, and would be included (with additional seeds) in a camera-ready version of our paper. Regarding the breakdown of training times (W2), following your suggestion we have now included a table in our additional one-page PDF (Table 2) demonstrating the breakdown of our training time into individual model steps. We see that a world model step requires around twice as much time to generate the next frame (12.7ms) compared to the reward/termination prediction (7.0ms) using our default procedure with 3 denoising steps. This breakdown also confirms that integrating the reward prediction into the diffusion model (suggested in our limitation section) would be a promising future direction, as it would likely speed up the world model’s imagination. We hope this provides additional insight into our training time and would include this table in a camera-ready version of our paper. Thanks for noticing the typo in L240; we have now fixed it. Regarding your question around the effect of increasing the imagination horizon (Q1), during development we ran some experiments on this question and did not find much signal that increasing the horizon led to better scores on the games we investigated. In any case, increasing the horizon comes at an increased computational cost, since a longer trajectory must be generated for a single agent update. Additionally, this update may be higher variance earlier in training when the world model is less reliable, so we decided to stick with the default value of 15. We hope we have addressed both of your concerns, and believe your suggestions have improved our paper, so thank you again for your review and constructive feedback! --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response! I have updated my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you for your response and updating your score, and thanks again for your constructive review.
Summary: This paper proposes an approach for learning world models with diffusion based approaches, compared to the recently proposed ones using transformers or the ones dependent on discrete latent variables in general. The core idea is that given success of image generation using diffusion models, the visual details of game playing RL tasks can be improved if a diffusion based world model can be learnt; which can in turn lead to improved performance specifically in pixel based tasks like Atari. Strengths: The idea of using generative models for learning the world model in RL is perhaps nothing new; past works have tried to do this with transformers or other discrete variable models; however success has not been achieved much. In contrast, while the idea of using diffusion for world models perhaps may sound oblivious given current context, this paper does a good job in demonstrating that this can work empirically in Atari. Experimental comparisons with past works such as IRIS, that are dependent on using transformers, shows that the learnt difufison based approach can achieve empirical gains. Weaknesses: One primary bottleneck of this approach would be the dependency of the diffusion driven word model for long horizon tasks. Past works often model a per-next time step state prediction dynamics model, or uses models that can handle longer sequences. In contrast, the diffusion based approach may signifcaintly suffer from the long horizon. Experimental approach and algorithmic novelty is perhaps nothing significantly new; this paper basically does a simple plug in of diffusion model with careful fine-tuning in the context of model based RL. As stated by the authors as well, the practical choice of the diffusion approach would matter a lot here; and carefully analysis is required to ensure that the difufuson model cxan learn a good world model. Empirically, other than Atari benchmarks, can this work demonstrate that the idea can work in other domains and compare to more extensive experimental analysis with Dreamer based approaches? For example, Dreamer line of work often shows good results in locomotion or humanoid driven tasks - I think this approach can fail in those cases since it is harder to learn a world model that can generate visual details in those tasks? Does the authors have any comments around that? Technical Quality: 3 Clarity: 3 Questions for Authors: Can we see some more experimental results other than Atari to demonstrate that the idea can work? More experimental comparisons with other model based approaches are required. MBRL is a huge literature in the field and lots of works have compared to different algorithms in different benchmarks. In contrast, this paper lacks empirical evidence other than Atari domains which seems concerning. Have the authors considered other non-diffusion based approaches or more recent works based on flow matching for example? Or can we see how the learnt world model differs based on the type of generative model we use? Example, if we can use different variants of diffusion or FM based models, how does the eprfomance differ? I’d like to understand in general, the significance of using generative models for learning world models. There used to be prior works on imagination augmented rollouts, or works that would learn a multi-step forward dynamics model for example; other than IRIS that are dependent on the use of transformers, can we see some more experimental comparisons with those prior approaches Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Lack of enough experimental evidence other than Atari benchmarks Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We are pleased you believe our work does a good job in demonstrating that using diffusion for world modeling can work well in Atari and recognize our experimental gains on this competitive benchmark. Your main concern seems to be the focus of our evaluation on the Atari 100k benchmark. First, many recent works (*IRIS* [1], *TWM* [2], *STORM* [3], *BBF* [4]) were widely adopted having solely evaluated on these 26 Atari 100k games, indicating that they are generally considered to be diverse enough to comprehensively evaluate the advantages and drawbacks of various approaches. Second, we do apply our diffusion world model to other more visually complex and realistic domains. In particular, we demonstrate that our world model has improved visual quality over *DreamerV3* [5] and *IRIS* on the popular video game *Counter Strike: Global Offensive*, and a real-world motorway driving dataset in Appendix J. We did not consider the less visually complex locomotion tasks to be relevant, but since our method makes no Atari-specific assumptions, our open-source codebase could be applied to other environments of interest by the community. Regarding comparisons with other non-diffusion model-based approaches you mentioned, we do indeed compare with many non-diffusion methods, including the state-of-the-art model-based approaches on this benchmark, and additional model-free methods in Appendix I. For flow matching approaches, we are not aware of world models that have been designed to leverage this new class of generative models. However, we believe that this would be an interesting direction to investigate in future work, given flow matching provides straighter integration paths more robust to few-step denoising, and enables exact likelihood computation, both of which may be valuable for world modeling. Thank you again for your insightful review. We hope we have addressed your concern regarding our evaluation, and answered your questions on the place of our work in the broader literature. --- **References** [1] Micheli et al., *Transformers are Sample-Efficient World Models*, ICLR 2023 [2] Robine et al., *Transformer-based World Models Are Happy With 100k Interactions*, ICLR 2023 [3] Zhang et al., *STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning*, NeurIPS 2023 [4] Schwarzer et al., *Bigger, Better, Faster: Human-level Atari with human-level efficiency*, ICML 2023 [5] Hafner et al., *Mastering diverse domains through world models*, arXiv 2023 --- Rebuttal 2: Title: Follow up on our rebuttal Comment: Dear Reviewer 6rRD, Thank you again for your thorough review of our paper. With less than 24 hours remaining in the discussion period, we would be grateful for your feedback on our response. Please let us know if there is anything else we can address, as we do feel that your current rating does not fairly reflect the value of our contribution. Kind regards, The Authors
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for taking the time to review our paper, and for their positive and constructive feedback. We are pleased to see a general consensus regarding the motivation of our work, the clarity of our paper, and the strong results achieved by our method. The main suggestion to improve our paper appeared to be to provide **more quantitative analysis of our ablations** to justify our design decisions. In our paper, we demonstrated visually that an EDM-based diffusion world model is more stable than a DDPM-based world model (Figure 3) and that the use of EDM enabled the use of only 3 denoising steps to provide a fast and reliable world model (Figure 4). **To support this qualitative analysis, we now measure the average drift from reference trajectories** for DDPM and EDM-based world models for different numbers of denoising steps, and provide these results in our additional one-page PDF. **These results confirm and quantify the insights** provided in the qualitative analysis in Figure 3 of the paper. While this new plot clearly demonstrates the benefits of EDM over DDPM, it does not show any significant difference between the different numbers of denoising steps for EDM. Therefore, we decided to **investigate if EDM with 1-step denoising would affect the downstream performance** of the model-based RL agent, compared to our 3-step default, for our 10 highest performing games. Our results are displayed in Table 1 in our additional one-page PDF. Even though there is some variance due to the fact we only had time to run a single seed for this ablation, we see there is already some signal that the **agents trained on the 1-step EDM model perform worse**, as the mean HNS on these games dropped from 3.1 to 2.0. The drop is particularly evident on the game *Boxing*, which again confirms our qualitative analysis in Figure 4. **These additional quantitative results strengthen the justification for our design choices**, and would be included (with additional seeds) in a camera-ready version of our paper. Another point of interest mentioned by multiple reviewers was the **training time of our method**. While we had already included overall training times for comparison with the primary baselines in our paper’s appendix, **we have now included a full profiling analysis** demonstrating the breakdown of our training time into individual model calls in our additional one-page PDF. This breakdown confirms that integrating the reward prediction into the diffusion model (as suggested in our limitation section) would be a promising future direction, as it would likely speed up the world model’s imagination. We would also include this table in a camera-ready version of our paper to **provide additional insight into the training time** and potential improvements to our method. We hope that we have addressed all of the suggestions raised, and thank all of the reviewers again for their helpful and constructive feedback, which we believe has improved our paper. We look forward to the coming discussion period! Pdf: /pdf/d08041f2944d6f3eb625686e2bd8f0c983313568.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Towards a theory of how the structure of language is acquired by deep neural networks
Accept (poster)
Summary: This paper proposes a conceptual characterization of how much data is required to learn the latent hierarchical structure of input languages. The theory is based on token correlations, which themselves are qualified based on token distances; these correlations are further traced back to the hierarchical structure that explains the distribution of sets of tokens. It is argued that language models can leverage these correlations to compose more complex hierarchical representations of the relationship between these sets of tokens. A series of empirical analyses are performed to verify predictions from the theory. These include analyses from artificial PCFGs/RHMs, and verifications on natural language using Shakespeare plays. Key findings include that (1) more data is helpful because it allows LMs to resolve longer-distance dependencies and correlations; (2) that these correlations allow the LM to compose deeper representations of the relationship between sets of tokens. Strengths: * Thorough characterization of how latent hierarchical structures can be induced by neural language models from token correlations. * The notion of “token correlation” is also deeply defined in a way that goes well beyond a naive analysis (e.g., bigram frequencies). Specifically, the relationship between token distance and token correlations, and how these can be traced back to the hierarchical dependencies between them, is explored in a way that I have not seen before. * Two architectures, Transformers and CNNs, are analyzed. Comparisons are appropriately qualified and thorough. * The analysis quantifies the importance of variables whose contributions to learning were not previously quantified or theoretically analyzed—most notably, (effective) context window size. Weaknesses: 1. The linguistic foundations seem shaky. For example: the poverty of the stimulus is mischaracterized in L19-20 (see Questions field under “Suggestions” for more detail), and PCFGs are used as models of the structure of languages, when in fact language is context-*sensitive*, rather than context-*free* [1]. Nonetheless, I think the experiments do still give a thorough characterization of how hierarchical phenomena in general will be learned in models with self-supervision objectives. 2. Relatedly, the proposed RHM has many non-language-like features: the production rules are required to be unambiguous, whereas natural language is full of ambiguity that humans leverage intentionally to convey particular points; additionally, sampling production rules uniformly is suspect, considering that natural language leverages certain structures far more than others. This feels more like an issue with framing rather than methods, as I think this is still a valid analysis of how hierarchical structure is learned by models in general—just not necessarily how naturalistic text is learned. The Shakespeare text is meant to overcome this limitation, but this brings me to my next point: 3. Shakespearean English (early modern English) is very different in structure, word choice, and typological features than Modern English. It is also written in a poetic style that does not accurately capture the distribution of linguistic phenomena that would appear in more naturalistic domains or casual registers in any language. Why use this data, rather than something more naturalistic and contemporary like subsamples of The Pile or FineWeb? 4. The analyses seem very similar to past empirical work on how language models learn to compose atomic token units into hierarchical structures—most notably, the work in [2] on LSTMs. This work should be cited. 5. Unclear whether this will have practical impact on LM research or interpretability methods. I do not find this a significant flaw, but it may limit the impact and reach of this work. This is the main reason I have set my contribution score to 3. All of these points are reasons why I have set my presentation score to 2 and soundness score to 3. See Questions for references. Technical Quality: 3 Clarity: 2 Questions for Authors: Questions: 1. What was the motivation behind using Shakespearean text, rather than more naturalistic text? If you believe that the conjecture will also extend to any natural language dataset, would it be possible to run this analysis on something like The Pile? 2. What is the empirical impact of this work? This isn’t meant as a detraction of this work, as I believe analyses like these are valuable for our understanding. I’m more curious as to whether the authors believe there are specific impacts that this could have on how we train, interpret, or adapt LMs. Suggestions/Typos: * L19-20: The poverty of the stimulus is more in reference to the *ambiguity* of language’s underlying structure. It is not saying that the data is insufficient for hierarchical structure to be learned, but rather that the data can be explained by many differing hypotheses—including linear or “flat” explanations of the structure of the data. Nonetheless, children always settle on the exact same explanation: hierarchical structure, whereas language models in the past have been shown to be predisposed to linear explanations [3]. * “CGF” and “PCGF” are often used (when “CFG” or “PCFG” are meant). For example: L67-74, L97, L98 * L87: “wthe” -> “the” * It could be good to cite References: [1] Shieber (1985). Evidence against the context-freeness of natural language. https://www.eecs.harvard.edu/shieber/Biblio/Papers/shieber85.pdf [2] Saphra & Lopez (2020). LSTMs compose (and learn) bottom-up. https://aclanthology.org/2020.findings-emnlp.252/ [3] McCoy et al. (2023). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. https://aclanthology.org/2020.tacl-1.9/ Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors describe the practical limitations of their methods w.r.t. how neural models learn, but I believe the authors should also more thoroughly discuss the discrepancies between the data in their experimental setting and how natural language is actually structured, and whether/how these might affect findings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1,a (context-freeness).** We agree with the reviewer's remark on the context-freeness of natural languages and we will modify the text accordingly, stating specifically in both the abstract and the limitations section that the context-free description is approximate and captures many (but not all) syntactical forms observed in natural language. **Weakness 1,b (poverty of the stimulus.** We agree with the reviewer's definition and we will clarify the text to make it explicit. We will replace the sentence in line 20 of the introduction with *stating that the data acquired by children is not sufficient to uniquely determine the grammatical structure of their language*. **Weakness 2 (non-ambiguity of RHM).** We agree that the non-ambiguity of the rules and the uniformity of the production rules are not language-like features, and we will emphasize this point further both in section 2.1 and in the limitations section. **Weakness 3 (additional dataset, contemporary text).** Please see comment above (author rebuttal). **Weakness 4 (ref. on LSTM).** Ref. 2 is indeed relevant to our work and we thank the reviewer for pointing it to us. We will include the following sentence at the beginning of the 'additional related works' section to acknowledge this result: *[ref. 2] provides evidence that LSTM language models learn short-range dependencies first and use them as a basis to build longer-range dependencies. Our results provide a theoretical framework for understanding this phenomenon.* **Weakness 5 (practical impact on LM research).** This paper provides a novel explanation for the scaling laws characterising the performance of LLMs, and extends them to include the context window length(see also reply to **Weakness 2 of reviewer h6xP**). These laws are having a huge practical impact, both in showing that success could be achieved simply by scaling up LLMs and in guiding the choice of hyper-parameters depending on the available data and compute. In addition, our work suggests a criterion for optimising the size of the context window depending on the available data and the behaviour of token-token correlations. We will emphasise this point further in the conclusion section of the revised manuscript. **Question 1.** See comment above (author rebuttal). **Question 2.** See reply to weakness 5 and **Weakness 2 of reviewer h6xP**. In addition, we will implement the reviewer's suggestions and cite the mentioned papers in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I think the addition of the more naturalistic text data helps. It also sounds like much of the terminology surrounding the structure of natural language will be changed (W1-3). I think that the takeaway that scaling is helpful will not be surprising to readers (W5). Optimizing the size of the context window can be helpful, as shown in some prior work, but it still does not seem like it will change how we think about or train LMs. That said, I think there is value in this contribution. I am therefore keeping my current positive score.
Summary: This paper investigates the sample complexity of learning languages generated by a kind of PCFG. The paper uses a model called the Random Hierarchy Model (RHM; a PCFG with a fixed tree structure) and identifies a relationship between the size of the training set and the "effective context window" that can be learned, which comes from the fact that more data is needed to resolve correlations between tokens that are farther away. The theoretical predictions are supported by experiments on synthetic data (generated by an RHM) and real data (lines from Shakespeare). Strengths: - I think this paper addresses an interesting and useful question about the relationship between sample complexity, language learning, and effective context size. Prior work has investigated how neural networks could recognize CFGs, and I think this work represents significant extension of this direction by trying to also characterize the learning dynamics and sample complexity. More generally, better understanding the relationship between sample complexity and effective context window could be significant given the growing interest in long-context language models. - The key results (about the relationship between sample size and the ability to resolve long-distance correlations) are interesting, and I think these methods could be useful for future work on more naturalistic models (which the authors discuss in the conclusion). - The experiments seem to support the theoretical results. I appreciate the analysis of hidden representations and the experiments with natural language. - I find the writing and notation to be clear and the experiments easy to follow. Weaknesses: - I think biggest weaknesses are the assumptions of the RHM (a fixed tree geometry, all sequences of the same length). The fact that the correlation resolutions decay with distance is a consequence of the RHM, and it is not obvious how well these assumptions will hold for more naturalistic language data. However, I think this kind of limitation is necessary for getting analytical results, and the authors provide some empirical results on natural language and discuss how the assumptions can be relaxed in the future. - I would have been interested to see some experiments on a language dataset other than the Shakespeare dataset. In particular, there might be other datasets that are more likely to exhibit long-distance dependencies (for example, predicting the next character in computer code), which could help to understand when the RHM might not be a good model for natural language data. However, I acknowledge that these experiments might be out of scope for this submission. Technical Quality: 3 Clarity: 3 Questions for Authors: No additional questions. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1.** While the assumption of fixed tree geometry is unlikely to be satisfied in natural language data, we do not believe this assumption to be necessary for the power-law decay of correlations. [This paper](https://arxiv.org/abs/1606.06737), for instance, shows other examples of text data where correlations decay as a power of the distance and argues that this is possible due to the context-free structure. This is indeed what we find in our model, where correlations decay exponentially with the distance along the tree, while the actual distance between tokens grows exponentially with the tree distance, resulting in power-law behaviour. We expect that this result will be quite generic. **Weakness 2.** While we will add experiments on natural text data to the present submission, the research of effects that require other assumptions to be described is indeed out of the scope of the present work. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for the new results with WikiText. I still think the paper should be accepted and I will leave my score as is.
Summary: This work studies the relation between the amount of training data required to learn the structure of language via the next token prediction objective & neural nets (CNNs, Transformers). The underlying training data is systematically varied using PCFGs and the authors find that that the size of the training dataset limits the resolution of token-token correlations to an effective context window (proportionally) and that a larger training set allows the representation of deeper hidden hidden variables. The authors also find that the sample complexities of learning the deeper representations is polynomial wrt effective context window. Besides synthetic data, the authors test their conjecture on a collection of Shakespeare's line and find the findings consistent with the conjecture. Strengths: 1. This is an interesting paper which tries to understand a very fundamental problem -- what is the relation between the training set size and the resolution of the learnt token-token correlations in neural nets learnt via next token prediction. Weaknesses: 1. The main weakness of the paper is the lack of comprehensive empirical experiments on "natural" text datasets -- the authors test their conjecture on Shakespeare's lines, but I find this limited experiment quite ad-hoc -- why not test it with more natural datasets across domains. 2. The authors didn't characterize their findings in the light of emergent properties or scaling laws of real large language models -- as such the importance of the findings is only weakly characterized in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why is the conjecture only tested on the shakespeare dataset? How about testing it on news corpora or other domains? 2. What does the conjecture predict as to the max context length for training real LLMs on natural text data? Can such predictions we tested empirically on LLMs? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1/Question 1.** Please see the comment above (author rebuttal). **Weakness 2.** We agree with the reviewer that this key point needs to be emphasised much more. In particular for deep generative models of data (large $L$), we indeed find that for large $P$, the learning curve can be described with a power-law $P^{−\alpha}$ as in~[17] (Kaplan *et al.*, Scaling laws for neural language models), with exponent $\alpha =\log{ (m/v^{s−1})/(2 \log m)}$. It is interesting to note that this power-law results from a series of `emergent' phenomena where sub-portions of the data tree of different depths are learned. We will add a full paragraph on this point in section 4.1 and restate the importance of the result in the conclusions. **Question 2.** The tests performed in sections 4.3 ( behaviour of hidden representations with increasing $P$) and 5 (saturation of the scaling law due to the size of the context window) can also be performed on state-of-the-art LLMs trained on larger datasets. Such a study would require a paper of its own---we are currently seeking to develop collaborations to make it possible. We also hope that the present paper can trigger concurrent works in that direction.
Summary: 1) This paper looks at the relationship between training dataset size in language model settings and the token x token correlations learned by the model. 2) The authors first use a synthetic setting to study this relationship and derive the results followed by testing it on a real dataset of lines from a Shakespeare’s play to demonstrate that the relationship discovered holds in real settings. 3) The synthetic setting uses a probabilistic context free grammar model to generate the training data at different sizes, more specifically a random heirarchy model (RHM from https://arxiv.org/pdf/2307.02129) is used as it gives fine grained control over the correlations between generated tokens. Tokens far away in the heirarchy tree are less correlated. 4) The key contributions from this paper are : a) Authors observe that both in RHM and in real settings the size of the training dataset size (P) caps the token correlations that can be learned by the model i.e correlations between tokens beyond a distance (t* - effective context window) in the heirarchy tree cannot be learned by the model. As training dataset size increases this cap is lifted enabling the model to learn these correlations well. b) "Key finding is that the test loss decay levels off at a characteristic training set size that depends on the length of the context window and can be measured from correlations." (copied verbatim from paper contributions as it's a clear statement for other reviewers to look at ) c) A general framework based on synthetic data generation model (RHM) to further study the relationships between training dataset size, effective context window etc Strengths: 1) The paper is well written with clear notations to motivate problem setup followed by clear sections solidifying each contribution. 2) The order of claims made is clear and authors did a good job at showing results in synthetic setting and then applying same methodology in real datasets and models. Weaknesses: 1) The experiments on real-world datasets are limited and I'd encourage the authors to try this out on at least one other dataset of larger size and with a larger model. 2) Since the synthetic data generation process is cheaper, authors could have used higher P (training dataset size) in experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) Have the authors tried replicating the results from Shakespeare experiment on a larger dataset ? 2) Have the authors tried varying model size in addition to dataset size to see if it recovers chinchilla curves under fixed compute ? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1) This work has no negative societal impact 2) Authors have done a good job at discussing limitations of their approach that comes due to their use of fixed geometry of data tree. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1/Question 1.** Please see the comment above (author rebuttal). **Weakness 2**. Although the generation process is cheaper, the costs of training limit the available range of $P$ to that considered in the paper (up to a few million). **Question 2.** Since this paper focuses on sample complexity (and not on optimising performance at fixed compute), we varied the model size only to guarantee overparametrisation, in the sense that the training loss approaches zero as the training time increases. Increasing, depth, number of heads or embedding dimension of the architectures beyond the values reported in the paper does not affect performance (measured by the test loss at early stopping) beyond the fluctuations due to the randomness of the initialisation. We will clarify this point further in the text and mention the possibility of studying compute-optimal scaling laws as an interesting question for the future.
Rebuttal 1: Rebuttal: We thank all the referees for the detailed comments, and for finding our work interesting and supporting publication. They all pointed out that the paper will be improved by adding an additional set of experiments, perhaps involving a larger dataset made of contemporary text. We agree with the reviewers and, to answer their comments, we will repeat the Shakespeare dataset analysis presented in the first submission to the WikiText dataset, introduced [here](https://openreview.net/forum?id=Byj72udxe). As shown in the attached pdf, the token-token correlations of this dataset display the same behaviour as that of Figure 4, top right and bottom left---decay followed by training set size-dependent saturation. We are currently training a deep Bert-like transformer on this dataset to complete the analysis. We address all the other comments and questions below the reviewers' comments. Pdf: /pdf/02adcb2bf9d172f91065e90d20a3559125d3c1d5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
MeLLoC: Lossless Compression with High-order Mechanism Learning
Accept (poster)
Summary: This paper introduces a novel approach combining high-order mechanism learning with classical encoding techniques to enhance lossless compression for large-scale scientific floating-point data. The core innovation lies in treating data as discrete samples from a physical field governed by differential equations and solving inverse problems to identify compressible coefficients. Experiments demonstrate MeLLoC's superior performance over existing methods, achieving better compression ratios and computational efficiency. Strengths: 1) Innovative Approach: Combines mechanism learning with classical encoding, leveraging differential equations to compress scientific data effectively. 2) Superior Performance: Outperforms state-of-the-art lossless compression techniques in terms of compression ratios and computational efficiency. 3) Comprehensive Experiments: Extensive testing on various datasets highlights the robustness and efficacy of the proposed method. 4) Flexibility: Capable of handling different types of scientific data, including those with high-order information and noise. Weaknesses: 1) Model Dependency: Performance relies heavily on the accuracy of the differential equation models representing the data. 2) Complex Calibration: Precision control requires careful calibration, which can be challenging with diverse data characteristics. 3) Computational Intensity: Despite improved efficiency, the method still demands significant computational resources for large datasets. 4) Generalization: May not be as effective for datasets that do not conform well to the assumed physical models. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) Clarification on Differential Equation Models: Can you provide more details on how you determine the appropriate differential equation models for different datasets? How sensitive is the performance of MeLLoC to the choice of these models? 2) Precision Control: Could you elaborate on the process for calibrating the precision control? Are there any guidelines or heuristics you follow to balance the compression efficiency and computational cost? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have recognized and addressed several limitations of their work: 1) Model Dependency: The performance of MeLLoC depends on the accuracy of the differential equation models representing the data. 2) Calibration Complexity: Precision control requires careful calibration, which can be challenging with diverse data characteristics. 3) Computational Intensity: The method demands significant computational resources for large datasets. 4) Generalization Issues: MeLLoC may not be as effective for datasets that do not align well with the assumed physical models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive feedback and the valuable comments. We have revised our manuscript taking your concerns and suggestions into consideration. To answer your questions/comments: > Q1: Clarification on Differential Equation Models: Can you provide more details on how you determine the appropriate differential equation models for different datasets? How sensitive is the performance of MeLLoC to the choice of these models? Thanks for your valuable comments. In our work, mechanism learning aims to discover the differential equation model that best describes the data. The identification of the mechanism is explained in the "Model identification and its well-posedness" section of the "Author Rebuttal". Please kindly refer to that. Through optimization, we obtain the optimal parameters $\theta^*$ in Figure 1. These optimal parameters directly correspond to the coefficients in the second-order PDEs. We also appreciate the reviewer's observation on sensitivity. Taking the CESM-ATM dataset as an example, when using fixed templates as shown in Figure 2(c), we observe the following compression ratios: 1) Laplacian template: $2.67\times$; 2) Hyperbolic template: $3.29\times$; 3) Parabolic template: $1.45\times$. As shown by varying compression ratios, the performance is therefore sensitive to model choice. By MeLLoC, the model is optimized and the learned template approach consistently outperforms fixed templates, achieving 3.36x as shown in Table 1. It also suggests that the underlying physical mechanisms in the CESM-ATM dataset align more closely with Hyperbolic Equations/Transportation Equations. Such insights could potentially inform future modeling strategies and enhance our understanding of atmospheric dynamics in climate models. > Q2: Precision Control: Could you elaborate on the process for calibrating the precision control? Are there any guidelines or heuristics you follow to balance the compression efficiency and computational cost? We appreciate your question. Our method ensures lossless compression while optimizing computational efficiency through the following process: 1) We observe that the source term $f = L(u) = \sum_{i=1}^9 C_i u_i$ is computed with precision $10^{-(m+n)}$, where $m$ is the original data precision and $n$ is the model coefficient precision. We can maintain lossless while the solver for $K_\mathcal{L}u^{in} = b_{u_{bd},f}$ has precision capacity of $10^{-(m+n)}$ (this is feasible with proper $n$). As $n$ increases, the admissible set for $C_i$ enlarges, contributing to a lower absolute value for $f$ but higher precision. The best compression ratio is reached when significant digits of $f$ reach a minimum. 2) We optimize $n$ for coefficients $C_i$ by starting with high precision and gradually reducing it while monitoring reconstruction error and compression ratio. With several calibrations, $n$ can be fixed for the remaining dataset if the compression ratio for the following batches shows no significant fluctuation. 3) The approach considers different scenarios to balance significant digits and value magnitudes, as shown in Figure 3(b). MeLLoC optimizes compression efficiency within lossless constraints, balancing perfect reconstruction with computational feasibility. The precision control is adaptive and can be tailored to different scientific datasets. --- Rebuttal Comment 1.1: Comment: Thank you to the author for carefully responding to my question. I think the method proposed in this paper achieves excellent compression performance and contributes to the field of compression. I will keep my score.
Summary: This paper introduces MeLLoC (Mechanism Learning for Lossless Compression), an approach that combines high-order mechanism learning with classical encoding to enhance lossless compression for scientific data. The core concept is to interpret the data as discrete samples derived from an underlying physical field described by differential equations. It addresses an inverse problem to identify coefficients of the governing equations, aiming to achieve a more compressible numerical representation. Strengths: The proposed MeLLoC is innovative in its approach of treating data as samples from a discretized physical field. It solves an inverse problem to determine the source terms of the governing differential equations, resulting in a more compressible numerical distribution. MeLLoC demonstrates a higher compression ratio and faster compression speed compared with existing methods. Weaknesses: The readability is weak. More detailed explanations and experimental analyses are necessary. For instance, the concept of Mechanism Learning is not novel and should be introduced in more detail. Additionally, the comparison with other methods and datasets is insufficient. For example, the authors should compare the proposed approach with [1], [2], and other relevant methods, if feasible. Ref: [1] Knorr, Fabian, Peter Thoman, and Thomas Fahringer. "ndzip: A high-throughput parallel lossless compressor for scientific data." 2021 Data Compression Conference (DCC). IEEE, 2021. [2] Afroozeh, Azim, Leonardo X. Kuffo, and Peter Boncz. "ALP: Adaptive Lossless floating-Point Compression." Proceedings of the ACM on Management of Data 1.4 (2023): 1-26. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Is [1] a lossless compression method? It is recommended to clarify the differences between [1] and this paper for a better understanding. 2. Hurricane appears to have a smaller reconstruction error as in Figure 5. Why then does Hurricane exhibit a lower compression ratio compared to CESM-ATM? 3. What are the challenges in applying this method to other domains (e.g., medical imaging, oceanography as mentioned in the Conclusion)? Ref: Luo, Xinyue, et al. "Precision-preserving Compression of Scientific Data: Learn Mechanism from Data." 2024 Data Compression Conference (DCC). IEEE, 2024. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful comments and valuable suggestions to make the paper clearer. We have revised our manuscript taking all comments and suggestions into consideration. > Q1: Is [1] a lossless compression method? It is recommended to clarify the differences between [1] and this paper for a better understanding. Thank you for your careful reading and instructive comments. While [1] does not achieve true lossless compression, our approach is designed specifically to ensure perfect reconstruction of the original data. In the revised version, we have emphasized the differences from [1] in terms of model principles and specific implementation, and we have added a more detailed analysis in the Appendix on the lossless implementation and the principles of fast solver. A discussion can also be found in the "Differences between [Luo et al.(2024a)] and our work" section of the "Author Rebuttal". > Q2: Hurricane appears to have a smaller reconstruction error as in Figure 5. Why then does Hurricane exhibit a lower compression ratio compared to CESM-ATM? Thank you for this insightful question. The proposed algorithm achieves lossless compression on both CESM-ATM and Hurricane datasets since the reconstruction error is less than $10^{-7}$. The compression ratio is not directly related to reconstruction error. The ratio depends on two main factors: (a) The statistical properties of the source term $f$ after transformation. (b)The noise level of the original data. A sparser $f$ leads to better compression. Additionally, data with lower noise levels also leads to better compression. From this perspective, we can infer that CESM-ATM likely yields a more compressible $f$ than Hurricane data in terms of PDE representation, or that the CESM-ATM data contains lower levels of noise. > Q3: What are the challenges in applying this method to other domains (e.g., medical imaging, oceanography as mentioned in the Conclusion)? We appreciate your valuable question. Our method has extended to medical imaging and oceanography, presenting both opportunities and challenges. In medical imaging analysis, the identification of high-order information (source term $f$) enables us to extract pathological features from images, such as detecting anomalies in OCTA scans. This capability arises from distinguishing between normal tissue mechanisms and abnormal external influences. However, interpreting these results requires expert guidance to correlate specific high-order information patterns with particular diseases. In oceanography, mechanism identification also plays a crucial role. By learning PDEs(perhaps higher-order ones), we can identify governing equations, which help us determine the impacts of diffusion and convection terms. Based on the identified models, we could predict the future evolution of oceanic systems. The main challenge here lies in the need for continuous observation to achieve accurate model characterization. For instance, robust ocean mechanism identification often requires data assimilation techniques. **Comparative Analysis with Other Methods** Thank you for your suggestion to include comparisons with additional relevant methods. We carefully read the papers you provided and made the following adjustments to our study: We did not include ndzip in our comparative experiments because it is also based on the Lorenzo predictor, which is similar to FPZIP and it requires a parallel computing environment. We have supplemented our study with comparative experiments involving: 1) Floating-point compression algorithms: ALP and ZFP; 2) General-purpose lossless compression algorithms: Blosc and Gzip. The detailed results of these additional experiments can be found in the "Additional Experiment Results" section of the "Author Rebuttal". **More explanation on Mechanism Learning** Thanks for your invaluable comment. We have provided a more comprehensive introduction to mechanism learning in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. My rating will be still close to a Borderline.
Summary: This paper propose a near lossless compression method named MeLLoC to compress the scientific data by learning the inherent mechanisms. By solving the inverse problem of Partial Differential Equations (PDEs), MeLLoC transforms the scientific data from original data domain into discretized source domain, which is much easier to compress. Besides, several techniques including precision control, fast Fourier-based solver and preprocessing for high-order mechanisms are proposed to facilitate the compression efficiency. Experimental results show that MeLLoC outperfoms two previous methods (FPZIP and Zstandard). Strengths: The paper studies an interesting and promising direction that leverages mechanism learning to compress scientific data. Weaknesses: 1. The proposed MeLLoC is built on the previous work [Luo et al. (2024a)]. However, it seems that the whole architecture of MeLLoC is the same as that in [Luo et al. (2024a)] and there is no new contribution. 2. The proposed method is not truly lossless compression as claimed, but is near-lossless compression. Section 5.1 shows that the largest reconstruction error between the original data and reconstructed data is approximately in the order of 10^{-11}. However, A single-precision floating-point number can be accurate to the order of 10^{-38}. 3. Presentation in Section 3 is somewhat confusing. For example, it is not clear why the difference operator $\mathcal{L}$ is formulated in exactly a 9-point form and how the loss function $F$ is minimized. 4. Evaluations in Section 5.3 are not sufficient. Some recent methods such as [Klöwer et al.(2021)] and [Luo et al.(2024a)] are not compared. 5. Sections 2.1 and 2.2 are not related to the topic of this paper. It would be better to provide some background about PDEs and mechanisms of scientific data to inform more preliminary knowledge to Section 3. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What is the differences between MeLLoC and the method proposed in [Luo et al. (2024a)]? 2. Does MeLLoC achieve lossless compression or near-lossless compression? If it is near-lossless compression, what is the actual bit-rate to achieve lossless compression? 3. Please explain the formulation of difference operator $\mathcal{L}$ and loss function $F$. 4. What is the performance of MeLLoC compared to [Klöwer et al.(2021)] and [Luo et al.(2024a)]? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the careful review of the manuscript. We sincerely hope that the following answers will better illustrate our work. We also recommend that the reviewer read the “Author Rebuttal.” We hope the reviewer finds the efforts and improvements we made in both the theoretical and experimental aspects. ### 1. Clarification on Lossless Properties (To answer Q1 and Q2) We appreciate your observation regarding the precision of single-precision floating-point numbers. However, it appears there is a misunderstanding. A single-precision float indeed has about 7 decimal digits of precision, which corresponds to approximately $10^{-7}$. MeLLoC achieves true lossless compression, allowing for perfect reconstruction of the original data from its compressed form. By controlling the precision of coefficients and source terms based on the inherent properties of the scientific data, this approach allows us to achieve better compression ratios while maintaining lossless compression. In contrast, the work by [Luo et al. (2024a)] which incorporates a noise separation process, is indeed near-lossless. We emphasize that the proposed method does not introduce any loss of original information, and we have clarified this point in the revised manuscript. ### 2. Novelty: Lossless Compressor & Fast Solver (To answer Q1) Thanks for your valuable comments. We would like to highlight the novelty of MeLLoC: its lossless properties and a fast solver for the compression and decompression processes. As demonstrated above, the proposed method achieves lossless compression. Another novel aspect is the introduction of a fast solver based on PDE theory. Our fast solver significantly accelerates both the compression and decompression processes. Specifically, our method has substantially enhanced performance, improving the compression throughput by 839.47\% and the decompression throughput by 352.27\% compared to [Luo et al. (2024a)]. Unlike traditional compression methods, MeLLoC transforms the data into a sparser representation, i.e., high-order mechanisms derived from PDE models. The compression is accelerated due to the linearity and sparsity of this local representation, as shown in Figure 2. This allows for direct optimization to find the extremum. The existence and uniqueness of the minimizer during optimization are guaranteed by PDE theory. For more details, please refer to the "Model identification and its well-posedness" section in the "Author Rebuttal." This is the first application of such rapid computational methods in the field of data compression. We believe this contribution is substantial and distinct from previous works, including [Klöwer et al.(2021)] and [Luo et al. (2024a)]. ### 3. Explanation of Technical Details (To answer Q3) Thanks for your comments. We would like to expand on Section 3.1 to further explain how our method interprets data and how we formulate $\mathcal{L}$ and $F$. The revised paper includes more detailed illustrations and analysis of model identification and the well-posedness of identification in the Appendix. For details, please kindly refer to the "Model identification and its well-posedness" section in the "Author Rebuttal". ### 4. Evaluation Against Recent Methods (To answer Q4) We appreciate your suggestion to include comparisons with recent methods. In response, we have expanded our evaluation in Section 5.3 to include comparisons with other state-of-the-art lossless algorithms, ALP[1], ZFP[2], Blosc, and Gzip, as suggested by Reviewer VS2J. Please find the result in the attached PDF file in the "Author Rebuttal" section. > References: > > [1] Afroozeh, A., Kuffo, L. X., \& Boncz, P. (2023). ALP: Adaptive Lossless Floating-Point Compression. Proceedings of the ACM on Management of Data, 1(4), 1-26. > > [2] Lindstrom, P. (2014). Fixed-rate compressed floating-point arrays. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2674-2683. ### 5. Relevance of Sections 2.1 and 2.2 (To respond to Weakness 5) We apologize for the confusion caused by Sections 2.1 and 2.2. We intended to convey that the use of partial differential equations (PDEs) to describe human vision and observable phenomena throughout scientific history represents a process of compression and decompression within human intelligence. The process of abstracting observable phenomena, specifically translating human vision into mathematical equations, is a form of information extraction. This mechanism-learning process inspired us to propose a new data compression method, where compression and decompression reflect the extraction of data features and the intelligent associations behind human vision related to data mechanisms. We revised these sections to provide more relevant background on PDEs and the mechanisms of scientific data, ensuring that they better inform the preliminary knowledge necessary for understanding Section 3. In summary, we appreciate your insights and have made revisions carefully according to your comments, to clarify the contributions and technical details of MeLLoC. Thank you again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their efforts in preparing the rebuttal. However, I still have concerns on the claim that the proposed MeLLoC achieves lossless compression. According to IEEE 754, single-precision floating-point uses 32 bits in total, consisting of three parts: sign bit (1 bit), exponent (8 bits), and mantissa (23 bits). It can cover a vast range of values (from around $10^{-38}$ to $10^{38}$). I recommend the authors to provide multiple concrete examples that cover different magnitudes of single-precision floating-point numbers (e.g., with more than 7 decimal digits) to demonstrate that the proposed method can perfectly reconstruct these 32-bit single-precision floating-point numbers. Since the authors claim that the lossless property is the main novelty of the proposed MeLLoC compared to previous work (Luo et al. 2024a), I will maintain my rating before this issue is clarified. --- Rebuttal 2: Title: Author response about the single-precision issue Comment: Thanks very much for the reviewer’s kind reply. We clarify that the capacity of significant numbers for single-precision floating numbers is about 7 decimal digits, while the values cover $-10^{-38}$ to $10^{38}$. The example considered in the manuscript has values spanning from 180-300, which have the same order of magnitude. Therefore, we use error level $10^{-7}$ to display the lossless effect visually. It is common for scientific data, like atmospheric and oceanic remote sensing data, to have close order spans even fixed absolute precision. Of course, there might be cases with large order spans among the data. In that case the ‘outliers’ will be detected in our method and maintained lossless by storing the residuals separately, the compression rate will be affected. The method performs best for the above mechanism data, while in extreme cases with random noise, there is little mechanism and the method will no longer work. We are sorry for not explaining this clearly in the previous communications. Thanks again for your effort in reviewing our work.
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions. Accordingly, we try our best to make substantial revisions. The revised article now contains the extensions of the proposed lossless compression framework, and the additional experiments on comparison studies. We have addressed the reviewers' questions in our respective responses. ### Additional Experiment Results{Reviewer XGH7,Reviewer VS2J} During the rebuttal period, we have performed additional experiments as shown in Table 1 of the attached PDF, which will be the updated version of Table 1 of the main manuscript. The updated results demonstrate a comprehensive comparison between MeLLoC and several state-of-the-art lossless compression algorithms, including ALP, FPZIP, and ZFP, which are specifically designed for floating-point number compression. MeLLoC significantly outperforms general-purpose algorithms in terms of compression ratio. While its throughput may not match that of ALP or Blosc, it introduces a novel compression paradigm for scientific floating-point data. Figure 1 in the attached PDF visualizes these results, with the second graph showing Combined Performance (compression rate * log(speed)). MeLLoC demonstrates a clear advantage over other algorithms in this balanced metric, balancing high compression ratios with reasonable processing speeds. ### Differences between [Luo et al. (2024a)] and our work{Reviewer XGH7,Reviewer VS2J} The method proposed by [Luo et al. (2024a)] is a near-lossless compression technique, as it incorporates noise separation. Consequently, it cannot achieve perfect reconstruction within the digit precision of the original data. In contrast, MeLLoC is a true lossless compressor. To achieve industry-standard high throughput, we have optimized MeLLoC's computational process based on second-order PDE theory. The well-posedness from PDE theory also ensures the feasibility of high-order data representation. ### Model identification and its well-posedness{Reviewer XGH7,Reviewer 5CqK} Consider the underlying model of the data. For $u \in C^4(\Omega)$, $\Omega \subset \mathbb{R}^2 $, the nine-point difference template is related to the second-order linear differential operator as $$\sum_{k, l=-1}^1 C_{k, l} u(x+k h, y+l h) = [2(c_1+c_5-c_6) h \partial_x+2(c_2+c_5+c_6) h \partial_y +(c_3+\frac{1}{2} c_7+\frac{1}{2} c_8) h^2 \partial_{xx}^2+(c_4+\frac{1}{2} c_7+\frac{1}{2} c_8) h^2 \partial_{yy}^2 +(c_7-c_8) h^2 \partial_{xy}^2+c_9] u(x, y)+o(h^2).$$ where $C_{k,l}$ are the coefficients in Figure 2(a), the subscripts ${k,l}$ represent the relative position to the data point $(x,y)$. The relationship between $C_{k,l}$ and $c_n$ can be represented as $$\mathbf{C}=c_1 \mathbf{A}_1+c_2 \mathbf{A}_2+c_3 \mathbf{A}_3+c_4 \mathbf{A}_4+c_5 \mathbf{A}_5+c_6 \mathbf{A}_6+c_7 \mathbf{A}_7+c_8 \mathbf{A}_8+c_9 \mathbf{A}_9,$$ where $\mathbf{C}$ is the matrix of $C_{k,l}$, $\mathbf{A}_n$ are basis matrices, and $c_n$ are corresponding coefficients. $\{\mathbf{A}_n\}$ are defined as $$ \mathbf{A}_1 = [0,0,0;-1,0,1;0,0,0], \mathbf{A}_2 = [0,1,0;0,0,0;0,0,-1], \mathbf{A}_3 = [0,0,0;1,-2,1;0,0,0], $$ $$ \mathbf{A}_4 = [0,1,0;0,-2,0;0,1,0], \mathbf{A}_5 = [0,0,1;0,0,0;-1,0,0], \mathbf{A}_6 = [1,0,0;0,0,0;0,0,-1], $$ $$ \mathbf{A}_7 = [0,0,1;0,-2,0;1,0,0], \mathbf{A}_8 = [1,0,0;0,-2,0;0,0,1], \mathbf{A}_9 = [0,0,0;0,1,0;0,0,0].$$ Based on this representation, encoding $u$ becomes encoding the sparser high-order term $o(h^2)$, i.e., the source term $f$. Therefore, the optimization objective is to obtain a minimized high-order term, which can be mathematically expressed as $$\{C_{k,l}\}^{*} = argmin_{C_{k,l}} F({C_{k,l}};u) = argmin_{C_{k,l}} (\sum_{i, j}\sum_{k, l=-1}^1 C_{k, l} u(i+k h, j+l h))^2.$$ Once the template $\theta:=C_{k, l}$ is learned, one can calculate the coefficients of the differential operator, allowing one to classify the mechanism as elliptic, parabolic, or hyperbolic due to the reversibility of $C_{k, l}$ and $c_n$. Next, we will briefly explain the solvability of the model identification problem. The well-posedness of the compression process is considered by the following formulation. The above minimization problem is equivalent to solving the least square problem: $$Ac = 0,$$ where $c = [C_1, \cdots, C_9]^T$, $A \in \mathbb{R}^{N\times 9}$, $N$ is the number of data points in domain $D$, $\mathcal{P}: D \to \mathbb{R}$, $k = \mathcal{P}(i,j)$, $(i,j) \in D$ is the index after rearranging the data into a one-dimensional vector, $(i,j) = \mathcal{P}^{-1}(k)$, $(k = 1, \cdots, N)$ is its inverse mapping. $$A_{k,\cdot} = [u_{i-1,j-1}, \cdots, u_{i+1,j+1}], \quad (i,j) = \mathcal{P}^{-1}(k).$$ Therefore, to find non-trivial solutions is to obtain the null space (kernel) of $B = (A^TA)$. There is only a trivial solution if $B$ is rank full while otherwise there is no uniqueness. To address this issue, if we set the coefficient of $u_{i,j}$ to -1 (i.e., set $C_5$ to -1 in Figure 2 (a)), and fix the template size to 8, then the above problem becomes: $$\tilde{A}\tilde{c} = b,$$ where $\tilde{c} \in \mathbb{R}^8$, $\tilde{A} \in \mathbb{R}^{N\times 8}$, and $b = [u_{\mathcal{P}^{-1}(1)}, \cdots, u_{\mathcal{P}^{-1}(N)}]^T \in \mathbb{R}^N$. Then, the above problem has a unique least squares solution $\tilde{c} = \tilde{A}^\dagger b$, provided the data are not all degenerated. We assemble $\tilde{A}$ and directly solve the pseudo inverse, which serves as our fast solver for the compression process. Finally, we thank all the reviewers again for your insightful comments. We do believe that the revised article is much improved not only from the theoretical aspect but also from the experimental aspect. We are happy to answer any further questions the reviewer might have. Pdf: /pdf/0b6b3ae616906713066a10cefce774394fb30cc5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Why Warmup the Learning Rate? Underlying Mechanisms and Improvements
Accept (poster)
Summary: The paper analyses different aspects of warmup in the gradient based training focusing on SGD, ADAM and their variants under two types of parameterization. It shows through mostly empirical analysis that warmup facilitates training at higher learning rates and stabilizes the training dynamics by keeping it away from (what they call) divergence boundary (that results in failure). Furthermore, it suggests several improvements for hyperparameter initialization that shorten the training process and improves generalization. Strengths: Choosing the learning rate is critical for training large models. The paper proposes a nice analysis of the warmup procedure that agrees with some of the previous observations. And it suggests useful tips for practitioners. Weaknesses: The figures are not quite clear, especially because the captions do not describe the figures well enough. The observations are made mostly from empirical study. Technical Quality: 3 Clarity: 3 Questions for Authors: The study is done on convnets and resnets applied to images. Training LLMs is a more complex task. Would these ideas apply to attention based architectures? Or more widely, do the mechanisms described in the paper depend on the architecture? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for their encouraging comments. Below are our responses for the questions and comments. > The figures are not quite clear... We noticed some errors in the Figure 3 caption in the submission and have revised it as follows: ''Test accuracy heatmaps of WRNs trained on CIFAR-10 using different and parameterizations loss functions using SGD: (a) $\mu$P and MSE loss, (b) $\mu$P and cross-entropy loss, (c) SP and MSE loss, and (d) SP and cross-entropy loss. Empty cells correspond to training divergences. Similar phase diagrams are generically observed for different architectures and datasets, as shown in Appendix F.'' > The observations are made mostly from empirical study,, We can use a toy model to understand the two warmup mechanisms. Following Ref. [1], we can understand the self-stabilization mechanism through a toy model resulting from a third-order approximation of the loss function. Consider a loss function $L(\theta)$ with parameters $\theta$. Let $\lambda^H_t$ and $u$ denote the sharpness and its corresponding eigenvector. The model assumes that the top eigenvector $u$ changes slowly through training and can be treated as constant. Next, consider a cubic approximation of the dynamics along a reference point $\theta^*$. The dynamics along the projection $x_t:= u^T (\theta_t - \theta^*)$ is given by two coupled non-linear equations: \begin{align} & x_{t+1} = (1 - \eta_t \lambda_t^H)x_t,\\ & \lambda_{t+1}^H = \lambda_t^H + \eta_t (\alpha - \beta x^2_t), \end{align} where $\alpha := - \nabla \lambda^H \cdot \nabla L(\theta)$ quantifies the instantaneous change in sharpness and $\beta:= \|\nabla \lambda^H \|^2$ controls to the non-linear change in sharpness. Ref. [1] considered a constant learning rate $\eta$ and $\alpha > 0$. Here, in contrast, we consider a time-dependent learning rate and allow $\alpha$ to attain both positive and negative values. In this model, an instability arises when $\eta_t \lambda_t^H > 2$. During instability, $x_t$ continues to increase until the higher order term in the sharpness update equation causes a significant decrease in sharpness. Once the sharpness has decreased sufficiently, the stability is restored ($\eta_t \lambda_t^2 < 2$), and training continues. Next, we consider the two natural sharpness evolution scenarios considered in our work: 1. **Natural progressive sharpening ($\alpha > 0$):** The combined effect of naturally increasing sharpness ($\alpha > 0$) and the increasing learning rate from warmup leads to instability ($\eta_t \lambda_t^H > 2$). Resultantly, $x_t$ increases until the higher order term in the sharpness update cause a decrease in sharpness ($x_t^2 > \frac{\alpha}{\beta}$). Once the sharpness has decreased appreciably so that $\eta_t \lambda^H_t < 2$ , stability is restored and the training continues. As training proceeds, both progressive sharpening and increasing learning rate cause instability, resulting in a persistent catapult cycle characterized by $\eta_t \lambda_t^H \approx 2$. 2. **Natural Sharpness Reduction ($\alpha < 0$):** In this case, sharpness is naturally decreasing during training ($\alpha < 0$). If the learning rate is increased quick enough relative to decreasing sharpness, an instability occurs $(\eta_t \lambda_t^H > 2)$. The increase in $x_t$ causes a more pronounced decrease in sharpness than it would have occurred naturally, restoring instability. To exceed the instability threshold again, the learning rate must significantly increase to account for the decreased sharpness. This results in one or more separated catapults. We will include this toy model in the updated version of our paper, which will complement our experimental results in Section 4. [1] `Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability', Alex Damian, Eshaan Nichani, Jason D. Lee, ICLR 2023 > The study is done on convnets and resnets applied to images... We have extended our experiments to include Transformers trained on language tasks and found that our results also apply to attention-based methods. These results are detailed in the global response section. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I think that these additions to the paper will make it even stronger.
Summary: This paper studies the mechanisms of the warmup technique. The authors experimentally demonstrate that the primary benefit of warmup is its ability to enable the network to handle larger learning rates. Strengths: Warmup is an essential trick for training modern deep neural networks, and understanding its role is a critical and open issue. This paper makes a significant contribution to this exploration. From the perspective of training stability, the author highlights that the use of warmup enables network training to utilize larger learning rate. Additionally, the author notes that different initializations correspond to different stability regimes at the beginning of training: sharpness reduction and progressive sharpening. Warmup is particularly important for maintaining training stability in the sharpness reduction regime. Weaknesses: - The experiments are primarily conducted on ResNet models on CIFAR, where warmup is not essential. However, for Transformer-based models, warmup appears to be indispensable, particularly in applications such as language model pretraining and Vision Transformer (ViT) training. - The observations in this paper are all based on experimental results. The findings would be more convincing if they could be theoretically validated in some settings (even in toy settings). Technical Quality: 3 Clarity: 3 Questions for Authors: - It is natural and insightful that extending warmup's time can tolerate the use of larger LR, but can the authors further clearly and quantitatively indicate this relationship? - Does the measure of sharpness, $\lambda_{\max}(H)$, apply to randomized algorithms? For example, for GD, the stability condition is typically $\lambda_{\max}(H)\leq 2/\eta$ ; however, for SGD, $\lambda_{\max}(H)$ may no longer be suitable sharpness measure about training stability, and it might be $||H||_{\rm F}$ [1]. As the authors show in Figure 1(d) for GD, the results fully support the author's claim; however, in Figure 11(d) for SGD, the results do not completely align with the author's claim. - A closely related work [2] should be discussed. In Section 5 and Fig 3 in [2], the authors also discussed how initialization, warmup, and SGD noise influence progressive sharpening or sharpness reduction. [1] Wu et al. The alignment property of SGD noise and how it helps select flat minima: A stability analysis. (NeurIPS 2022) [2] Ziyin et al. Loss Symmetry and Noise Equilibrium of Stochastic Gradient Descent. (2024) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to ``Weaknesses''. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reviewing our paper and they encouraging feedback. > The experiments are primarily .... We have extended our experiments to include Transformers trained on language modeling tasks and found that our results extend to these models. The results are detailed in the global response. Furthermore, we would like to draw your attention to Appx F of our submission, where we show phase diagrams for CIFAR-100 and TinyImageNet. > observations in this paper are all based on.. We can use a toy model to understand the two warmup mechanisms. Following Ref. [3], we can understand the self-stabilization mechanism through a toy model resulting from a third-order approximation of the loss function. Consider a loss function $L(\theta)$ with parameters $\theta$. Let $\lambda^H_t$ and $u$ denote the sharpness and top eigenvector. The model assumes that the top eigenvector $u$ changes slowly through training and can be treated as constant. Next, consider a cubic approximation of the dynamics along a reference point $\theta^*$. The dynamics along the projection $x_t:= u^T (\theta_t - \theta^*)$ is given by two coupled equations $$ x_{t+1} = (1 - \eta_t \lambda_t^H)x_t, $$ $$ \lambda_{t+1}^H = \lambda_t^H + \eta_t (\alpha - \beta x^2_t), $$ where $\alpha := - \nabla \lambda^H \cdot \nabla L(\theta)$ quantifies the instantaneous change in sharpness and $\beta:= \|\nabla \lambda^H \|^2$ controls to the non-linear change. Ref. [3] considered a constant learning rate $\eta$ and $\alpha > 0$. Here, in contrast we consider a time-dependent learning rate and allow $\alpha$ to attain both positive and negative values. In this model, an instability arises when $\eta_t \lambda_t^H > 2$. During instability, $x_t$ continues to grow until the higher order term in the sharpness equation causes a significant decrease in sharpness. Once the sharpness has decreased sufficiently, the stability is restored ($\eta_t \lambda_t^2 < 2$), and training continues. Next, we consider the two natural sharpness evolution scenarios considered in our work: 1. **Natural progressive sharpening ($\alpha > 0$):** The combined effect of naturally increasing sharpness ($\alpha > 0$) and the increasing learning rate from warmup leads to instability ($\eta_t \lambda_t^H > 2$). Resultantly, $x_t$ increases until the higher order term in the sharpness update cause a decrease in sharpness ($x_t^2 > \frac{\alpha}{\beta}$). Once the sharpness has decreased appreciably so that $\eta_t \lambda^H_t < 2$, stability is restored and the training continues. As training proceeds, progressive sharpening and increasing learning rate cause instability, resulting in a persistent catapult cycle characterized by $\eta_t \lambda_t^H \approx 2$. 2. **Natural Sharpness Reduction ($\alpha < 0$):** In this case, sharpness is naturally decreasing during training ($\alpha < 0$). If the learning rate is increased quick enough relative to decreasing sharpness, an instability occurs $(\eta_t \lambda_t^H > 2)$. The increase in $x_t$ causes a more pronounced decrease in sharpness than it would have occurred naturally, restoring instability. To exceed the instability threshold again, the learning rate must significantly increase to account for the decreased sharpness. This results in one or more separated catapults. We will include this toy model in the updated version of our paper, which will complement our experimental results in Section 4. [3] Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability, ICLR 2023 > It is natural and insightful that .... We have already qualitatively demonstrated that increasing warmup duration facilitates training at higher target learning rates. Figs 1 and 2 illustrate that increasing the warmup duration results in smaller loss catapults, indicating improved stability at higher learning rates. This observation can also be argued through the toy model presented in the prior response. Figs 3 and 4 specifically show how the maximum target learning rate scales with warmup duration. To further address the reviewer's request for a more quantitative analysis, we can provide fitted curves to the maximum learning rate - warmup duration trends in the final version of the manuscript. > Does the measure of sharpness... Our extensive empirical analysis shows that even for randomized algorithms like SGD, the relevant stability measure is $\lambda_{max}(H)$, not $||H||_F$. However, it is true that the stability threshold can change significantly for small batch sizes [1]. Prior empirical work [4] has indeed shown that $\lambda^H$ oscillates at a lower threshold than $\frac{2}{\eta}$ at late training times. Nevertheless, our results in Appendix E show that similar warmup mechanisms are observed for SGD, albeit saturating at a lower threshold (Fig 10 & 13). The discrepancy in Fig 11(d) can be primarily attributed to loss function rather than the use of SGD. Fig 8(d) shows an experiment with the same batch size but with MSE loss. At late training times, $\lambda^H$ oscillates slightly below $\frac{2}{\eta}$, suggesting a minimal deviation from GD. In comparison, for cross-entropy loss (Fig 11d), we observe that (i) sharpness oscillates slightly above $\frac{2}{\eta}$ during training and (ii) sharpness dramatically decreases towards the end. These phenomena align with the findings from prior studies. Ref. [4] showed that sharpness decreases at the end of training for cross-entropy loss. Ref. [5] observed that for cross-entropy, the loss starts to catapult around $\eta \approx \frac{4}{\lambda^H}$. [4] Gradient descent on neural networks typically occurs at the edge of stability, ICLR, 2021 [5] Phase diagram of early training dynamics in deep neural networks, NeurIPS 2023. > A closely related work [2] .... We thank the reviewer for bringing Ref. [2] to our attention. We will incorporate a discussion of this paper in the related works section of our updated manuscript. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for the detailed response. I feel that this paper provides great insights into Warmup. I have raised my score.
Summary: The authors explain the mechanisms of the warmup technique showing that with warm up the loss of NN will go to a flatter space than direct optimization. Further, based on analysis, the authors propose a new optimization algorithm called GI-Adam. Strengths: 1. The authors explain why the warm-up technique can help networks converge better. 2. With the analysis, the authors show that the initialization of Adam is not "correct". Thus, the authors proposed a new initialization of Adam called GI-Adam. Weaknesses: 1. The conclusions are from FCNs and WideResnet, which can be trained well without warm-up. Does the conclusion still hold for some "hard" models and datasets (e.g., Transformer)? 2. GI-Adam is used in [1]. [1] Zhang, Yushun, Congliang Chen, Naichen Shi, Ruoyu Sun, and Zhi-Quan Luo. "Adam can converge without any modification on update rules." Advances in neural information processing systems 35 (2022): 28386-28399. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort for reviewing our paper and providing comments. > The conclusions are from FCNs and WideResnet, which can be trained well without warm-up. Does the conclusion still hold for some "hard" models and datasets (e.g., Transformer)? We have extended our experiments to include Transformer models trained on language modeling tasks with SGD and Adam. Our findings demonstrate that the conclusions drawn from FCNs and WideResNets generalize to Transformers trained on language modeling tasks. The additional results are presented in the global response. > GI-Adam is used in [1]. We have reviewed Ref. [1] and respectfully disagree. Below we would like to take the opportunity to clarify the distinction between the Adam initialization used in Ref. [1] and our proposed GI-Adam. In their theoretical analysis, Ref. [1] initializes both first and second moments using gradients at initialization as a replacement for bias correction (Algorithm 1). This approach is primarily used for simplifying their theoretical analysis and does not include empirical evaluation of their proposed initialization. In comparison, GI-Adam initializes the second-moment using gradients ($v_0 = g_0^2$), while initializing the momentum to zero ($m_0 = 0$) and keeping the bias corrections. As shown in the derivation below, under standard assumptions for deriving the bias correction, initializing the second moment with the gradients at initialization does not require bias correction (also mentioned in Ref. [1]). Hence, for small $\epsilon$, the bias correction on top of setting $v = g_0^2$ can be viewed as a multiplicative factor to the learning rate. As a result, GI-Adam is equivalent to having natural warmup given by $\eta_t = \eta_{\text{trgt}} \sqrt{1 - \beta_2^t}$. Furthermore, the referenced study has not performed any empirical analysis of their Adam initialization and it appears to be used as a replacement for bias correction for simplifying the mathematical analysis. In comparison, we show that GI-Adam has a smaller pre-conditioned sharpness at initialization compared to Adam and requires less warmup. These distinctions highlight that GI-Adam is a novel approach, different from the Adam initialization in Ref. [1]. Nevertheless, we acknowledge the relevance of the reference and will cite it in the related works section of our updated manuscript. **Derivation:** The moving average of the second moment is given by: \begin{align} v_t = (1 - \beta_2) \sum_{i=0}^{t-1} \beta_2^i g_{t-i}^2 + \beta_2^t v_0, \end{align} where $v_0 = g_0^2$. Following standard assumptions, we assume that the second moment of the gradient is constant during early training $\mathbb{E}[g_{t}^2] = \sigma^2$. Taking the expectation of the above equation over the gradient distribution yields \begin{align} \mathbb{E} [ v_t ] = (1 - \beta_2) \sum_{i=0}^{t-1} \beta_2^i \mathbb{E} [ g_{t-i}^2] + \beta_2^t \mathbb{E}[ v_0]. \end{align} Simplifying the above equation, we have \begin{align} \mathbb{E}[v_t] = (1 - \beta_2) \sigma^2 \frac{1 - \beta_2^t}{1 - \beta_2} + \beta_2^t \sigma^2 = \sigma^2. \end{align} Therefore, initializing the second moment with gradients at initialization does not require the corresponding bias correction. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have no further questions and raised my score.
Summary: The paper examines the learning rate warmup technique from the perspective of its influence on the evolution of loss sharpness for different optimizers (GD, SGD(-M), Adam) and network parametrizations (Maximal Update Parameterization – $\mu$P and Standard Parametrization – SP). The authors demonstrate that the warmup allows the network to tolerate larger learning rates by gradually reducing sharpness through a series of loss catapults and self-stabilizations. Experiments with different network parameterizations reveal how warmup influences training in progressive sharpening and sharpness reduction regimes and the lower importance of warmup for training $\mu$P parameterized networks. Based on the empirical analysis, the paper proposes two practical training heuristics: (1) initializing the learning rate at an estimated critical value to eliminate unnecessary warmup steps and (2) introducing the GI-Adam optimizer, which initializes Adam's second moment with a squared gradient. Strengths: 1. The paper provides solid experiments demonstrating that the gradual self-stabilization mechanism induced by the warmup is observed for different network parameterizations and optimizers. 2. The paper discusses the specifics of the warmup effect on networks in different parameterizations and confirms that networks in $\mu$P parametrization benefit less from it. 3. The paper explains why warmup may be unstable for Adam and proposes a simple heuristic on how to deal with this instability. This heuristic potentially may be useful in practice. 4. The paper includes an extensive study of warmup hyperparameters (warmup length and maximal learning rate) and suggests how to choose their optimal values. I specifically like the Persistent Catapult Warmup idea from the appendix and think it is promising. 5. The paper is clearly written and easy to follow. Weaknesses: My main concerns are related to the level of novelty of the empirical analysis of the warmup and the effectiveness of the proposed practical modifications: 1. The novelty and significance of the empirical analysis in the first part of the paper seem limited. The study heavily relies on two previous works. Gilmer et al., 2021 (https://arxiv.org/pdf/2110.04369) investigate training instabilities from the perspective of sharpness, including a very similar analysis on how warmup decreases the sharpness and a discussion on how starting training from a flat initialization makes warmup much less important. Karla et al., 2023 (https://arxiv.org/pdf/2311.02076) demonstrate that $\mu$P and SP exhibit different natural evolutions of sharpness. The paper combines these two perspectives, however, I am not sure if this combination leads to new insights. The warmup works similarly in both progressive sharpening and sharpness reduction regimes. The paper points out the instability in Adam warmup and the lower importance of warmup for $\mu$P parametrized models (similar to the discussion in Gilmer et al., 2021), but both of these insights are not related to the different sharpness evolution regimes. 2. The authors claim that starting warmup from the critical learning rate $\eta_c$ is an effective strategy. However, this claim is not obvious, and adding empirical evaluation for it would improve the paper. For example, the experiment varying starting warmup learning rate can be added to demonstrate that starting from $\eta_c$ value results in a shorter or more stable warmup than for lower and higher values. 3. Test accuracy heatmaps are not provided for the experiments on the initial learning rate selection for warmup. Hence, it is not clear whether the proposed strategy results in high-quality solutions. Moreover, high values of $T_{\text{save}}$ are observed for the small $\eta_{\text{trgt}}$ and large $T_{\text{wrm}}$ (Figure 5b), but this configuration is clearly suboptimal since much shorter warmups work well for the low learning rates. For the most practically interesting hyperparameter regions associated with the shortest effective warmup for each learning rate, $T_{\text{save}}$ is negligible. 4. The maximum test accuracy achieved by GI-Adam appears indistinguishable from that of baseline Adam in most cases, so it is difficult to say whether the GI-Adam improves the training. Moreover, adding standard deviations to this comparison seems important due to the noisy behavior of the test accuracy. Also, GI-Adam is not the first Adam modification that increases stability at the beginning of training (see, e.g., RAdam from https://arxiv.org/pdf/1908.03265v4). A more accurate discussion of such methods and comparison with them would benefit the paper. 5. The paper lacks experiments with Transformer architecture, while it is a primary case of using Adam optimizer with warmup. Minor comments 1. Line 64: the idea that warmup is unnecessary if training is stable with the chosen learning rate is obvious and widely used in practice. 2. Lines 185-186: I would not say that $\mu$P does not benefit from warmup at all. As shown in Figure 3a, a longer warmup in the case of MSE training extends the range of converging learning rates. Technical Quality: 3 Clarity: 3 Questions for Authors: I would kindly ask the authors to address the main concerns from the Weaknesses section and focus on the following questions: 1. Could you please summarize the main novel insights of the empirical analysis part of the paper compared to Gilmer et al., 2021 and Karla et al., 2023, and explain why analyzing the warmup behavior in different sharpness evolution regimes is important? 2. Could you please provide any experiments demonstrating that starting warmup from the critical learning rate $\eta_c$ is an effective strategy? 3. What is the test accuracy with optimal warmup hyperparameters for baseline Adam, Adam with $\eta_{\text{init}}=\eta_{c}$ and GI-Adam? Is there any statistically significant difference between them? 4. Is the same self-stabilization mechanism observed when training Transformers? Additional minor questions: 1. Why do you use different target learning rates for $\mu$P and SP in Figure 1? It is a bit confusing since the target learning rate and parameterization are changed between the two experiments at the same time, and it is not clear which change results in which effect. At the same time, Figure 2 uses an identical learning rate for both initializations. 2. How do you measure the loss value for initial learning rate selection and the squared gradient for GI-Adam in the stochastic variants of the algorithms? Do you use a single batch or estimate these values over multiple batches? Using a single batch may result in higher variance in the estimates, which could be undesirable. On the other hand, estimating over several batches would incur additional computational costs. There also exist several related works which the authors may find interesting: * Lobacheva et al., 2021 (https://arxiv.org/pdf/2106.15739) report an effect similar to warmup self-stabilization when training scale-invariant networks with weight decay and a constant learning rate. The decreasing weight norm increases the effective learning rate, which eventually leads to training instability and catapults. This periodic behavior allows the network to achieve flatter optima with higher test accuracy after several cycles. * A different cyclical behavior, the Slingshot effect, is observed in adaptive optimizers like Adam, as shown by Thilak et al., 2024 (https://openreview.net/pdf?id=OZbn8ULouY). This effect occurs in the terminal phase of training and involves a rapid growth of the last layer norm before catapulting, followed by an improvement in test performance. * In your experiments, optimal test performance is achieved with large learning rates close to the convergence boundary. However, Kodryan et al., 2022 (https://arxiv.org/pdf/2209.03695) show that training networks with weight decay and learning rates larger than optimal usually lead not to divergence but to a noisy stabilization of test error. Moreover, Andriushchenko et al., 2022 (https://arxiv.org/pdf/2210.05337) demonstrate that further reducing the learning rate from these stabilized solutions results in the model learning sparser features and achieving better final test performance. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately discuss the limitations of the paper in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing our paper and providing comments. > 1. Could you summarize the main novel insights ... While Gilmer et. al. 2021 is a key reference that our paper builds on, there are several novel insights in the empirical analysis part of our paper. First, our work emphasizes qualitatively distinct phenomena that arise at early training times, depending on whether the network starts in progressive sharpening or sharpness reduction phases. While Kalra et. al. 2023 also discuss sharpness reduction and progressive sharpening at early training times, the implications for learning rate warmup are not discussed. Our work combines ideas from both papers with novel empirical analysis to demonstrate for the first time the two separate underlying regimes of operation for learning rate warmup. A key result of our analysis is that models that experience sharpness reduction require more warmup compared to models that experience progressive sharpening at initialization. This result forms the basis for examining Adam. Models trained with Adam only experience a reduction in pre-conditioned sharpness, regardless of natural sharpness evolution. It thus becomes evident why Adam generally requires warmup to perform well. This early reduction also suggests the possibility of flatter initializations for Adam, like GI-Adam, which we propose. Without a characterization of underlying warmup mechanisms in terms of early-time sharpness evolution, it is not obvious why Adam would generally require warmup. Therefore, the different sharpness evolution regimes are indeed related to the instability in Adam warmup. As for $\mu$P, while it has been demonstrated in Kalra et al., 2023 that $\mu$P is a flat initialization and one can speculate that it may not require warmup, our results show that models in $\mu$P can still benefit from warmup (Fig 3), as also pointed by the reviewer. Moreover, our analysis in Sec 5 disentangles the role of $T_{wrm}$ and $\eta_{trgt}$, which has not been performed in prior work. These results reveal that the final performance primarily depends on $\eta_{trgt}$, with longer warmup durations mainly helping to avoid the convergent-divergent (failure) boundary. This also leads us to the point that warmup has another advantage: it makes learning rate tuning more robust. This point was not, to our knowledge, made in prior work. We have also introduced persistent catapult warmup and shown some encouraging preliminary experiments in App. C. We intend to move these results to the main text. > 2. Could you provide experiments ... We have conducted experiments to demonstrate the effectiveness of starting warmup from the critical learning rate $\eta_c$. These results are shown in Fig 4 of the attachment to the global response. When training WRNs on CIFAR-10 using Adam, setting the initial learning rate to $\eta_c$ (referred to as Adam-save in these results) yields solutions of similar quality to Adam. In this case, we save 576 training steps. In the updated manuscript, we will include test accuracy heatmaps for initial learning rate selection to further illustrate this point. As described in Sec 6.1, the total number of steps saved is $T_{save} = T_{wrm} \eta_c / \eta_{trgt}$. For flat initializations, we have observed that $\eta_c$ is close to the optimal target learning rates. This means that for the optimal target learning rates, flat initializations can save the full warmup time, $T_{save} \approx T_{wrm}$. The benefit is less significant for large initializations where $\eta_c$ is small. Fig 22 in App H shows heatmaps of $T_{save}$ for WRNs in $\mu$P, where $\eta_c$ is close to the maximum trainable learning rate. In these cases, we observe that $T_{save} \approx T_{wrm}$ for most learning rates and warmup durations. Our analysis in Sec 4 demonstrates that before the `collision,' warmup does not actively reduce sharpness. Therefore, setting $\eta_{init} = \eta_c$ should not be expected to negatively impact the training dynamics. Furthermore, a priori, we cannot predict which part of the heatmap our chosen hyperparameters correspond to. Thus, adopting this strategy of starting from $\eta_c$ does not negatively impact training and potentially saves a significant number of training steps in favorable cases. > 3. What is the test accuracy with ... We have consistently observed that for small warmup durations, GI-Adam outperforms Adam. As shown in Table of Fig 4 of the attachment, when training WRNs on CIFAR-10, GI-Adam improves test accuracy by $0.5\%$ over Adam without warmup. This improvement is greater than the standard deviation in test accuracy across different initializations, indicating statistical significance. Moreover, in our language modeling experiments (detailed in the global response), we observed a loss improvement of $0.1$ with GI-Adam. For long warmups, we do not observe a significant advantage of GI-Adam over Adam. This is perhaps not surprising as GI-Adam performs a natural warmup, as described in the global response. Another benefit of GI-Adam is that it widens the range of optimal learning rates by pushing the failure boundary further, as demonstrated in Fig 4(b) of our submission. This makes $\eta_{trgt}$ easier to tune. Note that, a apriori, it is hard to predict where the experimental hyperparameters lie in the heatmap. Thus, it is always beneficial to use GI-Adam as it does not degrade performance, and yet it is a minor modification to Adam. We have also included Radam in our experiments for reference. GI-Adam performs similarly to Radam while being a significantly simpler modification to Adam. > 4. Is the same self-stabilization.. We have extended our experiments to include Transformers trained on language datasets with SGD and Adam. Our results readily extend to this setting. These results are detailed in the global response. We provide replies to the minor questions in the comment below. --- Rebuttal 2: Title: Responses to minor questions by Reviewer 6XDG Comment: Here, we provide replies to minor questions by Reviewer 6XDG. > Why do you use different target learning rates for ... The different target learning rates for $\mu$P and SP in Figure 1 are chosen deliberately because we need to satisfy two requirements: (i) $\eta_{trgt}$ cannot be so large that training fails, and (ii) $\eta_{trgt}$ needs to be large enough so that warmup has a non-trivial effect. (i) and (ii) together give distinct viable values of $\eta_{trgt}$ for SP vs. $\mu$P. In comparison, we found that Adam's learning rates are relatively stable across parameterizations, which is why we used the same learning rate for both initializations in Figure 2 of the submission. > How do you measure the loss value... For both initial learning rate selection and squared gradient for GI-Adam we use a single batch for computations. Nevertheless, we did not observe any performance degradation for commonly used batch sizes as shown in Figure 6 of the PDF attached to the global response. For the initial learning rate estimation, small errors in estimating $\eta_c$ are not expected to impact training, as small initial loss spikes have minimal impact on the overall dynamics. For most models used in practice, $\eta_{\text{max}}$ is at least $4-8$ times larger than $\eta_c$ [1] and hence small errors in estimating $\eta_c$ would still be in the catapult phase $\eta_c < \eta < \eta_{\text{max}}$. However, we do agree with the reviewer that for small enough batch sizes, this estimate can be error-prone and or incur additional computational costs. We will mention it in the limitations section. [1] Lewkowycz, A., Bahri, Y., Dyer, E., Sohl-Dickstein, J. and Gur-Ari, G., 2020. The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218 > There also exist several related works ... We thank the reviewer for bringing these prior works to our attention. We will discuss them in the related works section. --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response and additional experimental results! I am still a bit confused about the analysis of the two warmup regimes and the importance of the difference between them. Based on the paper and rebuttal, I would summarize the main results as follows: * warmup influences training dynamics differently depending on the behavior of the sharpness at the beginning of training (progressive sharpening or sharpness reduction), * warmup is more important for the sharpness reduction regime. Could you please provide some clarifications on the following concerns regarding these results: 1. I fail to see significant differences in the warmup influence between the two regimes. In both of them, training with a high initial learning rate may lead to strong catapults in training, and warmup allows the network to experience smaller sequential catapults instead. More catapults can be observed in the progressive sharpening regime, but it is unclear to me why it is important. 2. The importance of the warmup seems to be much more related to the difference between the sharpness of the initialization and the critical sharpness for the target learning rate than to the decreasing/increasing sharpness. The warmup may be crucial for training networks with progressive sharpening if we want to use learning rates higher than the critical threshold. At the same time, warmup is unnecessary for training networks with sharpness reduction with small enough learning rates. Why does increasing or decreasing sharpness define the importance of warmup and not just the difference between the sharpness at initialization and the critical sharpness for the target learning rate? 3. In most experiments, the initial sharpness reduction quickly transitions to progressive sharpness during warmup steps, and all warmup catapults take place in the progressive sharpening regime (see Fig. 2 in the paper and Fig. 1,2 in the rebuttal). This observation makes the claim that warmup is more important for the sharpness reduction regime even more confusing. --- Rebuttal 3: Comment: We thank the reviewer for their questions and comments. We hope other concerns regarding the initial learning rate, GI-Adam, and Transformers have been resolved. > The importance of the warmup seems to be much more related to the difference between the sharpness of the initialization and the critical sharpness for the target learning rate than to the decreasing/increasing sharpness. The warmup may be crucial for training networks with progressive sharpening if we want to use learning rates higher than the critical threshold. At the same time, warmup is unnecessary for training networks with sharpness reduction with small enough learning rates. Why does increasing or decreasing sharpness define the importance of warmup and not just the difference between the sharpness at initialization and the critical sharpness for the target learning rate? We agree with the reviewer that the necessity of warmup is based on the initial sharpness relative to the target learning rate. In our paper, when we made the statement that sharpness reduction regimes necessitate more warmup, it was for the case where the target learning rate was large and close to the optimal value. For such fixed optimal target learning rates, we found that networks that start off in the sharpness reduction phase necessitate warmup more than networks that start off in the progressive sharpening regime. The reason for this is that empirically there is a direct correlation between the initial sharpness and whether one observes sharpness reduction or progressive sharpening. Sharpness reduction in early training implies that the initial sharpness is "large," which then implies that warmup will be more important. We will modify the wording of our paper to make this point more clear and avoid this confusion. We would like to further point out that the main utility of understanding which regime we are in is that it can provide a way of defining whether the sharpness is "large" or "small." This in turn can give a clear indication of whether there is a better choice of initialization. For example, if one starts with a given sharpness and observes sharpness reduction phenomena, then it indicates that there is naturally a flatter initialization that one can pick. Similarly, if one starts with that same sharpness but sees progressive sharpness phenomena, then it is unclear, perhaps even unlikely, that a flatter initialization can be found. This was precisely what led us to discover GI-Adam. If we had found that the usual Adam initialization has high preconditioned sharpness but starts off in a progressive sharpening phase, then it would have been less clear that there might be a better possibility. > In most experiments, the initial sharpness reduction quickly transitions to progressive sharpness during warmup steps, and all warmup catapults take place in the progressive sharpening regime (see Fig. 2 in the paper and Fig. 1,2 in the rebuttal). This observation makes the claim that warmup is more important for the sharpness reduction regime even more confusing. This argument does not take into account the convergence/failure boundary. If we start off in progressive sharpening, the network can tolerate relatively high warmup rates (i.e. small $T_\text{wrm}$) according to Fig. 4(a, b). But if we start off in the sharpness reduction regime, even if we crossover to progressive sharpening after $10-20$ steps, those first $10-20$ steps will severely limit the warmup rate: High warmup rates will cause the training to diverge/fail. Therefore this early-time training dynamics is crucial for setting a maximum speed limit on warmup. We note that it is possible that the deeper understanding developed here might lead to more sophisticated warmup schedules; one can imagine starting with a low warmup rate while the network is in the sharpness reduction phase and transitioning to a high warmup rate after some time. --- Rebuttal Comment 3.1: Comment: > I fail to see significant differences in the warmup influence between the two regimes. In both of them, training with a high initial learning rate may lead to strong catapults in training, and warmup allows the network to experience smaller sequential catapults instead. More catapults can be observed in the progressive sharpening regime, but it is unclear to me why it is important. We believe the importance of understanding the two qualitatively distinct regimes can be summarized as follows: 1. Having as deep an understanding of the underlying dynamics as possible is intrinsically valuable. Such developments in understanding may be followed by unanticipated innovations and can inform future decisions about the design of algorithms, initializations, and architectures. We think these unanticipated developments are likely to be most important. Our understanding of the different regimes is already important for the practical innovations proposed in our work, as we explain below. 2. As we mentioned in our previous reply, our proposal for GI-Adam was motivated entirely by understanding that the usual Adam has an unnecessarily large preconditioned sharpness, leaving it deep in the sharpness reduction phase and that there is a simple tweak to the initialization that can reduce the need for warmup. We note that this analysis also led us to the understanding that the explanations of the RAdam paper were incorrect and that the RAdam algorithm was unnecessarily complicated. 3. In our paper we have suggested the persistent catapult warmup schedule, for which we have developed some preliminary analysis and left a complete development for future work. The hyperparameters of this schedule, $\delta$ in Algorithm 3 in Appendix C, which specifies the amount of increase in loss that can be tolerated, depend crucially on whether the training is in the sharpness reduction regime or the progressive sharpening regime and the strength of the individual loss catapults. 4. Our suggestion for picking $\eta_{init} = \eta_c$ was originally motivated by analyzing the sharpness curves in the progressive sharpening case, and realizing how much time was being wasted. If we had not analyzed the early-time training dynamics carefully, this realization would have eluded us. 5. Our analysis shows clearly that the sharpness reduction regime is suboptimal because it requires a longer warmup duration for optimal target learning rates. This is an important observation because it demonstrates there are ways to achieve the same test accuracy while requiring fewer warmup steps. 6. The sharpness reduction regime shows that catapult / self-stabilization effects are not always the only mechanism by which warmup works. Another mechanism is via the natural sharpness reduction effect, where sharpness naturally reduces on its own even without a catapult. Therefore warmup can allow the network to tolerate larger learning rates without ever inducing catapults. This is a novel point that was not mentioned in prior work to our knowledge. --- Rebuttal 4: Comment: We thank the reviewer for the insightful discussion. We agree that adding a discussion on the sharpness level and its training behavior will significantly improve the paper. We will incorporate this discussion along with other insightful suggestions in the final manuscript.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in reviewing our paper. Based on reviewers feedback, we have added the following results: 1. **Language Modeling Experiments:** We have extended our experiments to include Transformer models trained on language modeling tasks using SGD and Adam. In these experiments, we trained 4 layer Pre-LN transformers on the WikiText-2 dataset. The results are shown in the PDF attached. * Figure 1 shows the warmup mechanisms for Transformers trained on Wikitext-2 using SGD. We observe that the initial sharpness of the Pre-LN Transformers in Standard Parameterization (SP) is surprisingly small (\~5) and exhibits progressive sharpening right from the onset. This low initial sharpness is due to the last LayerNorm, and removing this layer results in a large sharpness (\~200), and reveals early sharpness reduction behavior observed in FCNs and ResNets in SP. * Figure 2 shows the warmup mechanisms of Pre-LN Transformers in SP trained on Wikitext-2 using Adam. Consistent with our findings in image classification tasks, we observe a reduction in pre-conditioned sharpness during early training, even for flat initializations that show an increase in sharpness from the onset. * Figure 3 presents the phase diagrams of warmup for Transformers trained on Wikitext-2 using both Adam and our proposed GI-Adam. In line with our image classification results, we observe: 1. The final performance is primarily determined by the target learning rate and increasing the warmup duration keeps training further away from the divergence (failure) boundary. 2. GI-Adam exhibits a wider range of target learning rates that achieve optimal performance, compared to standard Adam. These findings support the generalization of our conclusions to Transformers trained on language modeling tasks. 2. **Toy model for the two mechanisms of warmup:** We can use a toy model to understand the two warmup mechanisms. Following Ref. [1], we can analyze the self-stabilization mechanism through a model derived from a third-order approximation of the loss function. Consider a loss function $L(\theta)$ with parameters $\theta$. Let $\lambda^H_t$ and $u$ denote the sharpness and its corresponding eigenvector. The model assumes that the top eigenvector $u$ changes slowly through training and can be treated as constant. Next, consider a cubic approximation of the dynamics along a reference point $\theta^*$. The dynamics along the projection $x_t:= u^T (\theta_t - \theta^*)$ is given by two coupled non-linear equations: $$ x_{t+1} = (1 - \eta_t \lambda_t^H)x_t, $$ $$ \lambda_{t+1}^H = \lambda_t^H + \eta_t (\alpha - \beta x^2_t),$$ where $\alpha := - \nabla \lambda^H \cdot \nabla L(\theta)$ quantifies the instantaneous change in sharpness and $\beta:= \|\nabla \lambda^H \|^2$ controls to the non-linear change in sharpness. Ref. [1] considered a constant learning rate $\eta$ and $\alpha > 0$. Here, in contrast, we consider a time-dependent learning rate and allow $\alpha$ to attain both positive and negative values. In this model, an instability arises when $\eta_t \lambda_t^H > 2$. During instability, $x_t$ continues to increase until the higher order term in the sharpness update equation causes a significant decrease in sharpness. Once the sharpness has decreased sufficiently, the stability is restored ($\eta_t \lambda_t^2 < 2$), and training continues. Next, we consider the two natural sharpness evolution scenarios: * **Natural progressive sharpening ($\alpha > 0$):** The combined effect of naturally increasing sharpness ($\alpha > 0$) and the increasing learning rate from warmup leads to instability ($\eta_t \lambda_t^H > 2$). Resultantly, $x_t$ increases until the higher order term in the sharpness update cause a decrease in sharpness ($x_t^2 > \frac{\alpha}{\beta}$). Once the sharpness has decreased appreciably so that $\eta_t \lambda^H_t < 2$, stability is restored and the training continues. As training proceeds, both progressive sharpening and increasing learning rate cause instability, resulting in a persistent catapult cycle characterized by $\eta_t \lambda_t^H \approx 2$. * **Natural Sharpness Reduction ($\alpha < 0$):** In this case, sharpness is naturally decreasing during training ($\alpha < 0$). If the learning rate is increased quickly enough relative to decreasing sharpness, an instability occurs $(\eta_t \lambda_t^H > 2)$. The increase in $x_t$ causes a more pronounced decrease in sharpness than it would have occurred naturally, restoring instability. To exceed the instability threshold again, the learning rate must significantly increase to account for the decreased sharpness. This results in one or more separated catapults. We will include this analysis in the updated version of our paper, which will complement our empirical results in Section 4. [1] Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability, ICLR 2023 3. **Improved understanding of GI-Adam:** Since the submission, we have improved our understanding of GI-Adam. Initializing the second moment with initial gradients eliminates the need for bias correction (we can provide a derivation). Hence, for small $\epsilon$, a bias correction, in addition to setting $v = g_0^2$, can be viewed as a learning rate multiplier. As a result, GI-Adam can be viewed as Adam with a natural warmup given by $\eta_t = \eta_{\text{trgt}} \sqrt{1 - \beta_2^t}$. 4. **Persistent Catapult Warmup:** We have also introduced a parameter-free warmup strategy `persistent catapult warmup.' The central idea behind this strategy is to repeatedly induce catapults aimed to progressively reduce sharpness (or pre-conditioned sharpness), thereby facilitating training at higher learning rates without specifying warmup duration. We demonstrate encouraging preliminary experiments in Appendix C of the submission. Pdf: /pdf/eaa20d48fbe5bfbf541f15a04c5f6c0efb6cce74.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper finds that the learning rate warmup allow the network to tolerate larger learning rates. It gradually reduces the sharpness and forces the model to leave poorly conditioned areas of the lossscape and move toward flatter regions which can tolerate larger learning rates. Strengths: 1. This paper analyzes the mechanisms of warmup. 2. It also proposes a GI-Adam strategy, which is better than Adam. Weaknesses: 1. Though the author claims to find that warmup allows for larger learning rates, it has been found in existing work such as "Gilmer, J., Ghorbani, B., Garg, A., Kudugunta, S., Neyshabur, B., Cardoze, D., Dahl, G.E., Nado, Z. and Firat, O., 2022, March. A loss curvature perspective on training instabilities of deep learning models. In International Conference on Learning Representations." Further elaboration on the difference and novelty will make the paper more convincing. 2. They claimed to find "wasted time can be saved by making use of the catapult mechanism", but it seems this has been revealed in "Lewkowycz, A., Bahri, Y., Dyer, E., Sohl-Dickstein, J. and Gur-Ari, G., 2020. The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218." Not only the above two points, I can't well understand the novelty of this paper. I suggest further summarize the contribution part. 3. Could you give more explainations on the accuracy maps, such as Figures 3 and 4? Technical Quality: 2 Clarity: 3 Questions for Authors: I don't have questions. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please refer to the Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Though the author claims to find that warmup allows for larger learning rates, it has been found in existing work such as Gilmer 2022. Further elaboration on the difference and novelty will make the paper more convincing....I suggest further summarize the contribution part. Whille Gilmer 2022 indeed demonstrated that warmup allows for larger learning rates primarily for models trained with SGD, our work extends and differentiates itself in several key ways that were not studied in prior work: * **Warmup mechanisms of SGD:** Gilmer 2022 demonstrated that warmup gradually reduces sharpness to facilitate training at higher learning rates. However, we go even further and ''look under the hood" by showing that generically there are two distinct underlying mechanisms. We show that the interplay between natural sharpness evolution and increasing learning rate during warmup leads to either (1) a persistent catapult cycle in progressive sharpening cases, or (2) separated loss catapults in sharpness reduction cases. * **Warmup Mechanisms of Adam:** We provide a comprehensive analysis of Adam's underlying warmup mechanisms, especially during early training times, which was not analyzed in Ref. [1]. In particular, we find that the pre-conditioned sharpness, which determines stability, decreases during early training, *regardless of the natural evolution of sharpness.* This initial reduction in pre conditioned sharpness, before it eventually increases, suggests the existence of flatter initializations (wrt pre-conditioned sharpness) for Adam, which can enable training at higher learning rates from the start. Based on this insight, we propose a simple alternative initialization method called GI-Adam, which provides benefits similar to warmup and consistently improves over standard Adam by pushing the training failure boundary to higher target learning rates. * **The Phase Diagrams of Warmup:** The test accuracy heatmaps (Figures 3 and 4) demonstrate how the maximum target learning rate changes with warmup duration and disentangles the effect of warmup time and target learning rate. These phase diagrams reveal that the final performance primarily depends on the target learning rate, with longer warmup durations mainly helping to avoid the convergent-divergent (failure) boundary. * **Dual Advantage of Warmup:** Our results not only show that warmup improves model performance by allowing larger target learning rates, but also that warmup gives rise to a wider range of target learning rates that yield optimal results, which makes learning rate tuning more robust. This additional benefit of warmup depends on the phase diagram results and were not discussed in prior work. * **Initial Learning Rate Selection:** As the primary effect of warmup is to facilitate training at higher learning rates by annealing sharpness (or pre-conditioned sharpness for Adam), setting the initial learning rate to $\eta_c$ induces loss increase at initialization and thereby sharpness decrease right from initialization, saving warmup training steps. We provide a simple and practical method to pick $\eta_c$ based on the loss catapult mechanism. * **Persistent Catapult Warmup:** We have also introduced a potential parameter-free warmup strategy, which we refer to as 'persistent catapult warmup.' The central idea behind this strategy is to repeatedly induce catapults aimed to progressively reduce sharpness (or pre-conditioned sharpness), thereby facilitating training at higher learning rates without specifying warmup duration. We demonstrate encouraging preliminary experiments in Appendix C of the submission. We intend to move these results into the main text. > They claimed to find "wasted time can be saved by making use of the catapult mechanism", but it seems this has been revealed in We respectfully disagree with the reviewer. Lewkowycz et al 2020 introduced the catapult mechanism, however, it never suggested utilizing it as a way to estimate $\eta_c$ and then setting the initial learning rate in warmup to be equal to $\eta_c$. The development of this idea is unique to our work. > Could you give more explainations on the accuracy maps, such as Figures 3 and 4 Figures 3 and 4 show the best test accuracy achieved during training as a function of target learning rate $\eta_{\text{trgt}}$ and warmup duration $T_{\text{wrm}}$. These phase diagrams of warmup also show the convergence-divergence boundary, indicated by empty cells, illustrating the interplay between warmup duration and the maximum trainable learning rate. These results reveal: (i) flat initializations such as $\mu$P benefit less from warmup, whereas large initializations such as SP may require long warmup to attain the optimal performance, (ii) The final performance primarily depends on the target learning rate and the improvement from increasing warmup duration comes from keeping training away from the convergent-divergent (failure) boundary, (iii) warmup makes learning rate tuning more robust, as mentioned above. --- Rebuttal 2: Comment: Thanks for the response. I have no further questions.
null
null
null
null
null
null
Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization
Accept (poster)
Summary: This paper introduces adaptive, line search-free second-order methods aimed at solving convex-concave min-max problems. The proposed algorithms use an adaptive step size, simplifying the update rule to require solving only one linear system per iteration. The paper presents two main contributions: an adaptive second-order optimistic method that achieves an optimal convergence rate of O(1/T^1.5) and a parameter-free version that does not require knowledge of the Lipschitz constant of the Hessian. The algorithms are evaluated against existing methods, demonstrating practical efficiency and optimal rates. Strengths: 1. The introduction of adaptive, line search-free second-order methods with optimal convergence rates for convex-concave min-max problems is a significant contribution to the field. 2. The paper provides both a parameter-free version and a version requiring minimal problem-specific information. The development of a parameter-free version of the algorithm that adapts based on local information without requiring the Lipschitz constant is noteworthy. 3. The contributions are clearly stated and proved to be the optimal through theoretical analysis and empirical results. Weaknesses: 1. While the parameter-free method is innovative, it may still require careful tuning of initial parameters in practice. 2. Although the numerical experiment shows a promising result, it is limited to specific problem settings and may not generalize across diverse optimization tasks. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How robust are the proposed methods to deviations from the assumed Lipschitz continuity of gradients and Hessians? 2. Can the parameter-free method's performance be significantly impacted by the choice of initial parameters? 3. How does the proposed approach compare to recent advances in machine learning applications, such as training GANs or solving reinforcement learning problems? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors pointed out the limitations that missing exact knowledge of the Lipschitz constant can slow down the convergence, and achieving the same parameter-free results without the assumption of Lipschitz continuous gradient remains an open problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 While the parameter-free method is innovative, it may still require careful tuning of initial parameters in practice.** **R1** This is an important observation, and thanks for bringing this point up. We agree that removing the necessary parameters will not come for free and we need to initialize $\lambda_1$ with an estimate of the Hessian's Lipschitz constant. Let us briefly discuss how the parameter-free method behaves for different initializations and also propose a simple initialization technique. Observe that $\eta_1 = O(\sqrt{\lambda_1})$ and $\eta_t = O(1 / \eta_{t-1})$ from Eq. (7). From this simplified view, we observe that the step sizes have a self-balancing property. Specifically, a large (or small) initial value of $\lambda_1$ will yield a large (or small) initial step size $\eta_1$. Yet, $\eta_2$ will be smaller (or larger) as it is inversely proportional to the **previous** step size. Eventually, the step size balances itself in both cases of initialization. To support our rationale, we would like to provide empirical verification. In Figure 3 in the rebuttal PDF, we run our parameter-free method for the same objective in Section 7 where $L_2 = 10^4$ and $d = 10^2$. The plot shows that our method has similar performance for a range of initialization. We also want to share the practical initialization we use in our experiments: we choose an initial point, $z_1$, and generate a second random point $\hat{z}_1$, close to $z_1$. Then, we compute the local $L_2$ estimate to initalize as $\lambda_1=2\\|F(\hat{z}_1)-F(z_1)-\nabla F(z_1) (\hat{z}_1 - z_1) \\|/\\|\hat{z}_1 - z_1\\|^2$. Let us clarify that this initialization rule comes from the proposed update recursion for $\lambda_t$ (in Alg. 1, line 4) and we use it in the experiments for consistency. --- **W2 Although the numerical experiment shows a promising result, ... may not generalize across diverse optimization tasks.** **R2** Thanks for raising this point. We ran a new set of experiments for the problem of maximizing an area under the receiver operating characteristic curve. This could be formulated as a min-max problem, where we want to find a classifier (set of weights) with **small** error that will also have a **large** area under the curve, as formulated in Eq 5.2 of [Lin et al., 2024]. Please check out the respective plots under Figure 2 in the rebuttal PDF. Similar to the min-max problem in Section 7 of our paper, our methods converge relatively fast in the early stages of the execution. Due to time constraints, we have not run higher dimensional experiments for the new problem. --- **Q1 How robust are the proposed methods to deviations from the assumed Lipschitz continuity of gradients and Hessians?** **A1** This is an interesting question. Let us answer by focusing on the parameter-free algorithm (Option II). The Lipschitz gradient assumption is required **only** in the analysis to show that the iterates remain bounded, therefore, the algorithm is not affected by its variation. In fact, the experiments in our paper are based on an objective function which is not gradient Lipschitz but only Hessian Lipschitz, showing that our method is robust in that sense. We think our reviewer would agree that the main issue is the variation in the Hessian Lipschitz constant $L_2$. In Figure 3 of the Appendix, we varied the Hessian Lipschitz constant from $L = 1$ to $L = 10^4$. We noticed that our proposed method consistently performs well across these different settings and is competitive with the optimal SOM. Similar results can be seen in Figures 2 and 3 in our rebuttal PDF. --- **Q2 Can the parameter-free method's performance be significantly impacted by the choice of initial parameters?** **A2** Good question. We refer the reviewer to our response in **R1** above. As an additional note, a very small initial step size (corresponding to overshooting the Lipschitz constant $L_2$) will “delay” the achievement of the desired $O(1/T^{1.5})$, but our parameter-free method will eventually recover. When the initial estimate for $L_2$ is too small, our method will still converge as we explained in the previous answer, but non-parameter-free methods, which must know the exact Lipschitz constant, will diverge. --- **Q3 How does the proposed approach compare to recent advances in machine learning applications, such as training GANs or solving reinforcement learning problems?** **A3** To begin with, we would like to acknowledge that second-order methods, both for minimization and min-max problems, might have limited use once the dimension of the data and the model increase, but we believe there are valid scenarios where second-order methods could be beneficial. Second-order methods can converge in fewer iterations, but calculating the Hessian and its inverse incurs extra costs per iteration. Thus, there is a trade-off between convergence rate and per-iteration cost. In scenarios where gradient evaluation is expensive, second-order methods may be more favorable than first-order methods, as they require fewer iterations and gradient queries. In terms of implementation efficiency, there are techniques to cut this cost significantly, such as computing the (inverse) Hessian-vector products (HVP). [Tran and Cutkosky, 2022] propose a second-order momentum method that uses Hessian-vector products for stochastic optimization. Their algorithm outperforms SGD and Adam in image and NLP tasks and it is only 1.3 to 1.7 times slower than SGD. In the reinforcement learning literature, [Salehkaleybar et al., 2022] developed a second-order policy gradient method, which outperforms baselines by a significant margin in terms of system probes. [Dagréou et al., 2024] conducted an empirical study of different HVP algorithms on Jax and showed that the cost of computing HVP is less than twice the cost of gradient computation. Thus, we may scale our method to high-dimensional problems similarly when combined with such techniques. --- Rebuttal 2: Comment: Thanks for your constructive response that addresses my concerns. I will keep my score. Sincerely, Reviewer GGjT
Summary: This paper proposes adaptive, line search-free second-order methods with an optimal rate of convergence for solving convex-concave min-max problems. By defining the step size recursively as a function of the gradient norm and the prediction error, they eliminate the need for line search or backtracking mechanisms. Additionally, the approach does not require knowledge of the Lipschitz constant of the Hessian. Strengths: 1. The algorithms presented in the paper are novel. They eliminate the need for line search and backtracking by providing a closed-form, explicit, and simple iterate recursion with a data-adaptive step size. 2. The authors offer a clear explanation of how they developed this update. Weaknesses: The limitation of using line search is not clearly demonstrated. For example, it is unclear if using line search would incur higher computational costs and take more time. This limitation should be illustrated with experimental results. Specifically, in the numerical experiments, the Optimal SOM method shows the best convergence rate. Therefore, it is important to show how these novel algorithms outperform the Optimal SOM method. Technical Quality: 3 Clarity: 3 Questions for Authors: 1.See Weakness above. 2. In Equations (6) and (7), \alpha appears in the update. However, \alpha is not mentioned in Algorithm 1. Can you explain its role in the algorithm? Besides, how is \alpha set in the experiments? 3. In the experiments, it is mentioned that “all the hyper-parameters are tuned to achieve the best performance per method.” Can you provide details on the tuning process for these hyper-parameters? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1 The limitation of using line search is not clearly demonstrated. For example, it is unclear if using line search would incur higher computational costs and take more time. This limitation should be illustrated with experimental results.** **A1** We believe our reviewer would agree with us that we cannot expect our method to beat the optimal SOM in terms of convergence rate (# of iterations needed to reach an accuracy). This is because the optimal SOM leverages the line search scheme to pick the largest possible step size, while our method achieves the same rate (up to constants) by **removing** the line search. Instead, our goal is to design a method that is easy to implement with minimal effect on performance. Indeed, there are scenarios where our method has better performance. Since our methods do not require line search, we expect them to be easier to implement and exhibit faster runtime, particularly when $L_2$ is large and, more importantly, when the problem is high-dimensional. When $L_2$ is large the line-search scheme might require several backtracking steps, and each step would be costly when $d$ is large due to the computation of Hessian and its inverse. Specifically, Figure 3 in our submission demonstrates the effect of increasing $d$ and $L_2$. Taking $10^{-15}$ as an acceptable accuracy, our methods reach the target faster than other methods when the problem has large $L_2$ and dimension. Having that said, we fully acknowledge your criticism that we should highlight the advantages of our linesearch-free design with more elaborate experiments. To this end, we have conducted new experiments with larger Lipschitz constant and higher dimensionality. We kindly ask you to check out **Figure 1** in the **rebuttal PDF** and the explanations we provided in the global response. With increasing dimensions and Lipschitz constant, we observe that our methods show significant gains against optimal SOM and HIPNEX. --- **Q2 In Equations (6) and (7), $\alpha$ appears in the update. However, $\alpha$ is not mentioned in Algorithm 1. Can you explain its role in the algorithm? Besides, how is $\alpha$ set in the experiments?** **A2** Thank you for raising this point. The reason that $\alpha$ does not appear in Algorithm 1 is that we set $\alpha = 0.25$ to simplify the expression. The parameter $\alpha$ stems from the condition in (4), and it controls the approximation error in the second-order optimistic method. Note that, unlike the Lipschitz constant, it is a **free** hyperparameter in our algorithm. Our method with Option I needs $\alpha \in (0, 1/2)$, and the parameter-free version (Option II) requires $\alpha \in (0, 1/4)$. We can simply select it as $\alpha = 1/4$ to unify, which is what we do in the theorems and experiments. --- **Q3 In the experiments, it is mentioned that “all the hyper-parameters are tuned to achieve the best performance per method.” Can you provide details on the tuning process for these hyper-parameters?** **A3** Thanks for the question. For our first adaptive and line search-free second-order optimistic method, we simply choose $\lambda_t = L_2$. For the parameter-free method, the only hyper-parameter is $\lambda_0$, and we use a heuristic initialization for our method. We choose an initial point, $z_0$, and generate a second random point $\hat{z}_0$ which is close to $z_0$. Then, we compute the local $L_2$ estimate to initialize $\lambda_0 = 2 \\| F(\hat z_0) - F(z_0) - \nabla F(z_0) (\hat{z}_0 - z_0) \\| / \\|\hat{z}_0 - z_0 \\|^2$. Note that we do not tune $\lambda_0$ as a hyper-parameter but use a simple initialization rule, which is in parallel with the proposed update recursion for $\lambda_t$ in Algorithm 1, line 4. For the HIPNEX method in [30], it has a hyperparameter $\sigma \in (0, 0.5)$, which we choose in the interval $[0.05, 0.1, 0.15, …, 0.45]$ for the best performance. Other hyper-parameters are determined by the formulas from the paper [30]. For the Optimal SOM, the initial step size is set to be unit as prescribed. Their algorithm has two line search hyperparameters $\alpha, \beta$. Note that their $\alpha$ is the same as ours, and we search for the best choice of $\alpha$ and $\beta$ for their algorithm from the interval [0.1, 0.2, …, 0.9]. We use the combination that achieves the best empirical result. Thank you for raising this point and we will add the above details to the revision. --- Rebuttal Comment 1.1: Comment: Thanks for your response. It addresses all my questions. And I'm willing to raise my score. Best.
Summary: This paper consider solving convex-concave minmax problems using second order optimization method. A modified adaptive Optimistic algorithm by approximating the proximal point method using second order information is proposed. It is shown that this method can achieve the optimal convergence rate. Moreover, comparing with the previous second order methods, the method in this paper avoid the line search process and can avoid the precise knowledge of the Lipschitz constant used in finding suitable step sizes. Strengths: (1). The paper is well written. The basic mechanisms of the proposed algorithms and the difficulties of their analysis are clearly stated at the beginning of the paper. (2). The motivation of this work that avoid the seemingly complex process of line search or knowledges of the Lipschitz constant is convinced. Moreover, the proposed algorithm, especially the (Option 2) method achieve this goal, although the additional assumption on the Lipschitz conditions on the gradient are assumed. Weaknesses: (1). Beside the theoretical advantages of the proposed methods and optimal SOM method, it is questionable that whether the proposed method has particle advantages compared to optimal SOM. From the experimental results, it seems both the convergence rate of optimal SOM (in Figure 1) and computational times in long term (in Figure 3) are better than the proposed adaptive SOM methods. Technical Quality: 3 Clarity: 2 Questions for Authors: (1). As in the weakness part, can the author provides more explains on the advantages of Adaptive SOM than optimal SOM? (2). From Figure 2, it seems Adaptive SOM methods will not keep decreasing after some time node. Is this the case? What will happen if we add the iterations of optimization? (3). From the work of (Mokhtari er al.,) both the optimistic and extra gradient methods can be put in a single framework of approximating the proximal point method. Can the authors provide some comments on the possibilities of using the second order approximate extra gradient method to construct algorithms? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: There is no potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1 Can the authors provide more explanation on the advantages of Adaptive SOM than optimal SOM?** **A1** In terms of convergence rate (# of iterations needed to reach an accuracy), one cannot expect our method to beat the optimal SOM. This is because the optimal SOM leverages the line search scheme to pick the largest possible step size, while our method achieves the same rate (up to constants) by **removing** the line search. Instead, our goal is to design a method that is easy to implement with minimal effect on performance. Indeed, there are scenarios where our method has better performance. Since our methods do not require line search, we expect them to be easier to implement and exhibit faster runtime, particularly when $L_2$ is large and, more importantly, when the problem is high-dimensional. When $L_2$ is large the line-search scheme might require several backtracking steps, and each step would be costly when $d$ is large due to the computation of Hessian and its inverse. Specifically, Figure 3 in our submission demonstrates the effect of increasing $d$ and $L_2$. Taking $10^{-15}$ as an acceptable accuracy, our methods reach the target faster than other methods when the problem has large $L_2$ and dimension. Furthermore, we have conducted new experiments with larger Lipschitz constant and higher dimensionality. Please check out Figure 1 in the rebuttal PDF and the explanations we provided in the global response. With increasing dimensions and Lipschitz constant, we observe that our methods show significant gains against optimal SOM and HIPNEX. --- **Q2 From Figure 2, it seems Adaptive SOM methods will not keep decreasing after some time node. Is this the case? What will happen if we add the iterations of optimization?** **A2** Let us highlight that Figure 2 studies the case where the Lipschitz constant is very small (L=1), which is in favor of the line-search methods. Optimal SOM can easily pick a large step size via the line search and converge faster, whereas our adaptive methods take a more conservative approach with smaller step sizes. Our methods will eventually reach the same error as optimal SOM with more iterations. Moreover, Figure 2 reports the performance against the number of iterations, which does not display the cost of line search. The negative effect of increasing the Lipschitz constant **in runtime** for optimal SOM can be observed through Figure 3 (g) -> (h) -> (i). --- **Q3 From the work of (Mokhtari et al.,2020) both the optimistic and extra gradient methods can be put in a single framework of approximating the proximal point method. Can the authors provide some comments on the possibilities of using the second-order approximate extra gradient method to construct algorithms?** **A3** This is an excellent question. Integrating our technique with the extragradient (EG) framework to remove the line search turns out to have some technical issues. Specifically, one of the fundamental components that help us avoid the line search is the data-adaptive recursion for $\eta_t$, and finding the respective formula for EG is the first challenge. Let us explain briefly without the proof details. EG computes an intermediate sequence, which we can call $z_{t+1/2}$, and uses the gradient information at this middle point to achieve the next point $z_{t+1}$. It is possible to propose a choice of $\eta_t$ that depends inversely on $\\|F(z_t)\\|$, which implies that, to lower bound $\eta_t$, we need to find an upper bound on $\\|F(z_t)\\|$. However, we are only able to upper bound $\\|F(z_{t+1/2})\\|$ instead of $\\|F(z_t)\\|$. This discrepancy prevents us from establishing the same convergence guarantee as in this paper. Nevertheless, this is a truly interesting question (also for minimization problems) and we are actively exploring whether a unification is possible.
Summary: The authors present a new second-order method for convex-concave min-max optimization based on the optimistic gradient method but modified to work with second-order information. The authors first propose a variant that requires the value of Jacobian Lipschitzness $L_2$ and then introduce an additional parameter-free variant that does not require any hyperparameters. The proposed method obtains the optimal rate for this setting and appears to be quite practical. Finally, the authors test their method on a toy problem and compare it to other second-order methods for the same problem, where we can see the proposed method to perform the best. Strengths: The authors obtained the optimal rate for the studied problem with a practical method, which doesn't require solving complicated sub-problems. Furthermore, the authors designed a parameter-free version which appears to be very practical. Weaknesses: 1. Unlike previous work on second-order methods for minimization and variational inequality, this work requires the gradients to be Lipschitz. I'm not even sure if that makes the method optimal in the class since usually the lower bounds are established without this extra assumption. 2. The problem might lack applications in machine learning. Min-max problems became popular recently due to a period of time when GANs were used, but it's a bit unrealistic to expect a second-order method to be useful in high-dimensional applications. 3. I think some related work is missing in terms of the algorithm design, in particular, error feedback papers (since this work uses error vectors) and papers on regularized Newton methods (since this work also tries to eliminate line search). 4. Some more practical experiments would have been welcome here. ## Minor The light green in the plots is very difficult to see with the white background, I'd suggest changing the colors in the plots based on modern standards for figure formatting In Section 7, vectors $x$ and $y$ should be made bold to be in line with the formatting style of the rest of the paper Some equations, in the main body and in the appendix, are missing punctuation, e.g., lines 178, 203, 207, 452, 454, 488, etc. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. I do not see any mention of the error feedback technique, but the method resembles it a lot. Did it motivate the authors in any way? 2. Can a faster convergence rate of $O(1/T^2)$ be established for the same method when we specialize the problem to minimizing a convex function, as has been established for regularized Newton methods? 3. Why do you need to study specifically min-max problems? 4. Is Assumption 2.3 needed only to bound the iterate norms? 5. You assume the gradients to be Lipschitz, doesn't it mean that the $O(1/T^{1.5})$ lower bounds that you cite are no longer applicable since they are established for the class of operators with Lipschitz Jacobian? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I find the limitations section to be right on point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 This work requires Lipschitz gradients thus lower bounds are not applicable.** **R1** First, we would like to highlight that our first method (Option I) does not require Lipschitz gradients and achieves the optimal rate for its setting. However, the parameter-free version (Option II) does require the Lipschitz gradients. Consequently, we agree that the existing lower bounds do not apply when Assumption 2.3 is in play. We will add a paragraph to clarify this. However, we would like to propose educated guesses based on analogous lower bounds in convex minimization. Arjevani et al. (2019) shows that for convex minimization with Lipschitz gradient and Hessian, the optimal rate is $O(\min\\{T^{-2},T^{-3.5}\\} )$. Since the lower bound for minimizing a convex function with only Lipschitz Hessian is $O(1/T^{3.5})$, the Lipschitz gradient assumption does not improve the lower bound. Similarly, we believe this additional assumption for min-max optimization would not lead to a rate improvement. We will include a remark in our revised version. --- **W2 Second-order methods may not be useful in high-dimensional applications.** **R2** We acknowledge that second-order methods might have limited use as the dimension of the data and the model increase. However, there are scenarios where they could be beneficial. Second-order methods can converge in fewer iterations, though they incur the extra cost of calculating the Hessian and its inverse. Thus, there is a trade-off between convergence rate and per-iteration cost. In cases where gradient evaluation is expensive, second-order methods may be more favorable than first-order methods, as they require fewer iterations and gradient queries. To improve implementation efficiency, techniques like computing the (inverse) Hessian-vector products (HVP) can significantly reduce costs. [Dagréou et al., 2024] showed that the cost of HVP is less than twice the cost of gradient computation. Thus, we may scale our method to high-dimensional problems when combined with such techniques - please also check out our response **A3** to **Reviewer GGjT**. --- **W3 Some related work is missing.** **R3** Regarding regularized Newton’s methods, we covered the relevant papers for min-max optimization to our knowledge [18,19,20,21,22,23]. We understand that our reviewer might also refer to regularized methods for convex minimization. We only cite the core work [29], and we will happily add the following on (accelerated) cubic methods [Nesterov, 2008; Jiang et al., 2020] and quadratic regularization variants [Mischenko, 2021; Antonakopoulos et al., 2022]. Regarding error feedback, we were not initially aware of this line of work and built our method purely based on optimistic methods, which have been studied since 1980 [9] and have become popular recently [12, 14, 15]. However, we agree one could draw a high-level connection. Focusing on the pioneering work [Seide et al., 2014], error feedback algorithms keep an aggregated vector of errors from an “approximation” step such as compression/quantization. This **aggregated** error is added every iteration to correct the update of the local point. We highlight that our optimistic approach focuses only on the difference between the current and the last iterate, while error feedback vectors accumulate the entire history of errors. We will include a discussion on error feedback algorithms and their relevance. Thank you for making us aware of this. --- **W4 Practical experiments.** **R4** We have included a new experiment for the problem of maximizing the area under the ROC curve. We also repeated the experiment in our paper with higher dimensions and larger Lipschitz constants. Our new plots indicate a clear advantage for our algorithms compared to optimal SOM and HIPNEX once the dimension and $L_2$ increase. Please check out our new plots in the rebuttal PDF and the discussions in our global response. --- **Q1 Error feedback technique.** **A1** Please check our response in **R3**. --- **Q2 Can a faster rate of $O(1/T^2)$ be established for minimizing a convex function?** **A2** This is an excellent question. To our knowledge, no second-order optimistic method for convex minimization, even with line search, achieves a rate better than $O(1/T^{1.5})$. Our current analysis gives this same rate for the convex minimization setting. However, quadratic regularization of Newton’s method with adaptive regularization [Mishchenko, 2022] and cubic regularization of Newton’s method [29] achieve a faster rate of $O(1/T^2)$. Therefore, we conjecture that a minor modification of our method should achieve the same rate. This is a direction we are currently pursuing. --- **Q3 Why study Min-Max?** **A3** Min-max problems have been studied for several decades, long before the popularization of GANs. They have been explored in various formulations, such as variational inequalities, across fields like game theory, economics, and multi-agent learning. With the recent interest in second and higher-order methods, many open problems in min-max optimization need new algorithmic designs. We believe our adaptive parameter strategies offer a solid step forward by providing a new framework to bypass line search, applicable to minimization algorithms as well. We will include a thorough discussion on min-max problems, their relevance, and the implications of our design for other fields. --- **Q4 Assumption 2.3.** **A4** The reviewer is correct: this assumption is only needed to ensure iterates remain bounded. Note that many papers on parameter-agnostic algorithms **artificially** assume bounded iterates (see [32, 33,34, 35, Antonakopoulos et al., 2022]). We addressed this by replacing boundedness with the milder assumption of Lipschitz gradients. Although this complicates the analysis, we believe it is an important step forward. --- **Q5 The $O(1/T^{1.5})$ lower bounds are not applicable.** **A5** Please check our response in **R1**. --- Rebuttal Comment 1.1: Comment: Thanks for your response. W1. Thank you for the clarification regarding the assumptions, I did miss that Assumption 2.3 is not needed for the first method. I'm not sure I understand your argument regarding the lower bounds. As you pointed out, the lower bound of Arjevani et al. includes a $1/T^2$ term. If the gradients are Lipschitz with a small constant and the Hessians are Lipschitz with a large one, the $1/T^2$ term can be much smaller. This is especially realistic since the Lipschitzness of the Hessians is a strictly stronger assumption on any bounded set. I'd suggest the authors refrain from any big claims on optimality of their methods when Lipschitz gradients are assumed. W2. I cannot agree with your argument about the Hessian-vector product. If we're using backpropagation, why should we compute it for the quadratic approximation of the problem (to solve the Newton iteration) instead of computing it for the problem itself? In my experience, Newton-like methods are only useful when we can use efficient linear algebra solvers for the arising linear systems, while in situations where we use backpropagation, they only introduce extra hyperparameters. W3. I'm surprised the authors were not familiar with the error-feedback literature as you even used the same notation $e_t$ for the error term, though I realize it is the natural choice. I think it's worth citing a paper on the topic since the connection is so strong, with the mention that you designed your method independently. W4. I hoped the authors would find an example from some recent NeurIPS papers concerned with applications where minmax problems needed to be solved, to provide an interesting example of minmax problem and test the methods on it. I realize the method is unlikely to be useful for GAN training, but if there are no relevant problems at all, it raises again the question of how relevant the designed methods are to NeurIPS. Q2. Thanks for the response, I'm looking forward to the new method. Q3. I apologize for formulating my initial question so loosely, I did not mean to question your interest in minmax problems, though it was interesting to read your response to this question. I meant to ask why your theory does not give us guarantees for the more general problem of monotone inclusion, which generalizes unconstrained minmax optimization. --- Rebuttal 2: Comment: Thank you for reading our rebuttal and for sharing your additional comments. **W1 Arguments regarding the lower bounds.** Sorry for the confusion. Indeed, the lower bound presented in Arjevani et al. (2023) takes the form $\Omega(\min \\{ L_1D^2/T^2, L_2 D^3/T^{3.5}\\})$, where $L_1$ is the gradient Lipschitz constant, $L_2$ is the Hessian Lipschitz constant, and $D = \\|z_0 -z^*\\|$ is the initial distance to the optimum. As the reviewer correctly points out, determining which of the two bounds is the minimum depends on the Lipschitz constants and the initial distance. However, for sufficiently large $T$, the latter bound, $L_2 D^3/T^{3.5}$, will eventually become the smaller one, implying that the optimal dependence on $T$ is $1/T^{3.5}$. Likewise, our hypothesis is that the lower bound for min-max optimization with Lipschitz gradients and Hessians would follow a similar structure given by $\Omega(\min \\{ L_1D^2/T, L_2 D^3/T^{1.5}\\})$. If that is indeed the case, we can argue that our convergence rate is optimal in terms of the dependence on $T$. We will make sure to clarify this nuance in our revision and provide the necessary context. Thank you for your insightful question. --- **W2 Arguments about the Hessian-vector product.** We apologize if our arguments on Hessian-vector products (HVPs) caused any confusion. Our reviewer has a valid point that computing gradients with backpropagation is generally more cost-effective than dealing with HVPs. However, it is also possible to compute HVPs effectively using the backpropagation framework (there are Pytorch packages specifically developed for this purpose), thereby avoiding explicit matrix inversions. Having that said, we acknowledge that this may not be suitable in all scenarios. We intended to highlight examples, such as Tran and Cutkosky (2022), which demonstrate efficient implementations of second-order methods using HVPs. Finally, we would like to remark that our focus is mainly on the theoretical aspects of optimization methods. Nonetheless, we acknowledge that making our method more practical is an important direction for future research. Hoang Tran and Ashok Cutkosky. Better SGD using Second-order Momentum. NeurIPS 2022. --- **W3 The error-feedback literature.** Thank you for again making us aware of this line of work; we will include a discussion on error feedback in our revision. Also, please feel free to suggest any specific references that you have in mind beyond Seide et al. (2014). Frank Seide, et al. "1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs." Interspeech 2014. --- **W4 I hoped the authors provide an interesting example of min-max problems and test the methods on it.** Thank you for your suggestion. Another possible application of min-max optimization in machine learning is robust adversarial training (Tsipras et al., 2018; Madry et al., 2018), where the goal is to train a classifier that is robust to adversarial perturbations. This problem is indeed of high interest in machine learning and appears in several applications. Moreover, Javanmard et al. (2018) showed that in the special case of linear regression, the adversarial training problem is equivalent to a convex-concave min-max problem, which satisfies the assumptions in our paper. If the reviewer finds it necessary, we would be happy to test our proposed algorithms on this problem and report the numerical results here. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint 2018. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR 2018. Adel Javanmard, Mahdi Soltanolkotabi, and Hamed Hassani. Precise tradeoffs in adversarial training for linear regression. COLT 2020. --- **Q3 Why your theory does not give us guarantees for the more general problem of monotone inclusion, which generalizes unconstrained min-max optimization.** Thank you for the clarification. In fact, our convergence results can be extended to the more general problem of monotone inclusion $0 \in F(z) + H(z)$, with some proper modification to the algorithm. For instance, instead of using the operator norm $\\|F(z_k)\\|$ in our step size rule (7), we will use $\\|F(z_k) + v_k\\|$, where $v_k$ is a specific element in $H(z_k)$ that we construct from the algorithm. In our submission, we chose to focus on an unconstrained min-max problem for the ease of presentation, so that we can better highlight the key novelty of our techniques and make it accessible to a broader audience. We are planning to add this additional result in the appendix. --- Rebuttal Comment 2.1: Comment: Thanks for the additional input. For the lower bounds, just please be precise when making statements about optimal rates. The paper that you mention is a good reference for error feedback. You don't have to study adversarial training numerically, I think your paper can be accepted as is with the theoretical focus. I agree the more general setting of monotone inclusion can be discussed in the appendix, though I think it's worth mentioning that your theory can be extended somewhere in the main body.
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful feedback. Following your suggestions, we have performed new experiments, and the plots are included in the shared PDF file. - **Practical advantage compared to optimal SOM**. To demonstrate the computational efficiency of our proposed line-search-free method, we consider the same min-max problem in Section 7 with a higher dimension and larger Lipschitz constant. We observe from Figure 1 that our line-search-free methods and HIPNEX in [30] consistently outperform the optimal SOM in terms of runtime. Moreover, the performance gap widens as the dimension of the problem increases; the line-search scheme requires several backtracking steps, especially when $L_2$ is larger, and each step would be costly due to the computation of Hessian and its inverse when $d$ is large. Additionally, both of our methods outperform the HIPNEX method. - **Application to AUC maximization problems**. We consider a new problem of maximizing an area under the receiver operating characteristic curve. This could be formulated as a min-max problem, where we want to find a classifier (set of weights) with small error that will also have a large area under the curve, as formulated in Eq 5.2 of [Lin et al., 2024]. Similar to the observations above, Figure 2 demonstrates that both of our methods outperform the optimal SOM and HIPNEX in terms of runtime, particularly in the early stages of the execution. - **The impact of the initial parameter $\lambda_0$**. We tested our parameter-free method on the same min-max problem in Section 7 where $L_2 = 10^4$ and $d = 10^2$. Varying the initial choice of $\lambda_0$ from $10^{-4}$ to $0.05$, Figure 3 shows that our method exhibits consistent performance. We also tested a heuristic initialization procedure used in our other experiments ("$\lambda_0$ random" in the figure). Specifically, we choose an initial point, $z_0$, and generate a second random point $\hat{z}_0$ close to $z_0$. Then, we compute the local $L_2$ estimate to initalize $\lambda_0=2\\|F(\hat{z}_0)-F(z_0)-\nabla F(z_0) (\hat{z}_0 - z_0) \\|/\\|\hat{z}_0 - z_0\\|^2$. We also observe that this heuristic strategy is competitive and works well across different settings. We will include these new experiments and the above discussions in our revision. --- **Additional references in our rebuttal:** Hoang Tran, Ashok Cutkosky. Better SGD using Second-order Momentum. NeurIPS 2022. Salehkaleybar, S., Khorasani, S., Kiyavash, N., He, N., & Thiran, P. Momentum-Based Policy Gradient with Second-Order Information, 2022. Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau. How to compute Hessian-vector products? ICLR Blogposts 2024. Yossi Arjevani, Ohad Shamir, and Ron Shiff. Oracle complexity of second-order methods for smooth convex optimization. Mathematical Programming, 2019 Yuri Nesterov. Accelerating the cubic regularization of Newton’s method on convex problems. Mathematical Programming, 2008. Bo Jiang, Tianyi Lin, and Shuzhong Zhang. A unified adaptive tensor approximation scheme to accelerate composite convex optimization. SIAM Journal on Optimization, 2020. Konstantin Mishchenko. Regularized Newton method with global $O(1/k^2)$ convergence, 2021. Kimon Antonakopoulos, Ali Kavis, and Volkan Cevher. Extra-newton: A first approach to noise-adaptive accelerated second-order methods. NeurIPS, 2022. R. Monteiro and B. F. Svaiter. An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods. SIAM Journal on Optimization, 2013. Tianyi Lin, Panayotis Mertikopoulos, Michael Jordan. Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee. 2024 Pdf: /pdf/09e982e9a67590035c41d5e436657a68a8da2c50.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses
Accept (poster)
Summary: The authors propose a skeleton-free pipeline for implicit 3D pose representation and transfer. They introduce the prediction of Jacobian fields to achieve shape-preserving representation, which facilitates the application of transferred poses. To enhance accuracy, a per-identity refinement step is included, utilizing intrinsic-preserving loss. Additionally, a cascaded diffusion model is employed on the compact pose representation to generate novel poses. Strengths: 1.The use of Jacobian fields for shape representation is innovative. 2.The method of incorporating an intermediate step for intrinsic preservation is justified. 3.Training a diffusion model on extracted implicit poses further demonstrates the effectiveness of the pose representation. Weaknesses: 1.FPS is utilized to sample keypoints as a basis for pose representation. However, for complex poses, FPS-sampled keypoints may not be evenly distributed on the surface due to a lack of awareness of mesh connectivity, potentially resulting in poor pose representation. 2.Given that the pose is represented by keypoints' coordinates and corresponding latent features, the shape information is inherently entangled in the keypoints' coordinates. This could lead to the shape information being encoded in the latent features. Technical Quality: 3 Clarity: 2 Questions for Authors: 1.The proposed method may face limitations when animating non-T-pose meshes if it relies heavily on T-pose assumptions for pose estimation and transfer. 2.It would be beneficial to demonstrate pose transfer results for both source and target meshes from the Mixamo dataset and compare these with results from the ECCV 2022 paper "Skeleton-free Pose Transfer for Stylized 3D Characters" to assess performance and efficacy. 3.Comparing the training and inference times with baseline methods would provide insights into the computational efficiency of the proposed approach. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, they are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer RKb2, Thank you for your positive comments. We are especially grateful for your recognition of the use of Jacobian fields as "innovative" and for acknowledging that our approach is well "justified". We provide our responses to your queries below. **Impact of the Sampling Method and Number of Keypoints.** We appreciate your suggestion for an in-depth analysis. We used FPS for keypoint extraction due to its simplicity and efficacy, which have made it a popular choice in geometry analysis (e.g., PointNet++ [Qi et al., NeurIPS 2017], KeypointDeformer [Jakab et al., CVPR 2021], DeepMetaHandles [Liu et al., CVPR 2021]). Here, we investigate the impact of the number of keypoints and their sparsity. We trained our model using SMPL and DeformingThings-Animals meshes, adjusting the pose extractor to extract 50, 25, and 10 keypoints, respectively. We evaluated the Point-wise Mesh Euclidean Distance (PMD) for SMPL meshes and Fréchet inception distance (FID) for DeformingThings4D-Animals meshes, following our paper. The results are summarized in Tables 1 and 2. Notably, the model trained with just 10 keypoints still outperforms the pretrained SPT model in the pose transfer using SMPL meshes. Please refer to Figures 3 and 4 in the PDF file. We will include this analysis in the future revision. **Table 1: PMD measured on SMPL pose transfer experiments with varying number of keypoints. "Ours-$N$" denotes a variant of our network trained to extract $N$ keypoints.** | Method | SPT | Ours-10 | Ours-25 | Ours-50 | **Ours-100** | |------------- |-----|---------|---------|---------|--------------| | PMD ($\times 10^{-3}$)| 0.28 | 0.20 | 0.17 | 0.17 | **0.13** | **Table 2: FID measured on meshes whose poses are transferred from class **bear3EP** of the **DeformThings** dataset. "Ours-$N$" denotes a variant of our network trained to extract $N$ keypoints.** | Method | Ours-10 | Ours-25 | Ours-50 | **Ours-100** | |----------|---------|---------|---------|--------------| | FID ($\times10^{-2}$) | 1.25 | 0.87 | 0.83 | **0.72** | **Potential Drawback of Representing Poses as Keypoints.** To prevent the leakage of source shape details into target shapes, we use Jacobian fields instead of vertex coordinates when extracting our pose representation. This way, our network predicts per-triangle *local transformations* to deform the given template to match the pose example. Furthermore, training the refinement module by optimizing intrinsic-preservation losses helps preserve intricate local details. Please refer to Figure 5 in the attached PDF file for qualitative results of the ablation study. We will include more results in the revised version. **Assumption on T-Poses.** Thank you for pointing this out. It is true that our method requires a canonical pose for an identity (e.g., T-pose for humanoids). However, we believe our method remains practical for its intended purpose since default or canonical poses of deformable objects have been widely used not only for humanoids but also for animal shapes, including in works like SMAL [Zuffi et al., CVPR 2017], A-CSM [Kulkarni et al., CVPR 2020], BARC [Ruegg et al., CVPR 2022], BITE [Ruegg et al., CVPR 2023], VAREN [Zuffi et al., CVPR 2024], and 3D Fauna [Li et al., CVPR 2024]. These studies define a canonical pose as a shape standing still with its legs straight for quadrupeds. Nonetheless, there may be cases where obtaining, or even defining a canonical pose of an object is challenging. As such, lifting the necessity of template shapes is one of the directions that we are heading to. **Pose Transfer between Mixamo Meshes.** We appreciate your suggestion. We downloaded 28 motions in the test split of SPT [Liao et al., ECCV 2022] from the Mixamo repository, comprising a total of 3,025 frames. We followed the preprocessing steps of SPT to extract meshes from skeleton configurations. Since the pretrained SPT model was trained on a larger dataset consisting of shapes from AMASS, Mixamo, and RigNet, we re-trained the model to analyze its performance in the same problem setup as ours. For training, we used 3,025 different poses of a single character. After training, we used 300 poses transferred to 8 different characters to compute Point-wise Mesh Euclidean Distance (PMD) by leveraging the ground-truth correspondences. We report the results in Table 3. Please refer to Figure 6 in the attached PDF file for qualitative results. As illustrated, our method better retains the smoothness and details of the surface even when transferring poses involving articulations of limbs. This is also reflected in lower PMD in Table 3. We will add these results in the revision. **Table 3: PMD measured on Mixamo pose transfer experiments.** | Method | SPT | **Ours** | |--------------|------|----------| | PMD ($\times10^{-3}$) | 3.42 | **2.28** | **Training \& Inference Time Comparisons.** We summarize the training and inference time in Table 4. The inference time includes the time to transfer a pose of one identity to another via network forward passes. We used Mixamo meshes with approximately 10K vertices and 20K faces for evaluation. Note that for SPT, we measured the inference time using the official pretrained model. Our method demonstrates better runtime performance than SPT while outperforming NJF and ZPT in terms of pose transfer accuracy. We will include this analysis in the revised version of our paper. **Table 4: Training and inference time comparison** | Method | NJF | SPT | ZPT | **Ours** | |-------------------|------|------|------|----------| | Training | 2h | 20h | 9h | 8h | | Inference (per pair) | 0.005s | 1s | 0.004s | 0.03s | --- Rebuttal 2: Title: Response to rebuttal Comment: Given the additional clarification and experimental results, I am more inclined to recommend acceptance of this paper. However, I am also want to see the potential drawbacks of using jacobian fields for shape representation and failure cases caused by simply using FPS as basis(for extreme poses and loose clothes). Also, the performance seems drop a lot for SPT after your retraining, is it possible to train on the same dataset to compare with their reported results? --- Rebuttal Comment 2.1: Comment: Dear reviewer RKb2, We sincerely appreciate your positive feedback. Due to space constraints, we were unable to fully address all of your comments in the rebuttal. Please allow us to further address them below. **Potential Drawbacks of Using Jacobian Fields** We did not encounter noticeable failure cases associated with the use of Jacobian Fields. Our ablation study demonstrated that the overall quality of pose transfer improves with Jacobian Fields, both quantitatively and qualitatively. We will include additional qualitative results from the ablation study in the final revision. One potential drawback of using Jacobian Fields is the requirement for differential operators, such as the cotangent Laplacian and gradient operator. These differential operators may not be available for meshes with multiple connected components, as noted in the original NJF paper [Aigerman et al., ACM ToG 2022]. We will also clarify this limitation in the revision. **Failure Cases due to FPS** We also did not observe specific failure cases related to the use of FPS keypoint sampling. While we believe that simple FPS point sampling is sufficient for our framework, in the final revision, we will further test our method with other keypoint sampling techniques, such as uniform sampling, and report the results. Thank you for your detailed comments and suggestions. Please understand that the new experiment requires more time than is available in the remaining discussion phase. **Comparison with SPT** We would like to clarify that the results reported above are based on training both SPT and our model with shapes of a **single** identity, which aligns with our problem setup. In our work, we focused on learning pose representations from shapes of a single identity and transferring them to a new identity. In contrast, SPT uses a much larger dataset that includes a variety of identities as training data, which is why we could not directly apply their training/test splits to our method. While it is technically feasible to extend our framework to train with multiple identities, we found that it would require substantial changes to our algorithm. Also, please note that our method requires preprocessing of meshes to compute differential operators, and some Mixamo shapes fail during this preprocessing. In the revision, we promise to extend our method to train the networks with multiple identities and report the results using the closest train/test splits to those of SPT (excluding models that fail in the preprocessing). We would also like to emphasize that, while this is not a perfect apple-to-apple comparison due to the different train/test splits, the pose transfer accuracy of our method trained with a **single** identity ($2.28 \times 10^{-3}$ PMD) is comparable to that of SPT, which was trained with a much larger dataset ($2.39 \times 10^{-3}$ PMD, as shown in Table 4 of the SPT paper). This demonstrates the effectiveness of our method, even with a significantly smaller training dataset.
Summary: This paper presents a novel representation learning framework for pose estimation which is disentangled from the identity of the object. This implicit representation is used for generating and transferring poses using a cascade diffusion model. A keypoint-based hybrid pose representation with a sparse mesh is used to ensure it is compact enough for the generative model. Moreover, the authors use Jacobian fields as the shape representation, allowing their implicit deformation network to take mesh face coordinates as input as opposed to vertex coordinates. The latent representations learned from this are used in a self-supervised per-identity refinement module for further improvement. This allows the proposed network to transfer motion sequences from one object to another. Moreover, these representations can be used to animate previously unseen characters. The authors provide extensive experiments with both quantitative and qualitative ablations, showing the superiority of their approach over three baseline networks, using both animals and humanoid objects from different datasets. Strengths: - The paper is very well-written with clear descriptions of motivation, research question, methodology, mathematical formulations, and experiments. - There are extensive experimental results to establish the superiority of the proposed method. In particular, the qualitative results clearly help visualize the improvements. - It is very easy to read Weaknesses: - No weaknesses Technical Quality: 4 Clarity: 4 Questions for Authors: -The results look very promising, and I am curious how this proposed method can be transferred to images or videos (with textures and backgrounds) insead of only mesh generation. Do you have any insights about how this will work and how your method can be adapted for this scenario? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations of Jacobian fields are discussed in the conclusion Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer xBSu, We highly appreciate your positive comments on our work, particularly regarding the writing and experiment procedures. Our response to your question can be found below. **Extension to Images and Videos.** We strongly agree that our work can be extended to other modalities, such as images and videos. For instance, the proposed pose representation and our cascaded diffusion model can be utilized as a prior for 3D-aware image editing tasks where estimating motion trajectories and handling occlusions are crucial. Specifically, we believe our method can be adapted to recent drag-based image editing techniques, such as DragGAN [Pan et al., SIGGRAPH 2023] and DragDiffusion [Shi et al., CVPR 2024]. When a user inputs a set of drag-based instructions describing complex articulations of animals that cannot be resolved in 2D, we may consider lifting the object to 3D by using a single-view 3D reconstruction model (e.g., Zero-1-to-3 [Liu et al., ICCV 2023]). The coarse 3D delegate of the image can then be given to our model as an input, which extracts a pose representation from the shape. As the shape is edited following the instructions, our diffusion model, trained on a collection of pose representations, can provide guidance signals via Score Distillation Sampling [Poole et al., ICLR 2023] to ensure the resulting deformation remains realistic while faithfully reflecting the user's input. Nonetheless, this is one possible scenario where our method can be applied, and we hope to extend it to a variety of applications in the future. --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading other reviewers' comments and looking at the additional experiments the authors have provided, I believe this work shows significant progress in an important field for the research community. The extensive qualitative results clearly show the advantages of the proposed approach. The authors' responses to other reviews are also satisfactory, and I stand by my original score for this paper. I strongly recommend it for acceptance. --- Reply to Comment 1.1.1: Comment: Dear reviewer xBSu, We highly appreciate your efforts in reviewing our work. Your valuable feedback has inspired us to explore future research directions by extending our work to other modalities, including images and videos.
Summary: This paper introduces a novel 3D generative model for pose-identity disentangled representation of 3D shapes. It proposes to use a set of keypoints with features to represent the pose of a 3D shape and learn a pose-extractor and pose-applier to accomplish pose transfer between instances. Experiments are conducted on widely used benchmarks with two categories: animals and humans, showing superiority of the proposed method compared to the baselines. Strengths: - The studies task is novel and important, with broad applications in different communities. - The proposed method is sound and effective. Using a dense set of keypoints to represent the pose of 3D shape is natural and well-motivated, which also results good results in the experiments. - The evaluation is extensive and informative. Experiments are conducted on widely-used datasets, with comparisons against enough baselines. For baselines without the code, the paper provides results with own implementation. Ablation studies show the effectiveness of the proposed components. - The writing of the paper is mostly clear, illustrating their method and evaluation in a clean manner. Weaknesses: - Although the exposition is mostly good to me, there still remains some questions that are not fully addressed. See the questions below. - No video results or animated results are provided. Therefore it's hard to evaluate the 3D/4D generated results (e.g., whether the result is 3D consistent from any view or whether the temporal consistency is good). - No qualitative results for the ablation study is provided. - It would be better if analysis on the keypoints can be provided. This might include analysis on the number of keypoints and the way to sample it. Technical Quality: 3 Clarity: 3 Questions for Authors: - How would the number of keypoints affect the result? What would be the fewest number of keypoints that is still feasible for this pipeline? - Regarding the experiments: - What is the detailed setting of NJF? If I understand correctly, NJF takes the pose parameter of the target pose as input. What should be the pose parameter for the deformingthings4d dataset? - How is the texture of the generated mesh predicted? In L265, the authors mentioned to use FID to evaluate the visual fidelity, yet how is this implemented, given that the generated mesh seems to not have texture? Also, how is the camera determined to generate the images used to calculate FID? - How is the "template" mesh defined in the framework, especially for categories without a template, such as animals? For humans, I understand that you may use T-pose human mesh as the template, but what would the case for the animals? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No failure cases are discussed in the paper. Under what condition would the proposed method fail? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer skND, Thank you for recognizing our work as a study of a "novel" and "important" task, and for appreciating our "sound" and "well-motivated" approach. We have carefully reviewed your queries and provide our responses below. **Temporal Consistency.** Thank you for providing the constructive comment. In our work, we mainly focus on learning a transferable pose representation from pose variations of a *single* identity, which is more challenging than our prior works. While our framework has not been explicitly designed for motion transfer--such as by using a temporal attention mechanism or training with motion supervision--our framework can transfer motions by transferring each frame individually as demonstrated in Figure 1 in the PDF file in our global response. We applied the One Euro Filter, a simple and easy-to-implement filter, to refine the smoothness of motions. We plan to extend our work for motion transfer in the future. We will include more results in the revised version. **Spatial Consistency.** While we only showed single-view images in our paper, our method produces meshes consistent in 3D as illustrated in Figure 2 in the accompanied PDF file. In particular, we rendered one of our results shown in Figure 5 of the main paper from 4 different viewpoints. We will add more images rendered from various viewpoints in the revision. **Qualitative Results for Ablation Study.** The qualitative result can be found in Figure 5 in the PDF file. As shown, directly using vertices results in low-quality meshes with noticeable distortions and artifacts. Our base model employing Jacobian fields produces high-quality shapes, which can be further improved by using the proposed refinement module. Note the preservation of intricate details near limbs. We will add the analysis and more results in the upcoming revision. **Impact of the Sampling Method and Number of Keypoints.** We appreciate your suggestion for an in-depth analysis. We used FPS for keypoint extraction due to its simplicity and efficacy, which have made it a popular choice in geometry analysis (e.g., PointNet++ [Qi et al., NeurIPS 2017], KeypointDeformer [Jakab et al., CVPR 2021], DeepMetaHandles [Liu et al., CVPR 2021]). Here, we investigate the impact of the number of keypoints and their sparsity. We trained our model using SMPL and DeformingThings-Animals meshes, adjusting the pose extractor to extract 50, 25, and 10 keypoints, respectively. We evaluated the Point-wise Mesh Euclidean Distance (PMD) for SMPL meshes and Fréchet inception distance (FID) for DeformingThings4D-Animals meshes, following our paper. The results are summarized in Tables 1 and 2. Notably, the model trained with just 10 keypoints still outperforms the pretrained SPT model in the pose transfer using SMPL meshes. Please refer to Figures 3 and 4 in the PDF file. We will include this analysis in the future revision. **Table 1: PMD measured on SMPL pose transfer experiments with varying number of keypoints. "Ours-$N$" denotes a variant of our network trained to extract $N$ keypoints.** | Method | SPT | Ours-10 | Ours-25 | Ours-50 | **Ours-100** | |------------- |-----|---------|---------|---------|--------------| | PMD ($\times 10^{-3}$)| 0.28 | 0.20 | 0.17 | 0.17 | **0.13** | **Table 2: FID measured on meshes whose poses are transferred from class **bear3EP** of the **DeformThings** dataset. "Ours-$N$" denotes a variant of our network trained to extract $N$ keypoints.** | Method | Ours-10 | Ours-25 | Ours-50 | **Ours-100** | |----------|---------|---------|---------|--------------| | FID ($\times10^{-2}$) | 1.25 | 0.87 | 0.83 | **0.72** | **Clarification on NJF.** In our experiments, we followed the setup of the morphing humans experiment in [Aigerman et al., ACM ToG 2022] where an MLP takes PointNet latents encoded from both source and target shapes and predicts a Jacobian field of the source shape that morphs it to the target shape. We did not use the setup of the re-posing humans experiment as our work does not rely on parameterizations, such as skeletons. We will clarify this in the revision. **Clarification on FID Computation.** Unlike 2D images and videos with abundant reference data collected over the Internet, assessing the realism of 3D shapes remains challenging as there is no standardized way. We followed previous work including MeshDiffusion [Liu et al., ICLR 2023], 3DShape2VecSet [Zhang et al., ACM ToG 2023], MeshGPT [Siddiqui et al., CVPR 2024], and Make-A-Shape [Hui et al., ICML 2024] that compute FID using images rendered from multiple viewpoints without texture. Specifically, we rendered images from 4 viewpoints (front, back, left, and right) at zero elevation. Following MeshDiffusion [Liu et al., ICLR 2023], we applied a grayscale diffuse material for reference and output shapes and applied Phong shading to obtain images with depth cues. We will include these details of the evaluation settings in the revised version. **Template Meshes for Animals.** For quadrupeds, used in our experiments and comprising a large portion of animal species, we consider shapes standing with straight legs as the animal equivalent of the T-pose for humanoids. This convention has been used in several works, including SMAL [Zuffi et al., CVPR 2017], A-CSM [Kulkarni et al., CVPR 2020], BARC [Ruegg et al., CVPR 2022], BITE [Ruegg et al., CVPR 2023], VAREN [Zuffi et al., CVPR 2024], and 3D Fauna [Li et al., CVPR 2024]. Still, there may be cases where obtaining, or even defining a canonical pose of an object is challenging. Therefore, lifting the necessity of template shapes is one of the directions that we are heading to. **Limitations** Following NJF [Aigerman et al., ACM ToG 2022], our method uses differential operators to compute Jacobian fields. Therefore, it may not be directly applicable to meshes with defects (e.g., duplicate vertices) or multiple connected components. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! Most of my concerns have been addressed, and I will raise my score. I wish the changes and comments from all reviewers could be incorporated to the camera-ready version of this paper.
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to review our submission and for providing constructive and insightful comments and feedback. We have compiled the qualitative results discussed in our rebuttal in the attached PDF file. Pdf: /pdf/321415053c54f359a1c6d9dde9057a4f1ec61cb2.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
WildGaussians: 3D Gaussian Splatting In the Wild
Accept (poster)
Summary: This paper achieves in-the-wild reconstruction of 3D Gaussian splatting by introducing appearance embedding and DINO uncertainty mask. The appearance embedding is divided into a per-photo global embedding and a per-Gaussian local embedding. The uncertainty mask is obtained by comparing the rendered image with the ground truth DINO feature maps. Strengths: 1. The presentation of this paper is very clear and easy to understand. 2. The performance in removing occluders is impressive. Weaknesses: 1. This paper lacks innovation; both appearance embedding and the uncertainty mask are derived from previous work. Appearance embedding has already been demonstrated in SWAG, and using DINO features to address uncertainty is explicitly mentioned as coming from NeRF On-the-go. 2. In the section "Test-Time Optimization of Per-Image Embeddings," the authors mention using test images to optimize per-image embedding. In the original NeRF in-the-wild paper, their experimental setting involves using the left half of an image to optimize the embedding and evaluating it on the right half. Clearly, there is an unfair comparison in your experiments. Based on my experience, using the full image for optimization during the test, even if the rest of the 3DGS remains unchanged, can lead to overfitting the per-image embedding to the corresponding test image. It would be beneficial to include experimental data using the half-image optimization approach and conduct a fair comparison. 3. The qualitative results lack the demonstration of applying a single per-image embedding to other viewpoints. For instance, in Figure 5, it is not shown whether the night scene's appearance is correctly maintained from different perspectives. Technical Quality: 3 Clarity: 3 Questions for Authors: The method mentioned in the text will have different memory overheads and training times depending on the number of images in different scenarios. Can you provide data on the GPU memory overhead and training times specific to each scenario? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper mentions that when there are too many occluded regions, this method cannot correctly fill in the areas after removing the occluders, and it may require the help of a diffusion model to resolve this. On the other hand, if this method needs to be optimized with complete ground truth for each new environment (occluders, illuminations, and weather), it can be said to have no generalization ability. I will consider raising my score if the author can provide experiments with half-image test-time optimization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We appreciate that you find that “the performance in removing occluders is impressive”, and would like to raise the score if we can justify half-image test-time optimization. We will address all comments below and will adjust the paper accordingly. **W1: Novelty** We consider SWAG (published on arXiv on March 15th, latest revision: April 5th) as concurrent work and have developed our approach independent of the paper. While SWAG also used appearance embeddings, there are important technical differences. Similar to our approach, the authors of SWAG were motivated by Urban Radiance Fields to try predicting affine transformation parameters for the SH coefficients to model changes in appearance. The poor performance of this approach (see Tab. 8 in the SWAG paper) led the authors to conclude that “affine color transformations cannot model all appearance changes” and motivated the SWAG approach, which uses a feature grid with an MLP for color prediction. This MLP has to be evaluated for all Gaussians for each rendered view. As a result, SWAG has “10 times longer inference time per frame [5] compared to 3DGS”. Our approach shows that the conclusion drawn by the authors of SWAG regarding the expressiveness of affine transformations is wrong. In addition, our approach is both significantly faster to render and achieves better results (see PDF). We believe that this is interesting to the community. While our approach for handling transient objects is inspired by NeRF On-the-go, Fig. 3 in the main paper shows that the original formulation from NeRF On-the-go is not directly applicable to our problem setting. This is due to the formulation not being robust to appearance changes (which are not considered in NeRF on-the-go), see L186-195. We believe that showing how to adapt this approach to the case of changing appearance is interesting for the community. **W2: Fairness of the comparison** Thank you for raising this concern. For Photo Tourism, we actually follow the NeRF-W evaluation protocol (use half of the image for test-time optimization and only evaluate on the other half), but did not make this clear in the paper. For the NeRF On-the-go dataset, there is no test-time optimization needed. We will clarify this in the paper. Looking at the source code of GS-W (released after the NeurIPS submission deadline), we noticed that GS-W actually computes appearance information from the full test image. This indeed significantly boosts their performance. When using the NeRF-W protocol, our approach clearly outperforms GS-W. See the PDF for details. **W3: Appearance embedding applied to other viewpoints** We agree showing the consistency of the appearance modeling would benefit the paper. Unfortunately, we did not manage to complete the video in time. For the rebuttal, we include images sampled from our video in the PDF. As can be seen, the scene’s appearance is correctly maintained under viewpoint changes. **Question: GPU memory and training time per scene** Below, we provide the peek GPU memory and training times for individual scenes: | Scene | Num. Images | GPU Memory (GB) | Training Time | |--------------------|-------------|-----------------|---------------| | Trevi Fountain | 1689 | 39.2 | 10h 7m | | Brandenburg Gate | 763 | 8.0 | 6h 40m | | Sacre Coeur | 830 | 8.9 | 6h 2m | **Limitations: Incapable of recovering with too many occlusions** We wanted to express the following: If there are not enough observations of a part of the scene, e.g., because it is occluded in nearly all training images, our approach will struggle to correctly reconstruct the region, even if it filters out all occluders. The same limitation applies to other works in this field, e.g., NeRF On-the-go, GS-W, SWAG, NeRF-W, etc.: It is only possible to reconstruct areas that have been sufficiently observed. Still, as shown in our experiments, our approach works well without access to complete ground truth (we estimate occlusion masks during training rather than using ground truth masks) in realistic conditions. We will clarify this in the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. My primary concern has been adequately addressed through the additional experiments and detailed explanations you have provided. While I still have reservations regarding the novelty of this work, I do not think this significantly detracts from its overall contribution. I will raise my rating to borderline or weak accept. --- Reply to Comment 1.1.1: Comment: Thank you for the response and we are pleased to see that our rebuttal has addressed the your concerns. We kindly ask you to raise the recommendation rating to match the accept rating. Thanks!
Summary: The authors present WildGaussians, an innovative method designed to address occlusions and appearance changes using 3DGS. By utilizing robust DINO features and incorporating an appearance modeling module into 3DGS, their approach achieves state-of-the-art performance. WildGaussians not only matches the real-time rendering speed of traditional 3DGS but also outperforms both 3DGS and NeRF baselines when dealing with in-the-wild data. This is accomplished within a straightforward architectural framework, making the method both efficient and effective. The results demonstrate that WildGaussians can handle complex scenarios involving dynamic objects and varying appearances, setting a new benchmark for real-time 3D rendering and modeling. Strengths: The overall approach of this paper is commendable, with detailed methods and experiments. Specifically: (1) By modeling Gaussian embeddings and image-specific embeddings to capture scene variations, it effectively represents both local and global perspectives; (2) By comparing the DINO features of rendered images and ground truth images, it reasonably models uncertainty and extracts masks, ensuring stable training for 3DGS. Weaknesses: The results section of the paper is relatively complete overall, but the results mainly focus on the removal of small-scale dynamic objects (Fig. 4). Theoretically, using DINO features can address large-scale occlusions. The authors could add some visual examples to illustrate this. Additionally, are the PSNR values in Tables 1/2/3 calculated on the images after masking? Additionally, could you compare the results with the latest SWAG [5] and GS-W [42], and then discuss the similarities and differences with them? Typos: L145, it is possible pre-compute -> it is possible to ...; L102, handle occludersduring -> handle occluders during Technical Quality: 3 Clarity: 3 Questions for Authors: As discussed above. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the positive and constructive feedback! We highly appreciate that you find our approach “commendable, with detailed methods and experiments” and that the paper “demonstrate[s] that WildGaussians can handle complex scenarios involving dynamic objects and varying appearances, setting a new benchmark for real-time 3D rendering and modeling”. We will address all concerns below, will fix the typos, and will adjust the paper accordingly. **Large-scale dynamic objects** Indeed, our method should be able to handle very large dynamic objects. However, please note that some of the objects removed are quite large. As we report in Table 3, for some scenes the occluders occupy on average 26% of the entire image. Moreover, in some scenes in the NeRF On-the-go dataset, occluders can even be buses or trams, which we consider “large occluders”. We will add examples to the paper. For the dataset visualization, please refer to the website of NeRF On-the-go. **Is PSNR computed after masking?** No masks were used for the PSNR computation (nor for the other metrics). The test images of both Photo Tourism and NeRF On-the-go do not have occlusions. We will clarify this in the paper. **Comparisons to SWAG and GS-W** We consider both SWAG (published on arXiv on March 15th, latest revision: April 5th) and GS-W (published on arXiv on March 23, latest revision: July 14th) as concurrent work. We developed our work independently from them. All three approaches (SWAG, GS-W, ours) consider the same problem (modeling scenes with changing appearances from images containing occluders) and closely follow NeRF-based approaches proposed for the problem: they model appearance per Gaussian using appearance features stored per Gaussian and handle dynamic objects/occluders during training. They differ in the way both these parts are implemented: - To model appearance changes, **SWAG** uses a feature grid with an MLP for color prediction. This MLP has to be evaluated for all Gaussians for each rendered view. As a result, SWAG has “10 times longer inference time per frame [5] compared to 3DGS”. In contrast, our approach simply uses a shallow MLP to predict an affine transformation of the SH coefficients to obtain the final color. This approach is much more efficient and is “backward compatible” with 3DGS (see L144-147 in the paper), i.e., for a fixed target appearance it is not necessary to evaluate an MLP during rendering. To handle transient objects, SWAG introduces a trainable, image-dependent occupancy term per Gaussian. In contrast, our approach uses pre-trained DINO features to predict which training image regions contain static scene parts respectively dynamic objects. - To model appearance changes, **GS-W** combines per-Gaussian appearance features with features selected (per Gaussian) from a reference image to predict colors via MLPs. Similar to our approach, this approach to appearance modeling is significantly more efficient than the one used by SWAG. To model transient objects, GS-W trains a Unet-based model to predict a visibility map from a given training image, while our approach predicts uncertainties by comparing features from rendered and actual images. As can be seen, all three approaches differ significantly in the way they implement both stages. Additionally, our approach outperforms both SWAG and GS-W. We will extend our discussion in L86-91 to more clearly highlight the differences and add the results for SWAG and GS-W to the paper. The attached PDF compares our approach to both SWAG and GS-W. Please note that the results reported for GS-W differ from those reported by the authors of GS-W. The GS-W code, released about a month ago (i.e., after the NeurIPS submission deadline), uses the full test images for computing appearance information. The results reported for GS-W in the PDF were obtained by adjusting the code to follow the common test protocol used by all other approaches (including ours), which use one half of each test image to compute appearance information and the other half for evaluation. As can be seen, our approach outperforms both SWAG and GS-W. We will add the results to the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. Your responses have already addressed my concerns. This is a nice work, and I will keep my rating as accept. --- Reply to Comment 1.1.1: Comment: Thank you so much for your positive feedback on our rebuttal. We are truly glad that our responses have successfully addressed your concerns. Given your positive assessment, we are wondering if you could consider raising your rating from the current "borderline accept" to "weak accept" or higher? Thank you again for your time and consideration. We truly appreciate your valuable input throughout this review process. --- Rebuttal 2: Comment: I have raised my score to accept. Thanks for your efforts during the rebuttal and discussions. I hope the authors can include the comparisons and discussions in the revision to make the paper more solid and convincing. --- Rebuttal Comment 2.1: Comment: We sincerely thank you for raising the score to 7. We greatly appreciate all reviewers' constructive feedback and fully agree that the revisions you suggested will enhance the quality of our paper. As promised, we will incorporate all of these improvements in the updated version.
Summary: The authors are proposing WildGaussians, an approach based on 3D Gaussian Splatting (3DGS), that tries to address its robustness issues, specifically to significant appearance changes due to varying illumination or occlusions and dynamic objects. First, explicit appearance modeling is introduced into 3DGS by using a small MLP to predict an affine color transform of the predicated color from the base color, image-specific appearance embeddings and per-Gaussian appearance embeddings. Second, DINOv2 features, known to be robust to significant appearance changes, are leveraged to deal with occlusions: DINOv2 patch-level features are computed on training and predicted images, upscaled and binarized into a mask directly plugged into the optimization loss to mask out areas of low certainty. The specific contributions of this submission are: - extending 3DGS to support appearance modeling via additional embeddings combined with a tone-mapping MLP, - extending 3DGS to be robust against appearance changes by using the similarity of DINOv2 features between training and predicted images to mask out uncertain regions, - qualitative and quantitative evaluations of the combined changes on NeRF On-the-go and Photo Tourism datasets with comparison against relevant baselines (3DGS and NeRF On-the-go the NeRF On-the-go dataset or 3DGS, NeRF, NeRF-W-re, Ha-NeRF, K-Planes and RefinedFields on the Photo Tourism dataset). Strengths: - The submission is highly relevant to the research community since it deals wit improving the robustness of an influential technique on in-the-wild datasets. The proposed contributions are clear and simple extensions to 3DGS each addressing targeted robustness gaps and their presentation is solid and easy to follow thanks to this split between appearance and uncertainty modeling. - The presented results are also solid. WildGaussians significantly outperforms baselines on NeRF On-the-go datase in Table 1 (except on low occlusions) and outperforms relevant baselines on Photo Tourism. The qualitative comparison of Figure 4 and 5 with selected artifacts in baselines help convincingly demonstrate the robustness of WildGaussians to occlusions and illumination changes. - The (extended) ablations study (from the supplementary material) are extensive and cover the expected incremental changes and variants. Weaknesses: - The proposed changes to 3DGS seem of limited novelty: - appearance embeddings combined with an MLP to produce an affine mapping of colors is heavily inspired by NeRF derivatives like Urban radiance fields (with however some adjustments, such as per-Gaussian appearance embeddings and the required custom initialization), and - leveraging DINOv2 features to build an uncertainty mask is similar to NeRF on-the-go (CVPR'24 so quite recent though). Applying variants of previous contributions to 3DGS is thus quite incremental. - Some reference (and comparison) to related work appears to be missing: [Robust Gaussian Splatting](https://arxiv.org/abs/2404.04211) (April 2024). Note there are also several relevant concurrent works that have since appeared (but which would be unreasonable to call out as a weakness). - The justification to introduce a binary uncertainty mask (over uncertainty weights) could be improved with more illustration of the problems encountered as well as the specific choice of threshold. - Some minor typos to correct: - l.102 occludersduring -> occluders during - l.204: ocupacity -> apacity - l.510: borader -> broader Technical Quality: 3 Clarity: 3 Questions for Authors: - Why are the considered baselines different depending on the dataset? Only 3DGS is used on both the NeRF On-the-go and Photo Tourism datasets. - Any additional insights on why WildGaussians fares worse with low occlusion in Table 1 (and how to mitigate this)? It seems the authors believe 3DGS is inherentily robust to low occlusion thanks to its initialization from an SfM point cloud. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the positive and constructive feedback! We really appreciate that you consider our work “highly-relevant to the research community”, and that our “contributions are clear and simple” while our method “significantly outperforms baselines”. We will address all concerns below, will fix the typos, and will adjust the paper accordingly. **W1: Limited novelty** While we are indeed inspired by both Urban Radiance Fields (URF) and NeRF on-the-go, there are important technical differences in applying these ideas in the context of 3DGS “in the wild”. We believe that they are interesting enough for the community to warrant acceptance. - **Differences to URF**: In order to handle exposure changes, URF only models a global affine transformation per image. In contrast, our per-Gaussian embedding enables handling local appearance changes in various parts of an image. The entry “w/o Gaussian embeddings [24]” in Tab. 4 corresponds to using the approach from URF. As can be seen, not using the per-Gaussian embeddings significantly reduces performance. The authors of SWAG report a URF-inspired approach for the same problem that performs significantly worse than their SWAG method (this observation is used to motivate the significantly more expensive SWAG approach, see our reply to reviewer nPTW for details). They conclude that “affine color transformations cannot model all appearance changes”. Our work disproves this statement as our method outperforms SWAG, as shown in the attached PDF. We believe that this is interesting to the community. - **Differences to NeRF On-the-go**: While our approach for handling transient objects is clearly inspired by NeRF on-the-go, Fig. 3 shows that the original formulation from NeRF on-the-go is not directly applicable to our problem setting. This is due to the formulation not being robust to appearance changes (which are not considered in NeRF on-the-go), see L186-195. We believe that showing how to adapt this approach to the case of changing appearance is interesting for the community. **W2: Missing related work** We will add Robust Gaussian Splatting and other recent concurrent works (SWAG, GS-W, etc) to the related work section, as well as comparisons to relevant methods (see the results shown in the attached PDF). **W3: More illustrations for the justification of a binary uncertainty mask** Thanks for the great suggestions. We promise to add more illustrations and the choices for threshold. **Q1: Why are baselines different** Photo Tourism and NeRF on-the-go pose different challenges (strong appearance changes and moderate occlusions vs. limited appearance changes and strong occlusions). Consequently, methods developed for one scenario are typically not evaluated in the other and vice versa (in addition, NeRF on-the-go was released only very recently). We use both datasets to showcase the robustness of our approach to both appearance changes and strong occlusions. For the rebuttal, we ran multiple baselines on both datasets (see PDF), leading to more shared baselines. **Q2: Why WildGaussians fares worse with low occlusion in Table 1, and how to mitigate it?** While NeRF on-the-go outperforms WildGaussians for low occlusions, the difference is marginal for PSNR (20.63 vs. 20.62) and SSIM (0.661 vs. 0.658). Without appearance modeling, WildGaussians outperforms NeRF on-the-go in both metrics (see Tab. 3). WildGaussians’ approach to handling transient objects is designed to be robust to appearance changes, and this added level of robustness compared to NeRF on-the-go seems to hurt the LPIPS performance of WildGaussians. The fact that 3DGS, Gaussian Occupancy Fields, and Mip-Splatting all achieve a similar LPIPS as NeRF on-the-go, suggests that occlusion handling is not very important for such a low level of occlusion. A potential way to mitigate performance degradation is to try to automatically detect which components (appearance modeling, handling transient objects) are necessary for a given scene. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal from the authors and the other reviews (as well as replies from the authors). I would like to thank the authors for their answers and appreciate their thoroughness in doing so. I also believe (most of) my concerns have been somewhat addressed, so I am still proposing to accept this submission. --- Reply to Comment 1.1.1: Comment: Thank you a lot for keeping the rating as acceptance, and we are glad that we have resolved your concerns. In case there are further concerns, we are happy to address them.
Summary: The author proposes an improvement strategy for reconstructing 3D scenes from in-the-wild data based on the latest 3DGS method, primarily addressing occlusion and appearance changes. The main improvements are as follows: 1.Appearance Encoding with MLP: Introduce a Multi-Layer Perceptron (MLP) to encode the appearance of images. This involves two trainable embeddings as inputs: per-image embedding, per-Gaussian embedding, and base color (SH=0). The output consists of color transformation parameters, γ and β. 2.Uncertainty Modeling: Introduce uncertainty modeling to mask occlusions and dynamic objects. The author uses DINOv2 to calculate the feature similarity between the predicted image and the training image, thereby guiding 3DGS to avoid reconstructing occlusions. The motivation of the article is relatively clear, the structure is complete, the content is substantial, and the writing is easy to understand with minimal writing issues. The experimental part is also quite comprehensive, which to some extent demonstrates the effectiveness of the proposed improvement method. However, the submission did not include a video, which reduced the persuasiveness of the paper's results in the task of NVS and view transitions. Strengths: 1. The problem that the authors attempt to solve is indeed a real one, and there is a significant demand in the industry for reconstructing 3D scenes from in-the-wild data. 2. The idea of leveraging DINO features for jointly optimizing occlusion is sound and interesting. 3. The paper has a complete structure, with clear language expression and minimal writing issues. The content is well-organized and easy to understand. Weaknesses: 1. The two improvement ideas proposed in the article (MLP color mapping and uncertainty model) are not uncommon, especially in NeRF research. Although 3DGS is still in its early stages, many works such as SWAG, GS-W, and Scaffold-GS also mention appearance encoding and the introduction of pre-trained models. Therefore, I do not see any significant differences between this work and these existing works. Additionally, in the calculation of feature similarity, a pre-trained model was used for fine-tuning, making this work seem more like a combination of existing methods (A+B stitching) rather than presenting original contributions or theoretical derivations. 2. In Lines 126-128, the author mentions that the latest methods (e.g., mip-splatting, Absgs, Gaussian Opacity Fields) are introduced to improve the original 3DGS, indicating that the research in this article is based on a stronger baseline than the original 3DGS. However, in the experimental comparison, it is only compared with the original 3DGS. As far as I know, the methods introduced also aim to address issues with artifacts and floaters. Therefore, it is difficult to determine how much the performance enhancements are due to the improvement strategies proposed in this paper. A comparison with the stronger baseline should be added in Tables 1, 2, and 3. 3. I don't think the comparisons shown in Figures 4 and 5 are very effective. Although there is an improvement in simulating ambient light compared to the original 3DGS, I cannot agree with the statement in the article that "we can adeptly handle changes in appearance such as day-to-night transitions without sacrificing fine details." Clearly, in terms of stone carvings, wall details, and water surface textures, the original 3DGS appears sharper. Perhaps your improvement sacrifices some details but achieves better fitting for appearance changes and smoother view transitions. Unfortunately, you did not submit a video to support this. 4.Some typos: -Figure2 can be more clear,you can add some legends to indicate the gradient flow. And what does "affi" stands for? -The Eq.(2) is wrong(incomplete). -The superscript in Eq.(7) is incorrect:C~ -L.430,the output of MLP is (β,γ) as mentioned in L.134 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Related Work (Sec. 2) In Line 87, you mention two related works, SWAG and GS-W, but the results analysis section does not compare your method with theirs. I understand that they also use MLP to encode appearance. However, what is the most significant difference between your work and theirs? This distinction does not seem very clear. 2. Method (Sec. 3) 1)Regarding Appearance Modeling, do you still use SH=3 to represent the color of Gaussians? Given that neural networks are already being used, why not directly predict the final color like Scaffold-GS? Instead, you predict color transformation parameters. Could you clarify this choice? 2)In Line 155, you mention that "DSSIM and L1 are used for different purposes in our case." However, it is still unclear why one uses "image rasterized without appearance modeling" for calculation, while the other uses the correct appearance for calculation. You also state that DSSIM is more robust to appearance changes and focuses more on structure and perceptual similarity. Shouldn't it be possible to calculate DSSIM on the corrected image? 3)Is Equation (11) the entire loss function term? 4)In Line 220, you mention that "we project them to all training cameras, removing any points not visible from at least one camera." However, my understanding is that the pruning strategy of the original 3DGS should already prune out invisible points. Is this step critical? 3. Experiments (Sec. 4) 1)In Table 1, why are there only two methods compared on this dataset, while more methods are compared on the Photo Tourism Dataset in Table 2? 2)In Table 2, why is there such a significant difference in FPS between your method and 3DGS? I would expect your method to have a similar rendering speed to 3DGS. Is the difference due to the introduction of previous methods to 3DGS (Mip splitting, Absgs, Gaussian Opacity Fields)? These methods might compress and prune the number of Gaussian points, making it an unfair comparison to the original 3DGS. 3)Ablation Study: Sky Handling. You have three improvements, but I haven't seen any ablation experiments on sky handling. Is its impact on the results considered less significant? 4)Can you explain why adding appearance modeling to some datasets in Table 3 actually leads to a decrease in metrics? Based on all the experimental results, can I understand it this way: the methods proposed in this paper are effective primarily under conditions of strong occlusion and significant appearance changes. For minor occlusion or appearance changes, using the methods in this paper may not necessarily yield improvements and could even decrease the performance metrics? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the positive and constructive feedback! Below, we address the concerns raised in the review. We will adjust the paper accordingly and fix the typos. **W1: The ideas in the paper are not uncommon, similar to SWAG, GS-W, and Scaffold-GS** To our understanding, Scaffold-GS (CVPR 2024) neither uses appearance encoding to model appearance changes nor pre-trained models, and it is not evaluated on datasets with significant appearance changes or moving occluders. We consider SWAG and GS-W as concurrent work. Our approach, developed independently, addresses the same problem (modeling scenes with changing appearances and occluders) and follows NeRF-based methods but differs in implementation: - **SWAG**: Uses a feature grid with an MLP for color prediction, leading to slower inference. We use a shallow MLP for an affine transformation of SH coefficients, making it more efficient. For transient objects, SWAG uses a trainable, image-dependent occupancy term per Gaussian, but we use pre-trained DINO features to predict regions with dynamic objects. - **GS-W**: Combines per-Gaussian appearance features with features from a reference image via MLPs. It uses a Unet model to predict visibility maps, but we predict uncertainties by comparing features from rendered and actual images. All three approaches differ significantly in implementation. Additionally, our approach outperforms both SWAG and GS-W (cf. PDF). We will extend discussion, and add results for SWAG and GS-W. **W1: This work seem like a combination of existing methods** As detailed above, our approach differs significantly from existing approaches in technical details. - **Appearance modeling:** SWAG shows that our approach is non-trivial: it reports that authors tried to predict an affine transformation of the SH coefficients, but observed significantly worse results than with their approach (cf. Tab. 8 in 2403.10427v2). They conclude that “affine color transformations cannot model all appearance changes”. We show that this conclusion is false, as we outperform SWAG. - **Transient objects:** Ours is the first work in the context of 3DGS that uses a pre-trained foundation model for uncertainty modeling. **W2: Comparisons over stronger baselines** Similar to SWAG and GS-W, we compare to standard 3DGS in Tab. 1, 2. Tab. 4 contains a comparison with a stronger baseline: *w/o appearance&uncertainty* corresponds to Mip-Splatting+AbsGaussian(GOF) + our sky modeling. This baseline is not substantially better than 3DGS on Photo Tourism (PSNR: 18.48 vs. 18.13). We also add a comparison on NeRF On-the-go dataset in the PDF, where we clearly outperforms both the baselines and GS-W, especially with lots of occlusions. We will add the results and baselines to Tab. 1, 2. **W3: Disagreement with the claim to “handle changes in day-to-night transitions without sacrificing fine details”** Indeed, being able to handle appearance changes seems to come at the price of a (slight) loss in detail. We will soften the claim to “Compared to 3DGS, we can adeptly handle changes in appearance such as day-to-night transitions at the cost of a slight blurring in fine details.” **Q1: Comparison to SWAG and GS-W** GS-W only released code about a month ago and SWAG has not released code. Looking at the code of GS-W, it uses the full test image for computing the appearance (see our reply to BAXf for details), and fixing the code to use NeRF-W evaluation protocol significantly reduces performance (see PDF). Our approach outperforms both SWAG and GS-W. **Q2: Why not predicting color directly like Scaffold-GS?** We still use SH coefficients so that after fixing the appearance embedding, we can bake our representation back to 3DGS for better portability and faster runtime. In our experience, predicting offsets from a base color stored in Gaussians leads to more stable training. **Q2: Use of DSSIM and L1** We follow VastGaussian and use L1 to ground the appearance (taking the final color into account) and DSSIM to ground the structure. Using the renderings without appearance modeling, which increases the performance slightly in our experiments. We will clarify this in the final version. **Q2: Eq. (11) entire loss function?** The loss is sum of Eq. 11 and Eq 9. We will clarify this. **Q2: When initializing sky Gaussians, why remove points not visible in any camera when 3DGS will prune them automatically?** We remove these points as they will never be seen in any view and, therefore, there is no point adding them and slowing down the training (3DGS will never prune points not visible in any camera). **Q3: Only two methods compared on NeRF On-the-go dataset** The dataset was released right before the submission deadline, and we were only able to compare to 3DGS and NeRF On-the-go in time. Above, we show results for GS-W, Mip-Splatting, and Gaussian Opacity Fields. We will include the results in Tab. 1. **Q3: Unfair to compare with 3DGS when you use Mip-Splatting and AbsGaussian (GOF)** In Table 4, we also report the numbers on our model with app. modeling and uncertainty modeling disabled. This is essentially Mip-Splatting+AbsGaussian(GOF) with our sky modeling. **Q3: Why is 3DGS slower than WildGaussians?** 3DGS cannot explain the transient objects and it grows Gaussians to account for that. The excessive Gaussians (floaters) slow it down. We will make this clearer in the final version. **Q3: Ablation study for sky handling** We will add an ablation study to Table 4. Unfortunately, we did not have time to do this for the rebuttal. **Q3: Why adding appearance modeling to datasets without appearance changes decreases the performance?** Indeed, enabling appearance modeling for datasets with no appearance changes reduces performance slightly. The same happens with NeRFs, e.g., see Mip-NeRF 360 paper. The added (unnecessary) degrees of freedom, in our case appearance embeddings, make the optimization problem more difficult. --- Rebuttal Comment 1.1: Title: Comment: After reading all the review and the rebuttal, I would like to keep my rating since the rebuttal addressed most of my concerns.
Rebuttal 1: Rebuttal: **Global Response** We thank all reviewers for their constructive comments. Please check the attached one-page PDF for more numerical and visual results on: * **Figure 1**: Multiview consistency for single app. embedding * **Table 1**: SWAG and GS-W comparison on Photo Tourism * **Table 2**: Additional baselines comparison on NeRF On-the-go Pdf: /pdf/69f280f72caf80a9e707bdfcc06739ac979335c3.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection
Accept (poster)
Summary: The authors present a few-pixel querying blackbox attack for image classification and object detection models, called Remember and Forget Pixel Attack using Reinforcement Learning (RFPAR). As this is a blackbox attack, the setting assumes the attacker does not have access to the target model weights, though as a querying attack, they assume they can query many test inputs to the victim model to optimize the attack. The resulting attacks edit only a small number of pixels and are effective at changing the classification target or at suppressing all objects in detection models. The authors demonstrate their attack on several models and dataset settings. They additionally present ablations of their attack optimization method. Strengths: The authors present an effective attack that achieves high ASR with few pixel edits. Moreover, the authors demonstrate their attack not only on image classification but on object detection as well. The authors also demonstrate their attack on a diverse collection of models for both the classification and detection settings. Compared to the baseline methods, the proposed RFPAR achieves high ASR and lower queries. In terms of the image edit distance (L0), theirs is second only to OnePixel, though their ASR is significantly higher than OnePixel. Weaknesses: In terms of presentation, it seems like the “Forget Process” is very simple, as it is only resetting the RL agent and passing the output of the previous optimization round. I recognize that the authors have demonstrated the importance of this step in their ablations. I just feel like they present the Remember Process and Forget Process as if both are equally major steps, when Forget is only a minor step compared to Remember. While the percentage of pixels edited is small, the attacks can still be noticeable to the human eye, especially when you have dark pixels on a bright background region like the sky. This is less noticeable when not zoomed in on the images. The author’s method optimizes pixels with an all-or-nothing approach, setting each pixel channel to either 0 or 255, which makes the edits more visible. The results on Argoverse (Section 3.4) are somewhat limited. If I understand correctly, the analysis is only applied to one image that was attacked. The paper lacks a dedicated section to discuss related works, though some parts of the introduction discuss related works. Technical Quality: 4 Clarity: 3 Questions for Authors: Details of the Ablations (Section 3.5) could be presented more clearly. If M is ablated, how does the optimization change? Is it equivalent to only running the inner loop once? If I is ablated, does this mean the RL model is not reset between outer iterations? Is the optimization still run for a similar number of steps? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors do not extensively discuss the limitations of the work. In Section 4 they briefly discuss the potential negative impact that adversarial attacks can have. The authors do not present any defense testing for their attack. It would be good to see some defense testing, to know if this attack is easily defended against or not. As this is a work focused on presenting an adversarial attack, there is a potential risk of negative impact. Flag For Ethics Review: ['Ethics review needed: Safety and security'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer R24Y, Thank you for reading our paper and providing comments to help improve it. Below, we address your concerns. --- **C1: "Details of the Ablations could be presented more clearly. "** 1) **M ablation:** M stands for Memory in the Remember process, and I represents Initialization in the Forget process. If Memory is ablated, the Agent's reward is used for the bound condition. Convergence occurs when the Agent's reward meets this condition, and the final image is passed to the Forget process. Without Memory, only the Agent is reinitialized, repeating until the maximum iterations are reached. Q) If M is ablated, how does the optimization change? No, the optimization process proceeds as before. However, the bound condition that halts this process now depends on the Agent's reward. Q) Is it equivalent to only running the inner loop once? No, the Forget and Remember processes are conducted without memory for the maximum number of iterations we have specified. We used the same maximum number of iterations (100) for all ablation tests. 2) **I Ablation:** If Initialization is ablated in the Forget process, the Agent is not reinitialized and retains information from the previous Remember process. This can cause the search to focus on specific pixels with strong rewards, potentially missing vulnerabilities in other regions. Q) If I is ablated, does this mean the RL model is not reset between outer iterations? Yes, the Agent is not reinitialized and retains previously learned information, leading to a focus on specific areas. Q) Is the optimization still run for a similar number of steps? Yes, we used the same hyperparameters for all experiments. We will present the ablation study query results. | Ablation | ViT | ResNeXt| RegNetX| DenseNet| MNASNet | MobileNet-V3| |----------------|------|--------|--------|---------|---------|-------------| | RFPAR | 614 | 529 | 623| 534 | 461 | 548 | | RFPAR$_{I}$ | 662 | 404 | 444| 404 | 364 | 348 | | RFPAR$_{M}$ | 712 | 889 | 820| 723 | 726 | 659 | | RFPAR$_{M+I}$ | 613 | 442 | 484| 464 | 442 | 596 | --- **C2: "It seems like the “Forget Process” is very simple. Forget is only a minor step compared to Remember."** Yes, the Forget process is simple and mainly reinitializes the Agent. However, our ablation study showed that both the Forget and Remember processes are essential for effective adversarial attacks. If either process is omitted, performance degrades significantly. Although the Forget process is simple now, it has the potential for enhancement with meta-learning. Therefore, we consider the Forget process a major step along with the Remember process. --- **C3: “The paper lacks a dedicated section to discuss related works.”** Due to page limitations, the essential related works have been included in the introduction. We will add a more detailed discussion on related works in the Appendix. --- **C4: "While the percentage of pixels edited is small, the attacks can still be noticeable to the human eye"** As you mentioned, people might notice pixel changes when zoomed in or in specific situations. Finding perturbations within the [0,255] range using the Agent is very difficult, so we chose an all-or-nothing approach. Fortunately, this method attacks fewer pixels and targets the entire area of the image, preserving its meaning for viewers. This is demonstrated in the "Argoverse Attack Video.pptx" in our supplementary materials. While some pixel irregularities might be noticeable, the objects in the video remain recognizable, although the detection model malfunctions. --- **C5: "The results on Argoverse are somewhat limited."** We apologize for any confusion. We conducted experiments on a single video sequence from the Argoverse dataset, specifically "e9a96218-365b-3ecd-a800-ed2c4c306c78" from the Argoverse-1.1 validation set, which includes 469 images. you can see the "Argoverse video attack" in the supplementary material. --- **C6: “The authors do not present any defense testing for their attack.”** We conduct experiments on adversarially pre-trained VIT and RexNeXt101, with results presented in Table 5 of the (PDF). Our method achieves the highest success rate, effective queries, and ranks second in $L_0$. Proportional calculations indicate that RFPAR reduced VIT's performance from 69.10% to 37.11%. According to Appendix D of [6], this reduction is more effective than those achieved by CW20 (38.92%), PGD-20 (37.96%), and PGD-100 (37.52%), but less effective than AutoAttack (34.62%). This demonstrates that our Black-box attack, RFPAR, is as effective as White-box attacks, even though it uses only limited information. --- **C7: “The authors do not extensively discuss the limitations of the work. In Section 4 they briefly discuss the potential negative impact that adversarial attacks can have.”** Our attack effectively deceives the model by targeting only a small number of pixels with effective queries. This highlights the risk of deploying models solely based on their training performance in the real world. However, in object detection, while the query efficiency is maintained, the attack still requires over 1000 queries, which can be easily thwarted by simply limiting the number of queries. In image classification tasks, adversarial training can increase the required number of queries and reduce the attack success rate by approximately half. This implies that applying adversarial training and query limitations to classification models can successfully defend against pixel attacks. Therefore, it is essential to implement defense techniques such as adversarial training or query limitations when deploying models in real-world applications. --- We will add the discussed points into the final version. If you have no remaining concerns, we would greatly appreciate it if you could increase your score. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses and discussion. I believe the authors should add these ablation clarifications to main work. While I still believe this paper has some weaknesses, I think the contributions warrant an accept rating, and I will maintain my initial rating.
Summary: This paper proposes a new adversarial attack method called Remember and Forget Pixel Attack using Reinforcement Learning (RFPAR) for image classification and object detection models. The key contributions are: - A novel pixel-based black-box attack using reinforcement learning with "Remember" and "Forget" processes - Extension of query-based pixel attacks from image classification to object detection - Experimental results showing improved performance over existing methods on ImageNet and MS-COCO datasets Strengths: The paper's approach is original in using a dual-stage reinforcement learning process to manage adversarial attacks, which is a unique combination in the context of pixel-based black-box attacks. The paper is well-written with a clear structure. Weaknesses: Lack of Theoretical Foundation: The paper primarily focuses on empirical results without providing theoretical insights or proofs to back the efficacy of the Remember and Forget processes in adversarial contexts. Limited Evaluation: The robustness of the method to various defenses is not evaluated. It's unclear how RFPAR would perform against adversarially trained models or other defense techniques. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Can you provide more theoretical insight into why the Remember and Forget processes lead to improved performance? Perhaps some analysis of the optimization landscape? 1. How does the computational cost of RFPAR compare to other methods, especially for larger images? Are there ways to improve efficiency further? 1. Have you tested RFPAR against any adversarial defenses? How do you expect it to perform against adversarially trained models? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The ethics and potential negative impacts of this work are not sufficiently discussed. Developing more effective adversarial attacks could have serious consequences. This paper should discuss potential negative impacts and how they might be mitigated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer RX8p, Thank you for reading our paper and providing comments to help improve it. Below, we address your concerns. --- **C1) "Can you provide more theoretical insight into why the Remember and Forget processes lead to improved performance?"** We initially used a multi-step REINFORCE approach but identified issues, leading us to propose the Forget and Remember processes using one-step REINFORCE. Generating adversarial examples with multi-step REINFORCE involves the objective function $U = E \left[ \sum_{0}^{\mathcal{\tau}} \gamma^{\tau-t} R[s_t,a_t|\pi_\theta] \right]$, where $\gamma$ is the discount factor, $s_t$ is the image at step $t$, $a_t$ is the action at $s_t$, and the reward is $R[s_t,a_t|\pi_\theta] = f_{\theta,y}(s_{0}) - f_{\theta,y}(s_{t+1}), f_{\theta,y}$ is the confidence score of the true label $y$. Here, $a_t$ is a single pixel perturbation. We find that significant oscillations can be observed in the objective function. Let $\tau^*$ be the minimum number of steps(pixels) to create an adversarial example. The sequence of pixels does not matter, leading to variations in the value of the objective function due to different orderings of $a_t$. Thus, for $i_t \in \\{0,1,2,\cdots,\tau^*\\} \text{ and } i_j \neq i_k$, the optimal objective function value is $U^* = E \left[ \sum_{0}^{\mathcal{\tau^*}} \gamma^{\tau-t} R[s_t,a_{i_t}|\pi_\theta] \right]$, with $\tau^*!$ permutations. This complicates training and increases the queries and $L_0$. To address this, we proposed the Forget and Remember process using one-step REINFORCE. Pixel perturbations at $\tau^*$ are defined as $A_{\tau^*} = \sum_0^{\tau^*} a_t$. By the intermediate value theorem, there exists a $C$ in $[x,x+A_{\tau^*}]$ such that $f_{\theta,y}(x) > N > f_{\theta,y}(x+A_{\tau^*})$ for some $N$. We propose a Forget and Remember process using one-step REINFORCE to iteratively find this $C$, assuming $C \in \\{x + a_0, x + a_1, \cdots, x + a_{\tau^*}\\}$. This one-step approach avoids the oscillations of multi-step REINFORCE, offering better query efficiency and lower $L_0$. --- **C2) “How does the computational cost of RFPAR compare to other methods, especially for larger images? Are there ways to improve efficiency further?”** Given the input dimension size $N$ and constants $K_i$: OnePixel has $O(K_1)$ complexity, ScratchThat has $O(N^2)$, Pixle has $O(K_2)$, RFPAR has $O(N)$, PRFAR has $O(K_3)$, and GARSDC has $O(N)$. In image classification tasks, RFPAR's time complexity increases linearly with larger image sizes, which is more favorable than ScratchThat's exponential increase but less advantageous compared to OnePixel and Pixle. In object detection tasks, both RFPAR and GARSDC exhibit linear increases in time complexity with larger images, making them less advantageous than PRFAR. This can be seen as a limitation of our method. However, RFPAR generates attacks using neural networks, similar to GARSDC, and benefits from the high performance of GPUs, allowing for faster computations despite the increased time complexity. We present the experimental times in Table 4 in the (PDF). To improve efficiency, we propose integrating our method with meta-learning. RFPAR involves the agent learning afresh on the image multiple times, which can mitigate overfitting but also results in unnecessary queries. Meta-learning could enable the agent to quickly adapt to new tasks, enhancing efficiency by learning more rapidly. --- **C3) “Have you tested RFPAR against any adversarial defenses? How do you expect it to perform against adversarially trained models?”** We conducted experiments on adversarially pre-trained VIT and RexNeXt101 , with results presented in Table 5 of the (PDF). Compared to other pixel attacks, our method achieves the highest success rate, effective queries, and ranks second in $L_0$. Proportional calculations indicate that RFPAR reduced VIT's performance from 69.10% to 37.11%. According to Appendix D of [6], this reduction is more effective than those achieved by CW20 (38.92%), PGD-20 (37.96%), and PGD-100 (37.52%), but less effective than AutoAttack (34.62%). This demonstrates that our Black-box attack, RFPAR, is as effective as White-box attacks, even though it uses only limited information. --- **C4) “The ethics and potential negative impacts of this work are not sufficiently discussed. This paper should discuss potential negative impacts and how they might be mitigated.“** Our proposed method causes models to malfunction with fewer queries and higher success rates than previous pixel attacks. In the "Argoverse Attack Video.pptx" (0:30 to 0:40) from our supplementary material, our attack successfully misclassifies a nearby person detected with high confidence, posing a safety risk for vision-based systems. However, pixel attacks currently require over 1000 queries, making real-time application impractical. In image classification tasks, adversarially trained models halved the success rate of pixel attacks and increased the required queries, showing that adversarial training and query limitations can effectively mitigate these attacks. According to Table 1 in the (PDF), effective attacks on transformer-based models require about 1000 queries, while effective attacks on CNN-based models can be achieved with approximately 500 queries. This result shows that CNN models are relatively easy to deceive, highlighting the necessity of applying defensive techniques to CNN models in real-world scenarios. We will add the discussed content into the final version. --- Thank you once again for your thoughtful comments aimed at improving our paper. If you have no remaining concerns, we would greatly appreciate it if you could increase your score. --- Rebuttal 2: Comment: Dear Reviewer RX8p, The authors have provided a rebuttal. Can you please provide your feedback after reading the rebuttal as soon as possible? The deadline is approaching fast. Thanks, AC --- Rebuttal Comment 2.1: Comment: Thank the authors for the detailed feedback, which addressed my main concerns. While the proposed attack is interesting, can the authors elaborate more on its real world implications? --- Reply to Comment 2.1.1: Comment: Reviewer RX8p, We thank the reviewer for responding and raising important questions. We would like to explain the real-world implications of the proposed attack through specific examples. --- **1. Reproducing a physical issue with the camera.** Physical defects in camera sensors, such as hot pixels(255) or dead pixels(0), can impact image quality and degrade the performance of neural network models. our approach is similar to a physical defect in the camera. In this paper, RFPAR simulates these physical issues by replacing specific pixels with values of either 0 or 255, which induces incorrect predictions by the neural network. This perturbation could occur in real-world scenarios, which is why neural networks must be robust against it. However, research on pixel($L_0$) attacks is limited compared to other types of attacks. This approach allows us to analyze the vulnerabilities of the model with respect to physical defects. By addressing these vulnerabilities, we contribute to the development of more robust neural networks that can withstand such defects in real-world scenarios **2. Impact on Defective Product Detection AI System[R1].** If the proposed attack is applied to a defective product detection AI system, it could prevent the system from accurately detecting defective products, leading to significant issues in product quality control. This increases the likelihood of defective products reaching end consumers, negatively impacting the company's credibility and brand image. Additionally, it could result in large-scale recalls or legal issues, causing substantial financial losses for the company. Therefore, by raising awareness about the potential for such attacks and emphasizing the need for defensive strategies at the corporate level, we can contribute to enhancing security. **3. Impact on Disease Prediction Systems[R2].** If the proposed attack is applied to disease prediction systems, there is a possibility of incorrect predictions. This could lead to fatal consequences, especially in the medical field, where misdiagnosis may result in inappropriate treatment plans. For example, if errors occur in predicting infectious diseases like COVID-19, patients might not receive timely treatment or could undergo unnecessary treatments, posing a significant threat to their lives. Given the potential severe impact on public health, implementing safety measures and defensive mechanisms against such attacks can contribute to preventing these risks. **4. Impact on Object Detection Systems.** When object detection technology is applied in autonomous vehicles or commercial AI robots, the proposed attack could disrupt the system, preventing it from correctly recognizing people or other objects. This could lead to traffic accidents where an autonomous vehicle fails to detect pedestrians, or safety incidents in workplaces where commercial robots are deployed. Such scenarios are not merely technical issues but are directly related to human life, making them critical considerations. The supplementary material we provided, "Argoverse Attack Video.pptx," demonstrates a practical example of such an attack.By addressing these vulnerabilities, we contribute to the development of stronger security measures that enhance the safety and reliability of AI-driven systems, ultimately protecting human lives. In conclusion, our proposed attack is not merely a theoretical construct; it addresses practical issues that could realistically occur in camera-based vision systems. While there are challenges in applying this method in practice, which limits its immediate real-world applicability, it effectively simulates camera defects that closely align with real-world problems. Therefore, it is crucial for camera-based vision systems to proactively investigate and defend against these potential vulnerabilities. --- We would be happy to continue this discussion if further clarification is needed. If there are no additional concerns, we kindly request the reviewer to consider raising the rating. Thank you. [R1] T. Czimmermann et al, "Visual-Based Defect Detection and Classification Approaches for Industrial Applications - A SURVEY," in Sensors, 2020. [R2] L. Wang et al, "Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images," in Scientific reports, 2020.
Summary: The paper proposes a pixel-based black box attack called RFPAR that uses RL to perturb pixels. The paper has a remember and forgetting mechanism. In the remember step, they memorize the perturb images to reduce the number of queries and in forgetting, they try to remove them from the memory. Although their main target are object detection algorithms, they are showing good accuracy drop, aka attack effectiveness on image classification task such as ImageNet-1K. Strengths: The experiments of the paper are thorough and comprehensive. They are proving RFPAR's effectiveness. The numbers and the effectiveness of the method is perfectly proven. Weaknesses: In my opinion, paper doesn't have that much of flaw. However, most of the models that are used in the paper are CNN-based model. The only transformer model used is the VIT which paper didn't specified which variant. Given the numbers, assuming the authors used VIT-Based, I'd like to know the effect of the attack on larger models like ViT-Huge, ViT-large, DeIT, etc. Besides, unlike the claim of the paper that is main target is object detection, most of the experiments are targeting image classification methods. Technical Quality: 3 Clarity: 3 Questions for Authors: I like the idea and the methodology of the paper. However, the drop of accuracy in ViT is the lowest and the other object detection/image classification methods are all CNN-Based. Therefore, the effectiveness of the attack on transformer based model is vague in my opinion. It would be great if the authors provide more experiments on transformers or limit the scope of their method to to convolutional networks. Alternatively, an experiment that shows the effect of higher number of queries can also be helpful. Despite its success, the rate of the attack succession on ViT is significantly lower than CNN based methods. Perhaps increasing the number of queries could fix this ? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer p4SQ, Thank you for reading our paper and providing comments to help improve it. Below, we address your concerns. --- **C1:“The only transformer model used is the VIT which paper didn't specified which variant. Given the numbers, assuming the authors used VIT-Based, I'd like to know the effect of the attack on larger models like ViT-Huge, ViT-large, DeIT, etc."** Thank you for your valuable suggestions regarding the transformer-based models. We have conducted experiments on transformer-based models, specifically ViT-large, Swin-V2-t and Deit-B, and the results are presented in Table 1 of the (PDF). Following your advice, we increased the number of queries for Pixle and RFPAR in our experiments. Our method, RFPAR, continues to demonstrate superior effectiveness compared to other pixel attack methods, even on different transformer-based models. This result implies that our method is effective even for transformer-based models. Furthermore, while the attack success rates on transformer-based models are slightly lower than on CNN-based models, they require more than twice the number of queries and $L0$. This outcome indicates that transformer-based models exhibit greater robustness against pixel attacks compared to CNN-based models. --- **C2: “Besides, unlike the claim of the paper that is main target is object detection, most of the experiments are targeting image classification methods.”** In response to your suggestions, we conducted additional experiments on two new models. We tested Atss with a "Resnet101" backbone and two-stage Deformable DETR with a "Resnet50" backbone using the MS-COCO dataset. The results are presented in Table 2 of the (PDF). Our experimental results show that our method can remove more than 78\% of detected objects, resulting in a reduction of mAP by 97\% for Atss and 78\% for Deformable DETR, demonstrating the success of our attack. Notably, the attack on Atss reduced the number of queries by 30\% to 1288, compared to 1837 queries presented by the state-of-the-art query-based attack, GARSDC, showing that RFPAR is query-efficient, similar to the results on Yolo. The Attack Area (ATA) is minimal, at 0.04\% for Atss and 0.01\% for Deformable DETR, making the attacks difficult to detect by human observers and preserving the overall semantics of the images. This result demonstrates that our method is an efficient and effective approach for queries. --- **C3: “The effectiveness of the attack on transformer based model is vague in my opinion. Despite its success, the rate of the attack succession on ViT is significantly lower than CNN based methods. Perhaps increasing the number of queries could fix this ?”** Thank you for your suggestion. We have incorporated your feedback and increased the number of queries for our experiments on transformer-based models. We conducted experiments with a maximum of 100 and 200 iterations, and the results are presented in Table 3 of the (PDF). Increasing the queries demonstrated that RFPAR achieved an average success rate of approximately 80\% on transformer-based models. This result indicates that our method remains effective even with transformer-based models. However, as mentioned earlier, it required more than twice the number of queries and $L0$ compared to CNN-based models, indicating that transformer-based models are more robust against pixel attacks. Nevertheless, given sufficient queries, pixel attacks can still achieve high success rates on transformer-based models. These results underscore the necessity of implementing defense mechanisms when applying most models in real-world scenarios. --- Thank you once again for your thoughtful comments aimed at improving our paper. If you have no remaining concerns, we would greatly appreciate it if you could increase your score. --- Rebuttal Comment 1.1: Comment: Unfortunately, I couldn't find the results that you are mentioning in the PDF. --- Reply to Comment 1.1.1: Comment: Dear Reviewer p4SQ, It appears that there may have been an issue that made it difficult to find the results in the (PDF) attached to the global rebuttal. To help clarify the results for you, I have prepared the tables below, which consolidate the results for easy reference. --- **Table1:The results of transformer-based classifiers.** This table presents the experimental results on various transformer-based models, including VIT-L[1], Swin-V2-T[2], and Deit-B[3], compared to other attacks. The success rate is more effective when higher, and both $L_0$ and the query are lower. | Model | Attack | Succes Rate | $L_0$ | Query | |------------|----------|-----------|------|-------| | ViT-L [1] | OnePixel | 8.9% | 15 | 1654 | | | Pixle | 66.4% | 531 | 1396 | | | **RFPAR** | **78.0%** | 355 | 1042 | || | Swin-V2-T [2]| OnePixel | 5.0% | 15 | 1686 | | | Pixle | 66.8% | 1052 | 1509 | | | **RFPAR** | **69.4%** | 608 | 1096 | || | Deit -B[3] | OnePixel | 8.4% | 15 | 1137 | | | Pixle | 71.0% | 551 | 1473 | | | **RFPAR** | **84.3%** | 412 | 1161 | --- **Table 2 : The results of object detection models.** This table presents the experimental results on object detection models, ATSS[4] and Deformable DETR[5]. RM is more effective when higher, while mAP, $L_0$ and the query are lower. | Model | Attack | RM | mAP | L0 | Query | |---------------------|--------------|------|-------|------|-------| | Atss [4] | clean | - | 0.227 | - | - | | | RFPAR$_{0.01}$| 0.74 | 0.048 | 491 | 1530 | | | RFPAR$_{0.02}$| 0.88 | 0.026 | 1025 | 1633 | | | RFPAR$_{0.03}$| 0.90 | 0.026 | 1357 | 1504 | | | RFPAR$_{0.04}$| 0.91 | 0.008 | 1666 | 1243 | | | **RFPAR$_{0.05}$**| **0.92** | **0.006** | 2074 | **1288** | || | Deformable DETR [5] | clean | - | 0.339 | - | - | | | RFPAR$_{0.01}$| 0.61 | 0.170 | 333 | 1466 | | | RFPAR$_{0.02}$| 0.69 | 0.134 | 512 | 1502 | | | RFPAR$_{0.03}$| 0.72 | 0.135 | 869 | 1488 | | | RFPAR$_{0.04}$| 0.76 | 0.110 | 1200 | 1488 | | | **RFPAR$_{0.05}$**| **0.78** | **0.073** | 1274 | **1335** | --- **Table 3: The performance of RFPAR on transformer-based models with different iteration limits.** This table presents the experimental results on transformer-based models (ViT-B, L, H, Swin-V2-T, Deit-B) with iteration limits of 100 and 200. | Model | Maximum of Iteration | Success Rate | L0 | Query | |---------|-----------------|--------------|------|-------| | ViT-B | 100 | 64.1% | 211 | 613 | | | 200 | **83.4%** | 352 | 995 | || | ViT-L | 100 | 59.9% | 209 | 618 | | | 200 | **78.0%** | 355 | 1042 | || | ViT-H | 100 | 62.2% | 166 | 582 | | | 200 | **73.5%** | 229 | 917 | || | Swin-V2-T | 100 | 46.2% | 352 | 611 | | | 200 | **69.4%** | 608 | 1096 | || | Deit-B | 100 | 60.2% | 249 | 676 | | | 200 | **84.3%** | 412 | 1161 | --- I hope this helps clarify the results. Please feel free to reach out if there are any further questions or concerns. [1] A. Dosovitskiy el al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," in International Conference on Learning Representations, 2021. [2] Z. Liu et al., "Swin Transformer V2: Scaling Up Capacity and Resolution," in Conference on Computer Vision and Pattern Recognition, 2022. [3] H. Touvron el al., "Training data-efficient image transformers & distillation through attention" in International Conference on Machine Learning, 2021. [4] S. Zhang et al., "Bridging the Gap Between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection," in Conference on Computer Vision and Pattern Recognition, 2020. [5] X. Zhu et al., "Deformable DETR: Deformable Transformers for End-to-End Object Detection," in International Conference on Learning Representations, 2021. --- Rebuttal 2: Comment: Dear Reviewer p4SQ, The authors have provided a rebuttal. Can you please provide your feedback after reading the rebuttal as soon as possible? The deadline is approaching fast. Thanks, AC --- Rebuttal 3: Comment: Dear Reviewer p4SQ, Thank you very much for your continued engagement with our work and for your valuable feedback. I apologize for any confusion regarding the results mentioned in my previous response. These results are detailed in the (PDF) provided in the global rebuttal, which was due to space constraints. Specifically: - Table 1 presents the results of additional experiments conducted on ViT-large, Swin-V2-t, and Deit-B, where we compared the performance of OnePixel, Pixle, and our proposed RFPAR method. - Table 2 includes the outcomes of further experiments on object detection models, specifically Atss and Deformable DETR. - Table 3 shows the results of RFPAR on transformer-based models with varying query counts. I hope this helps in locating the information you are looking for, and I would be more than happy to provide any further details or clarification if needed. Thank you once again for your careful consideration of our work.
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you once again for reading our paper and providing insightful comments. We have made every effort to address your concerns. If our responses adequately address the reviewers’ concerns, we kindly request that the reviewers consider increasing your scores. Due to space constraints, we are unable to present all tables within each response. So, we have submitted them as a PDF here. In the image classification task, we conducted experiments on a dataset consisting of one accurately predicted image per label from ImageNet-1K, similar to main paper. For the object detection task, we utilized the same dataset employed in the main paper. - Table 1: Experimental results on various transformer-based models, VIT-L[1], Swin-V2[2], and Deit[3]. SR means Succes rate. - Table 2: Experimental results on object detection models, Atss[4] and Deformable DETR[5]. - Table 3: Experimental results on transformer-based models (ViT-B, L, H, Swin-V2, Deit-B) with iteration limits of 100 and 200. - Table 4: Experimental times for the experiments presented in Table 1 of the main paper. - Table 5: Experimental results on adversarially trained Adv. ViT[6] and ResNeXt101[7]. Reference [1] A. Dosovitskiy el al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," in International Conference on Learning Representations, 2021. [2] Z. Liu et al., "Swin Transformer V2: Scaling Up Capacity and Resolution," in Conference on Computer Vision and Pattern Recognition, 2022. [3] H. Touvron el al., "Training data-efficient image transformers \& distillation through attention" in International Conference on Machine Learning, 2021. [4] S. Zhang et al., "Bridging the Gap Between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection," in Conference on Computer Vision and Pattern Recognition, 2020. [5] X. Zhu et al., "Deformable DETR: Deformable Transformers for End-to-End Object Detection," in International Conference on Learning Representations, 2021. [6] Y. Mo et al., "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture," in Advances in Neural Information Processing Systems, 2022. [7] D. Hendrycks et al., "The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization," in International Conference on Computer Vision, 2021. Pdf: /pdf/22d801a3c3873940fa8c22397cd07e2a548a4a80.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Meta-Diffu$B$: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration
Accept (poster)
Summary: This paper presents a novel approach to improving sequence-to-sequence (Seq2Seq) text generation models using diffusion models. The authors identify limitations in existing Seq2Seq-Diffusion models, which typically rely on fixed or hand-crafted noise scheduling rules that do not account for the specific characteristics of Seq2Seq tasks. To address these limitations, they propose the Meta-DiffuB framework, which introduces a scheduler-exploiter paradigm designed to provide contextualized noise scheduling. The scheduler model dynamically schedules contextualized noise based on the semantics of each sentence. The scheduler model is trained using Meta-Exploration techniques to optimize noise scheduling and can function as a "plug-and-play" model. The exploiter model (an S2S-Diffusion model) utilizes the noise scheduled by the scheduler for updating and text generation. Strengths: - The introduction of a scheduler-exploiter framework to schedule contextualized noise for each sentence in Seq2Seq tasks is a departure from existing methods that use fixed or hand-crafted noise scheduling rules. This idea is intuitive and straightforward. - Experimentation on four benchmark Seq2Seq datasets somehow validates the effectiveness of the proposed Meta-DiffuB framework. - The paper follows a logical structure, starting with the introduction and problem statement, followed by the methodology, experiments, results, and conclusions. Weaknesses: # More Comprehensive Comparison My first concern is that this paper mainly compares the proposed method with Diffuseq, SeqDiffuSeq, and Dinoiser. There are many other diffusion models tailored for general seq2seq tasks [1-6]. I would suggest the authors carefully review the existing literature and select more recent baselines to compare. 1. Empowering Diffusion Models on the Embedding Space for Text Generation. *NAACL* 2. Latent Diffusion for Language Generation. *NeurIPS* 3. DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models. *EMNLP* 4. AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation. *NeurIPS* 5. Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference! *EMNLP* 6. TESS: Text-to-Text Self-Conditioned Simplex Diffusion. *EACL* The authors may also find more related works in: - https://github.com/StevenYuan666/Awesome-Diffusion-Models-for-NLP - https://github.com/westfish/Awesome-NLP-Diffusion-Models - https://github.com/AoiDragon/Awesome-Text-Diffusion-Models Also, some more recent works focus on the noise schedule: 7. Effective Integration of Text Diffusion and Pre-Trained Language Models with Linguistic Easy-First Schedule. *LREC-COLING* # Presentation - As the paper mentioned, parts of the methodology are motivated by the Meta-Exploration. I would suggest the authors add a section in the preliminary to provide more necessary details about the meta-exploration techniques, which will benefit readers who are not familiar with Meta-Exploration. # Analysis - The authors claim that the proposed method results in a contextualized noise schedule, which makes sense to me in terms of the scheduler generating Meta-Instructions conditioned on $\boldsymbol{w}^x$. However, from Figure 3, even though the total amount of noise indeed fluctuates across different training epochs, it always keeps at the same level and the differences seem insignificant. I'm wondering why this makes the proposed methods beat existing baselines. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Following my argument above, can you provide a deeper interpretation of the noise scheduling visualizations and their implications for different types (easy or hard as mentioned in the paper) of sentences? How do different noise schedules affect the generation quality and diversity for various types of sentences? Any patterns or insights observed from these visualizations? 2. Could you provide an intuitive explanation of why the scheduler can function as a plug-and-play model? From the analysis in section 4.7, I'm suspecting it is because the learned schedule is not quite different from the pre-defined noise schedule. 3. Did you run the empirical evaluation for several runs by setting different random seeds? If so, could you also highlight which results are statistically significant than the baseline method? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The proposed method is in the domain of text generation and does not directly have any negative impact by itself. The authors have provided a paragraph to discuss the negative potential of misinformation created by the text generation model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Title: Thank you for your valuable suggestions. We have included additional baseline model experiments and addressed each of your questions to clarify any doubts. We assure you that these experiments will be included in the final version. Comment: >Q1: My first concern is that this paper mainly compares the proposed method with Diffuseq, SeqDiffuSeq, and Dinoiser. There are many other diffusion models tailored for general seq2seq tasks [1-6]. I would suggest the authors carefully review the existing literature and select more recent baselines to compare. **Response:**: Thank you for the insightful feedback. We have included additional baselines in the following Table, specifically those that are evaluated on the same dataset as ours and have open-source implementations (Diffuseq-v2, BG-DiffuSeq and TESS). We demonstrate that our Meta-Diffu$B$ framework can integrate various baselines as exploiter models, achieving superior performance compared to the original models. This is consistent with the experiments and results presented in Appendix F. **More recent baselines compared with our Meta-Diffu$B$ on QG and QQP datasets** |Methods|BLEU ↑ (QQP)|BERTScore ↑ (QQP)|BLEU↑ (QG)|BERTScore ↑ (QG)| |-|-|-|-|-| |DiffuSeq|0.2413|0.8365|0.1731|0.6123| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**0.2552**|**0.8821**|**0.1826**|**0.6357**| |DiffuSeq-v2 |0.2411|0.8393|-|-| |Meta-Diffu$B$ (exploiter=DiffuSeq-v2)|**0.2556**|**0.8829**|-|-| |BG-DiffuSeq |0.2619|0.8427|0.1744|0.6280| |Meta-Diffu$B$ (exploiter=BG-DiffuSeq)|**0.2790**|**0.8757**|**0.1838**|**0.6571**| |TESS |0.3020|0.8570|0.1950|0.6580| |Meta-Diffu$B$ (exploiter=TESS)|**0.3142**|**0.8975**|**0.2055**|**0.6761**| >Q2: As the paper mentioned, parts of the methodology are motivated by Meta-Exploration. I would suggest the authors add a section in the preliminary to provide more necessary details about the meta-exploration techniques, which will benefit readers who are not familiar with Meta-Exploration. **Response:** To assist readers in understanding the foundational concepts, we will move the discussion of Meta-Exploration from the Related Work section to the Preliminary section in our final version. >Q3: The authors claim that the proposed method results in a contextualized noise schedule, which makes sense to me in terms of the scheduler generating Meta-Instructions conditioned on. However, from Figure 3, even though the total amount of noise indeed fluctuates across different training epochs, it always keeps at the same level and the differences seem insignificant. I'm wondering why this makes the proposed methods beat existing baselines. **Response:** In Figure 3, our method differs from approaches like SeqDiffuSeq or Dinoiser, which rely on predefined rules. Instead, our approach is data-driven, with the model learning the underlying patterns. As a result, the degree of variation—whether large or small—is not the primary focus. Additionally, the learned noise can be flexibly integrated into other systems in a plug-and-play manner. >Q4: Can you provide a deeper interpretation of the noise scheduling visualizations and their implications for different types (easy or hard as mentioned in the paper) of sentences? How do different noise schedules affect the generation quality and diversity for various types of sentences? Any patterns or insights observed from these visualizations? **Response:** From the visualizations, we observe that the contextualized noise scheduling mechanism helps in maintaining a delicate balance between exploration (diversity) and exploitation (quality). The patterns indicate that our model has learned different strategies for different types of sentences. These insights demonstrate the effectiveness of our Meta-Diffu$B$ framework in handling various sentence complexities, resulting in superior performance across multiple datasets. >Q5: Could you provide an intuitive explanation of why the scheduler can function as a plug-and-play model? From the analysis in section 4.7, I'm suspecting it is because the learned schedule is not quite different from the pre-defined noise schedule. Our scheduler model learns directly from sentence semantics, allowing it to provide learnable noise across different text datasets. Figure 3 shows that Dinoiser and SeqDiffuSeq rely on rule-based noise variations, which do not necessarily lead to better performance. >Q6: Did you run the empirical evaluation for several runs by setting different random seeds? If so, could you also highlight which results are statistically significant than the baseline method? **Response:** Thank you for your question regarding the empirical evaluation with different random seeds. We followed the evaluation protocols of DiffuSeq and Diffusion-LM, running multiple experiments with different random seeds. In Tables 1, 2, and 3, we used MBR, so the results in bold are all statistically significant compared to the baseline methods. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your detailed response. I have read through most of the comments by other reviewers, as well as the rebuttal. I decide to keep my original score. --- Reply to Comment 1.1.1: Title: Thank you very much for your suggestion. We have also supplemented the experiment you mentioned. Comment: In addition to the baselines we previously supplemented, we have also included the easy-first schedule and the discrete S2S-Diffusion model you mentioned. The discrete S2S-Diffusion models, which rely on absorbing states, still require a set of $\beta$ values. Therefore, our Meta-Diffu$B$ can still be directly applied to the discrete S2S-Diffusion model. The paper on the easy-first schedule, Diffusion-LEF, did not implement the method on the same dataset as ours. Hence, we applied this noise schedule to DiffuSeq without using a pre-trained BERT model to ensure fairness. Below is the table summarizing the additional experiments we conducted. **We compared more recent baselines with our Meta-Diffu$B$ on the QG and QQP datasets. These baselines include both discrete and continuous S2S-Diffusion models.** |Methods|BLEU ↑ (QQP)|BERTScore ↑ (QQP)|BLEU↑ (QG)| BERTScore ↑ (QG)| |-|-|-|-|-| |DiffuSeq|0.2413|0.8365|0.1731|0.6123| |Diffusion (Easy-First Schedule)|0.2503|0.8692|0.1812|0.6253| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**0.2552**| **0.8821**|**0.1826**|**0.6357**| |DiffuSeq-v2|0.2411|0.8393|-|-| |Meta-Diffu$B$ (exploiter=DiffuSeq-v2)|**0.2556**| **0.8829**|-|-| |BG-DiffuSeq|0.2619|0.8427|0.1744|0.6280| |Meta-Diffu$B$ (exploiter=BG-DiffuSeq)|**0.2790**| **0.8757**|**0.1838**|**0.6571**| |TESS|0.3020|0.8570|0.1950|0.6580| |Meta-Diffu$B$ (exploiter=TESS)|**0.3142**|**0.8975**| **0.2055**|**0.6761**| |RDM (Discrete Diffusion)|0.2510|0.8472|0.1802|0.6310| |Meta-Diffu$B$ (exploiter=RDM)|**0.2684**|**0.8724**|**0.2271**|**0.6542**| As shown in the Table, when combined with the discrete S2S-Diffusion model RDM, we also achieve satisfactory results. Additionally, the noise generated by the scheduler through learning, compared to the noise produced by the easy-first schedule that relies on many rules, can further enhance the performance of DiffuSeq.
Summary: Comprehensive Evaluation of the new S2S Diffusion framework Meta-Diffu$\beta$. Strengths: - Clear and well-written presentation of the method - It provides a new framework that uses an additional scheduler based on Meta-Exploration to schedule contextualized noise, which performs well in the four given benchmarks. Weaknesses: - **Incremental Innovation**: In terms of the scheduler, the use of the Meta-Exploration method for scheduling contextualized noise is noted. However, I concern that the exploiters fully follow the DiffuSeq work, resulting in a lack of significant innovation. - **Missing Experimental Baselines**: The cited baselines are all from before February 2023. It would be beneficial to include more recent baseline results [1] [2] [3] and take both generation efficiency and training speed into evaluation metrics. - **Dataset Limitations**: The selected dataset completely follows DiffuSeq and does not take into account machine translation tasks and related datasets from SeqDiffuSeq and DINOISER. It would be advantageous to include test results on IWSLT14 and WMT14. - **Plug-and-Play Capability**: The study only demonstrates the plug-and-play model and its corresponding effects on DiffuSeq. It's recommended to show the plug-and-play capability in other S2S Diffusion models like SeqDiffuSeq, DINOISER and other baselines mentioned to prove the claim in the conclusion. Minor: - The main text introduces the datasets but does not provide information about the downstream Seq2Seq tasks. Consider moving the introduction of these tasks from the appendix to the main text to enhance readability. - The pre-trained model and environment dependencies are missing from your GitHub repository. It would be helpful to include these components to facilitate reproducibility. [1] Ding, Y., Tian, J., Mei, S., Zhou, Y., Dong, Y., He, H. and Hu, W., 2023, December. LDSeq: Latent Diffusion Models for Sequence to Sequence Text Generation. In *Proceedings of the 2023 7th International Conference on Computer Science and Artificial Intelligence* [2] Gong, S., Li, M., Feng, J., Wu, Z. and Kong, L., 2023. Diffuseq-v2: Bridging discrete and continuous text spaces for accelerated seq2seq diffusion models. *arxiv preprint arxiv:2310.05793*. [3] Tang, Z., Wang, P., Zhou, K., Li, J., Cao, Z. and Zhang, M., 2023. Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!. *arxiv preprint arxiv:2305.04465*. Technical Quality: 3 Clarity: 3 Questions for Authors: - Regarding the selection of fixed noise, have you attempted to use different mathematical functions to obtain the $\beta$ values as inputs for the skipping function? How is the performance using different $\beta$ in the scheduler? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Title: Thank you for your valuable suggestions. We have included additional baseline model experiments, machine translation experiments, and a variety of different baselines. We have addressed each of your questions to clarify any doubts. We assure you that these experiments will be included in the final version. Comment: >Q1: Incremental Innovation: In terms of the scheduler, the use of the Meta-Exploration method for scheduling contextualized noise is noted. However, I concern that the exploiters fully follow the DiffuSeq work, resulting in a lack of significant innovation. **Response**: Thank you for your feedback. As noted in Appendix F, our framework can utilize different diffusion models as exploiters, demonstrating its versatility. The key innovation is integrating these models with our scheduler, significantly enhancing performance. We propose a general method that strengthens other S2S-diffusion models, providing more than incremental improvements. In Sections 4.5 to 4.7, we compare our Meta-Diffu$B$, using DiffuSeq as the exploiter model, with other S2S-Diffusion models that incorporate adaptive noise to highlight its effectiveness. >Q2: Missing Experimental Baselines: The cited baselines are all from before February 2023. It would be beneficial to include more recent baseline results [1] [2] [3] and take both generation efficiency and training speed into evaluation metrics. **Response:** Thank you for the insightful feedback. We have included additional baselines in the following Table, specifically those that are evaluated on the same dataset as ours and have open-source implementations (Diffuseq-v2, BG-DiffuSeq and TESS). Results where Meta-Diffu$B$ combined with different models show improved performance are indicated in bold. Our Meta-Diffu$B$ framework can integrate various baselines as exploiter models, achieving superior performance compared to the original models. This is consistent with the experiments and results presented in Appendix F. **More recent baselines compared with our Meta-Diffu$B$ on QG and QQP datasets** |Methods|BLEU ↑ (QQP)|BERTScore ↑ (QQP)|BLEU↑ (QG)|BERTScore ↑ (QG)| |-|-|-|-|-| |DiffuSeq|0.2413|0.8365|0.1731|0.6123| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**0.2552**|**0.8821**|**0.1826**|**0.6357**| |DiffuSeq-v2|0.2411|0.8393|-|-| |Meta-Diffu$B$ (exploiter=DiffuSeq-v2)|**0.2556**|**0.8829**|-|-| |BG-DiffuSeq|0.2619|0.8427|0.1744|0.6280| |Meta-Diffu$B$ (exploiter=BG-DiffuSeq)|**0.2790**|**0.8757**|**0.1838**|**0.6571**| |TESS|0.3020|0.8570|0.1950|0.6580| |Meta-Diffu$B$ (exploiter=TESS)|**0.3142**|**0.8975**|**0.2055**|**0.6761**| >Q3: Dataset Limitations: The selected dataset completely follows DiffuSeq and does not take into account machine translation tasks and related datasets from SeqDiffuSeq and DINOISER. It would be advantageous to include test results on IWSLT14 and WMT14. **Response:** Thank you for the valuable feedback. We supplement additional machine translation datasets in the following table and employed the same evaluation metrics used by SeqDiffuSeq and Dinoiser for these tasks. Results where Meta-Diffu$B$ combined with different models show improved performance are indicated in bold. Furthermore, we demonstrate that our Meta-Diffu$B$ framework, when combined with different S2S-Diffusion models, achieves superior performance in machine translation tasks as well. **Experiment of Meta-Diffu$B$ on Machine Translation dataset** |Methods|SacreBLEU ↑ (IWSLT14 DE-EN)|SacreBLEU ↑ (WMT14 DE-EN)| |-|-|-| |DiffuSeq|29.43|22.72| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**31.71**|**26.17**| |SeqDiffuSeq|30.16|23.28| |Meta-Diffu$B$ (exploiter=SeqDiffuSeq)|**32.41**|**26.14**| |Dinoiser|31.61|30.30| |Meta-Diffu$B$ (exploiter=Dinoiser)|**33.82**|**32.09**| >Q4: Plug-and-Play Capability: The study only demonstrates the plug-and-play model and its corresponding effects on DiffuSeq. It's recommended to show the plug-and-play capability in other S2S Diffusion models like SeqDiffuSeq, DINOISER, and other baselines mentioned to prove the claim in the conclusion. **Response:** Thank you for your suggestion. We supplement our experiments with plug-and-play capabilities on Dinoiser and SeqDiffuSeq in the following tables. Our scheduler, trained on different datasets, improves the performance of both Dinoiser and SeqDiffuSeq. The fields "Dinoiser" and "SeqDiffuSeq" indicate which dataset these two models are trained on. When the scheduler field is null, it indicates the use of the model's own noise scheduling. Results where the model performs better with its own noise are indicated in bold. **Plug-and-play experiments on SeqDiffuSeq integrated with our scheduler** |Scheduler|SeqDiffuSeq|BLEU ↑|BERTScore ↑|Dist-1 ↑ | |-|-|-|-|-| |WA|QQP|**0.2627**|**0.8481**|**0.9814**| |Null|QQP|0.2434|0.8400|0.9807| |WA|QT|**0.1834**|**0.6226**|**0.9369**| |Null|QT|0.1746|0.6174|0.9248| **Plug-and-play experiments on Dinoiser integrated with our scheduler** |Scheduler|Dinoiser|BLEU ↑|BERTScore ↑|Dist-1 ↑| |-|-|-|-|-| |WA|QQP|**0.2079**|**0.8121**|**0.9765**| |Null|QQP|0.1949|0.8036|0.9723| |WA|QT|**0.0495**|**0.4740**|**0.8289**| |Null|QT|0.0477|0.4690|0.8191|
Summary: The paper introduces the Meta-DiffuB, a scheduler-exploiter diffusion framework that focuses on sequence-to-sequence (Seq2Seq) setting. Its novel trainable contextualized noise scheduler, inspired by Meta-exploration, is also flexible and plug-and-play with other models like DiffuSeq without re-training. This approach dynamically schedules noise levels based on the characteristics of each sentence, improving the performance of text generation tasks. Meta-DiffuB outperforms existing models and fine-tuned pre-trained language models across four Seq2Seq benchmark datasets, demonstrating the effectiveness of its adaptive noise scheduling. Strengths: - The paper is well-motivated and well-structured. - The introduction of a contextualized noise scheduling strategy using Meta-Exploration is a significant advancement, addressing limitations of fixed or non-contextualized noise schedules in existing diffusion models. - Meta-DiffuB shows superior performance across four Seq2Seq benchmark datasets. - The ability to integrate the scheduler model into existing Seq2Seq diffusion models without the need for fine-tuning during inference enhances the practicality and usability of the approach. - The paper provides details of the proposed framework, pseudo-code with actual code for ease of reproduction. Weaknesses: - While the model performs well on benchmark datasets, there is limited discussion on its scalability to larger, real-world datasets and more complex text generation tasks. - The dynamic noise scheduling process might introduce additional computational overhead during inference, which is not thoroughly analyzed in the paper. - Meta-DiffuB is built upon and compared with S2S-Diffusion models known as continuous-based text diffusion models. It lacks a detailed comparison with other discrete-based text diffusion models such as D3PM [1] or RDM [2]. RDM claimed to achieve better performance and inference speed compared to DiffuSeq. - The effects of the dynamic noise scheduling on the overall training dynamics and convergence rate of the model are not explored in detail, which could be crucial for understanding the method’s effectiveness. Technical Quality: 3 Clarity: 3 Questions for Authors: - L120, 123: the quote for 'skipping' is wrongly formatted - L202-203: it should be either "The maximum Minimum Bayes Risk (MBR)..." or "The Minimum Bayes Risk (MBR)....", also it cites paper [20] (the Rouge package) which I think is incorrect for MBR. - Besides the increased training time compared to DiffuSeq (L217-218), it would be beneficial to know the increased inference time too. - How well does the Meta-DiffuB model generalize to other types of text generation tasks beyond the Seq2Seq framework? Have experiments been conducted to evaluate its performance in different text generation scenarios? - How does the model scale with larger datasets or more complex text generation tasks? Are there any specific optimizations or modifications required to handle real-world data volumes? [1] https://arxiv.org/pdf/2107.03006 [2] https://arxiv.org/pdf/2302.05737 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Thank you for your valuable suggestions. We have included the inference times for the models and added machine translation experiments. In the final version, we will also address and revise the format and references you mentioned. Comment: >Q1: L120, 123: the quote for 'skipping' is wrongly formatted **Response**: Thank you for pointing out the formatting issue. We will correct the quotation marks for 'skipping' in lines 120 and 123 to ensure proper formatting. >Q2: L202-203 it should be either "The maximum Minimum Bayes Risk (MBR)..." or "The Minimum Bayes Risk (MBR)....", also it cites paper [20] (the Rouge package) which I think is incorrect for MBR. **Response**: Thank you for your reminder. We will correct this in the final version by citing the Diffusion-LM and DiffuSeq papers in this reference. >Q2: Besides the increased training time compared to DiffuSeq (L217-218), it would be beneficial to know the increased inference time too. **Response**: Thank you for this suggestion. We will supplement the increased inference time compared to DiffuSeq in the following Table. **Meta-DiffuB's computational complexity compared to DiffuSeq** |Method|increased parameters (%)|increased training time (%)|increased inference time (%)| |-|-|-|-| |Meta-Diffu$B$|2.2%|5%|0.5%| >Q3: How well does the Meta-Diffu$B$ model generalize to other types of text generation tasks beyond the Seq2Seq framework? Have experiments been conducted to evaluate its performance in different text generation scenarios? **Response**: We conducted extensive experiments to evaluate the generalization capabilities of the Meta-Diffu$B$ model across various text generation task. These tasks include generating informative dialogue responses (CC dataset), question generation (QT dataset), text simplification (WA dataset), and paraphrase generation (QQP dataset), as discussed in Section 4.1 and Appendix A. Thanks to the reviewers' suggestion, we supplemented additional machine translation datasets, as shown in the following table, and employed the same evaluation metrics used by SeqDiffuSeq and Dinoiser. Results where Meta-Diffu$B$ combined with different models show improved performance are indicated in bold. Our results show that Meta-Diffu$B$, when combined with different S2S-Diffusion models, achieves superior performance on machine translation tasks as well. **Experiment of Meta-Diffu$B$ on Machine Translation datasets** |Methods|SacreBLEU ↑ (IWSLT14 DE-EN)|SacreBLEU ↑ (WMT14 DE-EN)| |-|-|-| |DiffuSeq|29.43|22.72| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**31.71**| **26.17**| |SeqDiffuSeq|30.16|23.28| |Meta-Diffu$B$ (exploiter=SeqDiffuSeq)|**32.41**| **26.14**| |Dinoiser|31.61|30.30| |Meta-Diffu$B$ (exploiter=Dinoiser)|**33.82**| **32.09**| >Q4: How does the model scale with larger datasets or more complex text generation tasks? Are there any specific optimizations or modifications required to handle real-world data volumes? **Response**: Meta-Diffu$B$ scales effectively with larger datasets, as demonstrated by our experiments on the Commonsense Conversation (CC) dataset with 3 million data points. We maintained consistent model parameters, architecture, and optimization methods across all datasets, including CC. Our framework's exceptional performance, even on large datasets, highlights its robustness and scalability. Additionally, as shown in the table from Q3, our Meta-Diffu$B$ performs well on machine translation tasks with the IWSLT14 dataset (160k data points) and the WMT14 dataset (4475k data points), without requiring any adjustments. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. Since one of the key contributions of this paper is a strategy (noise scheduler) that enhances S2S-Diffusion models: - I still have concerns about its generalizability, as the authors only consider continuous-based diffusion models and do not consider discrete-based models. - While comparing S2S-Diffusion models before and after applying Meta-Diffu$\beta$ might be sufficient, the authors did not compare with other noise schedule baselines (as mentioned by reviewer qgkV). Given these points, I will maintain my score. --- Reply to Comment 1.1.1: Title: Thank you for your suggestion. To address your concerns, we have added more experiments. Comment: >Q1: I still have concerns about its generalizability, as the authors only consider continuous-based diffusion models and do not consider discrete-based models. In the discrete S2S-Diffusion model, a set of $\beta$ values is still required to control the magnitude of the noise. Our scheduler inherently generates these $\beta$ values to regulate the noise. In the discrete S2S-Diffusion model, by multiplying the $\beta$ values generated by the scheduler with a Bernoulli random vector instead of Gaussian noise, our Meta-Diffu$B$ framework can be seamlessly integrated. As a result, we also include a comparison with the Reparameterized Diffusion Model (RDM) in the Table below. >Q2: While comparing S2S-Diffusion models before and after applying Meta-Diffu might be sufficient, the authors did not compare with other noise schedule baselines (as mentioned by reviewer qgkV). In our study, we compared the noise scheduling from Denoiser, SeqDiffuSeq, and other S2S-Diffusion models as described in their papers. When paired with our scheduler, the noise scheduling consistently produced better results. The Diffusion-LEF paper (which proposed the Easy-First Schedule as noted by reviewer qgkV) used different datasets from ours. Therefore, we tested the Easy-First Schedule with DiffuSeq on the QQP and QG datasets. For a fair comparison, we used the Diffusion-LEF results without BERT. As shown in the table below, while the Easy-First Schedule improves DiffuSeq, it doesn't achieve the same level of enhancement as our Meta-Diffu$B$. **We compared more recent baselines with our Meta-Diffu$B$ on the QG and QQP datasets. These baselines include both discrete and continuous S2S-Diffusion models.** |Methods|BLEU ↑ (QQP)|BERTScore ↑ (QQP)|BLEU↑ (QG)| BERTScore ↑ (QG)| |-|-|-|-|-| |DiffuSeq|0.2413|0.8365|0.1731|0.6123| |Diffusion (Easy-First Schedule)|0.2503|0.8692|0.1812|0.6253| |Meta-Diffu$B$ (exploiter=DiffuSeq)|**0.2552**| **0.8821**|**0.1826**|**0.6357**| |DiffuSeq-v2|0.2411|0.8393|-|-| |Meta-Diffu$B$ (exploiter=DiffuSeq-v2)|**0.2556**| **0.8829**|-|-| |BG-DiffuSeq|0.2619|0.8427|0.1744|0.6280| |Meta-Diffu$B$ (exploiter=BG-DiffuSeq)|**0.2790**| **0.8757**|**0.1838**|**0.6571**| |TESS|0.3020|0.8570|0.1950|0.6580| |Meta-Diffu$B$ (exploiter=TESS)|**0.3142**|**0.8975**| **0.2055**|**0.6761**| |RDM (Discrete Diffusion)|0.2510|0.8472|0.1802|0.6310| |Meta-Diffu$B$ (exploiter=RDM)|**0.2684**|**0.8724**|**0.2271**|**0.6542**|
Summary: This paper proposes a meta-learning based noise scheduler to incoporate contextualisation in texts. The scheduler is a plug-and-play module which could be applied to other similar sequential setting. The experimental results demonstrate its superiority comparing to the baselines, in terms of generating quality and diversity. Besides, the ablation study investigates into and provide insights about the relationship of the difficulty levels and noise schedule, which is quite interesting. The paper is well organised and easy to follow. Strengths: It is the first work to take the contextilisation into consideration when doing adaptive noise schedule for text generation. Its effectiveness has been demonstrated, upon the questions listed below are addressed. The experiments are well planned to support the arguments proposed. The presentation is clear and easy to follow. Weaknesses: While the diffusion process is now designed to incoporate a time variable noise schedule, the denoising process is still supposed to handle fixed denoising steps. More insights on how this will be handled effectively would be expected. Some statements in the text seems not align well with the information disclosing by the figures. For example, Figure 3, the noise level is quite stable for the proposed method but in the text below, it claims 'Meta-DiffuB applies adaptive noise at varying training epchos'. I am wondering if there are any typos involved. Another example is in Figure 4 at least for QQP dataset - Meta-DiffuB applies less noise than the other methods as its curves lay below the curves of the other methods. thus, the claim 'This noise-scheduling approach—applying more noise to the harder sentences (H) and less to the easier sentences' looks conflict with such observation from the figure. Besides, as algorithm 1 alternates the training of scheduler and exploiter, It would be guideful if some insights about the convergence rate are provided. Technical Quality: 3 Clarity: 4 Questions for Authors: I have three major concerns as listed in the weakness section about the reasonability, the consistence of figures and in-text description about some experiemtnal results, and the time complexity of training algorithm. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I suspect the main limitation would be the converge speed but it was not mentioned in the paper. The schedular is implemented with a Seq2Seq structure, and other options could be provided as well to inspire other potential application scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Thank you for your valuable suggestions. We have included the training curves and addressed your comments as below. Comment: >Q1: For example, Figure 3, the noise level is quite stable for the proposed method but in the text below, it claims 'Meta-Diffu$B$ applies adaptive noise at varying training epchos. **Response**: In Figure 3, our method differs from approaches like SeqDiffuSeq or Dinoiser, which rely on predefined rules. Instead, our approach is data-driven, with the model learning the underlying patterns. As a result, the degree of variation—whether large or small—is not the primary focus. Additionally, the learned noise can be flexibly integrated into other systems in a plug-and-play manner. >Q2: Another example is in Figure 4 at least for QQP dataset - Meta-Diffu$B$ applies less noise than the other methods as its curves lay below the curves of the other methods. thus, the claim 'This noise-scheduling approach—applying more noise to the harder sentences (H) and less to the easier sentences' looks conflict with such observation from the figure. **Response**: We appreciate the opportunity to correct our previous statement. We will correct this error in the final version. >Q3: Besides, as algorithm 1 alternates the training of scheduler and exploiter, It would be guideful if some insights about the convergence rate are provided. **Response**: We supplement the [Training Curve Image](https://github.com/metabeta-diffusion/metabeta-diffusion/blob/main/img/training%20curve.jpg?raw=true). Our Meta-Diffu$B$ shows Our model converges significantly faster than DiffuSeq, achieving convergence within just 10k epochs.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Dual-Diffusion for Binocular 3D Human Pose Estimation
Accept (poster)
Summary: The paper proposes a dual-diffusion model for Binocular 3D Human Pose Estimation, which models the uncertainties integrated in the binocular configuration. Since constructing the depth distribution of 3D poses is difficult, the authors use 2D poses with triangulation to initialize the 3D poses during the diffusion process. Additionally, they adopt the depth of the 3D root as an additional condition and introduce pose normalization to improve generalization ability. The experiments show that the method achieves superior performance on the MHAD and H36M datasets. Strengths: - Using multi-view 2D poses with triangulation to construct the initial distribution for 3D poses during the diffusion process is interesting and superior to previous designs in diffusion-based 3D pose estimation. - The Z-embedding Condition and Baseline-width-related Pose Normalization are simple but reasonable approaches that consider the varying depth of humans and the baseline-width in the binocular setting. Weaknesses: - The comparison between triangulation and common 3D initial distributions like DiffPose [8] should be conducted, which is necessary for revealing the strength of the proposed dual diffusion structure. - The scalability of the proposed method to address more camera views should be clarified. It would be helpful if the authors could discuss how their approach handles an increased number of cameras and any potential impacts on performance. - 3D poses can be directly obtained by triangulating multi-view 2D poses. What is the accuracy of the 3D poses from direct triangulation? This comparison is important to assess the necessity of the proposed diffusion process. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the proposed method perform in multi-view scenarios with more than 2 views? - The proposed framework does not require 3D supervision. Could the performance be improved with additional 3D pose supervision? - What is the performance gap between the proposed triangulation and common 3D initial distributions like DiffPose [8] in the 3D pose diffusion process? - How could the proposed diffusion framework enhance the results of direct triangulation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations and societal impact have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 9xXX, We greatly appreciate your thoughtful review and the time you have taken to provide insights and feedback on our submission. We are encouraged by the positive aspects you've highlighted and grateful for the critical points you've raised. In response, we have addressed the weaknesses and questions to further reveal the strength of our method. **1. The comparison between Triangulation and common 3D initial distribution.** It is crucial for us to compare the 3D uncertainty distribution reconstructed by Triangulation and statistical methods like DiffPose. We opt to use 2D uncertainty to reconstruct 3D uncertainty for two reasons: **the difficulty in formulating 3D uncertainty and the greater stability of 2D uncertainty.** We will now elaborate on the second reason. **The reconstruction from 2D poses using Triangulation is more robust than statistical methods because the 2D uncertainty is more stable than 3D uncertainty.** >We have evaluated the 2D MPJPE and 3D MPJPE estimated by ResNet50 under binocular settings with varying baseline widths. >|baseline (mm)|2D MPJPE (pixel)| 3D MPJPE (mm)| >|:--:|:--:|:--:| >|100|3.084|120.37| >|200|3.079|63.18| >|400|3.079|54.69| >We have compared the 2D and 3D uncertainty obtained from ResNet152 across different depths (we separate the MHAD training set into large and small datasets based on the average depth of GT pose). >|depth|2D MPJPE (pixel)| 3D MPJPE (mm)|3D MPJPE STD (mm)| >|:--:|:--:|:--:|:--:| >|large|5.26|40.39|1025.88| >|small|5.33|38.17|161.88| The 3D MPJPE and 3D STD highlight the instability of 3D uncertainty. To further verify the superiority of our reconstruction method over the statistical method, **we trained two models on large and small datasets respectively and tested in another.** ||Method|3D MPJPE (mm)|||Method|3D MPJPE (mm)| |:--:|:-|:--:|:--:|:--:|:-|:--:| |training in large-dataset|Tri-ResNet152|38.17||training in small-dataset|Tri-ResNet152|40.39| |and|Dual-Diff|**35.23**||and|Dual-Diff|**39.11**| |testing in small-dataset|DiffPose|40.26||testing in large-dataset|DiffPose|54.12| Compared to the Triangulation baseline, performance improves with Dual-Diffusion refinement but decreases with DiffPose. **DiffPose suffers from the change of 3D poses uncertainty distribution while our Dual-Diffusion remains stable.** This stability is due to the fact that Dual-Diffusion models the diffusion target using 2D uncertainty, which is significantly more stable than 3D uncertainty. **2. The scalability to multi-view settings.** **Our Dual-Diffusion module can be directly applied to multi-view settings (more than 2 views) without the need for additional fine-tuning.** Since the input to the denoiser network is the 3D pose reconstructed from 2D poses, the network structure remains unchanged as the number of views increases. Furthermore, the 3D uncertainty in 2-view settings often encompasses the uncertainty in 3-view and 4-view settings, making fine-tuning optional, though beneficial. We have tested Dual-Diffusion, trained on binocular 2D poses, in 3-view and 4-view scenarios and compared the 3D MPJPE results with those from Triangulation. |2-view|MPJPE (mm)|BL (mm)| Sym (mm)| JDR (%)| |:-|:--:|:--:|:--:|:--:| |Tri-ResNet152|31.51|14.38|16.29|95.81| |Dual-Diffusion-ResNet152|29.15|12.06|13.37|95.92 |3-view|MPJPE (mm)|BL (mm)| Sym (mm)| JDR (%)| |:-|:--:|:--:|:--:|:--:| |Tri-ResNet152|30.13|13.26|15.75|96.46| |Dual-Diffusion-ResNet152|28.69|11.99|12.42|96.60| |4-view|MPJPE (mm)|BL (mm)| Sym (mm)| JDR (%)| |:-|:--:|:--:|:--:|:--:| |Tri-ResNet152|29.93|13.14|14.57|94.96| |Dual-Diffusion-ResNet152|28.44|12.22|12.39|95.21| **The performance improvements across four metrics validate the scalability of Dual-Diffusion.** However, it needs to be acknowledged that the gains are more modest when the baseline performance is already high. **3. The comparison between Dual-Diffusion and Triangulation.** The baseline comparison, highlighted in light blue in Tables 1 and 2, shows the 3D poses reconstructed by Triangulation. The red values indicate the improvements achieved by Dual-Diffusion over Triangulation. To prevent any misunderstanding, we will provide a clear explanation of the baseline method in Section 4.1. **4. Adding 3D supervision.** We appreciate the insightful suggestion. We have implemented additional experiments by using 3D supervision alone and in combination with 2D supervision to retrain Dual-Diffusion, based on the RSB152 backbone. |Loss|MPJPE (mm)|BL (mm)| Sym (mm)| |:-|:--:|:--:|:--:| |2D|27.76|7.56|9.83| |3D|29.38|8.62|10.57| |2D+3D|**27.73**|**7.43**|**9.43**| Adding 3D supervision additionally enhances 3D accuracy and plausibility. Considering that Dual-Diffusion still has room for improvement, future work will focus on addressing the limitations discussed in the paper and exploring more additional enhancement methods. In summary, we sincerely thank you for your valuable comments. We believe that by addressing these points, our work will be significantly improved and provide a solid contribution to the community. We hope our responses provide clarity, and we remain open to further feedback. Thanks! --- Rebuttal Comment 1.1: Comment: Thank you for the thorough rebuttal. It has addressed all of my concerns. I believe the idea is strong, particularly in the use of diffusion models for tackling 3D pose estimation. I hope the authors can include a discussion about scalability to multi-view settings in the main paper, as this is important for a multi-view method. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 9xXX, We sincerely appreciate your recognition of the ideas presented in our paper and your feedback on the additional experiments. We will include the multi-view experiments and discussions in the revised version of the manuscript. Once again, we would like to express our heartfelt gratitude to you. Best, --- Rebuttal 2: Title: Further Discussion with Reviewer 9xXX Comment: Dear Reviewer 9xXX, We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have carefully addressed your comments and provided corresponding responses and results. We believe that these responses and results adequately address your concerns. We would value an opportunity to further discuss whether your reservations have been resolved. Should there remain any aspects of our work that are unclear to you, please do not hesitate to inform us. Once again, thank you for your invaluable feedback. Best,
Summary: This paper presents a new method for binocular human pose estimation. The key idea is to employ diffusion model into this problem. The authors propose a novel dual-diffusion process: jointly diffuse 2D pose uncertainty and 3D pose uncertainty. Because 3D uncertainty relates to both 2D noise (which directly relates to diffusion step t) and depth, the authors further proposed z-embedding and BaseL-norm. Results on bioncular H36M and MHAD datasets demonstrate the effectiveness of the proposed method. Strengths: 1. The dual-denoising is interesting for human pose estimation. The novelty of the proposed method is clear. 2. The quantitative results seem good in the Table.1 and Table.2. 3. The paper is clearly written. Weaknesses: I have some questions about the evalution. 1. The paper compares a lot with RSB-pose. However, in the orignial paper of RSB-pose, the absolute MPJPE is not reported. How do the authors compute the results of RSB-pose? Why not compare with RSB-pose with MPJPE_re (relative to pelvis) ? 2. MIssing comparison with recent paper "Triangulation Residual Loss for Data-efficient 3D Pose Estimation (NeurIPS 2023)". It optimizes both multiview 2D and 3D with an extension of epipolar loss. at least the authors should discuss it. 3. Why not use transformer based backbones, which are the mainstreams of 2D pose estimation? Accuracy may improve with transformer backbones. The table.1 may indicate that larger backbone may mitigate the gap among different methods. The table.2 may indicate that larger scale may narrow the gap too. 4. I want to know the parameter size of different methods (parameters other than the same backbone part) in table.1 and table.2. I want to make sure that the better accuracy does not simply come from large parameter capability but come from the network design. Technical Quality: 3 Clarity: 3 Questions for Authors: No detailed question. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer o1bT, First and foremost, we'd like to express our gratitude for your comprehensive review and insightful comments. In response, we have supplemented the evaluation and provided clarifications to enhance the clarity of our experiments. **1. About the comparison with RSB-Pose.** **Given the geometric constraints inherent in two-view setups, it is entirely feasible to reconstruct the absolute position of 3D poses.** Therefore, we propose that absolute MPJPE should replace relative MPJPE as the primary metric for 3D accuracy here, not like the monocular settings. Since RSB-Pose is specifically designed for binocular 3D HPE, we compare a lot with it. We have communicated with the authors of RSB-Pose and suggested including absolute MPJPE results. They agreed to incorporate this metric in the published version and shared their code with us. They have updated the results in arXiv. We reproduced their training process and calculated the absolute MPJPE results accordingly. **2. About the comparison with "Triangulation Residual Loss for Data-efficient 3D Pose Estimation (NeurIPS 2023)".** Considering that "Triangulation Residual Loss for Data-efficient 3D Pose Estimation (NeurIPS 2023)" (TR) is a method to optimize 2D and 3D poses simultaneously, similar to our approach, we apologize for previously overlooking it and have now included a comparison and discussion as follows: * To reduce reliance on 3D annotations, TR proposes the TR loss, which utilizes multi-view geometric constraints to optimize multi-view 2D poses. This loss essentially minimizes the sum of distances from the triangulated 3D points to all view rays originating from 2D points. Minimizing the smallest singular value of the Triangulation matrix is used for computational optimization. It is a simple yet effective method. * Similar to our approach, TR improves 2D pose estimation, which in turn enhances the 3D results by using geometric constraints during the optimization process. The overall framework is quite similar. * The key difference is that **TR optimizes individual points without considering the entire pose globally, lacking pose prior constraints**. In contrast, we introduce a diffusion model to capture the distribution of 3D poses and optimize the entire pose as a whole. **Our Dual-Diffusion approach incorporates both geometric and prior constraints**, offering a more comprehensive optimization objective. We cannot provide a direct evaluation comparison because the model weights for TR are unavailable. However, we acknowledge the relevance and similarity of this approach to ours and **will include a discussion of it in the related work section**. **3. About the comparison with transformer-based backbones.** We thank for the kind advice to compare with the transformer-based 2D pose detector. We have added the comparison using ViTPose[1] and the results are shown as: |Dataset|Method | 2D Pose Detector | Scale|MPJPE (mm) | BL (mm) | Sym (mm) | JDR (%)| |:-|:-|:--:|:--:|:--:|:--:|:--:|:--:| |MHAD|Tri-ViTPose|ViTPose|256|70.84|42.55|48.43|95.83| ||Dual-Diffusion-ViT|ViTPose|256|**61.02**|**37.90**|**30.09**|**95.88**| ||||||||| |H36M|Tri-ViTPose|ViTPose|256|41.49|18.09|20.75|93.33| ||Dual-Diffusion-ViT|ViTPose|256|**35.20**|**16.02**|**19.66**|**95.77**| The baseline results (ViTPose + Triangulation) are lower than expected because ViTPose has not been fine-tuned on these two datasets, limiting the effectiveness of 2D pose estimation and, consequently, impacting 3D pose estimation results. The comparison between ViTPose and ResNet152 (found in the PDF of the global rebuttal) reveals that **as 2D accuracy increases, the gap between the baseline method and Dual-Diffusion decreases.** With the same training process, more accurate 2D estimation often relies on a larger backbone and higher image scale. As clarified in the third paragraph of the introduction, **more accurate 2D poses lead to more accurate 3D pose reconstruction since their uncertainties are intrinsically linked within the geometric framework.** Consequently, a smaller refinement space remains for the 3D poses. **4. About the comparison of model parameters.** We sincerely appreciate the thoughtful advice to include the model parameters. The results are listed below. |Methods|Params of Model other than Backbone (M)| |:-|:--:| |TPPT|9.7| |Epipolar_Tri|0.08| |Algebraic_Tri|10.88| |AdaFuse|1.02| |RSB-Pose|9.25| |Dual-Diffusion|**0.74**| Our Dual-Diffusion requires only 0.74 million parameters, significantly fewer than other methods. **This demonstrates that the effectiveness of pose refinement in our method does not rely on a large number of parameters, but rather on the dual diffusion modeling and training.** The model parameters will be **added to Table 2 in the revised manuscript** and can be previewed in the PDF of the global rebuttal. In summary, we sincerely thank you for your valuable comments. We believe that by addressing these points, our work will be significantly improved. We hope our responses provide clarity, and we remain open to further feedback. Thanks! [1] Xu Y, Zhang J, Zhang Q, et al. Vitpose: Simple vision transformer baselines for human pose estimation[J]. Advances in Neural Information Processing Systems, 2022, 35: 38571-38584. --- Rebuttal Comment 1.1: Comment: The authors' response enhances their paper with more experiments. This is why I lift rating to "weak accept".
Summary: The authors suggest a 3D human pose estimation (HPE) in binocular (two-view) settings. In order to reduce the uncertainty of triangulation method, the authors leverage the diffusion model. They propose a method called Dual-Diffusion method, which simultaneously denoises uncertainties in both 2D and 3D to produce accurate results. Additionally, they introduce Z-embedding and pose normalization related to baseline width. Experiments validate the effectiveness of the proposed method, showing performance improvements over the triangulation method. Strengths: - The proposed method achieves state-of-the-art results in binocular 3D HPE settings. Weaknesses: - The concept of applying diffusion models to 3D HPE is similar to DiffPose[8]. - While the method can be applied to any multi-view setting, the authors limited their experiments to binocular settings, which restricts the applicability of the proposed method. Additionally, the performance improvements due to the dual diffusion over triangulation are marginal. Since accurate 2D pose detection reduces uncertainty in the triangulation method, it is challenging to assess the significance of the dual-diffusion models. It would be more convincing to show how the proposed method performs better with noisy 2D inputs. - The clarity and readability of the paper need to be enhanced. Technical Quality: 3 Clarity: 2 Questions for Authors: Method - What is the meaning of the sentence in Line 176? "However, if the uncertainty of the 3D pose follows a Gaussian distribution is still confusing." - According to Fig2 (b), the denoiser estimate $y_0$ from $y_t$, not $y_{t-1}$. Why does the denoising process needs to be repeated $K$ times? I also found that only $K=1$ is used for the experiments. Experiments - What is the performance of the proposed method in 3-View or 4-View settings? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations - The method can be used only for binocular settings. Suggestions - Extending the framework to multi-view 3D HPE and validating the superiority over other multi-view 3D HPE methods would strengthen the paper. Typos - Figure 2. Initial estimated -> initially estimated - Line 149. Inherently -> inherent - Line 182. $0, 1$ -> ${0,1}$ - Line 320, by $P_v$ -> by multiplying $P_v$ Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer cXv4, We appreciate the detailed feedback and hope our clarifications address your concerns. We apologize if there was any misunderstanding or misinterpretation of our paper. We're eager to highlight the significance and potential of our work. **1. The difference from DiffPose.** **Methodology Novelty.** We acknowledge that both our Dual-Diffusion and DiffPose use diffusion models to denoise the initial estimated 3D pose. **However, two methods differ in how the uncertainty distribution of 3D initial pose is modeled in the forward diffusion process.** DiffPose uses statistical methods, specifically a histogram described in its Sec. 4.1 (notably, in the code on GitHub, the Context Encoder is only applied during inference). **In contrast, Dual-Diffusion models 3D uncertainty by reconstructing from noisy 2D poses using Triangulation.** Since 2D uncertainty is well-defined, the noisy 2D poses simulated from GT in the forward diffusion process can encompass the uncertainty distribution. **Consequently, the 3D uncertainty distribution can be captured, which is challenging for statistical methods given the limited 3D pose data.** The many-to-many correspondence between 2D and 3D poses in monocular settings restricts 3D uncertainty reconstruction to statistical methods. But, in multi-view scenarios, geometric constraints inherently link these two uncertainties, making our Dual-Diffusion approach feasible. **Dual-Diffusion can be seen as an extension of DiffPose to multi-view settings, innovating by proposing a more effective method for 3D uncertainty modeling.** **Performance Superiority.** **3D uncertainty reconstruction through 2D poses is more robust than statistics because the 2D uncertainty is more stable than 3D (illustrated in the PDF of the global rebuttal).** We have compared the 2D and 3D uncertainty obtained from ResNet152 across different depths (we separate the MHAD training set into large and small parts based on the average depth of GT). |depth |2D MPJPE | 3D MPJPE (mm)|3D MPJPE STD (mm)| |:--:|:--:|:--:|:--:| |large|5.26|40.39|1025.88| |small|5.33|38.17|161.88| The 3D MPJPE and STD highlight the instability of 3D uncertainty. To further verify the superiority of our Dual-Diffusion over the statistical DiffPose, **we trained two models on large and small datasets respectively and tested in another.** ||Method|3D MPJPE (mm)|||Method|3D MPJPE (mm)| |:--:|:-|:--:|:--:|:--:|:-|:--:| |training in large|Tri|38.17||training in small|Tri|40.39| |and|Dual-Diff|**35.23**||and|Dual-Diff|**39.11**| |testing in small|DiffPose|40.26||testing in large|DiffPose|54.12| **Diffpose suffers from the change of 3D poses uncertainty distribution while our Dual-Diffusion remains stable.** This stability is since Dual-Diffusion models the diffusion target using 2D uncertainty, which is more stable than 3D uncertainty. **2. Why the applicability is limited in binocular settings?** We have conducted experiments to denoise the initial estimated 3D poses using ResNet152 in 2-view, 3-view, and 4-view H36M testing sets. |2-view|MPJPE (mm)| |:--|:--:| |Tri|31.51| |Dual-Diff|**29.15**| |3-view|MPJPE (mm)| |:--|:--:| |Tri|30.13| |Dual-Diff|**28.69**| |4-view|MPJPE (mm)| |:--|:--:| |Tri|29.93| |Dual-Diff|**28.44**| **The improvement compared to Triangulation demonstrates the denoising capability of Dual-Diffusion.** However, we acknowledge that the improvement decreases as the view number increases. As we clarified in Sec. 1, the 3D uncertainty distribution is more ambiguous in binocular settings compared to multi-view setups. **The 3D uncertainty in multi-view settings is smaller making denoising unnecessary.** **3. The performance improvements due to the dual diffusion over Triangulation.** We need to clarify that all experiments are conducted using 2D estimated poses, which are noisy data rather than accurate data. The red values in Tab. 1 and 2 highlight the improvements achieved with Dual-Diffusion over Triangulation. **Although some improvements appear small, the ratios are comparable.** For instance, even though MPJPE improvement of Dual-Diffusion over Triangulation using RSB152 in H36M is 1.87mm, it represents a 6.12% improvement. **Additionally, we have added more backbones to verify that our method is superior to Triangulation (refer to tables in the PDF of the global rebuttal).** **4. The meaning of the sentence in Line 176.** We argue that a key characteristic of noise added in the diffusion process is the additive property. **This property allows for the skipping process in Eq. (2) and ensures that the noisy data at t=T aligns with the initial uncertainty distribution.** Consequently, the reverse process can start from this uncertainty distribution. Typical diffusion models use Gaussian noise and start from a Gaussian distribution, which itself is additive. However, in binocular 3D HPE, the 3D pose uncertainty is still complex and lacks a clear formula. **This complexity prevents us from modeling the diffusion process in the 3D pose but in the 2D pose.** We are deeply sorry for the misunderstanding this sentence has caused you and will clarify it in the revision. **5. The meaning of K in the denoising process.** Fig. 2(b) illustrates the inference of Dual-Diffusion, where t ranges from (0, T). In classical DDPM, the denoising inference is performed step-by-step from T to 0. Then DDIM is proposed to speed the inference by skipping some steps. Here, we utilize the DDIM method. K is the number of denoising steps during inference which is a hyper-parameter change from 1 to T. K=1 is finally decided by ablation study. **6. Typos** We deeply regret the oversight. These errors will be addressed in the revised manuscript. Finally, we humbly request you to kindly reconsider our submission based on the above justification. If you have any further questions, please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining! Thanks! --- Rebuttal 2: Title: Further Discussion with Reviewer cXv4 Comment: Dear Reviewer cXv4, Thank you for the time and effort you dedicated to reviewing our manuscript. We sincerely appreciate your valuable feedback. In response to your comments, we have provided thorough explanations and updated results. We believe that these address the concerns you raised. We are eager to ensure that all of your concerns have been addressed adequately. Should there be any aspect of our work that remains unclear to you, please do not hesitate to inform us. Once again, we extend our gratitude for your constructive feedback. Best, --- Rebuttal Comment 2.1: Comment: I appreciate the authors addressing my concerns and questions. The authors demonstrated that the method improves 3D HPE accuracy in multi-view settings and clarified its advantages over DiffPose. The method also showed consistent performance improvements across various 2D detectors. Based on this, I am raising my rating to borderline accept. However, I still have some reservations about the practicality of binocular HPE and the proposed method, given that many supervised multi-view HPE methods in the literature already achieve high accuracy. The paper would be strengthened if the authors could discuss the advantages of their method over existing multi-view supervised HPE methods (e.g., cross dataset evaluation, performance on complicated poses, etc.) --- Reply to Comment 2.1.1: Comment: Dear Reviewer cXv4, We sincerely appreciate your recognition of our additional experiments and the acknowledgment of our method's advantages over DiffPose. The motivation behind our study on binocular 3D HPE was driven by the fact that **binocular setups impose fewer constraints on the scene compared to multi-view approaches**, especially in short-baseline scenarios. However, your suggestion to explore the advantages of binocular 3D HPE over existing multi-view supervised HPE methods, such as in cross-dataset evaluations and performance on complicated poses, is indeed a valuable point that we had overlooked. **We will certainly consider this aspect in future research.** Once again, we thank the reviewer for the time, effort, and insightful comments provided in reviewing our paper. Sincerely,
Summary: This paper presents a method for 3D human body keypoint estimation from binocular images. Different from traditional multi-view settings, such methods only take two views as input, which suffer from larger uncertainty at detph wise. To alleviate the depth ambiguity, the paper describes a diffusion-based framework. The forward diffusion process is used to simulate the noisy 2D poses from GT 2D poses. The reverse denoising process refines the 3D poses obtained by triangulation of noisy binocular 2D poses. The denoised 3D poses are then projected back to 2D poses to make the loop. The proposed method has been added to previous SOTA method, RSB-Pose, and improves the performance on MHAD and Human3.6M in binocular settings. Strengths: 1. The idea of using diffusion process to simulate noisy 2D poses and to refine the 3D poses via denising is interesting. The insights behind these designs have been clearly explained and sound reasonable. 2. Experiments of adding the proposed method to previous method in Tab. 1 and 2 verify the effectiveness of the proposed method. The detailed ablation study futher shows the rationality of the architecture design. 3. The paper is easy to understand and clearly written. Weaknesses: 1. Relatively limited experiments. The proposed method is mainly added to one previous method, RSB-Pose. More such experiments will make the paper be more convincing. 2. Lacking discussion about whether the proposed method can effectively model the 3D keypoint uncertainty. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Could you provide more evidence to further verify the generalization ablity of the proposed refinement method, like equipping more SOTA method? 2. The in-depth discussion about how the proposed method models the 3D keypoint uncertainty. Is there any evidence to prove this? In rebuttal, these questions have been well addressed by the feedbacks. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer cBDy, First and foremost, we extend our deepest gratitude for your thorough review and insightful feedback. Your recognition of our insights, experiments, and writing is appreciated. The suggestions to add more experiments and discuss 3D uncertainty modeling are particularly valuable. We have addressed these suggestions by conducting additional experiments. The detailed response is provided below: **1. More Experiments to verify the generalization of the Dual-Diffusion.** In our manuscript, we added the Dual-Diffusion approach to two prior 2D pose detectors, RSB-Pose* and ResNet. To further verify the generalization of our method, **we extended the evaluation to include two additional methods, Epipolar_Tri and AdaFuse**, across both MHAD and H36M datasets. Additionally, **we added a transformer-based 2D pose detector, ViTPose**. The results are as follows: |Dataset|Method | 2D Pose Detector | Scale|MPJPE (mm) | BL (mm) | Sym (mm) | JDR (%)| |:-|:-|:--:|:--:|:--:|:--:|:--:|:--:| ||Tri-ViTPose|ViTPose|256|70.84|42.55|48.43|95.83| ||Dual-Diffusion-ViT|ViTPose|256|**61.02**|**37.90**|**30.09**|**95.88**| |MHAD|Epipolar_Tri|Epipolar_Tri*|256|90.73|33.67|34.21|-| ||Dual-Diffusion-Epi|Epipolar_Tri*|256|**76.42**|**27.02**|**26.42**|-| ||AdaFuse|AdaFuse*|384|70.27|36.07|30.08|83.46| ||Dual-Diffusion-Ada|AdaFuse*|384|**53.77**|**24.59**|**23.19**|**95.37**| ||||||||| ||Tri-ViTPose|ViTPose|256|41.49|18.09|20.75|93.33| ||Dual-Diffusion-ViT|ViTPose|256|**35.20**|**16.02**|**19.66**|**95.77**| |H36M|Epipolar_Tri|Epipolar_Tri*|256|41.22|20.39|20.18|-| ||Dual-Diffusion-Epi|Epipolar_Tri*|256|**37.03**|**16.90**|**18.08**|-| ||AdaFuse|AdaFuse*|384|30.27|15.23|14.36|94.25| ||Dual-Diffusion-Ada|AdaFuse*|384|**29.17**|**13.85**|**13.57**|**96.06**| The supplemented experimental results further demonstrate that Dual-Diffusion is capable of generating more accurate 2D and 3D results regardless of the 2D pose detector and input scale. We have updated the above results to Tab. 1 and 2, which are included in the PDF of the global response. **2. Discussion about the 3D uncertainty modeling.** We appreciate the suggestion to include an analysis and discussion of 3D uncertainty modeling, as it is crucial for strengthening the paper. We model the 3D pose uncertainty distribution by reconstructing from noisy 2D binocular poses and trained a Denoiser to refine the noisy 3D poses. **The hypothesis is that if the 3D uncertainty is well-modeled, the Denoiser's performance in refining both estimated 3D poses and 3D poses sampled from the uncertainty distribution should be similar.** However, we cannot define the 3D pose uncertainty with a clear formulation. To simulate the uncertainty distribution: >We first calculate the error between the 3D estimation and the 3D ground truth (GT) **along each axis in the MHAD training set, storing this as the noise set**. Then, **we add noise sampled randomly from this set** to the 3D GT along each axis. Finally, we use the Denoiser to refine both the **"GT + noise" and "estimated" 3D poses**, comparing the MPJPE results to assess the similarity and effectiveness of the uncertainty modeling. The results are illustrated as: |2D Pose Detector| dataset|3D Initial Pose|MPJPE (mm)||2D Pose Detector| dataset|3D Initial Pose|MPJPE (mm)| |:-|:--:|:--:|:--:|:-|:-|:--:|:--:|:--:| ||training|estimted|15.93|||training|estimated|10.96| |ResNet152|training|GT+noise|17.07||RSB152|training|GT+noise|11.95| ||testing|estimated|43.57|||testing|estimated|27.76| ||testing|GT+noise|17.51|||testing|GT+noise|12.46| Regardless of the 2d pose detector, **the accuracy of denoised GT+noise in both training and testing sets is all close to the denoised estimated 3D poses in the training set**. The MPJPE of denoised estimated 3D poses in the testing set is different from that in the training set. This is primarily attributed to the different 3D uncertainty distribution across these two sets. Hence, we draw the conclusion that the 3D uncertainty distribution is well-modeled. Thank you again for your constructive feedback. **We will incorporate the corresponding modifications and expansions into the revised paper.** In addition, the corresponding code will be open-source to ensure replication. If you have any further questions, please let us know. We'd be very happy to do anything we can that would be helpful in the time remaining. Thanks! --- Rebuttal 2: Title: Further Discussion with Reviewer cBDy Comment: Dear Reviewer cBDy, We sincerely appreciate the time you devoted to reviewing our manuscript and the invaluable feedback you provided. We have diligently addressed your comments and provided corresponding responses and results. We believe that these responses adequately address the concerns you raised. We would be grateful for an opportunity to discuss whether your reservations have been resolved. Should there be any aspect of our work that remains unclear, please do not hesitate to inform us. Once again, thank you for your constructive insights. Warm regards, --- Rebuttal Comment 2.1: Title: Great replies. Comment: Thanks for the great replies, which solve my concerns. Based on these valuable feedbacks, I have improve the score to WA. Best,
Rebuttal 1: Rebuttal: Dear ACs and Reviewers, We are very grateful for your time and effort in reviewing this manuscript. We value every helpful suggestion and comment. **The two most frequently concerned questions are: 1) comparative evaluation with more 2D pose detectors, and 2) comparative evaluation with Diffpose.** We would like to make more clarifications here. We have included comparisons with previous methods, such as Epipolar-Tri and AdaFuse, as well as a transformer-based 2D detector, ViTPose, in Tables 1 and 2. The parameters of modules other than the backbone have also been added. **The updated tables can be previewed in the PDF.** Our Dual-Diffusion method demonstrates superiority over DiffPose in terms of robustness to changes in 3D pose uncertainty. We have provided experimental results in the rebuttal to reviewers. **Here, we add a figure in the PDF to illustrate that the 3D uncertainty in binocular 3d HPE is more unstable than 2D uncertainty,** which will be affected by the baseline width and 3D target depth. Therefore, diffusion modeled by 2D uncertainty is advantageous and necessary. We hope these replies address all the concerns. If you have any further questions, please feel free to contact us. We will be happy to do anything we can that would be helpful in the time remaining! Best regards, Pdf: /pdf/a43e2c8b06b29d737c89f81ce671e7f842aadaf0.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction
Accept (poster)
Summary: The paper present a method for human motion prediction that take into account the environment (3d point cloud) in which the person moves. The method work with several steps: first from the past motion an area of interest is estimated and several object of interest (bed, chair...) with which the person might interact. after selecting the most likely object the method will estimate an interaction with the object (e.g sitting on the bed or lying on the bed) and plan a trajectory to reach the object. Finally with the trajectory and the estimated interaction the model generate the 3d motion from the end of the past motion to the end of the interaction. Since this is a novel task the author compare to two type of method : motion prediction and environment aware motion synthesis and outperform the state of the art on both tasks. Strengths: The paper introduce a new task of environment aware motion prediction where the environment is a static 3d point cloud. Interesting approach where the models first find the most likely item to be used then create a trajectory and finally generate the future motion based on that trajectory. The paper provides extensive and detailed experiments and ablation study on two datasets The qualitative results look very good. Despite tackling a new task the authors compare with sota methods for two related tasks. The paper is clear, well written and well detailed. Weaknesses: At least some results of the scene aware motion synthesis from the appendix should be moved to the main paper. Especially since there is no qualitative comparison with other scene aware method in the main paper. In figure 1 it is not clear that diffusion is being used. A qualitative comparison with bifu would have been interesting since its results are the best after the proposed method. Technical Quality: 4 Clarity: 4 Questions for Authors: Videos of the generated motion would have been appreciated line 261 introduces => introduced Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Limitations and impacts are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Synthesis methods move to the main paper. **A1**: We will incorporate the discussion of the synthesis method into the main body of the paper in the next revision. Specifically, Section B.1 will be moved to Section 2.3, B.2 will be positioned in a new subsection between Sections 4.2 and 4.3, and B.3 will be relocated to Section 4.4. Additionally, we will expand our discussion on the differences between tasks in the introduction and in Section 3.1, and we will elaborate on the experimental results of the synthesis method in the newly added subsection of Section 4. **Q2**: In figure 1 it is not clear that diffusion is being used. **A2**: Figure 1 showcases the diverse predictions of DiMoP3D. Diffusion model adapts better to our diverse prediction task and outperforms both cVAE and GAN in terms of performance. Its stochastic nature also enhances the stochastic factors in DiMoP3D. If you are referring to Figure 2, we detail the structure of DiMoP3D there, illustrating how diffusion is implemented within the Self-Prompted Motion Generator. We will enhance the clarity of the use of diffusion model in Figure 2 in the next version. **Q3**: Qualitative results with BiFU. **A3**: We have included a qualitative comparison with BiFU in Section-5 of the anonymous page (see Appendix. A, line-588). Notably, DiMoP3D not only generates diverse predictions but also achieves closer alignment with the ground truth than BiFU in sequences with the same targets. This accuracy is facilitated by the guidance of predicted interactive poses and trajectories. **Q4**: Videos. **A4**: Kindly refer to the anonymous page in Appendix. A (line-588). **Q5**: Typos in line 261. **A5**: We will correct the typo in the next version. [1] Zheng, et al. "Gimo: Gaze-informed human motion prediction in context." ECCV2022. --- Rebuttal Comment 1.1: Title: rating after rebuttal Comment: the rebuttal answered most of my and the other reviewers concerns, I keep my original rating. --- Reply to Comment 1.1.1: Title: Comment from authors Comment: Happy to solve your concerns, and thank you for the time and effort!
Summary: This work studies human motion prediction in 3D scenes. The proposed DiMoP3D leverages context-aware intermodal interpreter, behaviorally-consistent stochastic planner, and self-prompted motion generator to solve the task. The authors conduct experiments on GIMO and CIRCLE to demonstrate a superior performance. Strengths: - The paper is well-organized and easy to follow. - In extensive experiments, DiMoP3D demonstrates a superior performance over the baselines with analysis. Weaknesses: - I question the novelty of this work. Novelties in Context Awareness and Autonomous Intention Estimation are overclaimed. They have been studied in previous works but authors fail to provide a discussion: [29] uses path planning to perform context-aware navigation and GoalNet to estimate the end-point state. [98] uses 3D scene point to encode the scene context and gaze to estimate the intention. Still more work to supplement... I believe the novelty is very overclaimed. - The system estimates the motion of a fixed length $\Delta L$. Is it possible to not reach an end state? Taking Fig.2. as a case, how do you guarantee the human is sitting on the chair but not still walking to it after $\Delta L$? It may lead to unnature speed and motions. - How do you evaluate the diversity and natureness of motions? Technical Quality: 2 Clarity: 2 Questions for Authors: See weakness part. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: As discussed in Appendix G, the system is not as efficient as baselines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The novelty of this work. **A1**: The task of predicting stochastic human motions with real-world scene awareness is crucial for embodied applications like robotics and autonomous vehicles, enhancing their navigational systems to effectively avoid collisions by considering dynamic real-world conditions. This aspect is usually overlooked by current tasks, and we believe we are the pioneers in introducing scene perception in diversity prediction to achieve better performance in real-world applications. Besides, a novel framework DiMoP3D is proposed to tackle this challenge. Please refer to **A4 to reviewer nnKQ** for details due to the limit of space. Moreover, compared with BiFU [98]: **(1)** DiMoP3D’s scene interpreter explicitly identifies potential targets as conditional factors for motion generation, in contrast to BiFU’s basic use of PointNet which lacks crossmodal analysis of past motions and intention analysis; **(2)** Human gaze is frequently unavailable in real-world settings, yet it forms the sole basis for inferring human intentions in BiFU. This heavy reliance significently limits its practicality; **(3)** Different from human gaze which generally indicates deterministic intentions, DiMoP3D analyzes the distribution of human intentions through integrated scene-motion analysis, making it better suited for diverse prediction tasks. Overall, we will provide a more thorough discussion of our novelty and the differences between the components of DiMoP3D and existing methods in the next version. **Q2**: The system estimates the motion of a fixed length. **A2**: For scenarios where the subject reaches the goal within 5 seconds, DiMoP3D sets the subject to remain relatively static upon arrival. If targets are too far to be reached within 5 seconds, our scene interpreter would prevent their selection. In cases where the target is beyond reach within this timeframe, DiMoP3D predicts the initial 5 seconds of motion towards the goal, designating the 'floor' midway as the intended target and 'walk through' as the action of end-pose. Although this issue rarely occurs in the relatively small indoor scenes, it worths further discussion. The 'floor' is categorized as a special class of object and is segmented by the point segmentator; points from the 'floor' also contribute to the interest calculation. In Equation 4, we calculate the average interest for each object to facilitate sampling of target objects. However, this approach is not suitable for the 'floor' object, as the 'floor' object spread a large area, and the interest levels across different areas of the floor can vary significantly. To address this, we specifically discuss the selection of the 'floor' as a target. We start by calculating the probability of choosing the 'floor' with $\dfrac{\sum_{p \in \text{floor}} \exp(M[p])}{\sum_{p \in S} \exp(M[p])}$. If the 'floor' is selected, a precise target point is identified using GumbelSoftmax within the floor's point cloud. In our experiments in small indoor scenes, the probability of selecting the 'floor' as a target is below 1%; therefore, we have not detailed this in the current version but will include a comprehensive discussion in the next update. **Q3**: How do you evaluate the diversity and natureness of motions? **A3**: **For diversity**, we adopt the APD metric (Average Pairwise Distance) following [1,2,3]. It reportes the L2 distances between poses and trajectories of the generated sequences. **For natureness**, we employ the ACPD (Average Cumulated Penetration Depth) to assess scene-motion consistency and physical naturalness, along with the FID (Fréchet Inception Distance) following [4,5,6], which quantifies the discrepancy between the latent distributions of generated motions and ground truth in synthesis. Given the complexity of accurately gauging naturalness, visualization serves as a valuable tool for a more intuitive evaluation. **Q4**: System efficiency. **A4**: We argue that the baseline methods are not scene-aware as they do not process 3D scene features, which explains their faster performance. However, DiMoP3D significantly outperforms these baselines, and the trade-off in speed is justifiable given its enhanced capabilities. Additionally, integrating efficient diffusion modules [7,8] and advanced vision modules [9] holds promise for further accelerating DiMoP3D's performance. [1] Barquero, et al. "Belfusion: Latent diffusion for behavior-driven human motion prediction." ICCV2023. [2] Mao, et al. "Generating smooth pose sequences for diverse human motion prediction." ICCV2021. [3] Yuan, et all. "Dlow: Diversifying latent flows for diverse human motion prediction." ECCV2020. [4] Wang, et al. "Move as You Say Interact as You Can: Language-guided Human Motion Generation with Scene Affordance." CVPR2024. [5] Tevet, et al. "Human motion diffusion model." ICLR2023. [6] Wang, et al. "Towards diverse and natural scene-aware 3d human motion synthesis." CVPR2022. [7] Wang, et al. "Patch diffusion: Faster and more data-efficient training of diffusion models." NIPS2024. [8] Baranchuk, et al. "Label-efficient semantic segmentation with diffusion models." arXiv preprint arXiv:2112.03126. [9] Zhu, et al. "Vision mamba: Efficient visual representation learning with bidirectional state space model." arXiv preprint arXiv:2401.09417. --- Rebuttal Comment 1.1: Comment: I thank the authors' responses in their rebuttal. They have addressed my concerns about evaluation and efficiency. I have also read discussions with other reviewers and I share the same concerns about the novelty. I agree a mixture of existing techniques and the mixed task (historical conditions, scene awareness, intention estimation) can be considered as novelty, but somehow incremental. Besides, it is important to review related works with similar motivations faithfully and attribute their contributions and limitations. However, I am not very convinced by their texts. Plus the missing non-trivial details in tackling corner cases, this work is not very convincing to me and not ready for publication. Though I appreciate the authors' efforts in their rebuttal, I still suggest a major revision and can not recommend acceptance at this time. --- Rebuttal 2: Title: Comment to Reviewer wnWw from authors Comment: Dear Reviewer wnWw: Thank you for the time and effort you've invested in reviewing our manuscript. In our previous response, we addressed your concerns directly and comprehensively. We greatly appreciate your further feedback on our responses and look forward to discussing them with you! --- Rebuttal Comment 2.1: Title: Dear Reviewer wnWw Comment: Thank you for your insight to our work! We kindly remind you that the discussion deadline is within 24 hours. We believe that our rebuttal have addressed your concerns, and we are also prepared to answer any further questions you may have. With this in mind, would you consider revising your rating, or if there are further questions or concerns, can we engage in more discussion! --- Rebuttal 3: Title: Response to Reviewer wnWw by Authors Comment: Thank you for your thoughtful feedback on our rebuttal. We aim to clarify why our work represents a significant contribution to the field. **Novelty and Impact:** Our work introduces a new perspective on human motion prediction that is both task-driven and methodologically innovative. While it is true that the proposed framework integrates existing techniques, the way in which these are combined and the novel problems addressed represent a substantial advancement: - **Task-Oriented Approach:** Our work focuses on a specific challenge—predicting human motions in 3D scenes with real-world awareness, a problem that has not been adequately addressed in previous literature. We argue that *this novel task is a critical step towards enabling intelligent agents to navigate complex environments more safely and effectively*. By incorporating real-world scene information into the motion prediction process, our solution goes beyond merely predicting skeletal poses. - **Pioneering Approach: Real-World Scene Awareness.** Our work, DiMoP3D, *pioneers the integration of 3D real-world scene awareness*into the inherently stochastic task of human motion prediction. This is a fundamental shift from traditional skeleton-based methods, enhancing the predictive capabilities of AI systems in complex environments. - **Performance Gains: Enhanced Diversity and Naturalness.** Our framework achieves higher diversity and naturalness in motion predictions, as evidenced by our quantitative evaluations (APD, ACPD, and FID scores) and qualitative visualizations. These improvements are crucial for practical applications where the ability to predict a wide range of plausible human motions is paramount. **Response to Incremental Concerns:** We acknowledge that the combination of techniques may appear incremental at first glance, but we contend that the integration itself is non-trivial and leads to a significant leap forward in terms of performance and applicability (as commentator nnKQ notes). Moreover, the specific challenges addressed, **corner cases** (e.g., how to deal with a person arriving early in a prediction window, or not being able to arrive), and the **new challenges of this new task** (e.g., how to simultaneously balance the deterministic nature of the 3D scene given, as well as the stochastic nature of the human body's movements), require careful consideration and innovative solutions. All these considerations suggest that the proposed DiMoP3D possesses enough novelty to go beyond simply incremental progress, especially for this newly proposed task of motion prediction in a 3D real-world point cloud. /* **Remark:** *We respectfully request that Reviewer wnWw consider the broader implications of our work within the context of human motion prediction in realistic environments. DiMoP3D represents a significant stride in advancing the field towards more human-like AI systems capable of understanding and anticipating behavior in complex, real-world scenarios. We believe this contribution represents a substantial leap forward for the community and the practical application of embodied AI.* */ We are grateful for the opportunity to refine our work based on the feedback provided. Addressing these concerns will not only strengthen our manuscript but also emphasize the importance of DiMoP3D in advancing the field of computer vision and AI towards better human-scene understanding and interaction capabilities.
Summary: The paper introduces a novel task that incorporates real-world 3D scene information into the existing Human Motion Prediction task. The authors propose a model (DiMoP3D) that, starting from the observed motion sequence and the 3D scene, stochastically predicts the future poses and the interactions with the context; in particular, DiMoP3D infers the individual intention and the human-object interactions. Upon benchmarking on two datasets, GIMO and CIRCLE, the model surpasses all the baselines on most of the considered metrics. Strengths: The proposed task lies in the intersection of existing ones while capturing a problem setting that hasn’t been explored yet. Similar tasks either considered only the contact points within the scene but with no stochasticity in the prediction ([1,2,3]) or focused on stochastically generating human motion conditioning on the observed pose sequence and the 3D scene but fixing the goal (e.g., [4]). As such, the discussion is technically sound and original. The proposed model is convincingly designed as the role of its submodules is properly motivated and analyzed both in the method’s section and in the ablation studies. The results on CIRCLE and GIMO confirm the effectiveness of the proposal. [1] Mao, Wei, Richard I. Hartley, and Mathieu Salzmann. "Contact-aware human motion forecasting." Advances in Neural Information Processing Systems 35 (2022): 7356-7367. [2] Luca Scofano, Alessio Sampieri, Elisabeth Schiele, Edoardo De Matteis, Laura Leal-Taixé, Fabio Galasso. “Staged Contact-Aware Global Human Motion Forecasting.” BMVC 2023: 589-594 [3] Xing, Chaoyue, Wei Mao, and Miaomiao Liu. "Scene-aware Human Motion Forecasting via Mutual Distance Prediction." arXiv preprint arXiv:2310.00615 (2023). [4] Hassan, Mohamed, et al. "Stochastic scene-aware motion prediction." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Weaknesses: W1. While it is clear how their proposed task diverges from human motion synthesis in 3D scenes, human-object and scene interaction prediction (e.g., [5]), and social navigation, it is not clear enough how it differs from scene-aware 3D human motion forecasting or synthesis. [5] Xu, Sirui, et al. "Interdiff: Generating 3d human-object interactions with physics-informed diffusion." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: I acknowledge that Sec. B.1 of the supplementary extends the literature review in Sec. 2 of the main manuscript, though the discussion could benefit from a tighter and more direct comparison with similar tasks in [1,2,3,4]. Typos and minor issues with the writing: L99: forgot the full stop at the end of the sentence. L119: “intermodal insights. utilizing…” L181: “To tackle ress these” L211: “When \alpha_T is approaches 0…” L261: “...is introduces…” L304: “...by the subject en route…” Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, they included the discussion about limitations and societal impact in Section G of the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: How it differs from scene-aware 3D human motion forecasting or synthesis, and comparisons with similar tasks. **A1**: **For scene-aware motion forecasting**: Traditional scene-aware 3D human motion forecasting [5,6,7] typically predicts a single sequence based on observation. In contrast, our task models a distribution of potential movements, enabling diverse future scenarios to emerge from a single historical motion. This capability is essential in autonomous vehicles and robotics to effectively anticipate and avoid potential collisions. **For scene-aware motion synthesis**: Synthesis methods [8,9,10] generate motions from arbitrary starting positions without considering historical context, while our task requires each generated sequence to be consistent with observed dynamics. Our method contributes to the robotics and navigation, while synthesis methods are often geared towards multimedia and gaming with less constraints. The mentioned [1,2,3] focus on deterministic prediction, while [4] deals with motion synthesis. We will enhance the discussion of these similar tasks in the next version of our paper. **Q2**: Typos. **A2**: We will correct these typos in the next version. [1] Mao, et al. "Contact-aware human motion forecasting." NIPS2022. [2] Luca, et al. “Staged Contact-Aware Global Human Motion Forecasting.” BMVC 2023. [3] Xing, et al. "Scene-aware Human Motion Forecasting via Mutual Distance Prediction." arXiv:2310.00615. [4] Hassan, Mohamed, et al. "Stochastic scene-aware motion prediction." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [5] Zheng, et al. "Gimo: Gaze-informed human motion prediction in context." ECCV2022. [6] Araújo, et al. "Circle: Capture in rich contextual environments." CVPR2023. [7] Mao, et al. "Contact-aware human motion forecasting." NIPS2022. [8] Wang, et al. "Humanise: Language-conditioned human motion generation in 3d scenes." NIPS2022. [9] Hassan, et al. "Stochastic scene-aware motion prediction." ICCV2021. [10] Huang, et al. "Diffusion-based generation, optimization, and planning in 3d scenes." CVPR2023. --- Rebuttal Comment 1.1: Comment: I thank the authors and the other Reviewers for their insights. I acknowledge that the authors' rebuttal answers my concerns and I appreciate the authors' commitment to further improve the clarity of the discussion of the submitted manuscript. I understand the points that Reviewer nnKQ made: adding more comparisons with methods from similar tasks could clarify more the difference between the proposed task and model and existing works, and why the latter cannot be naively repurposed. However, I disagree with the comment regarding the limited novelty; to the best of my knowledge, the proposed task is indeed part of the paper's novelty and, even if it shares similarities with other tasks, it differs from them likewise the task of human motion forecasting differs from stochastic human motion forecasting or human motion forecasting with contact points. In my opinion, leveraging existing methodologies to tackle a novel problem shouldn't necessarily be regarded as a weakness, since the selection and the adaptation of such methods, if wisely devised, are already a contribution and a starting point to face the new task. --- Reply to Comment 1.1.1: Title: Reply to Reviewer p6AA from Authors Comment: Thank you for your insightful feedback and for recognizing the core ideas and motivations of our work. We are honored to address your concerns and appreciate your alignment with our approach. This article introduces a novel and crucial task in the fields of robotic and autonomous navigation. DiMoP3D is proposed as a solution to this task accordingly, addressing challenges that existing methods cannot overcome. If there are any further issues, we are more than willing to discuss them!
Summary: This paper introduces a task that is scene-aware diverse human motion prediction. To be specific, given a 3D scene and history motion, this task aims to predict diverse future human motions that are consistent with the scene and history motion. This paper also proposed a model, DiMoP3D, to tackle this task. This model includes a Context-Aware Intermodal Interpreter to encode the scene and find the goal object, a Behaviorally-Consistent Stochastic Planner to generate the end pose and an obstacle-free trajectory, and a diffusion model to generate the motions on the path. Strengths: 1. This paper tackles an interesting task. 2. The numerical results are good. Weaknesses: 1. As my understanding, if the author follows the original data split of CIRCLE and GIMO, there are no multiple possible future motions for history motion. How to make sure the model can generate diverse future motions trained on such datasets? 2. Please compare the proposed task with [9, 29]. These two works have already tackled the task of scene-aware stochastic prediction. Why do the authors claim they proposed a new task? 3. The author claims that the predictions must adhere to deterministic constraints, including physical consistency. The proposed method only considers the obstacle-free trajectory, but there is no constraint for the generated poses, e.g., there could be penetration with the ground for the walking sequence. 4. What is the insight and novelty of DiMoP3D to make it different from the previous paper? Most techniques are similar to previous work. The scene encoder is from [71], the prediction of diverse goals is similar to [9], the HOI estimation is similar to [1, 2], the trajectory planning is similar to [77, 82, 29], the motion generator is similar to [87]. 5. Some parts are hard to read, e.g., the introduction. 6. For the 3D human motion prediction/generation tasks, I prefer to see a video rather than only some figures for the visualization part. [1] Zhao, Kaifeng, et al. "Compositional human-scene interaction synthesis with semantic control." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [2] Hassan, Mohamed, et al. "Populating 3D scenes by learning human-scene interaction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. Technical Quality: 2 Clarity: 2 Questions for Authors: The question is the same as Weaknesses. Overall, the paper seems to propose a valuable task and has good results. Addressing the limitations mentioned above would significantly strengthen the work and I am positive to change the score. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: How to generate diverse motions training on such datasets? **A1**: Similar to [1,2] utilizing single\_history-to-single\_future data, DiMoP3D is designed to model the posterior distribution of potential motions $P(\hat{X}_{L:L+\Delta L} | X\_{1:L}, S)$, rather than a deterministic mapping function $F(\textbf{X}\_{1:L},S)=\textbf{X}\_{L+1:L+\Delta L}$ during training. This approach enables DiMoP3D to predict multiple future sequences by repeatedly sampling from this distribution, facilitating a single-to-multiple mapping. **Q2**: Other works in scene-aware stochastic prediction. **A2**: We argue that [9, 29] focus on simplistic, non-realistic scene representations, while our task utilizes real-captured, unstructured, and complex 3D scene point clouds. The methods in [9, 29] cannot adequately address the challenges presented by our configuration, making a direct comparison infeasible. **Compare with [9]**: This work processes the scene using a single 2D image and represents observed sequences with 2D skeletons, which significantly limits interaction with real-world 3D scenes. **Compare to task in [29]**: This is a synthesis method rather than prediction. It employs a predefined target object and does not infer intended targets accordingly. Furthermore, while it considers scene features, it lacks the capability for comprehensive perception and understanding necessary for real-world scene-aware motion predictions. **Q3**: Physical consistency. **A3**: Our emphasis on physical consistency prioritizes the crossmodal rationality and realism of motions in real-world scenes. While foot penetration is a recognized challenge, it has traditionally been addressed using various techniques [3,4]. After integrating PhysDiff [4] into DiMoP3D, we observed a significant reduction in Average Contact Penetration Depth (ACPD) from 0.98 to 0.56 and a decrease in foot penetration rate from 13.8% to 1.2% in the GIMO dataset. However, this integration relies on external libraries like IsaacGym, which substantially increase computational overhead. Although these methods are complementary to our approach, focusing on effectively avoiding foot penetration remains a promising direction for future research. **Q4**: What is the insight and novelty of DiMoP3D? **A4**: The proposed task of real-world scene-aware stochastic motion prediction is novel yet important to the field of autonomious vechiles and robotics. Stochastic prediction is essential for navigation applications to prevent collisions, yet existing methods fail to account for real-world scenes, yielding suboptimal results. To the best of our knowledge, we are the first to propose this task and develop DiMoP3D to address it. The components in DiMoP3D significantly diverges from existing approaches in several key ways: **Scene interpreter**: Our interpreter innovatively proposes to explicitly predict potential target objects through crossmodal analysis of past motion and the scene. The scene encoder [71] processes the point cloud to map it into the motion feature space, with target estimation remaining the focus of our interpreter. Besides, although [9] predicts diverse goals, it only identifies pixel points from simple 2D images, failing to recognize interactive objects with low accuracy, and thus does not effectively capture motion intentions in real 3D scenes. **Trajectory planning**: We plan stochastic obstacle-free trajectories towards targets in real-world scenes, addressing shortcomings of previous methods: **(1)** [29] uses traditional A* that generates deterministic trajectories, contrasting with our diverse prediction strategy. **(2)** [77] treats the scene as a binary map, where any position or point with a height greater than zero is considered an obstacle and thus blocked. In contrast, we recognize that some obstacles in real-world scenes are partially traversable and develope a numerically-continuous scene map (refer to Table 7). **(3)** [82] applies a module-based controller to navigate, which struggles with direct adaptation to real-world 3D point clouds and is challenging to fine-tune due to data limitations. **Human estimator**: While our estimator is inspired by previous research, its application is distinct. In our model, it serves as a deterministic semantic constraint, guiding the motion generator to comply with the scene context. This integration is part of our approach to tackling the challenge of scene-aware diverse motion prediction. **Motion generator**: The main insight of our motion generator is to harmonize the stochasticity of human movements with the deterministic constraints of the real-world scenes. Unlike [87], which only considers deterministic constraints between the contact joint (e.g., hand) and a single object, our approach accounts for whole-body constraints within the entire real-world 3D scene, guided by a self-prompted stochastic factor. **Q5**: Some parts are hard to read. **A5**: The introduction comprises five paragraphs: (1) the importance of stochastic motion prediction; (2) the limited consideration of real-world scenes in existing stochastic prediction methods; (3) the key challenges associated with scene-aware stochastic motion prediction; (4) the structure of our DiMoP3D, designed to address these challenges; (5) our contributions. We will refine the structure of the article to enhance readability and clarity in the next version. Welcome for any additional feedback! **Q6**: Video samples. **A6**: Kindly refer to the anonymous page in Appendix. A (line-588). [1] Barquero, et al. "Belfusion: Latent diffusion for behavior-driven human motion prediction." ICCV2023. [2] Yuan, et al. "Dlow: Diversifying latent flows for diverse human motion prediction." ECCV2020. [3] Liu, et al. "Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning." TOG2018. [4] Ye, et al. "Physdiff: Physics-guided human motion diffusion model." ICCV2023. --- Rebuttal Comment 1.1: Comment: Thanks for the author's clarification! Q1: I understand the authors are trying to model the posterior distribution of potential motions. However, for the stochastic human motion prediction task, the posterior distribution is $P(X_{L:L+\Delta{L}}|X_{1:L})$ and there are many motions with similar history but different futures in the dataset, allowing them to learn a good distribution. But for the scene-aware stochastic human motion prediction task, the posterior distribution is $P(X_{L:L+\Delta{L}}|X_{1:L}, S)$ and there are very few motions with similar history but different futures in the same scene. Q2: For [29], what is the key difference? And why does it lack such a capability? Q3: Thank you for your experiments. However, the scene in this task is considerably more complex than in PhysDiff. The generated motion is not constrained solely by foot placement; other factors must also be considered. For example, there might be cases where your generated motion results in penetration with objects like tables or chairs when sitting down. Walking and penetrating the ground is just one scenario among many. Q4: In my opinion, scene-aware stochastic motion prediction is distinctively conditioned on historical motion, setting it apart from other scene-aware synthesis methods. Therefore, it is equally important to compare it with these scene-aware synthesis methods rather than solely focusing on stochastic motion prediction. While the proposed framework is well-structured, it builds upon and integrates existing methodologies, leading to a limited novelty. The primary contribution seems to lie in the Scene Interpreter, which predicts potential target objects from 3D point clouds, as opposed to the previous work [9] that predicted targets from images. Q5: Thanks for your clarification. Q6: There are only comparisons with BelFusion. A comparison with scene-aware methods is also important. Q7: I acknowledge that your results will likely surpass those of BelFusion since you incorporate scene information, whereas BelFusion does not. However, I'm more curious about the comparison between your method and other scene-aware approaches. For instance, you could easily adapt language-conditioned scene-aware synthesis methods by replacing the condition from language to history motion. What insights do you have that demonstrate your method's superiority over previous scene-aware methods? In what cases do previous methods fall short, and how does your approach address these limitations? --- Rebuttal 2: Title: Reply to Reviewer nnKQ from Authors Comment: **A1**: We appreciate the reviewer's point highlighting the challenging of our task as compared to traditional stochastic prediction. DiMoP3D overcomes this by decomposing the problem into target prediction, interactive pose estimation, path planning, and self-prompted motion generation that handle different aspects of the scene-aware diverse motion prediction. Thus, $ P(\hat{X}_{L:L+\Delta L} | X\_{1:L}, S) $ could be represented as the product of terms: $P(O_g | X\_{1:L}, S)$, $P(\hat{X}\_{L+\Delta L} | X\_{1:L}, S, O_g)$, and $P(\hat{X}\_{L:L+\Delta L} | X\_{1:L}, \hat{X}\_{L+\Delta L}, \hat{\tau}^{plan}, S)$. **(1)** Our interpreter learns the distribution rules of potential targets within scenes $P(O_g | X\_{1:L}, S)$. For instance, a subject traversing an aisle is modeled to likely interact with objects along that path, while a subject facing a chair is likely to sit. This module capitalizes on patterns in the relationship between past motion and scene context to predict potential targets, effectively modeling the target distribution despite the limited frequency of specific motion histories. **(2)** Pose estimator models how interactive poses vary with different objects within the scene $P(\hat{X}\_{L+\Delta L} | X\_{1:L}, S, O_g)$. while similar motions in different scenes might lead to different interactions, the core interaction with objects (like sitting or picking) remains consistent across contexts. **(3)** Motion generator models coherent human movement based on learned stochastic factors $P(\hat{X}\_{L:L+\Delta L} | X\_{1:L}, \hat{X}\_{L+\Delta L}, \hat{\tau}^{plan}, S)$. The distribution of walking movements and the coherent switching of different poses is rich in our scene-aware data, making it easy to learn. **A2**: **(1)** They ultilize pre-defined targets, meaning that the target object and location is fixed and known before the prediction begins, while we have to predict the distribution of potential targets dynamically by analyzing past motion and the scene context. **(2)** The scene-awareness of [29] is only based on the 3D voxel grid (8x8x8) of the pre-defined target and planning deterministic paths. In the contrary, DiMoP3D engages with the entire 3D scene, incorporating detailed, real-world environmental data. **(3)** They operate within a single, manually synthetic scene, where all objects' shapes, positions and features are known to the system, eliminating the need for the module to learn. In real-world applications, the ground truth of the scene and object is unaccessible, limiting its use. However, our DiMoP3D is designed to recoginize and analyze potential targets in a variety of real-world scenes. **A3**: Incorporating the Signed Distance Field (SDF) of the human mesh and scene point cloud into the reinforcement learning module’s reward function may help avoiding issues like object penetration. While this approach is promising, the complexity of integrating it with real-world 3D scene data is substantial. Given that it is not the core focus of this work, we leave it in future research, which is a promising direction. **A4**: For comparison to scene-aware synthesis, kindly refer to Appendix. B. We emphasise that DiMoP3D significantly adapts existing methods to the novel task of scene-aware diverse motion prediction. This is not merely an incremental improvement; it addresses a previously unexplored task. We believe our method represents a significant leap in the field, pioneering a new and promising track for motion prediction in real-world scenarios. **A5**: Welcome for any additional feedback! **A6**: We emphasize that our task diverges from scene-aware synthesis and deterministic prediction. Nonetheless, for the sake of completeness, we compare our approach with scene-aware synthesis in Appendix B. Additionally, we provide a supplemental visualization comparison with scene-aware BiFU in Section 5 on our anonymous page (line 588 in Appendix A). **A7** Even with the last observed frame and embedding of past motion incorporated, synthesis methods still struggle to predict future motions coherently (especially switching between observation and prediction). In time-series predictive tasks, history should not be treated as a conditional factor but as a foundational element that future sequences must adhere to. Treating historical information as a condition ignores the temporal correlation between history and the future, while using it as a fundamental element means that the predicted results must strictly maintain consistency with historical information. This is the underlying reason why using historical information as a condition for LLM (or the synthesis methods) cannot achieve high fidelity prediction. Appendix B provides specific qualitative experiments, in which the discontinuity between historical and predicted actions in the comparative method (AffordMotion, CVPR'24) provides evidence for our statement. **We are looking forward for further discussions!** --- Rebuttal 3: Comment: My main concerns, specifically regarding Q4 and Q7, have not yet been addressed. For Q4, the novelty and insight of the proposed method remain underwhelming. 'While the proposed framework is well-structured, it builds upon and integrates existing methodologies, leading to a limited novelty. The primary contribution seems to lie in the Scene Interpreter, which predicts potential target objects from 3D point clouds, as opposed to the previous work [9] that predicted targets from images.' For Q7, although the results are good, it lack a compelling argument. Your reasoning is that "In time-series predictive tasks, history should not be treated as a conditional factor but as a foundational element that future sequences must adhere to. Treating historical information as a condition overlooks the temporal correlation between history and the future, whereas using it as a fundamental element ensures that the predicted results strictly maintain consistency with historical information." However, this appears to be just a different design choice for the diffusion process. Diffusion-based scene-aware synthesis methods could also adopt this approach, such as [1, 2], to predict future frames by infilling ground truth past motion into the denoised motion at each diffusion step. On the other hand, Interdiff [3] observes that encoding the historical motion as a condition leads to better performance. I do not see any limitation here. Please review my questions carefully. I believe the key difference between scene-aware human motion synthesis methods and stochastic scene-aware human motion prediction lies in the condition. Please provide a robust argument regarding this distinction for your proposed methods. [1] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. PhysDiff: Physics-guided human motion diffusion model. In ICCV, 2023. [2] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. MotionDiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022. 3, 4 [3] Xu, Sirui, et al. "Interdiff: Generating 3d human-object interactions with physics-informed diffusion." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. --- Rebuttal 4: Title: Reply to Reviewer nnKQ from Authors Comment: **A7**: Thank you for your insights. While treating historical motion as a condition has proven effective in certain synthesis methods, it poses significant limitations in the context of diverse prediction: **(1) Lack of crossmodal analysis between past motion and scene**. Even when incorporating and infilling the observed sequence into diffusion-based synthesis methods, they often lack an explicit crossmodal analysis between past motion and the scene. This analysis is crucial for inferring potential human intentions and goes beyond traditional scene-motion attention in synthesis methods. **(2) Difficulty in reconstructing observation**. To ensure consistency between the prediction and observation, the predictor in [3] is forced to predict observation as well (train_diffusion_skeleton.py, line-190 in official github repo of InterDiff). However, accurately reconstructing observation is also challenging. During inference, the predicted observation part (0.33sec out of 1.07sec in total in InterDiff) is in fact apart from ground truth. In our long-term prediction task with a 3-second observation phase, this method would struggle to reconstruct long-term observations accurately, which negatively impacts performance. **(3) Motion incoherence**. For methods that not infilling observation during inference, there may be issues in motion incoherence between observed and predicted sequences, as noticed by [4,5]. As a remedial measure, [4] proposes adversarial training to force the module to reconstruct the sequence (including observation) from the past motion embedding; [5] employs a post-refinement method to align the sequences better. These methods are more complex and introducing additional computational overhead. **(4) Posterior collapse**. [5] finds that encoding past motion as a condition can cause posterior collapse during joint training, as strong decoders may ignore the learned latent variables. DiMoP3D overcomes this by decomposing the task into sub-tasks, making each sub-module easy to learn, ensuring both high fidelity and diversity even with limited data. **(5) Scene-awareness**. There are limited scene-aware synthesis methods, most of them (including [1,2]) lack scene awareness. The SoTA diffusion-based scene-aware synthesis method AffordMotion(CVPR24) [6], discussed in Appendix B, has limitations: **(a) past motion embedding** causes significant motion incoherence (see Appendix B.3), and **(b) observation infilling** may lead to significant posterior collapse. We will incorporates more detailed dicussion and experimental comparison with synthesis methods in the next version! **A4**: The primary novelty of our work lies in introducing the task of diverse motion prediction in real-world 3D scenarios, a challenge not addressed by current methods, as confirmed by Reviewer p6AA. Our approach deconstructs the challenge and adapts advanced methods from the community to fit this unique context. Significant enhancements have been made to existing technologies, including Scene Interpreter (crossmodal analysis), Path Planner (efficient stochastic navigation), and Motion Generator (self-prompted on the end-pose and trajectory). We kindly request that you reevaluate our contributions and novelty from this perspective, considering our comprehensive approach to addressing the complexities of diverse scene-aware motion prediction. [1] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. PhysDiff: Physics-guided human motion diffusion model. In ICCV, 2023. [2] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. MotionDiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022. 3, 4 [3] Xu, Sirui, et al. "Interdiff: Generating 3d human-object interactions with physics-informed diffusion." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [4] Barquero, German, Sergio Escalera, and Cristina Palmero. "Belfusion: Latent diffusion for behavior-driven human motion prediction." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [5] Wei, Dong, et al. "Human joint kinematics diffusion-refinement for stochastic motion prediction." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 5. 2023. [6] Wang, Zan, et al. "Move as You Say Interact as You Can: Language-guided Human Motion Generation with Scene Affordance." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 4.1: Title: Dear Reviewer nnKQ Comment: Thank you for your interest in our work, your prompt responses, and your thorough engagement with us! We kindly remind you that the discussion deadline is within 24 hours. We trust that our comprehensive responses regarding synthesis tasks and methods have addressed your concerns. We are also prepared to answer any further questions you may have. With this in mind, would you consider revising your rating, or if there are further questions or concerns, can we engage in more discussion!
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Robustly overfitting latents for flexible neural image compression
Accept (poster)
Summary: The authors build on previous autoencoder-based neural image compression methods. Earlier works have shown that the quantized latent representations output by an end-to-end trained encoder for any given image are suboptimal for a decoder. Concretely, for a fixed set of decoder weights, one can usually find latent representations that yield a better rate-distortion than the representations output by the encoder. Hence, previous works have proposed optimizing the rate-distortion loss of an image using gradient descent for a fixed set of decoder weights. Since the latent representations we wish to optimize are discrete, previous works have developed "soft-quantization" approaches based on the Gumbel-Softmax trick, which introduces a temperature parameter $t$, such that for $t > 0$ the optimization problem becomes continuous, while $t \to 0$ recovers hard quantization. One starts with a temperature $t \gg 0$, which is then annealed towards $0$ during the optimization. Specifically, the authors build on stochastic Gumbel annealing (SGA, Yang et al., 2020), which uses the function $f_\tau(x) = -\tanh^{-1}(x) / \tau$ to compute the soft quantization log-probabilities for a given temperature $\tau$. In the present paper, the authors propose three alternatives for $f_\tau$ and conduct experiments analogous to the ones in Yang et al. (2020). ## References Yang, Y., Bamler, R., & Mandt, S. (2020). Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33, 573-584. Strengths: Unfortunately, I couldn't find any particular strengths worth highlighting. Weaknesses: Unless I have completely misunderstood the paper, the authors' motivation for the paper is incorrect. Concretely, for the function $\exp(-\tanh^{-1}(x))$ used by SGA, the authors claim on Line 155 that "The problem is that the gradients tend to infinity when the function approaches the limits of $0$ and $1$." While this is true for the limit point $1$, this does not hold for $0$. This is a crucial point, as the derivative tending to $\infty$ at $1$ is desirable, as it prevents rounding to the wrong value, while if this occurred $0$, it would cause optimization difficulties. However, it can be easily verified that the function I mentioned above doesn't have an infinite derivative at $0$ by plotting it. In fact, I am not sure what function the authors have plotted in Figure 1a under the name "atanh", as the graph of $\tanh^{-1}$ is not centrally symmetric. In fact, the training instability of SGA arises because the derivative at $0$ becomes infinite as the temperature $\tau$ is annealed to $0$. All of the authors' proposed alternatives have the same issue, meaning they do not tackle the very problem they set out to solve. The experiments are also not insightful and do not demonstrate that any of the proposed modifications lead to a significant improvement. Furthermore, the paper's writing should be significantly improved. The authors review many unnecessary, basic details (such as the first two paragraphs of the introduction or the societal impact section, which should be cut altogether). The notation should also be cleared up; for example, the authors introduce duplicate or equivalent notations for the same objects: $\pi$ in Eq (3), $p(y)$ in Eq (4) for the soft quantization probabilities, $v_L$ and $w$ for the fractional part of $v$, and $n$ for the inverse temperature $\tau$. Finally, the font sizes should be increased in each figure as they are currently unreadable without zooming in significantly. Technical Quality: 3 Clarity: 2 Questions for Authors: n/a Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you reviewer Vgvs for your comments. We think there is a misunderstanding, the reviewer believes that the probability function used in [1] only contains infinite gradients at 1. However, the reviewer may have overlooked the **normalization** from the Gumbel softmax that causes infinite gradients at 0. See the proof below, this will be included in the appendix. Recall that the probability given by a 2-class softmax is defined as: $$ K(v) = \frac{e^{f(v)}}{e^{f(v)} + e^{g(v)}}, $$ where $f(v) = -\text{atanh}(v)$ and $g(v)= -\text{atanh}(1-v)$. We will study the softmax for the first class $0$, since the softmax is symmetric this also holds for the second class. The problem is that the gradients of the function $K(v)$ will tend to $-\infty$ for both $v \to 1$ (as agreed by the reviewer) but also for $v \to 0$. Here we show that the gradients also diverge for $v \to 0$, via the normalization with the term $g(v)$. First, take the derivative to $v$: $$ \frac{d K(v)}{dv} = \frac{dK(v)}{df(v)} \cdot\frac{df(v)}{dv} + \frac{dK(v)}{dg(v)} \cdot\frac{dg(v)}{dv}, $$ where $\frac{dK(v)}{df(v)} = K(v)(1-K(v))$ and $\frac{dK(v)}{dg(v)} = -K(v)\frac{e^{g(v)}}{e^{f(v)} + e^{g(v)}}$. Recall that $\frac{d\text{atanh}(v)}{dv} = \frac{1}{1 - v^2}$ therefore $\frac{df(v)}{dv} = - \frac{1}{1-v^2}$ and $\frac{dg(v)}{dv} = \frac{1}{1-(1-v)^2 }$. Plugging this in and computing $\frac{dK(v)}{dg(v)}$ gives us: $$ \frac{dK(v)}{dv}= K(v)(1-K(v))\left(- \frac{1}{1-v^2}\right) -K(v) \cdot \frac{e^{g(v)}}{e^{f(v)} + e^{g(v)}} \cdot \frac{1}{1-(1-v)^2 }. $$ Taking the limit to $0$, (recall that $\lim_{v \to 0} K(v) = 1, \lim_{v \to 0} e^{f(v)} = 1$ and $\lim_{v \to 0} e^{g(v)} = 0$) allows the following simplifications: $$ \lim_{v \to 0} 0 \cdot \left(-\frac{1}{1-0^2} \right) - 1 \cdot \frac{e^{g(v)}}{1 + 0} \cdot \frac{1}{1-(1-v)^2 } $$ For simplicity we substitute $q=1-v$ (when $q \to 1$, then $v \to 0$) which will result in the following: $$ \lim_{v \to 0} -e^{-\text{atanh}(1 - v)} \cdot \frac{1}{1-(1 - v)^2} = \lim_{q \to 1} -e^{-\text{atanh}(q)} \cdot \frac{1}{1-q^2 }, $$ Recall, $-\text{atanh}(q) = -\frac{1}{2}\ln{\frac{1+q}{1-q}}$, so $e^{-\text{atanh}(q)} = 1 / \sqrt{\frac{1 + q}{1 - q}}$ thus: $$ -\lim_{q \to 1} \sqrt{\frac{1 - q}{1 + 1}} \cdot \frac{1}{1-q^2 } = -\lim_{q \to 1} \sqrt{\frac{1}{2}\frac{(1 - q)}{(1-q^2)^2} } $$ Since $\frac{1}{2}$ is a constant and $\lim_{x \to \infty} \sqrt{x} = \infty$ the final step is to simplify and solve: $$ -\lim_{q \to 1} \sqrt{\frac{(1 - q)}{(1-q^2)^2}} = -\lim_{q \to 1}\sqrt{\frac{-1}{(q-1)(q+1)^2}} = -\infty. $$ This concludes the proof that the gradients tend to $-\infty$ for $v \to 0$. Further, the reviewer mentions that "I am not sure what function the authors have plotted in Figure 1a under the name 'atanh' ". Again, this is after normalization from the softmax. Regarding the temperature of 0 in the Gumbel, this is a separate issue and the temperature is therefore annealed to values above 0, both in [1] and our paper. Regarding the experiments, if the reviewer has more suggestions, we are happy to include them. The experiments do show issues with the sensitivity of [1]. Regarding notations: as we mention in the paper, directly above Eq. (3): $\log \pi$ represents unnormalized log probabilities. In contrast $p(y)$ is normalized. We will define Eq. (3) with normalization and with $p(y)$ for clarity. Regarding $v_L, v_R, w$, these are different $v_L, v_R \in [0, 1]$ whereas $w \in [-0.5, 0.5]$. $w$ is more helpful to define the 3-class because it has a center class. It is fair to point out that $n$ and $ \tau$ can be fused into a new temperature in this function and we will point this out. However, following [1] ([https://github.com/mandt-lab/improving-inference-for-neural-image-compression/blob/6a97aba5b17c70847465f5865bd9e2bf58ccbe73/sga.py](https://github.com/mandt-lab/improving-inference-for-neural-image-compression/blob/6a97aba5b17c70847465f5865bd9e2bf58ccbe73/sga.py) #L118) the temperature is used twice, once on the logits and once again in the Gumbel softmax. However, $n$ is only applied to logits to scale the function and not to modify the Gumbel temperature. [1] Yang, Y., Bamler, R., \& Mandt, S. (2020). Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33, 573-584. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. Their response helped me realise that I misunderstood not only their paper but also the original SGA paper (I was familiar with it but never implemented it myself). I never realised that the original SGA paper had such a glaring suboptimality. As I now understand, the `atanh` reparameterisation proposed by the authors of the SGA paper is superfluous, as the authors did not realise that $v - \lfloor v \rfloor$ is already in the correct range. Could the authors comment on their understanding of why the `atanh` reparameterisation could have been chosen? After the authors' rebuttal, I also re-read the paper and now believe that they make a valuable contribution. I have updated my score accordingly. My main concerns that I would like to see the authors address in the camera-ready version of the papers are: - To avoid colossal misunderstandings like mine above, I would like to ask the authors to update the manuscript and be crystal clear about what function produces the probabilities and what is being plotted in Fig 1a. - Improve the Figures by making them vector instead of raster graphics and improving their readability by increasing their tick and legend font sizes, and increasing the line widths. --- Reply to Comment 1.1.1: Comment: Thank you reviewer Vgvs for your reply and for updating your score. To answer your question regarding why the $\text{atanh}$ reparametrisation could have been chosen. This may be due to the fact that in the unnormalized log space, the $\text{atanh}$ looks like an appropriate function that satisfies some useful, and non-trivial properties, as mentioned in their paper. For example, the function $-\text{atanh}(v)$ being strictly decreasing on $(0,1)$ guarantees that the closer some value $v$ gets to an integer, the higher the probability it gets rounded to that $v$. Thank you for pointing out the following points, we will adjust these to avoid misunderstanding in the future: - We will add a clear explanation on how the probabilities of the functions in Fig 1a) are created. - We will improve the readability of each figure accordingly. We hope this answers your question. Feel free to reach out for any further comments.
Summary: This paper proposes a technique which takes a neural compression method based on a VAE and runs an optimization using the same loss function as the original method was trained with, but instead of back-propagating into the weights of the network, the gradients accumulate into the quantized latents. Due to the fact that the latents are quantized, the authors propose 3 alternatives on how to handle this non-trivial issue. The results presented show that the method is improving RD (as one would expect) over the original latents, but the computational cost per image is rather high. Strengths: - The paper presents some relatively straight-forward approximations which should be easy to implement in practice - Strong results, and good ablations in the appendix (I actually had a bunch of comments I ended up deleting because the answers were in the appendix, so I think it's a very detailed paper from multiple respects) Weaknesses: - I am rather confused by the motivation behind 3-class rounding. - I would have liked to see a discussion on whether it's necessary to apply this algorithm to the hyper latents or not (technically the hyper latents can be computed from the latents, but I am unclear whether they don't suffer from the same problem as the latents) - The fact that there are 6 distinct possible methods that actually get presented is a bit confusing (2-class + 3-class) * 3 variants. The appendix does go into ablating over all of these, which is great, but it's still a bit confusing. - The method is acknowledged to be very expensive, and the gains are modest (though the authors do mention that this is a generic method that can be applied to any base architecture). Technical Quality: 3 Clarity: 4 Questions for Authors: - What made you choose the 3 probability functions you chose? Technically there could be many functions $p$ which would fit the requirements. - Given the ablations, I saw that the 3-class rounding offers a modest improvement over 2-class, while being quite a bit more confusing to understand. Are there any other advantages besides "better RD performance" of 3-class rounding (i.e., do you need fewer iterations before convergence, etc)? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations of their method adequately. The only comment here would be that the energy impact of using such a method on a global scale would be quite large, while the reduction in bits for the amount of energy used to achieve this would be... questionable, unless one were to chase the absolute limits of compression. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you reviewer 5M8z for your comments and questions. The reviewer thinks the method is straightforward and easy to implement. _To answer the weaknesses:_ Concerning point 1: the motivation behind the 3-class rounding. Besides our scientific curiosity, we did find that 3-class rounding improved performance. When there is a limited budget, it may be worth it to use the 3-class, since at earlier iterations, this method converges a bit faster than its 2-class counterparts. We will add a loss plot in the paper that shows the difference between the 2- versus 3-class rounding. The added complexity may not be worth it, which is why we see the 3-class as an extended version of the main functions. Therefore, we always recommend the 2-class linear as a starting point that does not require much tuning, we will highlight this in the paper. We will include a loss plot over the iterations in the paper that shows the difference between the 2- versus 3-class rounding. Regarding point 2, whether it is necessary to apply this algorithm to the hyper latents: We performed a run using the linear 2-class approach where we optimized only the latents (and not the hyper latents). We found a loss of 0.6233 when optimizing only the latents, and a loss of 0.6220 when optimizing both latents and hyper latents. We will add a discussion of this in the paper. Regarding point 3: for the six distinct methods, we wanted to be precise and show the possible options and their possible equivalences (such as SSL being able to interpolate between the other functions, and 3-class having settings that match the 2-class). Nevertheless, we understand this can be overwhelming and we hope that the recommendation from the point above (for the 2-class linear as a starting point) helps readers to make a quick and robust choice. _To answer the questions:_ Regarding point 1, the main reason behind the three probability functions: We tried various methods while looking into the probability space. The main reason behind the linear version is the fact that it is the only function with constant gradients which is also the most robust choice, the cosine version is approximately mirrored across the diagonal of the $-\text{atanh}(x)$ which shows that it is more stable compared to the $-\text{atanh}(x)$, and the reason behind the SSL is that it is a function that can interpolate between all possible functions and can be tuned to find the best possible performance when necessary. We will add this in the paper. Regarding point 2: additional advantages of 3-class rounding, see our answer for weaknesses 1. We will add a loss plot over iterations, where the 3-class approach indeed converges faster. We hope this clarifies your points. Feel free to reach out for any further comments.
Summary: In this paper, the authors proposed the method to improve the compression performance of the pre-trained end to end neural image compression methods by fine-tuning the latents of each image at the test time with the rate-distortion loss. They proposed three class rounding method, named SGA+ which is an extension of stochastic Gumbel annealing (SGA). The authors demonstrated that their method on the two pre-trained models with two different datasets and showed that their method was able to achieve better R-D performance than the existing methods. Strengths: 1. The proposed SGA+ method has the potential to round the latent's to slightly far way quantization grid. This might be optimal with respect to the rate-distortion loss function. The modeling of the probability for three classes are well formulated. 2. modeling of the probabilities using three methods to overcome the limitations of the gradients in the corners of the $atanh$ function 2. Interpolation based function using sigmoid scaled logit to model the probabilities Weaknesses: 1. The authors described the existing SGA method uses $atanh$ function to model the probabilities and as a result the gradients to tend infinity at the borders, and they proposed different ways to counter this problem by different functions. The instability of the gradients occurs only when the discretization gap is very minimal which might be the case at the higher BPP points in the R-D curve, so in these regions proposed method should provide higher gain. It is not evident from the results whether the proposed are able to achieve this? 2. The authors did not provide the B-D rate gain of their method with respect to the existing methods to quantify the average compression gain. 3. Analysis of what is percentage of the latents that were assigned into each category of three classes is missing to see the advantage of the proposed method. 4. Results are missing with recent end to end neural image compression methods. 5. The authors missed few works in the related section [a] M. Balcilar et. al, "Latent-Shift: Gradient of Entropy Helps Neural Codecs", 2023 IEEE International Conference on Image Processing (ICIP). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Whether the gradients problem of the SGA at the corners cannot be solved with tuning the temperature parameter? 2. The results of STE method is not consistent with iterations at 500 and 2000. At 500 iterations, the STE is above the base model and at 2000 iterations is below the base model. what is the reason for this behavior? Is the baselines are tunned properly? It seems there is a divergence in the optimization. 3. The analysis of BD rate gain [b] is important to quantify the average gain of the proposed method. 4. what are the differences and similarities between the Trellis Coded Quantization (TCQ) and your proposed method? In TCQ, also you have the possibility to have rounding the more classes. There are few works which make TCQ differentiable in the deep learning framework [c] [b] G. Bjontegaard, "Calculation of average PSNR differences between RD-curves", VCEG-M33, Austin, TX, USA, April 2001. [c] Deep Learning-based Image Compression with Trellis Coded Quantization https://arxiv.org/pdf/2001.09417 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, limitations are addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you reviewer N6DE for your comments and questions. _To answer the weaknesses:_ Regarding 1, whether the method might work better for higher BPP points in the R-D curve: For certain experiments this seems to be the case (e.g. on Kodak), for other experiments not. Due to the complexity of optimization, it is very difficult to predict which type of model suffers more from these spikes. E.g. it could also be the case that a lower BPP model would be more sensitive to gradient issues. Regarding 2: Thank you for pointing out that we did not provide the B-D rate gain of our method with respect to the existing methods. We will include these results in the attached PDF file. Regarding 3: The reviewer suggests an analysis of what percentage of the latents were assigned to the 3-classes. Therefore, we ran an extra experiment for the best settings of the 3-class extended version of the linear with $r=0.98$ and $n=2.5$. At the first iteration, the probability is distributed as follows: $p(y=\lfloor v \rceil) = 0.9329$, for $p(y=\lfloor v \rceil -1)= 0.0312$, and $p(y=\lfloor v \rceil +1)=0.0359$. This indicates that the class probabilities are approximately $3.12\%$ for class $-1$ and $3.6 \%$ for class $+1$. This is a lot when taking into account that many samples are taken for a large dimensional latent. Regarding 4: Missing results with recent end-to-end neural image compression methods. We choose two different models to show the effect of latent optimization. The first model is trained from scratch (see Appendix), which is similar to the one trained in [1] to make a fair comparison. The other model is a pre-trained model for which we also showed similar improvements. We acknowledge that it would be interesting to see the method on other models as well. Regarding 5: Missing a few works in the related section. Thank you for pointing this out, we will add [2] in the paper. _To answer the questions:_ Regarding 1: the gradients problem at the corners, by tuning the temperature parameter. This is difficult. The problem is that when the temperature increases (which traditionally stabilizes) the diverging gradient of the atanh logits becomes worse. Lowering the temperature mitigates the gradient issue but makes the Gumbel sampling itself more unstable. Experimentally, we demonstrate this in Table 1, where atanh logits over a wide range of temperatures lead to worse performance. Regarding 2: the reason for the behavior of the STE method. We tuned the STE method, just as the other baselines. However, the STE method is the only method that has a lot of trouble converging. Even with smaller learning rates, the method performed poorly. The instability of training is not only observed by us, but is something that is also mentioned in [1] and [4]. In [1] they have tried to overcome this following [4], by changing the gradient of the backward pass to (clipped)ReLU instead of using the identity. However, this did not work. We will highlight this clarification in the paper. Regarding 3: The missing B-D rate gain, we will add this in the paper and included the results in the attached PDF file, as mentioned above. Regarding 4: Trellis-coded quantization (TCQ) is related to our approach in that regard. They also consider a softmax over different quantization levels, although they consider all possible quantization levels. However, while we maintain a scalar quantization approach, TCQ is a method to more efficiently perform vector quantization (VQ). Their rounding approach is also not stochastic, but deterministic based on the optimal Trellis path. We will add a discussion in the paper concerning [3]. We hope this clarifies your questions. Feel free to reach out for further comments. [1] Yang, Y., et al., "Improving inference for neural image compression", 2020 Advances in Neural Information Processing Systems (NeurIPS) [2] M. Balcilar et. al, "Latent-Shift: Gradient of Entropy Helps Neural Codecs", 2023 IEEE International Conference on Image Processing (ICIP) [3] Deep Learning-based Image Compression with Trellis Coded Quantization [https://arxiv.org/pdf/2001.09417](https://arxiv.org/pdf/2001.09417) [4] Yin, P., et al., "Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets", 2019 International Conference on Learning Representations (ICLR)
null
null
Rebuttal 1: Rebuttal: Contains an additional experiment requested by reviewer N6DE. Pdf: /pdf/38981bc9ca8d21ad7aab210b2d0af4db9a360ee3.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Adaptive Depth Networks with Skippable Sub-Paths
Accept (poster)
Summary: The authors propose an easy-to-train supernet, in which you can then adaptively change the network depth by disabling some blocks (skippable sub-paths) and using skip connections instead, while always keeping mandatory sub-paths. They found that the length of mandatory and the skippable sub-paths should be the same, for maximum efficiency in terms of accuracy and computational complexity. During training there are 2 forward-backward passes: 1. Training the whole supernet/teacher (FFFF) network as usually for task objectives (ImageNet classificaiton) 2. Self-distillation intermidiate feature maps from supernet/teacher (FFFF) to the base/student (TTTT) network using KL-divergence-loss (where is base/student (TTTT) is a sub-network of supernet/teacher (FFFF) ) Switchable LayerNorm or BatchNorm operators are used in mandatory sub-paths. This is the first approach to adaptive networks that provides a general principle and theoretic basis for supporting predictable depth adaptation with minimal performance degradation. Strengths: The resulting base/students (TTTT) network achieve higher accuracy than a separately trained networks with the same flops, depth and structure. The resulting supernet/teacher (FFFF) network achieves higher accuracy than a separately trained networks with the same flops, depth and structure. In Figure 4 (b), the proposed approach lies on the Pareto optimality curve, outperforming all other state-of-the-art approaches in terms of accuracy and computational complexity (Flops), except for SpViT-Swin-Ti, which is more accurate and has fewer flops. The approach is very simple and in most cases it shows state of the art results. Controled experiments for different variants of the proposed approach are presented. In Figure 5: (b) this is shown that the inference latecny of the proposed approach drops more than that of the S-ResNet50 approach with the same reduction in computational complexity. It is tested on several widely used networks ResNet50, Swin-T and ViT-b/32, and show that it can be applied to both CNNs and transformers with minimal training effort. Compared to methods of distillation, quantization, etc. this method is more flexible when, after training at test time, you can quickly increase the speed (at the expense of a decrease in accuracy) or increase the accuracy (at the expense of a decrease in speed) in a certain range, because at test time, sub-networks with various depths can be selected instantly from a single network. Weaknesses: In Figure 4 (b), SpViT-Swin-Ti is more accurate and has fewer flops than the proposed approach Swin-T-ADN. There is no Out-of-Domain (OoD) comparison when very different datasets are used for training and testing, which shows the robustness of the model and the extent to which it can be applied to real-world problems. Most of the charts and tables compare the computational complexity (Flops), with the exception of Figure 5(b), while Latency (ms) is more important for real-world problems. It would be better to have as many charts as possible with Out-of-Domain accuracy vs Latency. Technical Quality: 3 Clarity: 3 Questions for Authors: Does your approach achieve higher accuracy than intermediate feature maps distillation: if you train the Supernet first, then freeze it and distill intermediate feature maps from the frozen Supernet to the separate randomly initialized Basenet? So maybe your approach is not only more flexible than intermediate feature maps distillation, but also more accurate? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Training and testing are carried out on different splits of the same dataset, which is an In-domain comparison, while for real-life applications Out-of-domain is much more important, when training and testing on very different datasets, where completely different approaches may be the best. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weaknesses a)** *In Figure 4 (b), SpViT-Swin-Ti is more accurate and has fewer flops than the proposed approach Swin-T-ADN.* Thank you for your positive feedback and valuable suggestion. While our work outperforms many state-of-the-art approaches, some efficient networks, notably SpViT-Swin-Ti, surpass some of our sub-networks on the Pareto front. SpViT-Swin-Ti excels because it better exploits transformer-specific features. For instance, SpViT-Swin-Ti dynamically prunes tokens, significantly reducing computation by eliminating less important tokens. In contrast, our depth adaptation approach is architecture-agnostic, making it applicable to both CNNs and transformers. We believe that our approach and SpViT-Swin-Ti are complementary. For example, SpViT-Swin-Ti could be trained to achieve better accuracy-efficiency trade-offs by skipping some encoder/decoder blocks, as suggested by our work. * **Weaknesses b)** *There is no Out-of-Domain (OoD) comparison ... It would be better to have as many charts as possible with Out-of-Domain accuracy vs Latency.* Out-of-domain (OoD) generalization in deep learning is crucial for real-world applications, as deployment data often differs from training data. However, achieving effective OoD generalization is challenging, and we are not aware of standardized benchmarks. We greatly appreciate the reviewer’s insights and would be grateful if references or benchmarks could be provided. This would significantly assist us in enhancing our approach for real-world applications. * **Question)** *Does your approach achieve higher accuracy than intermediate feature maps distillation: if you train the Supernet first, then freeze it and distill intermediate feature maps from the frozen Supernet to the separate randomly initialized Basenet? So maybe your approach is not only more flexible than intermediate feature maps distillation, but also more accurate?* The reviewer asked about the effectiveness of applying knowledge distillation (KD) from a super-net, such as ResNet50-ADN(FFFF), to a separate ResNet50-Base model. Firstly, our observations indicate that the naïve application of KD is not effective when using large datasets like ImageNet. In Figure 4(b), we illustrate the impact of applying KD to equivalent networks. Contrary to common belief, the naïve application of KD does not enhance performance. In fact, following the same training schedule (150 epochs for ResNets), KD results in worse performance compared to ordinary training using target labels. For instance, ResNet50-Base (KD individual), which is trained using pytorch pretrained ResNet50 as a teacher, achieves only 73.8% Acc@1, which is 1.2% lower than ResNet50-Base trained without KD. This finding aligns with prior work, such as [52][53][54], which also indicates that achieving positive results with KD on ImageNet is very challenging. To obtain positive outcomes with KD, an extended training schedule and a right combination of teacher/student and optimization techniques are required. Next, we conducted an additional experiment to investigate the effectiveness of exploiting ResNet50-ADN(FFFF), which was trained using our self-distillation strategy, as a teacher to train a separate ResNet50-Base model. The results show that ResNet50-Base trained using ResNet50-ADN(FFFF) as a teacher achieves 76.30% Acc@1, which is 0.2% higher than our subnetwork ResNet50-ADN (TTTT). This result demonstrates that the right combination of teacher and student is crucial for effective knowledge distillation. We conjecture that since our self-distillation process enforces ResNet50-ADN (FFFF) to produce intermediate features (and logits) compatible with ResNet50-ADN (TTTT), the knowledge was more effectively transferred to ResNet50-Base, which has the same architecture to ResNet50-ADN(TTTT). This is an interesting result and requires further investigation. The following table summarizes the evaluation results. | Teacher | Student | Student Acc@1 | |---|---|---:| |ResNet50 (Pytorch pretrained) | ResNet50-Base | 73.8% | |ResNet50-ADN(FFFF) | ResNet50-Base | 76.3% | --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the answers and clarifications. Based on the answers, I leave the article rating 7: Accept. "The results show that ResNet50-Base trained using ResNet50-ADN(FFFF) as a teacher achieves 76.30% Acc@1, which is 0.2% higher than our subnetwork ResNet50-ADN (TTTT). " Did you use only output logits/pseudo-labels or also feature map tensors from **intermediate layers** (for example, after each downsampling layer) for distillation in this case? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback and questions. In this experiment, we explored two scenarios for knowledge distillation (KD) of ResNet50-Base: (1) using only logits, and (2) using both logits and intermediate features. As shown in the table below, both scenarios yield slightly better performance compared to our self-distillation method. While the difference between (1) and (2) is marginal, there may be a chance to enhance the performance of (2) with further hyper-parameter tuning, such as applying different KD temperatures for different layers. However, we have not yet explored this path. This finding highlights the importance of selecting the right teacher-student combination for effective knowledge distillation. For instance, KD using ResNet50 (PyTorch pretrained) as a teacher did not yield positive results. | Teacher | Student | Student Acc@1 | Note | |-----------------------------------------|------------------|---------------:|-------:| | ResNet50(FFFF) | ResNet50(TTTT) | 76.1 | our approach | | ResNet50 (PyTorch pretrained), logits only | ResNet50-Base | 73.8 | | | (1) ResNet50-ADN(FFFF), logits only | ResNet50-Base | **76.3** | | | (2) ResNet50-ADN(FFFF), logits + intermediate features | ResNet50-Base | 76.2 | | Thank you again for your comments, and please let us know if you have any further questions.
Summary: The submission presents an approach to adaptive depth networks, where each hierarchical residual stage is divided into two sub-paths and they are trained to acquire different properties: the first sub-path is essential for hierarchical feature learning, the second one is trained to refine the learned features and minimize performance degradation even if it is skipped. In addition, a formal reason of why the proposed training method can reduce overall prediction errors while minimizing the impact of skipping sub-paths is also provided. Experimental results on ImageNet classification are provided to prove the effectiveness. Strengths: There are 2 major innovations in this submission: 1. Training Sub-Paths with Self-Distillation 2. Skip-Aware Batch/Layer Normalization In the ablation study, both of these two innovations show improvements. The strength of this submission is that it does not only present the ideas but also analytically proved the effectiveness. Weaknesses: The overall contribution of the paper is solid. The only thing I would suggest is to expend the experiments to 1 more low-level vision task, for example Single-Image Super Resolution, image denoising and so on. Since usually the decoder part of such low-level vision task is computational heavy. Technical Quality: 3 Clarity: 3 Questions for Authors: I don't have specific questions to this submission. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No problem in limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses)** *The overall contribution of the paper is solid. The only thing I would suggest is to expend the experiments to 1 more low-level vision task, for example Single-Image Super Resolution, image denoising and so on. Since usually the decoder part of such low-level vision task is computational heavy.* Thank you for your positive feedback and valuable suggestion. We appreciate your recognition of the solid contribution of our paper. We understand the importance of demonstrating the effectiveness of our approach across various vision tasks. Although we are not very familiar with Single-Image Super Resolution and image denoising, we acknowledge that our approach can be highly effective for computationally intensive tasks. Given the short rebuttal period, it is challenging to include the experimental results at this stage. However, we will strive to incorporate experiments on these tasks in the final version of the paper. We believe this will provide a more comprehensive evaluation of our method’s performance and computational efficiency. Thank you once again for your insightful feedback. We look forward to enhancing the overall impact of our paper with these additional experiments. --- Rebuttal Comment 1.1: Comment: Fair. I don't require you to add new comparisons in this short rebuttal period. I tend to accept the submission even without it. --- Reply to Comment 1.1.1: Comment: Thank you once again for your valuable suggestions and understanding.
Summary: The paper presents adaptive depth networks. During training, the network is trained like a “super-net” that contains all the paths. During inference, the skippable networks can be skipped in devices that have limited resources. The whole framework is useful in real-world applications that require various accuracy-efficiency trade-offs. Strengths: - The motivation is clear. Proposing such a framework is useful in real-world applications. - The authors conduct extensive experiments, including CNNs and Transformers. - The experiments show that the proposed method is effective. Even on the TTTT setting, the performance drop is acceptable. Weaknesses: - There seem to be several solutions with ideas similar to those in this paper, such as [13]. Although there may be differences in the approaches, all of these methods can achieve the same goals in this paper. In my opinion, it would be better to include a comprehensive analysis of the differences and performance. If the proposed method could stand out in performance among the methods, it would make the proposed method very competitive among these methods. - A recent study [R1] found that with longer schedules, the ResNet-50 model can achieve about 80% accuracy on ImageNet. Considering that we can choose to give smaller models a longer training schedule to achieve better results, is there a chance to use a longer schedule to make the Base model achieve better results and avoid using an adaptive network? [R1] ResNet strikes back: An improved training procedure in timm. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the second point in weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness a)** *There seem to be several solutions with ideas similar to those in this paper, such as [13]. ... it would be better to include a comprehensive analysis of the differences and performance. If the proposed method could stand out in performance among the methods, it would make the proposed method very competitive among these methods.* Thank you for your insightful and encouraging suggestion. We focused our approach on vision tasks due to the simplicity of the model architectures and training steps, which effectively highlight the advantages of our proposed method. However, as you rightly pointed out, the field of NLP has seen significant advancements recently, and there are some notable works such as [13] that shares the same goal. For broader impact, we need to compare our work to those works in NLP. Our method, which is applicable to transformer-based models in vision tasks, can indeed be extended to NLP models such as BERT and GPT. However, NLP models typically require more sophisticated training steps, including pretraining and fine-tuning, as well as substantial computational resources. We plan to explore these applications in a follow-up paper, where we will also investigate transformer-specific adaptation techniques. * **Question)** *A recent study [R1] found that with longer schedules, the ResNet-50 model can achieve about 80% accuracy on ImageNet. Considering that we can choose to give smaller models a longer training schedule to achieve better results, is there a chance to use a longer schedule to make the Base model achieve better results and avoid using an adaptive network?* As noted in [R1], the performance of ResNet can be enhanced by employing a longer training schedule and recent augmentation techniques, such as CutMix. Our ResNet50-ADN model benefits from these recent training techniques as well. Following table shows the results when we applied the PyTorch v2 training script (https://github.com/pytorch/vision/issues/3995) to train ResNet50-ADN and individual networks. Using the PyTorch v2 training script, ResNet50-ADN (FFFF) and ResNet50-ADN (TTTT) achieve 80.44% and 78.78% top-1 accuracy on ImageNet-1K, respectively. The equivalent individual networks, ResNet50 and ResNet50-Base, achieve 80.44% and 78.17% top-1 accuracy, respectively. While ResNet50 (FFFF) and ResNet50 perform equally well, our smallest sub-network ResNet50 (TTTT) outperforms the equivalent ResNet50-Base by 0.62%. This is the result without active hyperparameter tuning, indicating that there is a chance for further performance improvement for our models. These results demonstrate that our approach can be effectively combined with recent training techniques. | | Acc@1 | |---|---| |ResNet50-ADN (FFFF) | 80.44%| |ResNet50-ADN (TTTT) | 78.79%| || |ResNet50 (individual) | 80.44%| |ResNet50-Base (individual) | 78.17%| [R1] ResNet strikes back: An improved training procedure in timm.
Summary: The paper provides a training methodology to develop small deployable subnetworks (also high performing) when training a single large network. The key claim of this paper is that due to their innovative training techniques the smaller subnetworks learns better feature and proves the point with reasonable baselines. Strengths: Some of the key strengths of this paper is written below. a) The results provided in this paper looks strong with a lot of baselines. b) The depth adaptation analysis in sec 3 looks a bit hand wavy but tries to provide some prospective, which in general is a good practice. c) The writing and presentation of this paper is very clear and concise. Weaknesses: Some major weaknesses of this paper are below. a) The claim made in the paper " To the authro's knowledge, this is the first approach ..." is a bit of an overclaim. In prior CNN literature there are some works that performs subnetwork distillation. b) In sec 3.2 the authors claim that h_{base} learns compact representation. I would request the authors to quantity this more concretely. c) The results in Figure 4 a) are a bit tricky to evaluate as author's models are trained with distillation and the baseline models are trained with regular training. How do we know if the high accuracy is due to distillation or due to the proposed technique. The baseline has to be fixed. d) More baseline comparisons are required with other techniques such as Matformer (https://arxiv.org/abs/2310.07707), Inheritune (https://arxiv.org/abs/2404.08634) etc. Technical Quality: 3 Clarity: 4 Questions for Authors: Can this method work in medium size LLMs where distillation could be tricky? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the authors have tried to address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness a)** *The claim ... is a bit of an overclaim.* Thank you for your positive feedback and valuable suggestion. We appreciate the opportunity to clarify our claims. We acknowledge that there have been prior works in the CNN literature that perform sub-network distillation. Our intention was to highlight the effectiveness of our self-distillation strategy that focuses on training sub-paths, not sub-networks. To address your concern, we will revise the statement to more accurately reflect the novelty of our work in the context of prior studies. Revised statement: *“To the best of our knowledge, while prior CNN literature has explored sub-network distillation for adaptive networks, this approach uniquely provides a principle for training sub-paths for predictable depth adaptation. This principle allows us to avoid typical exhaustive training of target sub-networks and instead instantly construct sub-networks of varying depths from specifically trained sub-paths.”* We hope this revision clarifies our contribution and addresses your concern. * **Weakness b)** *Authors claim...$h_{base}$ learns compact representation. I would request the authors to quantity this more concretely.* We mention the *compactness* of $h_{\text{base}}$ because sub-networks, such as ResNet50-ADN(TTTT), achieve higher classification performance (e.g., by 1.1%) than equivalent ResNet50-Base, as shown in Figure 4-(a). We hypothesize that this performance improvement results from compressing the knowledge from $h_{\text{super}}$ to $h_{\text{base}}$ through the self-distillation process. Quantifying the compactness of representation is challenging, but we might attempt it by measuring the *cosine similarity* between feature representations $h_{\text{super}}$ and $h_{\text{base}}$. We can infer that the greater the similarity, the more knowledge has been transferred from $h_{\text{super}}$ to $h_{\text{base}}$. The following table shows the cosine similarity between $h_{\text{base}}$ and $h_{\text{super}}$ measured during forward passes of 1000 ImageNet validation data. At every residual stage, our approach demonstrates much higher similarity between $h_{\text{base}}$ and $h_{\text{super}}$, implying that more knowledge has been transferred to $h_{\text{base}}$. | | Stage 1 | Stage 2| Stage 3 | Stage 4 | |---|---:|---:|---:|---:| | ResNet50-ADN (ours) | 0.98 | 0.96| 0.81 | 0.87| | ResNet50 (Pytorch pretrained) | 0.91 | 0.82| 0.67 | 0.81| Additionally, in Figure 8 (Appendix B.2), we visualize $h_{\text{base}}$ and $h_{\text{super}}$ using Grad-CAM. These visualized images also show high similarity between $h_{\text{base}}$ and $h_{\text{super}}$. * **Weakness c)** *...How do we know if the high accuracy is due to distillation or due to the proposed technique....* In Figure 4(b), we illustrate the impact of applying knowledge distillation (KD) to equivalent networks. For instance, ResNet50 (KD individual) represents the performance when equivalent individual networks are trained with KD, using a PyTorch pretrained ResNet50 as the teacher network. Contrary to common belief, the naïve application of KD does not enhance performance. In fact, following the same training schedule (150 epochs for ResNets), KD results in worse performance compared to ordinary training using target labels. This finding aligns with prior work, such as [52][53][54], which also indicates that achieving positive results with KD on ImageNet is very challenging. To obtain positive outcomes with KD, an extended training schedule, a right combination of teacher/student and optimization techniques are required. For example, [52] achieves state-of-the-art ResNet50 performance with KD on ImageNet by employing a 1200-epoch training schedule. In contrast, our ResNet50-ADN trained with the proposed self-distillation strategy consistently achieves better performance than counterpart ResNets. This demonstrates that the high performance of adaptive depth networks does not simply come from distillation effect. | | Acc@1 | |---|---:| |ResNet50-ADN (FFFF) | 77.6 | |ResNet50-ADN (TTTT) | 76.1 | | | | |ResNet50 (individual) | 76.7 | |ResNet50-Base (individual) | 75.0 | | | | |ResNet50 (KD individual) | 75.1 | |ResNet50-Base (KD individual) | 73.8 | * **Weakness d)** *More baseline comparisons are required ... such as Matformer, Inheritune* Thank you for suggesting related works. We reviewed both papers and found that Matformer shares a similar goal with our approach. Both Matformer and our method train a universal model from which smaller sub-networks can be extracted without additional training. We believe that Matformer’s nested FFN architecture could complement our work by providing more fine-grained adaptation of FLOPs and parameters within transformer blocks. However, the Matformer paper does not include detailed performance for sub-networks, such as FLOPs versus accuracy, and the source code is not available to reproduce the results. Therefore, it is challenging to include Matformer as a baseline. Consequently, we will compare Matformer in the Related Work section. * **Question)** *Can this method work in medium size LLMs where distillation could be tricky?* We have not applied our approach to large language models (LLMs) as we currently lack the resources to train them. However, our results demonstrate that our depth adaptation approach is effective for Vision Transformers (ViTs), which utilize transformer encoders. Although not included in the paper, we recently applied our approach to DETR [R1], an object detection network with transformer decoders, and achieved depth adaptation without any loss of mean Average Precision (mAP). While we have not yet had the opportunity to apply our method directly to LLMs, we believe it can be effective since LLMs also consist of transformer encoders/decoders. [R1] Carion et al., “End-to-End Object Detection with Transformers”, ECCV 2020. --- Rebuttal 2: Title: Thank you Authors for the Rebuttal Comment: a) The authors' has just updated the statement from subnetwork to subpath. But haven't differentiated it why these two are conceptually different. I have highlighted cause "later layers distilling knowledge to initial layers or a subset of layers" is not novel. Please clarify how these are conceptually different. b) If the compactness of representation is difficult to prove then authors should refrain from making such claims. I could not understand the table; a self contained caption is needed. What are the values? what are the stages? how are they trained? how the evaluation is done? A layerwise confusion matrix would have been a better substitute to establish this correlation. c) I do not agree; KD has shown great results with Imagenet-1K and Imagenet-21K for instance look at DEIT [1] and TinyVIT [2]. I guess the author's are comparing with their version of KD. I request the authors to find the SOTA KD for such a setting. I acknowledge the table shown by the authors but I would request the authors to cite papers for their KD baselines. d) I acknowledge the Matformer discussion but the authors didn't say anything about the Inheritune paper. Question- Fair!! such an experiment would take time and rebuttal period is too short. Overall, this is a good work but some of my concerns that I have raised remain unanswered/ambiguously answered. I would retain my score. Again I re-iterate this is a good work but loose ends needs to be tightened with reasonable baselines and claims. [1] https://arxiv.org/abs/2012.12877 [2] https://arxiv.org/abs/2207.10666 --- Rebuttal 3: Comment: * **Comments a) and b)** We appreciate your insight. After considering your feedback carefully, we realized that some of our initial statements might be misleading. Specifically, our initial claim that ‘$h_{base}$ learns a compact representation from $h_{super}$’ can be misleading. Since, in our approach, every sub-network shares parameters and $h_{super}$ is not fixed, the knowledge transfer is not uni-directional from $h_{super}$ to $h_{base}$, but rather bidirectional. As demonstrated in Section 3.3, the goal of our self-distillation strategy is to encourage $h_{base}$ and $h_{super}$ to learn similar feature distributions, rather than unidirectional knowledge distillation. Since $h_{base}$ and $h_{super}$ have similar distributions, the selected layers of every residual stage become skippable with minimal performance loss. In the revision, we will remove the claim that is hard to quantify and potentially misleading. In light of this, we will also rephrase our contribution as follows: *Our approach uniquely introduces a principle for training selected sub-paths to be skippable with minimal performance loss. This principle allows us to avoid typical exhaustive training of target sub-networks and instead instantly construct sub-networks of varying depths from specifically trained sub-paths.* * **Comment c)** *I do not agree; KD has shown great results with Imagenet-1k and ImageNet-21K...* We appreciate your feedback and agree that knowledge distillation (KD) is a crucial technique for developing efficient networks. Previous works, such as DEIT [1] and TinyVIT [2], have demonstrated its effectiveness. In Figure 4-b, we compare our work with KD to illustrate that our positive results are not solely due to distillation effect. To ensure a fair comparison, we tried to maintain identical training settings for both our approach and KD. Our objective was not to claim that our method is superior to KD in general. In the revision, we will include efficient KD approaches, such as DEIT [1] and TinyVIT [2], in Table 1 as state-of-the-art (SOTA) baselines. * **Comment d)** *... but the authors didn't say anything about the Inheritune paper.* Inheritune shares some similarities with our approach as it leverages a few transformer blocks from a larger language model (LM) to train smaller models. However, unlike our method, Inheritune trains separate smaller models using the larger LM, making it not directly comparable to our approach. Additionally, a quantitative comparison between Inheritune and our method is not feasible since they are applied to different domains. We may consider Inheritune as a baseline if our work is extended to large language models (LLMs). * **Final Comment)** *Again I re-iterate this is a good work but loose ends needs to be tightened with reasonable baselines and claims.* Thank you once again for your valuable suggestions and comments. They have given us an excellent opportunity to reflect on our work, considering both its strengths and weaknesses. We will update our revision based on your feedback.
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely thank all reviewers for their positive feedback and constructive suggestions. We have made every effort to address each question and suggestion in detail. Specifically, we conducted three additional experiments to respond to the reviewers’ questions and suggestions. 1. **Quantifying compactness of $h_{base}$**: To quantify the compactness of $h_{base}$, we measured *cosine similarity* between $h_{base}$ and $h_{super}$ at every stage. The result shows that our models manifest higher similarity between two feature representations. Through this, we can infer that more knowledge was transferred from $h_{super}$ to $h_{base}$ with our self-distillation strategy. 2. **Effect of a longer training schedulep**: One reviewer asked whether training a smaller network, such as ResNet50-Base, with a long training schedule [R1] could achieve better results and avoid the need for an adaptive network. To address this question, we applied the PyTorch v2 training script (https://github.com/pytorch/vision/issues/3995) to train ResNet50-ADN and individual networks. The results show that ResNet50-ADN(FFFF) and ResNet50-ADN(TTTT) achieve 80.44% and 78.78% top-1 accuracy on ImageNet-1K, respectively. The equivalent individual networks, ResNet50 and ResNet50-Base, achieve 80.44% and 78.17% top-1 accuracy, respectively. While ResNet50 (FFFF) and ResNet50 perform equally well, our smallest sub-network ResNet50 (TTTT) outperforms the equivalent ResNet50-Base by 0.62%. These results demonstrate that our ResNet50-ADN model benefits from these recent training recipes as well. 3. **Effectiveness of ResNet50-ADN(FFFF) as a teacher**: We investigated the effectiveness of using ResNet50-ADN(FFFF) as a teacher to train a separate ResNet50-Base model. The results show that ResNet50-Base trained with ResNet50-ADN(FFFF) as a teacher achieves higher accuracy than our subnetwork ResNet50-ADN(TTTT). In contrast, applying knowledge distillation (KD) to train ResNet50-Base using a vanilla ResNet50 as a teacher does not yield positive outcomes. We conjecture that since our self-distillation process enforces ResNet50-ADN(FFFF) to produce features compatible with ResNet50-ADN(TTTT), the knowledge was more effectively transferred to ResNet50-Base. Despite those efforts, due to the short rebuttal period, some suggestions remain for follow-up work. In particular, we received valuable suggestions about applying our approach to different tasks and domains. We recognize their importance for broader impact and will try to accommodate these suggestions in the final version of the paper and in follow-up works. We deeply appreciate the reviewers’ efforts and insights. Best regards, [R1] Wirghtma, et al, “ResNet strikes back”, 2021
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle
Accept (oral)
Summary: The paper introduces a reasoning framework called Decompose-Analyze-Rethink (DeAR) for enhancing the reasoning capabilities of large language models (LLMs). DeAR mimics human cognitive reasoning by decomposing complex problems into simpler sub-problems using a Reasoning Tree structure, analyzing these sub-problems independently, and rethinking the answers in light of new insights from sub-problem solutions. This iterative cycle allows for dynamic adjustments and error corrections in the reasoning process. The proposed framework shows promising results across multiple reasoning datasets. Strengths: 1) The paper makes solid progress on improving how LLMs solve complex problems, an important area in AI research. 2) The paper is well-organzied and the proposed method is well-explained. The entire idea is reasonable and well-aligned with human thinking. 3) The experiments are sound. The proposed method is thoroughly tested on different types of complex problems and the performance improvement over SOTA methods (e.g., CoT, ToT and GoT) is significant. Weaknesses: 1) The process for obtaining decomposition demonstrations in the logic heuristics lacks a detailed explanation. 2) As mentioned in Section 2.2, there are also other works that explore problem decomposition in LLMs. Lacking further discussion or evaluations on how their work differs from existing paradigms of problem decomposition may limit the technical contribution of the paper. 3) The experimental design could be enhanced by providing a more detailed analysis. For example, the effectiveness of the “self-check” mechanism is not well evaluated. The authors may consider showing the error rates of the generated rationales and demonstrate how the “self-check” stage contributes to reducing these errors. 4) The paper could benefit from improvements in presentation. For example: a) Table 4 is incorrectly referenced within the text, b) a typo in Algorithm 1 (stgt;). Technical Quality: 4 Clarity: 3 Questions for Authors: Please address the concerns outlined in Weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: No negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your affirmation of the contribution of our paper. For your concerns: **Q1**: The process for obtaining decomposition demonstrations in the logic heuristics lacks a detailed explanation. **A1**: Thank you for your comments. We have provided a detailed explanation for the obtaining of question decomposition demonstrations and examples of prompts for the Decomposition Stage in **Appendix B.1**. We use a BERT encoder to transform target question Q and human-annotated question decomposition demonstrations into vector representations. Then we use cosine similarity to select the top K (K=3 in our setting) similar demonstrations as logic heuristics. **Q2**: Lacking further discussion or evaluations on question decomposition. **A2**: Thank you for your insightful comments. Other methods that use question decomposition to solve complex problems typically decompose the original question into sub-questions through a simple prompting method and then solve them step by step, such as the least-to-most approach. In contrast, the DeAR framework we propose not only uses logic heuristics during the **Decompose** stage to enhance the logic of the question decomposition but also provides more refined planning and updating of the problem-solving process during the **Analyze** and **Rethink** stages. This further ensures the reliability of the problem-solving process, preventing the spread of errors. Additionally, the reasoning tree generated by DeAR makes the reasoning process more interpretable. **Q3**: The effectiveness of the “self-check” mechanism is not well evaluated. **A3**: Thank you for your insightful comments. Here, we verify the effectiveness of the self-check method by comparing the predict accuracy of DeAR and DeAR(w/o self-check) on the ScienceQA dataset. DeAR(w/o self-check) refers to the version where the self-check part is removed during the Analyze stage, while the rest of the implementation process is identical. The experimental results are shown in the following table, from which it can be seen that under different backbones, DeAR has higher predict accuracy, demonstrating the necessity of the self-check method for correcting errors. | | | ScienceQA| | | ---- | ---- | ---- | ---- | | | GPT3.5|LLaMA2-7B|ChatGLM3-6B| |**DeAR w/o self-check**| 82.76 | 69.44 | 50.35 | |**DeAR**| 83.68 | 70.57 | 51.08 | **Q4**: The paper could benefit from improvements in presentation. **A4**: Thank you very much for your suggestions. We will carefully review and correct any writing errors. --- Rebuttal 2: Title: Thank You for Any Valuable Feedback Comment: Thank you for your insightful feedback once again. We hope that our response addresses your concerns and questions. As our author-reviewer discussion nears its end, we'd appreciate knowing if your concerns are resolved. We are open for any further discussion if needed. --- Rebuttal Comment 2.1: Comment: Thank you for your response. I do not have further concerns and I will keep my positive score.
Summary: The paper proposes DeAR-prompting (decompose, analyze, rethink) as a new prompting paradigm. The framework basically consists of decomposing the original question into subquestions, answering the subquestions and analyzing the answers and potentially rethinking answers to earlier questions based on the new answers to correct mistakes. The approach is evaluated experimentally and shows significant improvement over ToT and GoT prompting on three benchmarks. Strengths: I find the idea very intuitive, well motivated and mostly well presented. The experiments indicate that DeAR improves the state-of-the-art and this seems intuitively plausible. Weaknesses: From my perspective, there are some weaknesses, but most of them I wouldn't consider as serious issues, but rather as starting points for future work. - one problem is decomposing the question. This step can probably itself be improved by different prompting strategies. In B1, I was indeed surprised by the first decomposition example. It seems to me that there is nothing that suggests that the Pantheon is a Mausoleum or that it is reserved for citicens of a particular country. It seems to me that the example is much more than just a decomposition as it involves a lot of Background Knowledge that may or may not be available. The second and third example seem much more natural. - another problem is evaluating the answers. It seems very naive to ask the LLM for a score. While some People claim that LLMs can give meaningful quantitative evaluations, there is also a lot of evidence to the contrary. Given the nature of LLMs, it seems to me that there is a good chance that the LLM will just return a score that frequently occured in "similar contexts" during training and is not particularly meaningful. The idea of applying voting methods for this step sounds more convincing to me. - the paper currently argues that particular stages are important because they do not exist in other frameworks (e.g., "the improvements over ToT highlight the advantage of Decompose stage"). It would be more convincing to do an ablation study. - there are quite a few typos and grammatical problems in the paper. It would be good to apply a spell checker. Just two examples: ** line 114: an novel -> a novel ** line 257: Graph-of-Thoughtss -> Graph-of-Thoughts? - finally, a minor philosophical point that will not affect my evaluation: personally, I am not a fan of the whole ANN-vs-human discussion. A neuron in an ANN is just a numerical parameter, a human neuron is a biological cell, which can itself be seen as a primitive life form with its own metabolism. Perhaps something intelligent will evolve from ANNs, but comparing them to biological NNs seems rather far-fetched to me. I appreciate that the paper does not really go into that direction, but does the whole human cognitive reasoning discussion really add anything to the paper? I agree that the proposed approach is more natural than other prompting approaches, but does it really resemble what humans do? Decomposition is certainly a part of what humans do, but do we really have to go back and revise our previous answers because we hallucinated a random answer at some point? This does not really seem to be a reasoning problem in general, but an artifact of the probabilistic-generative nature of LLMs. Of course, it's important to deal with this in LLMs, but do we really need to sell this as human-like? Technical Quality: 3 Clarity: 3 Questions for Authors: I was surprised by the first question example in B1. Was there a rationale for adding so much information in the decomposition example that goes beyond the original question? It's hard to evaluate how often this really happens experimentally, but did you look into some examples to see how far the subquestions go beyond the original question? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your affirmation of the motivation of our idea, and the implementation of our approach DeAR. As for your concerns: **Q1**: Question decomposition examples. **A1**: Thanks for your insightful comments. In our approach, the logic heuristics provided in the problem decomposition prompt vary dynamically depending on the question. The logic heuristics presented in Table 5 are specific to a particular case of a question. For a given question, we select problem decomposition examples from the demonstration pool based on the calculated cosine similarity. Different examples may be selected as part of the prompt for different questions (see Appendix B.1 for a detailed description). This method can effectively adjust the prompt according to the differences in questions, allowing for better decomposition tailored to the characteristics of each problem compared to a fixed prompt. **Q2**: Using LLMs for answer scoring. **A2**: Thanks for your valuable comments. In section 4.2, line 217, we mentioned that scoring for the answer could also be achieved using other voting or classification methods. In the ToT method, a comparison was made between "value" and "vote" scoring approaches, demonstrating that both are effective for verifying the accuracy of answers[1]. In our paper, for simplicity, we explored the method of directly generating scores using the backbone LLMs , and we will supplement the results obtained using voting in the future. [1] Tree of thoughts: Deliberate problem solving with large language models. **Q3**: An ablation study about Decompose Stage. **A3**: Thanks for your insightful comments. Given that each stage in our method is essential for constructing the reasoning tree, it is challenging to perform an ablation study by simply removing one stage. If we were to conduct an ablation study that eliminates the decompose stage, then the subsequent analyze and rethink stages would be hindered, because these stages rely on analyzing and updating the sub-questions that result from the decomposition process. A possible way to validate the effectiveness of the decompose stage is to replace its prompt with prompts from other methods, such as those used for problem decomposition in the Least-to-Most approach. In the table below, we have included supplementary comparative experiments on ScienceQA that demonstrate the superiority of our designed decompose stage. As shown in the table, the performance will decline after we replace our prompt with Least-to-Most decomposition prompt, indicating the effectiveness of our method. We will consider designing additional experiments to further validate the effectiveness of different stages. | | DeAR+GPT3.5| DeAR+GPT3.5 (Least-to-Most decomposition prompt)| | ---- | ---- | ---- | |Accs on ScienceQA |83.68 | 81.33 | **Q4**: Typos and grammatical problems. **A4**: Thanks for your suggestions, we will correct these errors in the revised version. **Q5**: Human cognitive reasoning discussion. **A5**: Thank you for your insightful comments and for raising a philosophical consideration regarding the comparison between ANNs and human cognition. Regarding the human cognitive reasoning discussion in our paper, we included it with the intention of drawing parallels to human problem-solving strategies, like the decompose, analyze and rethink stages, which can provide intuitive understanding and potentially guide the development of more natural and effective AI systems. **Q6**: The first question example in B1. **A6**: Thank you for your comments. For each question, we employ cosine similarity to select the most semantically similar questions from the demonstration pool to construct decomposition examples. For instance, regarding the original question "Does the actress who played Elizabeth II speak fluent Arabic?", the questions chosen from the demonstration pool are "Will Queen Elizabeth be buried in the Pantheon?", "Was Elizabeth II the Queen during the Persian Gulf War?", and "Does Elizabeth II reign over the Balearic Islands?". These selected questions and their decomposition examples might contain additional information. However, compared to direct prompting methods (such as using the least-to-most decomposition prompt for prompting), this method is more effective, as we have also demonstrated in our response table for Q3. --- Rebuttal 2: Title: Thank You for Any Valuable Feedback Comment: Thank you for your valuable feedback. We hope that our response has effectively addressed your concerns and questions. As our author-reviewer discussion comes to a close, we would appreciate if your concerns have been resolved. We are always open to further discussion if necessary. --- Rebuttal Comment 2.1: Comment: Thank you for the clarifications. As I wrote in my review, I do not have any serious concerns about this paper and remain on the acceptance side.
Summary: This paper proposes a recursive method for LLMs to solve complex reasoning tasks. The approach formulates problem-solving as a hierarchical tree structure, where each problem is broken down into a tree of sub-problems. Each sub-problem is then analyzed and updated. This method has been evaluated on datasets such as ScienceQA, StrategyQA, and GSM8K, demonstrating improved accuracy on LLMs like Llama-2, GPT-3.5, and ChatGLM3. Strengths: S1. The concept of the proposed framework is sound, and the cycle algorithm is shown with clear examples. The main idea, which mimics human reasoning, is easy to understand and is presented in a straightforward way. S2. The performance improvements over SOTA methods like ToT and GoT are significant. The experiments on different LLMs (GPT3.5, Llama2, ChatGLM3) also demonstrate the method’s versatility. S3. The structure of the framework is more flexible and reasonable compared to CoT, ToT and GoT. The method can generate the reasoning path based on the specific logic of the problems and timely correct errors. Weaknesses: W1. The effectiveness of the “self-check” method in “Analyze Stage” may need further validation. The paper (Jie Huang et al, "Large Language Models Cannot Self-Correct Reasoning Yet") show that LLMs cannot correct themselves. W2. Could the authors provide a more detailed explanation of each step of the algorithm's execution, including the input and output results, in the case study? For instance, in the example in Figure 9, only the final reasoning process of each node is shown. It would be better if the authors could explain how the contents of these nodes are updated. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1. Is the method effective for more complex tasks? In the context of math reasoning, the community might be more interested in results on MATH or MathQA, as opposed to GSM8K, which is relatively simple for models like GPT-3.5. Q2. The paper provides an efficiency analysis based on ChatGLM3. Could the authors provide a more detailed analysis based on GPT-3.5? For example, could they present a comparison of the number of API calls and the number of input tokens compared to ToT and GoT? This is beneficial to verify the method's efficiency on API-based LLMs. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors addressed the limitations in Appendix D: 1. The self-check method may add more computational complexity. 2. The autonomy in generating branches might result in inconsistency in the reasoning quality. 3. A broader range of datasets should be considered to validate its real-world applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your acknowledgement of on our study motivation, model design, experimental results, and presentation. Your suggestions are insightful for us. **Q1**: The effectiveness of the “self-check” method in “Analyze Stage” may need further validation. **A1**: Thank you for your insightful comments. Here, we add an ablation study focusing on the self-check mechanism, utilizing the ScienceQA dataset for our analysis, as illustrated in the table below. The results demonstrate that across three LLM backbones, the DeAR model outperforms its counterpart without the self-check method, thereby validating the self-check method's efficacy. Although other methods that employ LLMs for self-correction may not be sufficiently effective, our experiments have demonstrated that incorporating a self-check method during the Analyze stage is very necessary. We intend to incorporate these findings into the updated version of our work. | | | ScienceQA| | | ---- | ---- | ---- | ---- | | | GPT3.5|LLaMA2-7B|ChatGLM3-6B| |**DeAR w/o self-check**| 82.76 | 69.44 | 50.35 | |**DeAR**| 83.68 | 70.57 | 51.08 | **Q2**: More detailed explanation of case studies. **A2**: Yes. We'll break down how DeAR enhances the reasoning process with a real example from Figure 9 in Appendix C.4. Imagine we need to solve a comparison question, "#2": "Who is younger between these two directors?" In the **Decompose** stage, DeAR breaks "#2" into simpler sub-questions, "#3" and "#4", asking for the ages of the directors of "Zakhm" and "Telefono Rosso," which are more manageable for Large Language Models (LLMs) to figure out. Once we have the answers to these sub-questions, DeAR moves on to the **Analyze** stage. Here, it not only gets the specifics but also spots and fixes a mistake: it corrects the age of Mahesh Bhatt from 70, born in 1954, to the accurate 76 years old, born in 1948. With the correct information in hand, in **Rethink stage**, DeAR then revisits the original question and makes the necessary update, correcting the initial guess of "Mahesh Bhatt" to the right answer, "Nanni Moretti." This step-by-step approach allows DeAR to catch and correct any faulty reasoning along the way, stopping errors from spreading. In the next version of our work, we'll add more such detailed examples to paint a clearer picture. **Q3**: Is the method effective for more complex tasks? **A3**: Thank you for your comments. We conduct further experiments based on GPT-4, particularly on the more challenging MATH dataset, to address your inquiry. The results are presented in the table below. For more complicated questions in MATH, DeAR also performs better. | Methods (with GPT-4 backbone) | ACCs on MATH | | ---- | ---- | |CoT | 56.99 | |CoT+SC **[1]** (sample 5 solutions each time) | 57.24 | |ToT | 57.18 | |ToT-variant **[2]** | 57.02 | |GoT | 58.78 | |DeAR | **62.25** | **[1]** Self-consistency improves chain of thought reasoning in language models. **[2]** Large language model guided tree-of-thought. **Q4**: Could the authors provide a more detailed efficiency analysis based on GPT-3.5? **A4**: Yes. On the ScienceQA dataset, using GPT3.5 as backbone, to ensure a fair comparison, we compare DeAR (with parameters b=1.58 and d=3.62) with ToT, which has the closest values for branch b and depth d (b=3, d=4). We've looked at the average number of API calls for each question, and ACCs on test set, as shown in the table below. It's clear that our method makes fewer API calls on average, which means less time under the same conditions, and achieves higher ACCs at the same time. We'll add more detailed experiments about the average input tokens in the updated version. | | DeAR| ToT (b=2, d=4) | GoT (b=2, d=4) | | ---- | ---- | ---- | ---- | |**Avg API calls** | **9.82** | 11.35 | 13.74 |**ACC** | **0.837** | 0.826 | 0.831 --- Rebuttal 2: Title: Thank You for Any Valuable Feedback Comment: Thank you for your constructive feedback. We sincerely hope our response has answered your concerns and questions. As we near the end of this discussion, we would appreciate it if you could let us know whether all your concerns have been addressed. We are open to further discussion if needed. --- Rebuttal Comment 2.1: Comment: Thanks for the detailed rebuttal. I will keep my positive opinion on this paper.
Summary: The paper presents **DeAR**, a new reasoning framework for large language models to perform intricate reasoning tasks. Inspired by human cognition, it decomposes problems into sub-questions within a Reasoning Tree, refining solutions through iterative Decompose-Analyze-Rethink cycles. Compared to existing state-of-the-art approaches like **ToT** and **GoT**, **DeAR** offers more flexibility and continuous rationale refinement, leading to reduced logical errors and improved performance across various reasoning benchmarks. Strengths: - A novel reasoning framework implemented with a **Decompose-Analyze-Rethink** (**DeAR**) cycle has been proposed to enhance the capabilities of LLMs in solving intricate problems. - The proposed framework is capable of generating rationales with better logical consistency while achieving better accuracy in less time per question. - Extensive experiments on three complex reasoning benchmarks demonstrate the superiority of **DeAR** over state-of-the-art approaches (e.g., **ToT**, **GoT**), showcasing its ability to improve performance for intricate reasoning with different LLMs. Weaknesses: - Lack of ablation studies to analyze the contribution of each individual step, i.e., ***Decompose***, ***Self-Check*** and ***Rethink***. - A small number of participants in human evaluations leads to statistically unreliable conclusions. I notice that different prompting methods elicit the LLM to produce responses of different lengths, which is also a confounding factor that can affect the choice, as humans prefer more concise responses. - The values for threshold hyperparameters, i.e., $\epsilon\_1$ and $\epsilon\_2$ should be carefully set. - Typo. In Line 322, *Table 3* should be *Tabel 4*. Technical Quality: 3 Clarity: 3 Questions for Authors: - How large is the (human-annotated question decomposition) demonstration pool for each of the datasets? - I notice that the authors employ a cosine similarity-based strategy to pick appropriate demonstrations when constructing prompts at the ***Decompose*** stage, is the same strategy used to test the performance of the baseline prompting methods? - How to assign the proper values for $\epsilon\_1$ and $\epsilon\_2$ if there is no validation set available? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the authors have covered several limitations in Appendix 4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive comments on the novelty and efficiency of our DeAR and the affirmation of its superior performance over SOTA methods. **Q1**: Lack of ablation studies to analyze the contribution of each individual step. **A1**: Thanks for your insightful comments. The construction process of our proposed reasoning tree is such that the three stages—decompose, analyze, and rethink—are indispensable. If we conduct a ablation study that omits one of these stages, for example, removing the decompose stage, then both the analyze stage and the rethink stage would be unable to proceed, as the latter two stages must analyze and update the sub-problems generated from the decomposition. Similarly, eliminating the analyze stage would result in the inability to obtain the rationales for each node, thereby preventing the rethink stage from taking place. Removing the rethink stage would also render the first two stages pointless; the entire framework would then devolve into using a zero-shot approach to directly solve the problem at the root node and obtain the result. Here, the only part where an ablation study can be reasonably conducted is the self-check method within the analyze stage, as removing self-check will not structurally affect the other two stages. Therefore, we have added an ablation study on the self-check method using the ScienceQA dataset, as shown in the table below. It can be observed that, based on different LLM backbones, DeAR consistently performs better than DeAR without self-check, which also proves the effectiveness of the self-check method. We will include this experiment in the updated version. | | | ScienceQA| | | ---- | ---- | ---- | ---- | | | GPT3.5|LLaMA2-7B|ChatGLM3-6B| |**DeAR w/o self-check**| 82.76 | 69.44 | 50.35 | |**DeAR**| 83.68 | 70.57 | 51.08 | **Q2**: Human evaluations. **A2**: Thank you for your comments. We have adopted some approaches to help minimize the bias of annotators towards rationales of varying lengths. For example, we provided each annotator with detailed annotation instructions, allowing them to select the most logical response from the answers given by different models, as shown in Figure 6, Appendix C.2. At the same time, we performed multiple random samplings from the dataset, each time with a different set of annotators, to further prevent the unreliability of results due to the subjective factors of individual annotators. We have five annotators, which is a similar number to that used in other studies employing human evaluation methods, such as in [1]. We will include more details about the sampling and annotation process in the updated version. [1] Guiding Mathematical Reasoning via Mastering Commonsense Formula Knowledge **Q3**: The values for threshold hyperparameters. **A3**: Thank you for your insightful comments. We set the thresholds by conducting the threshold combination experiment in Section 5.6. We selected the threshold combination that yields the highest reasoning accuracy for our configuration. **Q4**: Typo about table’s name: Table 3 should be Tabel 4. **A4**: Thank you for the reminder, we will correct it in the updated version. **Q5**: How large is the demonstration pool? **A5**: For the ScienceQA dataset, we randomly selected some questions from each topic in the training set and annotated 500 examples as a demonstration pool. For GSM8K and StrategyQA, since their training sets already have annotations for problem decomposition, we directly chose 500 items from them as the demonstration pool. We will include this in the updated version. **Q6**: I notice that the authors employ a cosine similarity-based strategy to pick appropriate demonstrations when constructing prompts at the Decompose stage, is the same strategy used to test the performance of the baseline prompting methods? **A6**: Thank you for your question. Selecting demonstrations for problem decomposition prompts will only be effective for methods that include a problem decomposition step. Among the baselines in this experiment, only the least-to-most method includes a problem decomposition step. Therefore, for least-to-most, we use the same demonstration pool and cosine similarity selection method as DeAR. As for other baselines, such as CoT, ToT, GoT, since they do not include a problem decomposition step, naturally we do not select decomposition demonstrations for prompting. **Q7**: How to assign the proper values if there is no validation set available? **A7**: Thank you for your question. A portion of the data from the training set can be selected as a validation set to verify the effects of different threshold combinations, and the optimal combination can be chosen for testing on the test set. --- Rebuttal 2: Title: Thank You for Any Valuable Feedback Comment: We greatly appreciate your feedback and hope that our responses have addressed your concerns. As we approach the end of our author-reviewer discussion, we would be grateful if your concerns have been resolved. We remain available for any further discussion if needed. --- Rebuttal Comment 2.1: Title: Thank you for the rebuttal Comment: Thanks for the detailed clarifications. After reading all reviews and rebuttals I found that all my concerns have been well resolved. I would like to keep my rating.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers’ efforts in reviewing our paper. We would like to thank all of them for providing constructive and valuable feedback, which we will leverage to improve this work. We are encouraged by the positive comments from reviewers, including: - **Motivation**: “offering a fresh perspective on how LLMs can tackle complex problems.” (Reviewer dFYv), “A novel reasoning framework” (Reviewer vfU8), “I find the idea very intuitive, well motivated and mostly well presented” (Reviewer Zsa1) - **Method**: “novel” (Reviewer dFYv, Reviewer vfU8), “DeAR ensures greater logical consistency compared to traditional methods” (Reviewer dFYv), “The concept of the proposed framework is sound” (Reviewer WGVX), “The entire idea is reasonable and well-aligned with human thinking.” (ANC9), “The structure of the framework is more flexible and reasonable compared to CoT, ToT and GoT” (Reviewer WGVX) - **Experimental Results**: “ DeAR achieves significant improvements” (Reviewer dFYv), “ the superiority of DeAR over state-of-the-art approaches” (Reviewer vfU8), “The experiments also demonstrate the method’s versatility” (Reviewer WGVX), “DeAR improves the state-of-the-art and this seems intuitively plausible” (Reviewer Zsa1), “The experiments are sounds” (Reviewer ANC9). We will specify the detailed responses to all reviewers as follows.
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper presents a novel reasoning framework DeAR (Decompose-Analyze-Rethink), which aims to advance the capabilities of large language models (LLMs) in handling complex reasoning tasks. DeAR introduces a Decompose-Analyze-Rethink cycle that involves breaking down intricate problems into simpler sub-questions, analyzing these to form rationales, and revisiting prior answers to refine the reasoning process. Different from the rigid structures of existing methods like Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT), this approach allows each branch to be independently generated without preset configurations, thereby enhancing logical coherence. Extensive experimentation on several benchmarks are conducted to demonstrate the effectiveness of the framework. Strengths: 1.The DeAR framework introduces a novel reasoning cycle that mimics human cognitive reasoning, offering a fresh perspective on how LLMs can tackle complex problems. 2.By decomposing problems into sub-questions and rethinking rationales, DeAR ensures greater logical consistency compared to traditional methods like ToT and GoT. 3.Experimental results show that DeAR achieves significant improvements over state-of-the-art methods, particularly in reducing logical errors and enhancing the reasoning process with different LLMs. 4.By constructing a reasoning tree through a three-stage framework, DeAR provides a clear and interpretable reasoning process, which aids in understanding the decision-making of LLMs. Weaknesses: 1. While the framework is designed to enhance reasoning accuracy and flexibility, the iterative nature of the cycle may lead to increased computational demands, particularly when dealing with highly complex problems. The authors can further discuss this point. 2. The paper should include experimental comparisons between the DeAR framework and more strong baselines, such as other variants of ToT and CoT+SC approach, an enhanced Chain-of-Thoughts approach that incorporates self-consistency checks [1][2]. [1] Long J. Large language model guided tree-of-thought[J]. arXiv preprint arXiv:2305.08291, 2023. [2] Mo S, Xin M. Tree of uncertain thoughts reasoning for large language models[C]//ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024: 12742-12746. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors present more case studies where the DeAR framework has been or could be effectively applied, and how the problem-solving process might benefit from the enhanced reasoning capabilities? 2. Does the method work with stronger LLMs? The paper should present the framework performance with GPT4 as its backbone to validate its effectiveness. If the computation cost is too high, the authors can consider running experiments on subsets of the original datasets. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the proposed DeAR framework, including potential computational overhead, variability in reasoning quality, and the need for broader real-world testing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your affirmation of the motivation of our paper, the significance of our experimental results and the novelty of DeAR. **Q1**: The iterative nature of the cycle may lead to increased computational demands. **A1**: Thank you for your comments. In section 5.7, we compare the efficiency of our framework with different variants of ToT/GoT with ChatGLM3-6B backbone. As shown in Figure 5, compared to state-of-the-art ToT/GoT methods, the point corresponding to our method achieve better ACC with less time. **Q2**: Include comparisons with more strong baselines (one variant of ToT**[1]** and CoT+SC**[2]**). **A2**: Thank you for your comments. We conducted further comparisons with one variant of ToT**[1]** and CoT+SC**[2]** based on GPT-4 backbone, on a more challenging dataset MATH, to address your inquiry. The results are presented in the table below. | Methods (with GPT-4 backbone) | ACCs on MATH | | ---- | ---- | |CoT | 56.99 | |CoT+SC **[1]** (sample 5 solutions each time) | 57.24 | |ToT | 57.18 | |ToT-variant **[2]** | 57.02 | |GoT | 58.78 | |DeAR | **62.25** | **Q3**: More case studies. **A3**: Here, we use the case in Figure 9, Appendix C.4, to further explain how the reasoning process benefit from DeAR’s decomposition, analyze and rethink. First, to answer the comparison question “#2”: Which of these two directors has a smaller age?”, our framework decomposes it into sub-questions “#3”: What is the age of Zakhm’s director?” and “#4”: What is the age of Telefono Rosso’s director?”, which are easier for LLMs to solve. Second, in the Analyze stage, DeAR obtains the answers of sub-question #3 and #4, and also corrects the wrong answer “Mahesh Bhatt was born in 1954, he is 70 years old now” to the right one “Mahesh Bhatt was born in 1948, he is 76 years old now”. After that, the corrected answer to #3 is used to update the answer of #2, and corrects #2’s answer “Mahesh Bhatt” to “Nanni Moretti”. Through the above process, DeAR is able to help correct wrong reasoning steps and avoid error propagation, which is crucial in enhancing model’s reasoning ability. We will include more detailed cases in the revised version. **Q4**: Does the method work with stronger LLMs? **A4**: Thank you for your insightful comments and valuable feedback. In response to your interest, we have conducted further experiments using the GPT-4 backbone to robustly illustrate the effectiveness of our DeAR framework. As indicated in the response to "W2," we present these results to demonstrate the superiority of our approach. On the MATH dataset, a comprehensive benchmark that challenges models with a variety of mathematical reasoning tasks, DeAR has demonstrated superior performance compared to different SOTA methods, including CoT, CoT-SC, ToT, a variant of ToT, and GoT. **[1]** Self-consistency improves chain of thought reasoning in language models. **[2]** Large language model guided tree-of-thought. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thanks for your reply. My concerns have been well addressed. I would like to keep my positive score. --- Rebuttal 2: Title: Thank You for Any Valuable Feedback Comment: Thank you for your valuable feedback. We hope that our response adequately addresses your concerns and questions. As our discussion draws to a close, we would appreciate knowing if all your concerns have been resolved. We are open to further discussion if needed.
null
null
null
null
null
null
Smoothed Online Classification can be Harder than Batch Classification
Accept (poster)
Summary: The paper studied online classification under smoothed adversaries. They constructed a hypothesis that is learnable under batch learning (PAC learning) but not learnable under smoothed online learning. They also showed that a sufficient condition that a hypothesis class learnable under the PAC learning is also learnable under the smoothed online learning. Strengths: - The result that smoothed online learning can be more difficult than PAC learning is interesting as it changes the conventional ideas in the community. As smoothed (non-adversarial) online learning gets more important as large models are emerging, the result may also contribute the theoretical insights for such models. Weaknesses: - They didn’t show a necessary and sufficient condition that a hypothesis class learnable under the PAC learning is also learnable under the smoothed online learning. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We are unsure what the reviewer meant by "They didn’t show a necessary and sufficient condition that a hypothesis class learnable under the PAC learning is also learnable under the smoothed online learning." We would like to point out that the main result of our work shows that PAC learnability is provably not sufficient for smoothed online classification for unbounded label spaces. However, PAC learnability is certainly necessary for smoothed online classification for unbounded label spaces. --- Rebuttal Comment 1.1: Comment: Thank you for the correction. I misunderstood the definition of PAC learnability. I will keep my current score, but other reviewers probably have better understanding of the paper.
Summary: This paper studies online learning under the smoothed analysis framework. Under the smoothed analysis framework, the example that is presented to the learner in each round is not chosen adversarially but instead is drawn from some distribution that is close to a known based distribution. Previous works have shown that for binary classification or regression tasks if a hypothesis class can be learned under the PAC learning model, then it can also be learned under the smoothed online learning model. In this work, the authors consider multi-class classification against a non-adaptive adversary. They show that if the label class $\mathcal{Y}$ is unbounded, then one can construct a hypothesis class $H$, which has a finite sample compression scheme (and is thus PAC learnable) but is not smoothed online learnable. On the other hand, the paper extends previous results on binary classification and proposes a sufficient condition that ensures the PAC learnability of a hypothesis class is sufficient for its smoothed online learnability. Strengths: The paper studies a well-defined learning problem that combines multiclass classification with smoothed online learning and gives several results on the learnability of the problem. These results enhance our understanding of the smoothed online learning problem. The hardness result for the case where the label class $\mathcal{Y}$ is unbounded is interesting and non-trivial. Weaknesses: My main concern is about the significance of the work. The learning model studied in this paper competes with a non-adaptive adversary. Though the paper generalizes the condition for the learnability of the problem from binary classification to multi-class classification, the high-level idea of the proof seems to be a straightforward extension of the analysis used in Haghtalab's thesis (though a more complicated analysis is needed due to the more complicated setting). Furthermore, for binary classification/regression, as mentioned in the introduction of the paper, there are results that can compete with an even stronger adaptive adversary. Given these, I am not quite convinced about the importance of the sufficient condition on the learnability proposed by the paper. On the other hand, the hard instance is constructed based on a not very extensively studied learning model, multiclassification with infinite label sets. Though the author points out several very recent papers that study this setting, it is still not very clear to me the motivation for studying the setting and how such a hardness result would inspire new theories or algorithms for related problems. Technical Quality: 3 Clarity: 3 Questions for Authors: I would like the authors to comment on the weakness pointed out above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting that our results "enhance our understanding of the smoothed online learning problem." We address their concerns below. - Our sufficiency condition strictly improves the sufficiency condition presented in Haghtalab's thesis, as noted in lines 312-316. Specifically, there are classes where the VC dimension is infinite and the upper bounds in Haghtalab's thesis are therefore vacuous, but our sufficiency condition still provides a meaningful upper bound. We believe that the significance of our sufficiency condition is more conceptual than technical. Our upper bound highlights that distribution-dependent complexity measures quantify the rates more precisely than distribution-independent complexity measures. This is an important point worth highlighting because any complexity measure that characterizes smoothed online learnability has to be a joint property of both $\mathcal{H}$ and $\mu$ (as noted in the Discussion section). Lastly, we only considered non-adaptive adversaries because the primary focus of our work is the hardness result. However, our upper bounds can be extended to adaptive adversaries using the coupling argument from [1]. - We note that Theorem 3.3 shows that, in terms of regret bound, the separation between PAC learnability and smoothed online learnability holds even for finite label spaces as long as the size of the label space is $\geq 2^{T \log(T)}$. More precisely, for any time horizon $T$, there exists a class $\mathcal{H}_T \subseteq \mathcal{Y}^{\mathcal{X}}$ with $|\mathcal{Y}|\geq 2^{T \log{T}}$ for which its PAC error bound is $O\left(\sqrt{\frac{\log{n}}{n}} \right)$, but its regret is $\Omega(T)$ (or average regret is $\Omega(1)$). That is, $\mathcal{H}_T$ has a vanishing PAC upper bound (independent of $T$) but a constant non-vanishing average regret lower bound. The infinite label space is required only to show a separation of \emph{learnability}. In particular, showing that the class is non-learnable according to Definition 1 requires establishing $\Omega(1)$ average regret for every $T$ even as $T \to \infty$. - Finally, we disagree with the reviewer that multiclass classification with infinite label sets is not ``a not very extensively studied learning model." This setting has been studied in several seminal works in learning theory in the last 40 years beginning with [2,3] and more recently in [4,5,6] to name a few. Studying infinite label spaces is important for understanding when one can establish learning guarantees independent of the label size. This is quite a practical question as many modern machine learning paradigms have massive label space, such as in face recognition and protein structure prediction, where the dependence of label size in learning bounds would be undesirable. [1] Haghtalab, Nika, Tim Roughgarden, and Abhishek Shetty. ``Smoothed analysis with adaptive adversaries." Journal of the ACM 71.3 (2024): 1-34. [2] Balas K. Natarajan. Some results on learning. 1988. [3] B. K. Natarajan. On learning sets and functions. Machine Learning, 4:67–97, 1989. [4] A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass learnability and the ERM principle. In Proceedings of the 24th Conference on Learning Theory, 2011. [5] A. Daniely and S. Shalev-Shwartz. Optimal learners for multiclass problems. In Proceedings of the 27th Conference on Learning Theory, 2014. [6] N. Brukhim, D. Carmon, I. Dinur, S. Moran, and A. Yehudayoff. A characterization of multiclass learnability. In Proceedings of the 63rd Annual IEEE Symposium on Foundations of Computer Science, 2022. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for making comments on my review and providing related references. However, I am still not very convinced about the significance of the work and I am not sure how much interest the setting smoothed online learning with infinite labels could attract from the NeurIPS community. I think more work should be done and included to convince readers that this setting is not a simple combination of smoothed online learning and multiclass classification with infinite labels. For this reason, I will keep my current rate. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and would like to respond to some of their concerns. > "Still not very convinced about the significance of the work" Both smoothed learning and multiclass classification with infinite labels are recent but well-established topics. Smoothed analysis originated in the analysis of one of best known problem in CS, viz. linear programming. In recent years, smoothed analysis has been extended to learning problems. Multiclass classification is one of the most fundamental ML problems. The significant of infinite labels is both theoretical and practical. Theoretically, it was the infinite labels setting that led to the recent complete characterization of multiclass learnability. From a practical point of view, infinite labels is a means to study what happens with extremely large label spaces. This is relevant to work in NLP (large vocabularies) and in extreme multiclass classification (e.g., recommender systems). > "how much interest the setting … could attract from the NeurIPS community” We note that there have been several NeurIPS papers regarding smoothed online learning, even dating back to 2011 [1-4]. There is also a long history of NeurIPS papers studying classification with extremely large label spaces. This line of work is known as ``Extreme Classification" [5-11]. Thus, we think studying the intersection of these two settings is natural and of interest to the NeurIPS community. > "this setting is not a simple combination of smoothed online learning and multiclass classification with infinite labels” We respectfully disagree with this criticism. Yes, the setting is simple — it combined smoothed online learning with multiclass classification with infinite labels. But the analysis is far from simple and our conclusions are far from obvious. We think that a simple setting with non-obvious analysis and conclusions should be of interest to NeurIPS. We think that the simplicity of the setting should not be a drawback of our contribution. [1] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Stochastic, constrained, and smoothed adversaries. Advances in neural information processing systems, 24, 2011. [2] Nika Haghtalab. Foundation of Machine Learning, by the People, for the People. PhD thesis, Carnegie Mellon University, 2018. [3] Nika Haghtalab, Tim Roughgarden, and Abhishek Shetty. Smoothed analysis of online and differentially private learning. Advances in Neural Information Processing Systems, 33:9203–9215,379 2020. [4] Adam Block, Yuval Dagan, Noah Golowich, and Alexander Rakhlin. Smoothed online learning is as easy as statistical learning. In Conference on Learning Theory, pages 1716–1786. PMLR, 2022 [5] K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, Sparse Local Embeddings for Extreme Multi-label Classification, in NeurIPS 2015. [6] D. Hsu, S. Kakade, J. Langford, and T. Zhang, Multi-Label Prediction via Compressed Sensing, in NeurIPS 2009. [7] Y. Chen, and H. Lin, Feature-aware Label Space Dimension Reduction for Multi-label Classification, in NeurIPS, 2012. [8] M. Cisse, N. Usunier, T. Artieres, and P. Gallinari, Robust Bloom Filters for Large Multilabel Classification Tasks , in NIPS, 2013. [9] I. Evron, E. Moroshko and K. Crammer, Efficient Loss-Based Decoding on Graphs for Extreme Classification in NeurIPS, 2018. [10] R. You, S. Dai, Z. Zhang, H. Mamitsuka, and S. Zhu, AttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Network, in NeurIPS 2019. [11] S. Kharbanda, A. Banerjee, R. Schultheis and R. Babbar, CascadeXML : Rethinking Transformers for End-to-end Multi-resolution Training in Extreme Multi-Label Classification, in NeurIPS 2022.
Summary: This paper studies the problem of distinguishing batch learning from smoothed online learning when the label set size is unbounded. It shows that there exists a class that can be PAC-learned but does not admit sublinear regret, even with features generated i.i.d. The paper then provides a sufficient condition based on an empirical covering number that guarantees sublinear regret for smoothed adversarial learning. Strengths: The contribution of this paper is primarily conceptual. It demonstrates that while the learnability of hypotheses with bounded label sets is fairly well understood, there can be mysterious behavior in the unbounded label case that requires further attention. From a technical standpoint, this paper constructs several instances that demonstrate separations which may be of independent interest. Overall, this is an interesting paper in a niche area that is suitable for the NeurIPS community. Weaknesses: While I do like the philosophical message delivered by this paper, from a purely technical standpoint, it feels somewhat half-baked. I outline some specific comments below: 1. While the title indicates "Smoothed Online Classification can be Harder than Batch Classification," the real separation here is actually between batch learning and *adversarial* (label) online learning. The example constructed employs no property of the smoothed adversarial setting. 2. While the paper provides a sufficient condition for smoothed online learning, it does not demonstrate how strong this condition is. Is it necessary as well? Theorem 4.3 (ii) seems to indicate this is not the case. 3. There are many relevant problems that should have been studied in this paper but are unfortunately omitted or left to future work. See the Questions section below. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It seems all your hard instances boil down to dealing with *adversarial* labels. Can the authors comment on what happens for *realizable* labels? Does the separation still hold? (Note that in this case, you will have to use properties of smooth adversaries, as i.i.d. features are trivial.) 2. It seems the failure in Theorem 4.3 (ii) is due to the non-sequential nature of your empirical cover. Can a similar notion, such as the stochastic sequential cover from Wu et al., [2023] (perhaps using the approximate variant from Hanneke et al., [2023]), be sufficient to characterize the learnability? 3. Does the learnability of smoothed adversarial in the realizable case imply learnability in the adversarial label case, perhaps using a similar sequential cover construction from Hanneke et al., [2023]? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their comments and address their concerns below. **Weaknesses** 1. In online classification, smoothness is only an assumption on the instances $x_1, ..., x_T$, and the labels can still be adversarial. This is the standard smooth model considered in [1,2,3,4]. If by 'adversarial labels' the reviewer means noisy labels, then we would like to clarify that our hardness result is actually derived in the realizable setting. This is established in the math display between lines 280 and 281, where we show the existence of a hypothesis that perfectly labels the data and achieves $0$ cumulative loss. 2. The reviewer is correct in that the sufficient condition is not necessary. We mention this fact in lines 97-99 and formalize it in Theorem 4.3. As noted in the Discussion section, the smoothed model allows some pathological edge cases that make characterizing learnability difficult. **Questions** 1. We would like to clarify that our hardness result is in the realizable setting. This is established in the math display between lines 280 and 281, where we show the existence of a hypothesis that perfectly labels the data and achieves $0$ cumulative loss. 2. The notion of stochastic sequential cover provided in Definition 1 of (Wu et al. 2023) does not characterize smoothed learnability. In fact, this can be established using the same examples used in the proof of Theorem 4.3. To see why it is not sufficient, consider $\mathcal{X}=[0,1]$ and $\mathcal{H}= \lbrace{ x \mapsto 1[x \in S]\, : S \subset \mathcal{X} \text{ and } |S|<\infty\rbrace}$. Let $\mathcal{P}$ denote set of all distributions on $\mathcal{X}^{T}$ such that for any $\nu \in \mathcal{P}$, its marginal at each $t$ is $\sigma$-smooth with respect to $\mu=\text{Uniform}(\mathcal{X})$ for some fixed $\sigma>0$. It is easy to see that the size of the stochastic sequential cover of $\mathcal{H}$ with respect to $\mathcal{P}$ under $0$-$1$ metric is $1$ (see proof of Theorem 4.3 (i) for details). However, as shown in Theorem 4.3 (i), this class is not learnable under the smoothed model. On the other hand, the class $\mathcal{H} = \lbrace{x \mapsto a: a \in \mathbb{N}\rbrace}$ shows that the stochastic sequential cover is not necessary, as its size of sequential cover for any distribution family $\mathcal{P}$ is $\infty$. 3. Realizable and agnostic learnability are equivalent for smoothed online classification even when the label space is unbounded. We have some ongoing work that establishes this result using an adaptation of Theorem 11 in [6] for infinite label spaces. [1] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Stochastic, constrained, and smoothed adversaries. Advances in neural information processing systems, 24, 2011. [2] Nika Haghtalab. Foundation of Machine Learning, by the People, for the People. PhD thesis, Carnegie Mellon University, 2018. [3] Nika Haghtalab, Tim Roughgarden, and Abhishek Shetty. Smoothed analysis of online and differentially private learning. Advances in Neural Information Processing Systems, 33:9203–9215,379 2020. [4] Adam Block, Yuval Dagan, Noah Golowich, and Alexander Rakhlin. Smoothed online learning is as easy as statistical learning. In Conference on Learning Theory, pages 1716–1786. PMLR, 2022. [5] Wu C, Heidari M, Grama A, Szpankowski W. Expected Worst Case Regret via Stochastic Sequential Covering. Transactions on Machine Learning Research, 2023. [6] Raman, Vinod, Unique Subedi, and Ambuj Tewari. "A Characterization of Multioutput Learnability." arXiv preprint arXiv:2301.02729 (2023). --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification on the realizability. However, I still feel this work is not quite "complete"; perhaps a more theoretically oriented conference (such as COLT/ALT) will better appreciate it. Anyway, I have adjusted my rating to 6, but I will not fight for acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We would like to point out that the smoothed online learning model was first studied by [1] in NeurIPS 2011 with the goal of bridging theory and practice. In addition, there is also a long history of NeurIPS papers studying classification with extremely large label spaces. This line of work is known as ``Extreme Classification" [2-7]. So, we think that our work is still of interest to the NeurIPS community. [1] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Stochastic, constrained, and smoothed adversaries. Advances in neural information processing systems, 24, 2011. [2] K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, Sparse Local Embeddings for Extreme Multi-label Classification, in NeurIPS 2015. [3] D. Hsu, S. Kakade, J. Langford, and T. Zhang, Multi-Label Prediction via Compressed Sensing, in NeurIPS 2009. [4] Y. Chen, and H. Lin, Feature-aware Label Space Dimension Reduction for Multi-label Classification, in NeurIPS, 2012. [5] M. Cisse, N. Usunier, T. Artieres, and P. Gallinari, Robust Bloom Filters for Large Multilabel Classification Tasks , in NIPS, 2013. [6] I. Evron, E. Moroshko and K. Crammer, Efficient Loss-Based Decoding on Graphs for Extreme Classification in NeurIPS, 2018. [7] R. You, S. Dai, Z. Zhang, H. Mamitsuka, and S. Zhu, AttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Network, in NeurIPS 2019. [11] S. Kharbanda, A. Banerjee, R. Schultheis and R. Babbar, CascadeXML : Rethinking Transformers for End-to-end Multi-resolution Training in Extreme Multi-Label Classification, in NeurIPS 2022.
Summary: They consider the problem of smoothed online classification under oblivious adversaries. From earlier work it is known that this problem is as easy as batch classification when the label space is bounded. However, when the label space is unbounded they provide a lower bound and show that this problem can be harder than batch classification in PAC model. Furthermore, they provide a sufficient condition for smoothed online learnability. Their conditions are in terms of covering/packing numbers of the hypothesis class using a distance metric that depends on the base measure $\mu$. However, their sufficient condition is not a necessary condition for the smoothed learnability. They leave it as an open question to find a condition that is both necessary and sufficient for smoothed online learnability. Strengths: I did not go through all the proofs, but overall this paper is well-written and seems to be sound. Weaknesses: same as above. Technical Quality: 3 Clarity: 3 Questions for Authors: Minor comments: \Sigma is not defined in line 118. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: same as above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our paper well-written. We will make sure to define $\Sigma$ in the camera-ready version. --- Rebuttal Comment 1.1: Comment: I went through the other reviews and responses, and I agree that a more theoretically oriented venue like ALT/COLT might be a better fit. I keep my current score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We would like to point out that the smoothed online learning model was first studied by [1] in NeurIPS 2011 with the goal of bridging theory and practice. In addition, there is also a long history of NeurIPS papers studying classification with extremely large label spaces. This line of work is known as ``Extreme Classification" [2-7]. So, we think that our work is still of interest to the NeurIPS community. [1] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Stochastic, constrained, and smoothed adversaries. Advances in neural information processing systems, 24, 2011. [2] K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, Sparse Local Embeddings for Extreme Multi-label Classification, in NeurIPS 2015. [3] D. Hsu, S. Kakade, J. Langford, and T. Zhang, Multi-Label Prediction via Compressed Sensing, in NeurIPS 2009. [4] Y. Chen, and H. Lin, Feature-aware Label Space Dimension Reduction for Multi-label Classification, in NeurIPS, 2012. [5] M. Cisse, N. Usunier, T. Artieres, and P. Gallinari, Robust Bloom Filters for Large Multilabel Classification Tasks , in NIPS, 2013. [6] I. Evron, E. Moroshko and K. Crammer, Efficient Loss-Based Decoding on Graphs for Extreme Classification in NeurIPS, 2018. [7] R. You, S. Dai, Z. Zhang, H. Mamitsuka, and S. Zhu, AttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Network, in NeurIPS 2019. [11] S. Kharbanda, A. Banerjee, R. Schultheis and R. Babbar, CascadeXML : Rethinking Transformers for End-to-end Multi-resolution Training in Extreme Multi-Label Classification, in NeurIPS 2022.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation
Accept (poster)
Summary: This paper extends DiffAct to perform both action segmentation and action anticipation. An anticipative masking with a learnable mask token is proposed. Experiments are conducted on the three common benchmark datasets. Strengths: 1. The motivation for unifying action segmentation and action anticipation is reasonable, given the task similarity. It is also intuitive and reasonable to extend a generative framework from segmentation to anticipation, given the generative nature of anticipation. 2. The learnable mask token is interesting. 3. The ablation studies are relatively comprehensive. 4. Codes are provided in the supplementary. Weaknesses: 1. The major technical problem is that the proposed method assumes the ground truth video length is known for action anticipation at the test time. Without the ground truth video length T, the anticipative mask M^A can not be constructed during the inference time for anticipation. This is also shown in the code provided, where 'full_len' is input into the ddim inference function. This is problematic, conflicting with the goal of anticipation, and leading to incomparable experimental results. 2. The technical novelty is limited. The main contribution is extending the DiffAct method with an anticipative mask, while other modules are from existing methods. But given the contribution of unifying the segmentation and anticipation tasks, this is not a deciding factor for me. 3. It will be better to conduct experiments on Assembly101. Given the small data size and the saturated performance on GTEA, I would recommend a transition from GTEA to Assembly101 for this task. Minor: - In Figure 1c, what do the triangle and the circle mean? - In Table 1, some recent methods are missing, such as MVGA ICCV23 and RTK ICCV23. Technical Quality: 1 Clarity: 3 Questions for Authors: NA Confidence: 1 Soundness: 1 Presentation: 3 Contribution: 3 Limitations: The limitations were briefly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Inference setup in LTA]** We would like to clarify that our model can predict future actions with an arbitrary length by adjusting the number of mask tokens needed for prediction; our model itself does not require a ground-truth length of anticipation, and the ground-truth length is used to generate the same length of anticipation for the convenience of evaluation in testing. Note that previous methods [19, 24, 43] also commonly used ground-truth video lengths during inference to generate the final predictions, which can be found in the released original codes. We obtain the codes for [19] from the authors. We hope this clarifies the reviewer's concern. Otherwise, please let us know. ### **[Novelty]** Please refer to the general response for the novelty. ### **[Experiments on Assembly101]** Due to limited resources and time, we were unable to finish the full-scale experiment on time. In response to suggestions regarding Assembly 101, we instead did a smaller scale experiment by randomly sampling 10% of the entire training set to train the models and evaluate the performance on the full validation set to ensure a fair comparison. Table R5 compares the performance of ActFusion and LTContext [5], the state-of-the-art TAS model on Assembly 101 from the experiment. We find that our model outperforms LTContext across all metrics, showing the potential advantages of the proposed method. We will provide a full-scale experiment in the final manuscript. **[Table R5. Experiments on Assembly101]** | method | F1@10 | F1@25 | F1@50 | Edit | Acc | Avg. | |------------|-------|-------|-------|------|------|------| | LTContext [5] | 18.7 | 15.9 | 11.2 | 17.6 | 27.3 | 18.2 | | ActFusion (ours) | 21.7 | 19.3 | 14.0 | 19.8 | 28.1 | 20.6 | ### **[Explanation of Figure 1(c)]** Please refer to the general response for a detailed explanation of Figure 1(c). ### **[Missing references]** Thank you for letting us know. We will include MVGA [R4] and RTK [R5] in our final manuscript. [R4] B. Jiang et al. RTK: Video action segmentation via contextually refined temporal keypoints. In ICCV’23. [R5] N. Aziere and S. Todorovic. Markov game video augmentation for action segmentation. In ICCV’23. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks very much for the response. W1: I am not convinced. This is not an evaluation choice as argued by the authors. This is a test data leakage issue. For any machine learning model, you should not use any ground truth when obtaining predictions on test data. One possible solution might be using the mean length of training videos as the 'full_len' during testing. But I guess this will lower the results. As for previous methods mentioned by the authors, I did not check their code. Did other previous methods except for those mentioned also use this ground truth? Even if ground truth was used in some of previous codes, I would consider this as an issue to be fixed in the following works rather than a convention to be inherited. W2: I appreciate the novelty of the unification, but not the methodology. This is subjective though. W3 and minor have been addressed. --- Rebuttal 2: Comment: ### **[W1: Using the ground-truth video length during inference]** Thanks to your response, now we fully understand the point regarding the use of the ground truth video length during inference. To address this concern, we conducted additional experiments where no ground truth length is used in testing; following your suggestion, in testing, we fixed the length of future frames (i.e., mask tokens) to the maximum number of future frames in the training set. Experimental results in this setting on the 50 Salads dataset are reported in Table R6. In this table, the column ‘use of GT length’ indicates if each model exploits ground-truth length during testing. We found that, following the first paper introducing long-term dense action anticipation [2], all previous methods utilize the ground-truth length during testing, except for those whose codebases and/or inference setups are not available [25, 51, 65]; we marked these methods 'unknown' in the column. In the table, **ActFusion*** represents **our original model using a fixed number of mask tokens without retraining.** To ensure a fair comparison, we applied the same testing scheme to Farha et al [19] (denoted by Farha et al*) and FUTR [24] (denoted by FUTR*), *both of which originally utilize the ground-truth length during testing*. In this setting, ActFusion* still outperforms Farha et al* and FUTR*, and as the reviewer expected, all of these methods perform worse. This performance drop is due to the use of the ground-truth video length in training, which results in a discrepancy between the training and inference setups. To mitigate this issue, **we retrained our model while fixing the number of mask tokens to cover the maximum number of future frames in the training set.** The retrained results, presented in the last row of Table R6 and denoted as **ActFusion†**, achieve the state of the art in long-term action anticipation (LTA). We observed that fixing the number of mask tokens brings performance gain when the prediction ranges are relatively short, as the model benefits from more stable predictions. However, for longer predictions, particularly when the prediction ratio is set to 0.5, we observed performance degradation compared to ActFusion. This degradation is probably due to the fact that it becomes more difficult for the model to determine the end of an activity. Nonetheless, these results demonstrate that our method can be flexibly adapted to different numbers of mask tokens, ultimately achieving the state of the art in LTA. We sincerely hope this clarification addresses your concerns. We believe that the experimental setup you suggested will contribute significantly to the field by providing more realistic and reasonable evaluation protocols, and we will include all of the above results in the revision. **[Table R6. LTA results with and without using ground truth length]** | method | use of GT length | $\alpha=0.2,\beta=0.1$ | $\alpha=0.2,\beta=0.2$ | $\alpha=0.2,\beta=0.3$ | $\alpha=0.2,\beta=0.5$ | $\alpha=0.3,\beta=0.1$ | $\alpha=0.3,\beta=0.2$ | $\alpha=0.3,\beta=0.3$ | $\alpha=0.3,\beta=0.5$ | Avg. | |:-------------------|:-------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:---------:| | Temporal Agg. [51]| unknown| 25.50| 19.90| 18.20| 15.10| 30.60| 22.50| 19.10| 11.20| 20.26| | A-ACT [25]| unknown| 35.40| 29.60| 22.50| 16.10| 35.70| 25.30| 20.10| 16.30| 25.13| | Object Prompt [65]| unknown| 37.40| 28.90| 24.20| **18.10**| 28.00| 24.00| **24.30**| 19.30| 25.53| | Farha et al. [19] | ✓| 34.76| 28.41| 21.82| 15.25| 34.39| 23.70| 18.95| 15.89| 24.15| | Farha et al.* [19]| -| 29.07| 23.83| 20.49| 12.77| 26.51| 17.78| 14.35| 11.19| 19.50| | FUTR [24]| ✓| 39.55| 27.54| 23.31| 17.77| 35.15| 24.86| 24.22| 15.26| 25.96| | FUTR* [24]| -| 28.84| 20.01| 16.65| 11.37| 22.48| 16.49| 13.21| 9.21| 17.28| | ActFusion (ours) | ✓| 39.55| 28.60| 23.61| 19.90| 42.80| 27.11| 23.48| 22.07| 28.39| | ActFusion* (ours) | -| 34.50| 26.17 | 20.27 | 11.87 | 34.58 | 22.75 | 17.31 | 11.33 | 22.75| | ActFusion† (ours)| - | **41.30** | **30.83** | **24.40** | 16.10 | **41.70** | **28.08** | 22.48 | **19.56** | **28.06**| - In the table, the bolded values represent the highest accuracy among the models that do not use ground truth length, ensuring a fair comparison. --- Rebuttal 3: Title: Test Data Leakage and Difficult Case Comment: Thanks very much. I really appreciate the additional experiments. It is helpful to see the new results without using ground-truth length, which are quite reasonable. It is surprising that all previous methods have utilised the ground-truth length during testing. I would flag this as a critical issue worth the community's attention. Revealing this issue using the above experiments might be a significant contribution to the community. Therefore, I have greatly increased my score, on the condition that the additional experiments will be included in the later version. And all the other experiments should also be 'totally' updated to this no ground-truth version. However, this might be too substantial and there is no mechanism to guarantee that. Overall, this is really a difficult case. Therefore, I have also changed my rating to the lowest confidence, and I would leave this to the chairs for the final judgement. --- Rebuttal Comment 3.1: Comment: We sincerely appreciate the reviewer’s constructive feedback, which has been invaluable in raising an important issue within the community and guiding us to improve our submission. We will ensure that all experiments in our final manuscript are fully updated without the use of ground-truth length during inference. As the reviewer zKDZ mentioned, we also believe that revealing this critical issue and rectifying it would be a significant contribution to the community. We will thoroughly analyze and bring to light the issues shared by previous methods (at least, including [19, 24]), ensuring that all models are compared under adequate and fair conditions.
Summary: This paper proposes a new unified diffusion model called ActFusion, which solves the tasks of Time Action Segmentation (TAS) and Long Term Action Prediction (LTA) in a joint learning framework. To unify the two tasks, the model effectively handles the visible and invisible parts of the sequence during the training phase; The visible part is used for observed video frames, while the invisible part is used for future expectations. The experiment showed that the model achieved state-of-the-art performance in standard benchmark tests of 50 Salad, Breakfast, and GTEA. Strengths: 1. The design of the ActFusion model is novel, with its anticipative masking strategy and random masking method unifying the tasks of TAS and LTA. 2. The model enhances its performance on both tasks through mutual promotion of TAS and LTA. 3. The reported results in the paper surpass existing techniques on multiple evaluation metrics, demonstrating significant performance improvements. Weaknesses: 1. What is the difference between the Diffusion model used in the paper and the DiffAct model? They appear to be similar overall, despite using different Masks to unify the tasks of TAS and LTA. 2. The paper employs so many loss functions for model training; the authors should analyze the impact of different losses rather than simply exploring the effects of the encoder and decoder. 3. Since the denoising process of the diffusion model is a time-consuming task, I am interested in the computational efficiency of the model proposed in the paper. The authors are advised to provide information on the model's GFLOPs, parameter scale, and inference time. 4. The authors need to further improve the interpretation of the figures, such as what the circles and triangles in Figure 1(c) represent. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the difference between the Diffusion model used in the paper and the DiffAct model? They appear to be similar overall, despite using different Masks to unify the tasks of TAS and LTA. 2. The paper employs so many loss functions for model training; the authors should analyze the impact of different losses rather than simply exploring the effects of the encoder and decoder. 3. Since the denoising process of the diffusion model is a time-consuming task, I am interested in the computational efficiency of the model proposed in the paper. The authors are advised to provide information on the model's GFLOPs, parameter scale, and inference time. 4. The authors need to further improve the interpretation of the figures, such as what the circles and triangles in Figure 1(c) represent. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In summary, the authors have creatively used different Masks to unify the tasks of TAS and LTA with a single model and achieved significant results. However, compared to existing models, there does not seem to be much modification, highlighting a disadvantage in innovation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Comparison with DiffAct]** Please refer to the general response to the comparison with DiffAct. ### **[Loss ablation studies]** We conduct ablation studies on the loss functions: boundary loss, smoothing loss, and cross-entropy loss. Table R4 presents the results, demonstrating that the combination of bounding loss $L_{bd}$ and smoothing loss $L_{smo}$ is effective for both TAS and LTA. While the effectiveness of these losses in TAS is well-documented in previous research [18, 64, 42], their impact on LTA has been less explored. Notably, the smoothing loss leads to significant performance gains in both tasks, indicating that smoothed predictions are beneficial. **[Table R4. Loss ablation studies]** **(a) Results on TAS** | $L_{\text{bd}}$ | $L_{\text{smo}}$ | $L_{\text{ce}}$ | F1@10 | F1@25 | F1@50 | Edit | Acc | Avg. | |------------|------------|------------|-------|-------|-------|------|------|------| | | | &check; | 88.4 | 86.5 | 79.1 | 82.5 | 84.9 | 84.3 | | | &check; | &check; | 91.3 | 90.0 | 84.5 | 86.3 | 88.8 | 88.2 | | &check; | &check; | &check; | 91.6 | 90.7 | 84.8 | 86.0 | 89.3 | 88.5 | **(a) Results on LTA** | $L_{\text{bd}}$ | $L_{\text{smo}}$ | $L_{\text{ce}}$ | $\alpha=0.2, \beta=0.1$ | $\alpha=0.2, \beta=0.2$ | $\alpha=0.2, \beta=0.3$ | $\alpha=0.2, \beta=0.5$ | $\alpha=0.3, \beta=0.1$ | $\alpha=0.3, \beta=0.2$ | $\alpha=0.3, \beta=0.3$ | $\alpha=0.3, \beta=0.5$ | |------------|------------|------------|-------------|-------|-------|----------|-------|-------|--------|----------| | | | &check; | 35.62| 27.04 | 20.17 | 15.93| 34.38 | 22.33 | 19.96 | 16.94| | | &check; | &check; | 39.19 | 28.99 | 23.13 | 19.45 | 39.53 | 25.19 | 22.67 | 19.88 | | &check; | &check; | &check; | 39.55 | 28.60 | 23.61 | 19.90 | 42.80 | 27.11 | 23.48 | 22.07 | ### **[Computational efficiency]** Table R5 compares the computational cost of our model with ASFormer [64] for TAS, and FUTR [24] for LTA, in terms of the number of parameters, GPU memory usage during inference, and inference time. For TAS, as shown in Table R5 (a), although ASFormer has fewer parameters with lower inference time, it requires more GPU memory during inference and obtains lower performance. To improve computational efficiency, we reduce the DDIM inference steps to 10 and 1. This reduction decreases inference time while maintaining superior performance over ASFormer. For LTA, as shown in Table R5 (b), our model is approximately eleven times smaller than FUTR, uses less GPU memory, but has a longer inference time. By reducing the DDIM inference steps to 1, our model achieves a similar inference time to FUTR. Overall, our model is practical and efficient since it can handle both TAS and LTA tasks with a unified model, eliminating the need for separate models and reducing GPU resource usage and the time required for separate training. Note that we use model checkpoints from the official GitHub repositories for all comparisons. **[Table R5. Computational efficiency]** **(a) Results on TAS** | method | # inference steps | Avg. Performance | # parameters (M) | memory (GB) | inference time (s) | |-------------------|-------------------|------------------|------------------|-------------|--------------------| | AsFormer [64] | 1 | 81.9 | 1.134 | 0.272 | 1.66 | | ActFusion (ours) | 1 | 86.0 | 1.576 | 0.164 | 0.42 | | ActFusion (ours) | 10 | 87.9 | 1.576 | 0.164 | 1.17 | | ActFusion (ours) | 25 | 88.5 | 1.576 | 0.164 | 2.01 | **(a) Results on LTA** | method | # inference steps | Avg. Performance | # parameters (M) | memory (GB) | inference time (s) | |-------------------|-------------------|------------------|------------------|-------------|--------------------| | FUTR [24] | 1 | 26.0 | 17.38 | 0.156 | 0.21 | | ActFusion (ours) | 1 | 26.2 | 1.576 | 0.151 | 0.26 | | ActFusion (ours) | 10 | 27.8 | 1.576 | 0.151 | 1.04 | | ActFusion (ours) | 25 | 28.4 | 1.576 | 0.151 | 2.14 | ### **[Explanation of Figure 1(c)]** Please refer to the general response for a detailed explanation of Figure 1(c). --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. The author has clarified most of my concerns. I keep my score as weak accept. --- Rebuttal 2: Comment: We thank reviewer uA7n for the motivating feedback. We are pleased to hear that most of the concerns have been addressed by our rebuttal. The results discussed in the rebuttal will be included in the final manuscript.
Summary: The author introduce a unified diffusion model for temporal action segmentation (TAS) and long-term action anticipation (LTA), dubbed ActFusion, where a single model is jointly trained to address these two problems effectively. A new anticipative masking is presented for the effective unification of two tasks, along with random masking to learn intra-action relations. ActFusion achieves the state-of-the-art performance on both TAS and LTA, demonstrating the effectiveness of joint learning of two tasks across standard benchmark datasets, 50 Salads and Breakfast, and GTEA. Strengths: * Originality * The paper presents a approach to integrating two popular vision tasks, temporal action segmentation and long-term action anticipation, within a unified model. This is the first time these two problems have been investigated together in a single framework, highlighting the originality of the research. * Clarity * The paper is clearly articulated and provides sufficient details. It offers an adequate background on diffusion model and how it is utilized. The inclusion of pseudo-code and actual code facilitates a better understanding for the reader. Additionally, the experimental settings are thoroughly described, which aids in the reproducibility of the results. Weaknesses: * The methodology lacks novelty. The model architecture, loss function (Cross-Entropy Loss, Temporal Smoothness Loss, Boundary Alignment Loss) and mask strategy (no mask, relation mask, boundary mask) closely resemble those used in the Diffusion Action Segmentation (https://arxiv.org/pdf/2303.17959v2). The primary distinction in this work is only its extension to include the long-term action anticipation task, and the introduction of anticipative masking. * The performance improvement relative to other state-of-the-art works is marginal. On TAS task, when compared to DiffAct, this model shows an approximate 1-point improvement across various metrics (F1, edit score, frame-wise accuracy) on different benchmarks (50 Salads, Breakfast, GTEA). * The model consistently achieves sub-optimal results when assessed using frame-wise accuracy as the metric on TAS task. As the authors point out, this could potentially be addressed by employing reconstruction methods for masked features. I am eager to see how these adjustments could enhance the model's performance. * More ablation study for LTA task could be included, e.g., How important is past context for the models (\alpha) ? How far into the future can models predict (\beta)? Table 2 reports only a limited range of settings. A more thorough analysis on these aspects would be highly valuable. Technical Quality: 2 Clarity: 3 Questions for Authors: * The approach is currently limited to predicting action labels within a closed set. To extend this work to predict open-set action labels, what can be done? * While "segmentation helps anticipation" is evident from table 3, "anticipation helps segmentation" is considerably less significant in table 4. What's the reason behind this? * If further training the model to reconstruct the original features from the masked features, would it also improve the LTA task? * Paper writing * What does circle and triangle means in figure 1(c)? * In table2, why there are multiple underlined values (suppose to be second-highest value) in each column? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: * Limitation * The author notes the sub-optimal performance when using frame-wise accuracy to evaluate the TAS task and suggests a potential solution, though its effectiveness remains unproven. I am eager to see how these proposed adjustments might improve the model's performance. * Potential negative societal impact * There is no potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Novelty]** Please refer to the general response for our novelty. ### **[Marginal performance]** We would like to clarify that the performance improvements achieved by our model are significant. Figure R1 in the pdf file illustrates the performance of the Top 10 TAS models for each dataset listed in Table 1, based on their average performance across all metrics. The average performance gain between the adjacent models is 0.5 percentage points (pp), 0.7 pp, and 0.6 pp for the 50 Salads, Breakfast, and GTEA datasets, respectively. Our model, ActFusion, achieves performance gains of 0.7pp, 0.4pp, and 1.1pp compared to the second-best models for each dataset, and 1.1pp, 0.4pp, and 1.5pp compared to DiffAct. We believe the performance gains are meaningful, with notable improvements in two datasets. ### **[Reconstruction of masked features]** Masked auto-encoding is a technique used in training NLP models like BERT [R1] and has recently been adapted to vision models [21, 27, 68]. Inspired by this approach, we train our model to reconstruct input video features from the masked tokens as an auxiliary task. Specifically, we employ MLP layers on the encoder embeddings to reconstruct the input video features and apply mean squared error (MSE) loss $L_{\text{recon}}$ during training. Table R2 shows the overall results on both TAS and LTA tasks. In TAS, overall performance increases. We conjecture that reconstruction helps the model gain a deeper understanding of the underlying data structure and temporal dynamics by predicting the missing parts of the input. In LTA, we find that reconstruction is more effective on relatively short-term anticipation. Since short-term predictions are often based on more immediate context, there is less uncertainty. As a result, reconstructing masked features helps the model capture immediate patterns and transitions more accurately. However, for long-term predictions, as the model attempts to predict further into the future, the uncertainty increases significantly. Long-term predictions involve more variables and potential changes, making them inherently less predictable. This increased uncertainty might cause performance degradation, making reconstruction less effective for action anticipation. **[Table R2. Effects of reconstruction loss]** **(a) Results on TAS** |$L_{\text{recon}}$ | F1@10 | F1@25 | F1@50 | Edit | Acc | Avg. | |-|-|-|-|-|-|-| | -| 91.6|90.7|84.8|86.0|89.3|88.5| | ✓| 92.0|90.9|86.6|86.9|89.6|89.2| **(a) Results on LTA** |$L_{\text{recon}}$|$\alpha=0.2, \beta=0.1$ | $\alpha=0.2, \beta=0.2$ | $\alpha=0.2, \beta=0.3$ | $\alpha=0.2, \beta=0.5$ | $\alpha=0.3, \beta=0.1$ | $\alpha=0.3, \beta=0.2$ | $\alpha=0.3, \beta=0.3$ | $\alpha=0.3, \beta=0.5$ | |-|-|-|-|-|-|-|-|-| |-| 39.55| 28.60| 23.61| 19.90| 42.80| 27.11| 23.48| 22.07| | ✓ | 40.80| 31.02| 25.59| 13.94| 46.56| 26.22 | 18.56| 16.15| [R1] J. Devlin et al. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL’19. ### **[More analysis on LTA]** Table R3 shows the LTA performance across different observation ($\alpha$) and prediction ($\beta$) ratios. Average anticipation performance improves as the observation range increases. With more observations, uncertainty about future actions is relatively reduced, leading to more accurate predictions. Conversely, average anticipation performance decreases as the prediction range increases. Predicting further into the future presents more challenges due to increased uncertainty, as future actions become less predictable and more variable. **[Table R3. Analysis of observation and prediction ranges in LTA]** || $\alpha = 0.2$ | $\alpha = 0.3$ | $\alpha = 0.4$ | $\alpha = 0.5$ | $\alpha = 0.6$ | $\alpha = 0.7$ | $\alpha = 0.8$ | **Avg.** | |-|-|-|-|-|-|-|-|-| | $\beta = 0.1$ | 39.56|42.81| 29.87| 37.07| 32.74| 27.53| 36.73| **35.2** | | $\beta = 0.2$ | 28.6| 27.11| 25.41| 27.16| 27.68| 26.91| 42.28| **29.3** | | $\beta = 0.3$ | 23.61| 23.48| 22.46| 24.50| 27.99| 29.94 | -| **25.3** | | $\beta = 0.4$ | 22.71| 21.04| 22.44| 25.03| 31.08| -| -| **24.5** | | $\beta = 0.5$ | 19.9| 22.07| 22.85| 28.08| -| -| -| **23.2** | | $\beta = 0.6$ | 19.28| 22.78| 25.71|-|-| -| - | **22.6** | | $\beta = 0.7$ | 19.93 | 25.62|-|-|-|-|-|**22.8**| | $\beta = 0.8$ | 23.34|-|-|-|-|-|-| **23.3** | | **Avg.**| **24.6**| **26.4**| **24.8**| **28.4**| **29.9**| **28.1**| **39.5**|| ### **[Extension to the open-set action recognition]** We appreciate the reviewer’s insight on extending our work towards open-set action recognition. To achieve this, we can use frozen image and text encoders from CLIP [R2] to obtain shared representations for actions and text embeddings. Similar to [R3], these embeddings can then be integrated into our model to enable open-set action recognition. We plan to explore this as a future direction for improving our approach. [R2] A. Radford et al. Learning transferable visual models from natural language supervision. In arXiv’21. [R3] D. Chatterjee et al. Opening the vocabulary of egocentric videos. In Neurips’23. ### **[Reasons: segmentation is more helpful on anticipation]** We find that segmentation greatly enhances anticipation, while the effect of anticipation on segmentation is less significant (L271-272). Segmentation directly improves anticipation by providing accurate contextual cues and action boundaries of the observations, enabling the model to make more precise future anticipation. In contrast, anticipation helps segmentation more indirectly. Anticipation encourages the model to consider long temporal relations of actions within an activity, which may not result in immediate performance improvement in segmentation. ### **[Explanation of Figure 1(c)]** Please refer to the general response for a detailed explanation of Figure 1(c). ### **[Multiple underlined values in Table 2]** Thank you for pointing out. The underline of the performance of DiffAct on the 50 Salads dataset will be removed. --- Rebuttal 2: Title: A gentle reminder Comment: Dear reviewer FW3D, We'd like to thank again for your effort and time dedicated to our submission. We've addressed your concerns in our rebuttal, and it would be very helpful if you could give us any further thoughts and update your scores before the author-reviewer discussion phase ends. Your opinion would be invaluable to us in improving our work, and we would be glad to respond further to your questions. Thank you for your consideration. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thank you to the authors for providing detailed explanations and conducting additional experiments. These have addressed all of my questions and concerns. Please ensure to include these in the final version of the manuscript. I will be adjusting my rating accordingly. --- Reply to Comment 2.1.1: Comment: Thank you for the response. We are glad to hear that most of the concerns have been addressed by our rebuttal. We would like to thank reviewer FW3D once again for the insightful comments for extensive experiments and directions for our work. We make sure to include all experimental results in the final manuscript.
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions. We are happy to see that the reviewers have given our work a positive evaluation, noting that “this is the first time these two problems have been investigated together in a single framework, highlighting the originality (FW3D)”, “the design of the AcFusion is novel (uA7n)”, “the model enhances its performance on both tasks through mutual promotion of TAS and LTA (uA7n)” and “it is also intuitive and reasonable to extend a generative framework from segmentation to anticipation, given the generative nature of anticipation (zKDz)”. Nevertheless, the reviewers also point out important comments that: 1. the novelty of the proposed method should be explained, 2. revealing the effects of learning reconstruction of masked features for TAS is suggested, 3. further analyses on LTA, loss functions, computational cost, and dataset are suggested. Through this rebuttal, we aim to clearly expose our novelty and provide further experimental results and analyses. We will revise the manuscript by incorporating the detailed comments from the reviewers. In the general response, we address the questions posed by all reviewers regarding novelty and the explanation of Figure 1(c). ### **[Novelty]** The primary novelty of the proposed method lies in unifying the two popular video tasks, temporal action segmentation (TAS) and long-term action anticipation (LTA). **The unification is not merely an extension but a novel framework that leverages the bi-directional benefits between TAS and LTA, maximizing synergies between these tasks.** None of the previous work [42, 24, 51] neither has introduced a single unified model to tackle the two tasks nor explored the bi-directional benefits. This integration is indeed crucial for practical applications, such as human-assistant robots, which need to recognize and anticipate future actions simultaneously. **To achieve successful task integration, we introduce two types of masking strategies: anticipative masking ($M^\texttt{A}$) and random masking ($M^\texttt{R}$).** Anticipative masking plays a crucial role in effective task integration and random masking leverages to learn intra-action relations from a video. **However, simply incorporating these masking strategies does not necessarily guarantee the optimal performance for both TAS and LTA.** In Table R1, we applied anticipative and random masking to DiffAct by replacing visual embeddings with zero vectors after encoder processing and using them as conditions for the diffusion process in the decoder. The results in Table R1 show that these masking strategies improve TAS performance but remain below existing state-of-the-art models for LTA [51, 19, 25, 24, 65], We hypothesize that this is likely due to the limited information from the zero vectors used in future anticipation, which does not fully leverage the information from the visible tokens. To address this, **we propose a learnable masking strategy, where input visual features are replaced with learnable mask tokens provided to the encoder.** These tokens are trained to learn temporal relations between visible and invisible parts through attention mechanisms. **In our model, both the encoder and decoder are trained to handle visible and invisible parts for effective task unification.** The introduced masking strategy maximizes synergies between the two tasks, leading to achieving state-of-the-art performance in both TAS and LTA in Tables 1 and 2. We believe our approach presents a novel integrative framework for unifying the two tasks by introducing an effective learnable masking strategy with two types of masking. **[Table R1. Effects of a learnable masking strategy]** (a) **Results on TAS** | method | $M^\texttt{A}$ | $M^\texttt{R}$ | F1@10 | F1@25 | F1@50 | Edit | Acc | Avg. | |---------|-------|-------|-------|-------|-------|------|------|------| | DiffACT | - | - | 90.1 | 89.2 | 83.7 | 85.0 | 88.9 | 87.6 | | DiffACT | ✓ | ✓ | 91.1 | 89.8 | 84.1 | 85.9 | 88.9 | 87.9 | | ActFusion (ours) | ✓ | ✓ | 91.6 | 90.7 | 84.8 | 86.0 | 89.3 | 88.5 | (a) **Results on LTA** | method | $M^\texttt{A}$ | $M^\texttt{R}$ | $\alpha = 0.2$ $\beta = 0.1$ | $\alpha = 0.2$ $\beta = 0.2$ | $\alpha = 0.2$ $\beta = 0.3$ | $\alpha = 0.2$ $\beta = 0.5$ | $\alpha = 0.3$ $\beta = 0.1$ | $\alpha = 0.3$ $\beta = 0.2$ | $\alpha = 0.3$ $\beta = 0.3$ | $\alpha = 0.3$ $\beta = 0.5$ | |---------|-------|-------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------| | DiffACT | - | - | 11.8 | 11.3 | 11.3 | 10.7 | 20 | 17.2 | 16.5 | 16.6 | | DiffACT | ✓ | ✓ | 30.3 | 27.0 | 19.1 | 11.3 | 37.4 | 22.1 | 15.6 | 13.0 | | ActFusion (ours) | ✓ | ✓ | 42.8 | 33.9 | 26.0 | 20.7 | 43.1 | 25.8 | 21.3 | 20.7 | ### **[Explanation of Figure1(c)]** We apologize for not providing detailed explanations for Figure 1(c). In this figure, the circles represent the main tasks the models are proposed to address, while the triangles indicate auxiliary tasks used during training but not evaluated. We will include the descriptions in the final manuscript. Pdf: /pdf/bafb504277f8451957755978f363d926d58a99b7.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
ManiBCI: Manipulating EEG BCI with Invisible and Robust Backdoor Attack via Frequency Transform
Reject
Summary: The paper presents ManiBCI, a novel backdoor attack method targeting EEG-based brain-computer interface (BCI) systems. ManiBCI leverages a three-stage clean label poisoning approach without needing access to the training phase of the target deep learning models. This method optimally selects EEG electrodes and frequency masks for each class using reinforcement learning. The attack involves injecting these learned masks into the EEG data, leading to high misclassification rates while maintaining the original task's accuracy. Extensive experiments on three EEG datasets demonstrate ManiBCI's effectiveness and robustness. The key contributions of this work are: (1) Introducing a new type of stealthy and effective backdoor attack for EEG data. (2) Proposing a method that can manipulate multiple classes simultaneously without requiring control over the model's training process. (3) Providing experimental evidence of the attack's success across various datasets. This research highlights potential vulnerabilities in EEG-based BCI systems, emphasizing the need for robust defense mechanisms. Strengths: * Introduces a novel and stealthy backdoor attack method for EEG-based BCI systems using frequency transform. * Demonstrates the ability to manipulate multiple target classes without needing access to the model's training phase. * Provides strong experimental evidence of the method's effectiveness and robustness across multiple EEG datasets. Weaknesses: * Standard baselines (fast gradient sign method and universal adversarial perturbation) are not included for comparison [1][2] * Limited to the datasets used in the experiments, raising questions about generalizability to other EEG datasets or real-world scenarios. * The practical implementation of the proposed attack might be complex and computationally intensive due to the need for reinforcement learning optimization. [1] Xiao Zhang and Dongrui Wu. On the vulnerability of CNN classifiers in EEG-based BCIs. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(5):814–825, 2019. [2] Zihan Liu, Lubin Meng, Xiao Zhang, Weili Fang, and Dongrui Wu. Universal adversarial perturbations for CNN classifiers in EEG-based BCIs, 2021. Technical Quality: 2 Clarity: 2 Questions for Authors: None Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments, and we'd like to express our appreciation that the novelty and strong experimental evidences of our work are well recognized. Below we have addressed your questions and concerns point-by-point. > Standard baselines (fast gradient sign method and universal adversarial perturbation) are not included for comparison [1,2] Thanks for providing the additional references. Actually, we have compared these adversarial-based methods [1,2] in our work (the baseline *AdverMT*). [1,2] are about adversarial perturbation on EEG BCI models, which is not the same as backdoor attack. Although backdoor and adversarial attacks are all about the vulnerability of deep models, backdoor attacks is different from adversarial perturbation in two ways: (1) Attacking phase: while adversarial perturbation attacks the model in the inference phase, backdoor attack injects a backdoor in the training phase; (2) Attacking objective: while adversarial perturbation aims to let deep models misclassify (the attacker doesn't care about the target class models will misclassify), backdoor attack aims to let deep models misclassify the samples with particular tirggers to target class (the attacker clearly knows the target classes models will misclassify, thus can manipulate the model's output by injecting different triggers). The adversarial perturbation can also be used as a trigger for backdoor attack, which has been researched in [3]. In our paper, we have compared [3] (the baseline *AdverMT*) at the multi-trigger and multi-target settings. As the adversarial perturbation is designed for single-target attack, it fails to attack multi-target classes. Please kindly refer to Table 1, it can be observed that our ManiBCI outperforms the adversarial-based backdoor attacks. > Limited to the datasets used in the experiments, raising questions about generalizability to other EEG datasets or real-world scenarios. EEG BCI tasks are diverse and can not be investigated whitin a single paper. What we can do is meticulously selecting some representative datasets to evaluate our methods. The datasets we used are cover three widely-studied tasks, and the configurations of EEG signals from these three datasets are quite different from each others: | Dataset | Emotion Recognition | Motor Imagery | Epilepsy Detection| P300 | |:-:|:-:|:-:|:-:|:-:| |Montages|unipolar|unipolar|bipolar|unipolar| |Electrodes|62|22|23|8| |Sampling Rates|200 Hz|250 Hz|256 Hz|250Hz| From table 1, our ManiBCI has been proven to be effective when facing various EEG signals with diverse montages, electrode numbers, and sampling rates. Namely, our work is generalizable when facing various situations. Hence, we have to argue that even though our methods are evaluated on three datasets (not more datasets), this does not negate the generalizability of our work to other EEG datasets or real-world scenarios. Furthermore, we evaluate our ManiBCI on another public dataset which studies the P300 tasks [4,5]. The attack performances of three different EEG models on the dataset are still excellent: | | Clean | ASR | 0 | 1 | |:-:|:-:|:-:|:-:|:-:| |EEGNet| 0.818 | 0.993 | 1.000 | 0.986 | |DeepCNN| 0.807 | 0.940 | 0.997 | 0.883 | |LSTM| 0.779 | 0.855 | 0.995 | 0.714 | It is worth mentioning that these results are obtained by only running the reinforcement learning 30 iterations, which takes only 0.5 hour on each model. These results can be another strong evidence to demonstrate the generalizability to other EEG datasets and real-world scenarios. Since our ManiBCI has sucessfully attacked EEG models on four different EEG tasks (emotion recognition, motor imagery, epilepsy detection, and P300 spell), where the EEG configurations of these four datasets are all different from each other. > The practical implementation of the proposed attack might be complex and computationally intensive due to the need for reinforcement learning optimization. We can't agree more that it is a little more time-consuming for the reinforcement learning (RL) optimization (which has been discussed in the Limitations sections in Appendix). Here, we present three ways to mitigate these problems: 1. Do not use any optimization algorithm and choose the injecting strategies randomly, which has a relatively good attack performance. As shown in Table 2, for random strategies, the ASRs are 0.771 for 3-classes, 0.857 for 4-classes, and 0.721 for 4-classes, all significantly exceeding chance levels. 2. Reducing the iteration numbers of the RL. The time we reported in Table 2 are counted when running K=250 iterations of RL. However, from Figure 14 in the appendix, it can be seen that though the learning of RL is nonstationary, some strategies with relatively high performances are acquired within the first 50 iterations. Hence, we can adjust the iteration according to actual requirements, e.g., in our experiments we can reduce the K to K=50 and save 80% of optimization time while not harming performance much. 3. Using tremendous EEG data, it might be possible that a general injecting strategy can be learned for each EEG BCI tasks, which can achieve a relatively good performance without any adpations. We anticipate that future studies can further refine and enhance the optimization process, leading to even more efficient, invisible, and robust backdoor attacks for EEG modality. [1] X Zhang, et al. "On the vulnerability of CNN classifiers in EEG-based BCIs", IEEE TNSRE, 27(5):814–825, 2019. [2] Z Liu, et al. "Universal adversarial perturbations for CNN classifiers in EEG-based BCIs", 2021. [3] L. Meng, et al. "Adversarial filtering based evasion and backdoor attacks to EEG-based brain-computer interfaces", Information Fusion, 2024. [4] U. Hoffmann, et al. "An efficient P300-based brain-computer interface for disabled subjects", J. Neurosci.Methods, 2008. [5] Rodrigo Ramele. P300-Dataset. https://www.kaggle.com/datasets/rramele/p300samplingdataset --- Rebuttal Comment 1.1: Title: Thanks for the clarificaiton Comment: Thank you for the detailed rebuttal. I have read it thoroughly. As I mentioned that 'my assessment is an educated guess,' I will not make the judgment alone but will discuss it with other reviewers to reach a comprehensive decision. Thank you. --- Reply to Comment 1.1.1: Title: Thank you for the prompt reply Comment: Thank you for carefully reading our paper and responses! Our work is a relatively new direction and we fully understand your decision. If you have further questions and concerns, please feel free to ask us and we're quite willing to address your questions. Thanks again!
Summary: This paper proposes a backdoor attack strategy for EEG, addressing three inherent issues: low quality, task variances, and morphology variances. The authors introduced a three-stage clean label poisoning attack. The proposed algorithm has been evaluated on three EEG datasets, demonstrating its effectiveness and robustness across datasets. This is an interesting work investigating backdoor attacks on EEG, and the customized strategy shows effectiveness in this particular domain. I believe this contribution will be beneficial to the community. Strengths: * This is a very interesting work, investigating backdoor attacks on EEG, and the customized strategy shows effectiveness in this particular domain. * The experiments are relatively sufficient and validate the claimed contributions adequately. Weaknesses: * I am not the expertise in BA domain. In terms of general EEG analsyis, one of my main concern is the experiment settings. In normal EEG analysis domain, we usually set inter-subejct and intra-subject settings. I failed to see the calrifications of these experiment settings. Whether this strategy can work across subjects, and generalize on the EEG signals collected from new/unseen subject? Technical Quality: 3 Clarity: 3 Questions for Authors: As mentioned in the Weakness area, please clarify the experimental settings. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations mentioned by the authors are appreciated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your appreciation of our work and the positive comments of "a very interesting work". Our point-by-point responses are as follows. > In terms of general EEG analsyis, one of my main concern is the experiment settings. In normal EEG analysis domain, we usually set inter-subejct and intra-subject settings. I failed to see the calrifications of these experiment settings. Whether this strategy can work across subjects, and generalize on the EEG signals collected from new/unseen subject? Our methods can work under both inter-subejct and intra-subject settings. For EEG BCI, it is easier and have better performance under intra-subject setting, or the subject-dependent setting due to the inter-subject variability of EEG data. In our previous experiments, our ManiBCI attack has excellent performances under the intra-subject setting, achieving attack success rate (ASR) of over 90% on three datasets while not influencing clean accuracy (CA). However, only using one's EEG data is simple and not generalizable, thus, we follow the previous EEG backdoor attack work [1] and adopt the same poisoning attack process. This poisoning attack is a inter-subject setting, presenting more challenges but will have a wider application for more senarios. The whole attack process are as follow: 1. For a dataset contains N subjects, we select one subject as the poisoning set D_p, and only use the EEG data from D_p to generate poisoned data. Thus, the poisoned data all comes from the selected subject. 2. We perform a cross-validation test on the rest N-1 subjects, that is, select one subject as the test set D_test, and the rest N-2 subjects compose the training set D_train. 3. Randomly choose C triggers from D_p, C is the number of classes. 4. Run the reinforcement learning on the D_p, D_train, and D_test. Specifically, the policy network outputs a policy P, we generate the poisoned data S_p with EEG data from D_p used P. Combine S_p and D_train to acquire the dataset S and train a backdoor model on S. Calculate the CA and ASR of backdoor model on D_test. Finally, update the policy network with the testing CA and ASR. 5. After we run the reinforcement learning multiple times and get the final best policy P, adopt the policy P to generate poisoned data, train backdoor model, and calculate the best CA and ASR on D_test. 6. Back to step 2, and choose the next subject from N-1 subjects as the new test set D_test, and the remaining N-2 subjects composes the training set D_train. We have to choose every single subject as the test set, resulting in the process repeating N-1 times. At last, we reported the average of these N-1 results as our final results. This whole process will run 3 times (choose 3 different subjects as poisoning set D_p for eliminating the influence of the selection of poisoned subjects). Actually, we have brefily described the whole process in section 3.1 from line 106-116 due to the page limitation. Of course, our response is more detailed and hope it can address your concerns. If you want to know the details of our ManiBCI, please kindly refer to the PDF file in the general response, where we write an algorithm to demonstrate the process of frequency injection and reinforcement learning. [1] L. Meng, et al. “EEG-based brain-computer interfaces are vulnerable to backdoor attacks,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023.
Summary: Unfortunately, the authors begin the manuscript by demonstrating a lack of knowledge about the topic. They claim that deep learning (DL) has been highly successful in the field of brain-computer interfaces (BCI) based on electroencephalogram (EEG) data. However, in reality, the application of deep learning in the BCI or EEG field is limited, and shallow learning with simple hand-engineered features is still the gold standard. Therefore, the paper's claims about the vulnerabilities of machine learning models seem to be more like science fiction and do not meet the standard of the NeurIPS. Strengths: Hard to spot any strength as this is an artificial toy example. Weaknesses: Lack of connection with real-world problems, especially the BCI and EEG fields, where shallow learning remains gold standards with non-existent vulnerabilities. ML in BCI has been trained for each subject at the bedside. Technical Quality: 1 Clarity: 1 Questions for Authors: Why did the authors create a science fiction problem that doesn't exist and then develop a theoretical methodology for it? Confidence: 5 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: No application in the real world and a completely trivial problem below conference standards. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your time and effort in reviewing our work. We are learning from all the feedbacks from the reviewers and feel that this is a great opportunity to exchange ideas deeply. Therefore, we're making the following statements to ignite more discussion since we value more on the advances of the whole EEG BCI area. > However, in reality, the application of deep learning in the BCI or EEG field is limited, and shallow learning with simple hand-engineered features is still the gold standard. We acknowledge that for specific EEG BCI field like seizure detection, simple hand-engineered features are still the gold standard due to the interpretability of these features has been well studied for decades. For the clinical field, the interpretability plays the most important role in making responsible diagnosis. However, EEG BCI's applications are not constrained within the clinical field. EEG (especially scalp EEG) has been widely applied for many other interesting neuroscience researches, like decoding visual perceptions [1] and emotion recognition [2]. In [3], Deep ConvNet achieves better performance than Shallow ConvNet on some tasks. Recently, deep models pre-trained on large EEG dataset significantly outperforms the shallow models in a wide range of EEG field [4, 5]. Also, adopting hand-engineered features does not conflict with deep learning methods. In [4], DE feature is first extracted from EEG, then is used for pre-training the powerful deep encoders via masking. But should we rely solely on hand-engineered features? The most common features of EEG are power spectral density (PSD) and differential entropy (DE), which are all calculated using the frequency information of an EEG segment. For a T-length EEG segment, the PSD or DE feature only extracts a single value from it, resulting in the massive loss of information (T values -> 1 value). Of course, it is effective and convenient when datasets and computing resources were relatively small, but we should acknowledge that hand-engineered features can be biased and will ignore some informations (like DE amplifies the differences in high-frequency bands). Next, we would like to share some opinions of deep learning vs shallow learning. Data is the key factor for training deep models, if there is not enough data, deep models will overfit to the training set and thus are not competiable with shallow models. Let's take computer vision as an example, looking back to the era when there is no big dataset like ImageNet, the shallow learning with simple hand-engineered features (e.g., SIFT and HOG) was the gold standard for image classification. But what happened after the proposition of big dataset? Firstly, the Alexnet outperforms all hand-engineered features with 8 convolutional layers [7]. Then Resnet refreshed the record with up to 152 layers [8]. Until 2024, the deep models are already far ahead in CV for decades. Limited by the EEG signal acquisition technology in the past years, the amout of EEG data is too small to train a good deep models. But with the development of EEG acquisition technology and more EEG data being collected, some works have already trained a powerful deep encoders with tremendous EEG data that outperforms shallow net [4,5]. With the amount of EEG data becomes larger, why not embrace the deep learning that has been revolutionizing many fields? We can see the success of AlphaGO, AlphaFold, and ChatGPT, and we hope scientists can develope a powerful EEG model (like detecting epilepsy with 99% accuracy), which definetly will save lives of millions of people. > Lack of connection with real-world problems, ... > Why did the authors create a science fiction problem ... ? Security is always a important topic no matter it can work instantly or in the future for warning nowadays people of possible future safety issues. Seeing the promising future of deep learning for EEG BCI [4,5], there are large possibilities that deep EEG models will be deployed across diverse fields. Our work proposes a threatening security issues, which must be considered when deploying EEG models. Moreover, the EEG models tested in our experiment contains some widely-used EEG models, like EEGNet [6] and Deep ConvNet [3], and a shallow network which only possesses one LSTM layer and a linear layer. The results demonstrated that these common EEG models are easily to be injected backdoors, suggesting that we must consider the threatening security issues. Thus we would like to argue that our work is not having “no application in the real world”. Instead, our work alerts people to the possibilty of the EEG BCI models can actually be manipulated by an invisible and robust backdoor attacks, which leads to severe results if be ignored. As discussed in the Broader Impacts, our ManiBCI can also be used for protecting intellectual property of EEG datasets and EEG models with watermarking, which also negate the comment that "lack of connection with real-world problems". [1] Song Y, et al. "Decoding Natural Images from EEG for Object Recognition.", in ICLR, 2024 [2] Li X, et al. "EEG based emotion recognition: A tutorial and review." ACM Computing Surveys, 2022 [3] Schirrmeister, Robin Tibor, et al. "Deep learning with convolutional neural networks for EEG decoding and visualization." Human brain mapping, 2017 [4] Yi K, et al. "Learning topology-agnostic eeg representations with geometry-aware modeling.", in NeurIPS, 2023 [5] Jiang W-B, et al. "Large brain model for learning generic representations with tremendous EEG data in BCI." in ICLR, 2024 [6] V. J. Lawhern, et al. “EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces,” Journal of Neural Engineering, 2018 [7] K, Alex, et al. "Imagenet classification with deep convolutional neural networks.", in NeurIPS, 2012 [8] He, K, et al. "Deep residual learning for image recognition.", in CVPR, 2016 --- Rebuttal Comment 1.1: Title: Comments after author's rebuttal Comment: Thank you for your feedback. The reviewer, "a BCI practitioner," has carefully considered the authors' rebuttal and the comments from other reviewers. After all reviewers and the area chair have deliberated, the final decision will be made. Regrettably, the reviewer did not find the authors' detailed rebuttal convincing. The proposed idea, with all respect to the authors' efforts, appears to be artificial and unrealistic compared to the current state-of-the-art in BCI technology. --- Reply to Comment 1.1.1: Title: Responses to the reviewer bcDT Comment: Thanks for your reply. With all respect to your expertise as a BCI practitioner, we hold an opposite opinion as BCI researchers. It is a very common phenomena that there is a preference gap between practical application and academic research. The industry needs application that works now, while the academia prefers future-oriented research. Of course, it is impossible for everyone to have the same research philosophy. We respect your different opinions. But we believe our work, which studies the problems be raising in the future, is a good and novel work.
Summary: This paper presents an EEG backdoor for manipulating EEG BCI, called ManiBCI, where the adversary can arbitrarily control the output for any input samples. Experiments conducted on three EEG datasets demonstrate the effectiveness of ManiBCI; which easily bypass existing backdoor defenses. Strengths: - A backdoor attack for EEG BCI where the adversary can arbitrarily manipulate which target class the EEG BCI will misclassify without engaging the training stage. - The use of EEG electrodes and frequencies in EEG backdoor attacks with reinforcement learning. - Several experiments have been conducted to assess the proposed method. Weaknesses: - The proposed methodology is not well described. It mainly based on the application of Fourier transform and reinforcement learning. Technical Quality: 3 Clarity: 3 Questions for Authors: It is suggested to describe well the proposed method by highlighting the novelty and originality of the proposed contribution. It is also suggested to summarize the proposed method as an algorithm. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. The limitations were addressed in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below we have addressed your questions and concerns point-by-point. **Weaknesses** > The proposed methodology is not well described. It mainly based on the application of Fourier transform (FFT) and reinforcement learning (RL). Thanks for your advise, there is absolutely room for improvement in our descriptions of methods, and we will refine our paper. We submit the algorithm in the general response's PDF file, please kindly refer to it. Meanwhile, we would like to take this opportunity to emphasize that though our method builds upon the established techniques (FFT and RL), the novelty of our work is not in implementing these techniques per se. Instead, it lies in applying these techniques in a new and challenging domain: how to inject invisible and robust triggers into the EEG modality? Our work is at the intersection of safety and EEG BCI, where the focus is not solely on inventing new tricks or models for the RL or FFT. Before our work, the backdoor attacks designed for EEG modality either require engaging the training stage, or fail to maintain high stealthiness. Instead, our work successfully designed an invisible and robust trigger in the frequency domain without engaging the training stage, offering a novel perspective of backdoor attack for multi-channel EEG modality. We appreciate that you can acknowledge these contributions when summarizing the strengths of our paper. Our work alerts people using EEG BCIs to potential safety issues and calls for defensive studies to counter ManiBCI for EEG modality. We truly value your suggestion and will add more details to the paper. **Questions** > It is suggested to describe well the proposed method by highlighting the novelty and originality of the proposed contribution. It is also suggested to summarize the proposed method as an algorithm. Thanks for your contructive and helpful suggestion! Adding an algorithm can make our method more readable and clear. We have written an algorithm and will add it in our future version. Please kindly refer to the PDF file in the general response. --- Rebuttal Comment 1.1: Title: Thanks for the clarificaiton Comment: Thank you for addressing my comments also for the detailed rebuttal. A discussion will be made with other reviewers to reach a comprehensive decision. --- Reply to Comment 1.1.1: Title: Thanks for reading our responses Comment: We are glad for having addressed your concerns! We sincerely value the reviewer's suggestions and we will update the methodology section accordingly.
Rebuttal 1: Rebuttal: We are grateful to all four reviewers and AC/SACs for their valuable time, insightful comments, and useful suggestions. We will carefully revise our paper according to the comments. Our point-by-point response to the reviewers’ comments has been added to the individual chat box for each reviewer. We believe that the revised manuscript has been enhanced and the concerns have been well addressed. Moreover, as requested by Reviewer hi3h, we submit the algorithm of our ManiBCI in the PDF file. Many thanks again! Pdf: /pdf/8608085d45cca16c7d6d23e183a53b670373a5d2.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Understanding the Role of Equivariance in Self-supervised Learning
Accept (poster)
Summary: This paper provides a theoretical understanding of equivariant self-supervised learning (E-SSL) methods and their effectiveness. The authors propose an information-theoretic analysis that explains the synergy effect between class information and equivariant transformations, leading to improved downstream performance. They also identify three principles for E-SSL design: lossy transformations, class relevance, and shortcut pruning. The paper further discusses the importance of model equivariance and demonstrates its advantages in E-SSL. Overall, the contributions of this paper lie in the theoretical explanation of E-SSL and the guidance it provides for designing effective E-SSL methods. Strengths: 1. The article is well-structured, and the arguments are clear. 2. The author analyzes how equivariance affects self-supervised learning from an information theory perspective and proposes three principles for designing E-SSL. 3. The theoretical validity is demonstrated through theoretical proofs and some small experiments. Weaknesses: 1. The experiments are conducted on few datasets which is not very convincing. 2. There exist some typos in the paper, such as the “xIn” in line 352. 3. It is better for the authors to provide a design enhancement example and conduct experiments to prove the effectiveness of the example, as it only explains the success of previous E-SSL works. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is 89.49% mentioned in line 279 the ACC of the SimCLR framework during testing? Even with the enhancement of SimCLR, the task of predicting rotation only achieved a maximum effect of 59.06 in downstream tasks. These two values are not very close. 2. In the paragraph starting from line 328, it is believed that the action space of the instance discrimination task is N, but I think it is actually a combination of many binary classification tasks with an action space of 2. So I think contrastive learning is not a special case to equivariant tasks. 3. Principle 2 states that it is best to be category related, does it mean that category label information needs to be known during pre-training stage in SSL? 4. Why can predictive patches like MAE be considered as an equivariant SSL task? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## We thank Reviewer PgKg for appreciating our writing and theoretical contributions. Below we address each of your concerns. --- **Q1.** The experiments are conducted on few datasets which is not very convincing. **A1**. Following your suggestion, we further validate our findings on ResNet-50 and CIFAR-100. The results all agree with our findings in the main paper, that 1) better rotation prediction brings better test error, and 2) model equivariance contributes to better test classification. Notably, model equivariance contributes even more on CIFAR-100. *Results on CIFAR-100.* | Augmentation | Network | Train Rotation ACC | Test Classification ACC | Gain | | --- | --- | --- | --- | --- | | none | ResNet18 | 99.83 | 11.31 | | | | EqResNet18 | 100 | 32.38 | + 21.07 | | crop+flip | ResNet18 | 90.94 | 13.19 | | | | EqResNet18 | 99.88 | 49.47 | +36.26 | | simclr | ResNet18 | 68.29 | 10.65 | | | | EqResNet18 | 82.69 | 37.11 | + 26.46 | *Results on ResNet-50 (crop+flip) on CIFAR-100.* | Network | Train Rotation ACC | Test Classification ACC | | --- | --- | --- | | ResNet18 | 90.94 | 13.19 | | ResNet50 | 99.30 | 14.53 | --- **Q2.** There exist some typos in the paper, such as the “xIn” in line 352. **A2**. Thanks for pointing out. We will fix them in the revision. --- **Q3.** It is better for the authors to provide a design enhancement example and conduct experiments to prove the effectiveness of the example, as it only explains the success of previous E-SSL works. **A3**. We note that the key focus of this work is to establish the first theoretical understanding of E-SSL, instead of devising a new variant. The former may have a larger significance because it is more fundamental to this field. Driven by our theoretical insights, we also first study **combining model equivariance with training equivariance for E-SSL**, and it improves RotNet by **25.22% on CIFAR-10 and 36.26% accuracy on CIFAR-100**. Remarkably, we are the first to attain comparable performance **by predicting augmentations alone** (while previous methods rely on merging with contrastive learning). This illustrates that our analysis can open doors for new paths for E-SSL designs. --- **Q4.** Is 89.49% mentioned in line 279 the ACC of the SimCLR framework during testing? **A4**. Indeed, we made a typo here. Later, when merging with the model equivariance (Sec 5.3), its performance (82.26%) can be close to SimCLR. We will fix this confusion in the revision. --- **Q5**. In the paragraph starting from line 328, it is believed that the action space of the instance discrimination task is N, but I think it is actually a combination of many binary classification tasks with an action space of 2. So I think contrastive learning is not a special case to equivariant tasks. **A5**. We understand that contrastive learning (CL) can be understood as many binary classification tasks between positives and negatives. Nevertheless, this understanding defines **infinitely many** binary tasks and **every task has been seen only once** during training, a setting rarely studied in the ML literature. The deviation from common learning tasks makes it hard for us to have a solid understanding of how contrastive learning actually behaves. Instead, it is a common and natural way to understand CL as instance classification (IC). In fact, in the seminal work by Wu et al. [1] (prior to InfoNCE) that **first proposed contrastive learning** (their loss is the same as InfoNCE), they do regard CL as **a non-parametric approach to IC**, as the title suggested. We refer to [1] for a detailed explanation of this connection. We further note that for a very large number of classes like IC, it is common to subsample the classes in the denominator of CE loss. A well-known example is the **negative sampling technique proposed in word2vec [2]** for learning on a very large vocabulary. In instance classification, existing works also subsample the negative classes [1,3], leading to a mini-batch number of negative samples during training. Hope this elaboration address your concern! We will clarify this part in the revision. Ref: [1] Wu, Zhirong, et al. "Unsupervised feature learning via non-parametric instance discrimination." CVPR. 2018. [2] Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." NeurIPS 2013. [3] Cao, Yue, et al. "Parametric instance classification for unsupervised visual feature learning." NeurIPS 2020. --- **Q6**. Principle 2 states that it is best to be category related, does it mean that category label information needs to be known during pre-training stage in SSL? **A6**. No. For SSL, the labels are not supposed to be known during pretraining. What we are discussing in Principle 2 is that the choice of the augmentation type (eg the rotation or the flip) should be related to the class labels, in the sense that extracting class-relevant information from the input is helpful for predicting these transformations (no class label required) during E-SSL pretraining. We will make it clear in the revision. --- **Q7**. Why can predictive patches like MAE be considered as an equivariant SSL task? **A7**. In this paper, we consider a general notion of equivariant SSL, in the sense that the representations is learned to be aware of the pretext, eg aug parameters. MAE augments by masking patches at random **positions $p$ (the aug parameter in MAE)**. Then, MAE takes the position $p$ as input, and learns to reconstruct images at these locations, thus its representations can adapt to the input position variables $p$, in contrast with contrastive learning whose representation is global and invariant to augmentations. Thus, we regard MAE more as an equivariant method. We will explain it in the revision. --- We hope this clarifies your questions. If you would find it satisfactory, we respectfully hope that you may consider re-evaluating our work based on the revisions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have checked the rebuttal and tend to keep my score.
Summary: This paper aims to reduce the gap between theory and practice for equivariant SSL, which refers here to the sub-family of discriminative SSL methods that do not enforce the invariance of representations to augmentations. The authors propose an explanation based on the "explain-away" (_“Explaining away” is a common pattern of reasoning in which the confirmation of one cause of an observed or believed event reduces the need to invoke alternative causes._) and conclude that this principle justifies strictly positive mutual information between the pre-task labels (e.g., the rotation angle) and the downstream task label (i.e., semantic class label) thereby explaining why the pretext and downstream tasks are not systematically misaligned. Strengths: - Relevance of the research question: while an intuition as to why E-SSL performs better than a random encoder might be easy to formulate (see below in weaknesses), a theoretical formulation supporting these intuitions eludes. A principled explanation would help the community in the design of better and more principles E-SSL approaches. Weaknesses: My main concern regards the soundness of the contribution, the reasoning leading to this concern follows: - in this work, the authors use the "explaining-away" principle which relies on the proposed latent variable model shown in Figure 2 to justify E-SSL performance. Authors consider a partitioning of the latent information in $C$ and $S$, the former encapsulates semantic information, the latter the non-semantic information (should hence include the object position & orientation, textures). - I believe a reasonable intuition as to why E-SSL (e.g., predicting the rotation angle of an image as a pretext task and then performing semantic classification) can perform better than a random encoder - as shown by authors in Figure 1 -, is because of a selection bias in images used for training: each semantic class/object is more likely to be depicted in a specific orientation (e.g., a person standing) than in any other orientation (i.e., a person upside down). This means that the pretext and downstream tasks are not completely misaligned: being aware of the class information allows one to make a better guess about the rotation angle than a random guess. If an object is as likely to be depicted in one orientation than any other orientation than the class label is uninformative to make a guess regarding the rotation angle. The difference in results between horizontal and vertical flip in Figure 1 supports that intuition. The explanation proposed by the authors does not support that intuitive explanation of E-SSL but instead seems to propose a overly-simplified - The explaining-away principle justify a strictly positive mutual information between $\bar{X}$ and $A$ which authors extend to a strictly positive mutual information between $\bar{C}$ and $A$. It is not obvious from the get-go how authors can theoretically justify this extension. To summarize, the two aforementioned points should be further discussed by the authors to confirm the soundness of the proposed explanation which seems to take shortcuts both from an intuitive and theoretical perspective. I would suggest discussing these points more thoroughly. Minor comments: - Relevance of certain results: Theorem 2 is tied to an over-simplified mixing model ($ X = A + C$) - Rigor: - Notation: notations should be consistent throughout the paper (e.g., line 250 and 256, line 236/237 and line 240) and assumptions should be made clear (e.g., eq (3) and line 293, lambda=1 is not stated) - Principle III: refers to theoretical results - "style features may also explain the equivariant target A" - , which have not been explicitly shown (in section 4.1 or in the appendix) - Contrastive learning vs. Instance discrimination: while instance discrimination performs classification with a number of classes equivalent to the number of training instances, contrastive learning like SimCLR usually discriminates at a batch level hence I believe line 339 does not accurately describe contrastive learning but rather instance discrimination. - References for claims: "style features are often easier for NN learning" - line 260, - Typos: line 352, 68 Technical Quality: 3 Clarity: 2 Questions for Authors: - Can you please clarify what is meant by "global" vs. "local" transformations (principle II)? - Can authors rigorously justify the extension of the explaining-away principle from X to C? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are not explicitly discussed but do mention the use of simplified models (e.g., section 4.3) for theoretical purposes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer TYZN for a critical reading of our work. We carefully examine the intuition you propose, and find that it actually fits well into our theory. --- **Q1.** Discussion on the intuitive understanding of why E-SSL works and how it fits into our theory. **A1**. We resonate with your insight that for rotation prediction to be effective, the class/object itself must exhibit a rotational bias (i.e., it is a **necessary** condition). Nevertheless, this intuition lacks a solid theoretical explanation. We find it **naturally fits as a piece of our explaining-away framework, where it can have a rigorous explanation**. To extend our current analysis, besides the causal diagram in Fig 2, we further consider the inherent rotation angle $\bar A$ and an edge $\bar A\to \bar X$, i.e., the original object is also generated from its intrinsic angle. Notice that now there is a new collider structure $C\to \bar X\leftarrow \bar A$, which means that $I(\bar A;C|\bar X)>0$, which rigorously explains why class information $C$ is helpful for predicting the intrinsic rotation $\bar A$ from the original object $\bar X$. Since the intrinsic rotation $\bar A$ is unknown, RotNet further applies a (known) random rotation $A$ to $\bar X$ and get $X$, and the rotated image has an angle $A'=\bar A+A$. Apparently, figuring out $\bar A$ is directly helpful for predicting the random $A$ (the E-SSL target). Formally, we have a collider structure $\bar A\to X\leftarrow A$ and have a positive synergy $I(A;\bar A|X)>0$. Thus, our explaining-away analysis justifies that 1) knowing $C$ is also helpful for predicting $\bar A$, and 2) knowing $\bar A$ is helpful for predicting $A$ as well. In this way, introducing $\bar A$ does not invalidate our previous theory but provide **more fine-grained analysis on the way of how class information helps rotation prediction**. Thank you again for this valuable perspective. We will definitely incorporate it and acknowledge your insights. --- **Q2**. Theorem 2 is tied to an over-simplified mixing model ($X=A+\lambda C$). **A2**. We note that Theorem 1 already gives us **a general characterization that is agnostic of the data distribution or the model class**. Therefore, our framework and analysis does apply to general real-world scenarios. Due to its generality, it is also hard to obtain a very fine-grained quantitative analysis. Thus, the purpose of Thm 2 is to gain more quantitative insights with a linear model. Note that linear data assumptions are also commonly adopted in SSL theory, eg [1]. We leave more complex cases if left for future work. **Ref:** [1] Wen, Zixin, and Yuanzhi Li. "Toward understanding the feature learning process of self-supervised contrastive learning." ICML. PMLR, 2021. --- **Q3**. Rigor: > Notation: notations should be consistent throughout the paper (e.g., line 250 and 256, line 236/237 and line 240) and assumptions should be made clear (e.g., eq (3) and line 293, lambda=1 is not stated) Thanks. We will fix it. > Principle III: refers to theoretical results - "style features may also explain the equivariant target A" - , which have not been explicitly shown (in section 4.1 or in the appendix) Indeed, we didn’t elaborate the style features here. Similar to the intrinsic rotation variable $\bar A$, the feature variable $S$ also constitutes the object $\bar X$ (with a causal edge $S\to \bar X$). Due to the explaining-away effect in the collider structure $S\to X\leftarrow A$, $S$ can also explain and help predict $A$ with $I(S;A|X)>0$. Therefore, style features, as an easy-to-learn shortcut, can lead the model to learn fewer class features. We will elaborate this relationship rigorously in the revision. > Contrastive learning vs. Instance discrimination We note that contrastive learning can be seen as a minibatch approximation to instance classification by random subsampling a few instances (e.g., 2048) at each time. In expectation, the objective is still distinguishing the positive instance from all the other negative instances in the dataset. **This negative class sampling technique is widely used in standard classification tasks, e.g., negative sampling in word2vec** [1]. Thus, we think it is plausible to regard contrastive learning essentially as an instance classification task. References: [1] Mikolov et al. Distributed Representations of Words and Phrases and their Compositionality. NeurIPS 2013. --- **Q4**. Reference for claims and Typos. **A4**. Thanks. We will add the references and fix the typos. --- **Q5**. Can you please clarify what is meant by "global" vs. "local" transformations (principle II)? **A5**. Here, local changes refer to those that only modify pixel values but do not modify the pixel’s positions (eg color inversion). Therefore, to predict these operations, the NNs only need to look at each pixel’s values without relying on global context. Instead, global operations change pixel positions (e.g., rotation, flip), and in these cases, figuring out the global content can facilitate the prediction of these global changes. We will clarify it in the revision. --- **Q6**. Can authors rigorously justify the extension of the explaining-away principle from X to C? **A6**. Of course. In fact, according to probabilistic graphical models, **as long as there is a collider structure between the variables**, there will be an explaining-away effect. In the causal structure $C\to \bar X\to X$, we can also omit the intermediate node $\bar X$ (which also holds since $X$ is causally dependent on $C$) and write the collider structure $C\to X\leftarrow A$, which implies the explaining-away effect with $I(A;C|X)>0$. --- Thank you for your careful reading and for sharing your insights. If you would find our explanations satisfactory, we respectfully ask you to consider re-evaluating our work based on the revisions. We are also happy to address any remaining concerns during the discussion stage. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: Thank you to the authors for their efforts in addressing my comments. I have re-read the paper and read the other reviews/rebuttals. I am happy to increase for the following reasons: - authors have cleared my concerns regarding the soundness of their theoretical results (i.e., use of explaining away effect) - going through the material once again, I see better how my intuition can be explained by the author's work, I believe connecting the author's contributions with a higher level intuition the reader might have regarding E-SSL and why it works would benefit the paper and hope the authors will adjust the manuscript accordingly. Overall, I believe the paper contributes to a better understanding of E-SSL methods. I believe the manuscript should be updated to incorporate the distinction between CL and ID made by the authors in the rebuttal for completeness.
Summary: This study contributes theoretical insights that enhance our understanding of conventional practices in SSL training. Building on these theoretical foundations and supported by experimental evidence, the study puts forward several principles for the practical implementation of equivariant self-supervised learning designs. Strengths: This study presents theoretical findings corroborated by experimental results. Specifically, the theory enhances our understanding of the design of augmentation functions. These findings are crucial as previous designs of augmentation functions have largely relied on empirical exploration. Weaknesses: Experiments are conducted using smaller-scale models. Typo in line 352. Technical Quality: 3 Clarity: 3 Questions for Authors: Could the principles 1,2,3 be applied to other domains beyond vision-based SSL? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer yP9X for acknowledging our contributions to theoretical understandings. Below, we further address your concerns on the empirical side. --- **Q1.** Experiments are conducted using smaller-scale models. **A1.** Thank for your advice! The experiments are mainly designed to validate our theoretical insights, and as shown in previous works, E-SSL methods generalize well across different model sizes. Following your suggestions, we further validate our findings on a larger model ResNet-50 and a more complex dataset CIFAR-100. As shown below, the results agree well with our findings in the main paper, that 1) better rotation accuracy under a larger model brings better test classification accuracy; and 2) model equivariance brings significant gains. Notably, the improvement of model equivariance is even more significant on CIFAR-100, considering that CIFAR-100 has 100 classes whose accuracy is harder to improve. *Results on CIFAR-100 (akin to Table 2 on CIFAR-10).* | Augmentation | Network | Train Rotation ACC | Test Classification ACC | Gain | | --- | --- | --- | --- | --- | | none | ResNet18 | 99.83 | 11.31 | | | | EqResNet18 | 100 | 32.38 | + 21.07 | | crop+flip | ResNet18 | 90.94 | 13.19 | | | | EqResNet18 | 99.88 | 49.47 | +36.26 | | simclr | ResNet18 | 68.29 | 10.65 | | | | EqResNet18 | 82.69 | 37.11 | + 26.46 | *Results on ResNet-50 (with crop+flip) on CIFAR-100.* | Network | Train Rotation ACC | Test Classification ACC | | --- | --- | --- | | ResNet18 | 90.94 | 13.19 | | ResNet50 | 99.30 | 14.53 | --- **Q2**. Could the principles 1,2,3 be applied to other domains beyond vision-based SSL? **A2**. Yes! Our principles are based on a theoretical analysis of the **general methodology of E-SSL**, not tailored to vision-based SSL. In other domains (text, graph, speech, time series etc), there are also many similar SSL methods that can be understood as E-SSL in the general sense (see e.g., the discussion of MAE and BERT in Sec 5.1). Taking the **BERT model in NLP** as example, our theory can understand its success from these the three principles: - **Principle I:** First, BERT learns representations by predicting the masked words (task $A$), and it is shown that a large masking ratio is important for learning useful features (for task $C$). As analysed in Principle I, this is also because we need to corrupt the input enough in order for the high-semantic class-related features to be utilized for masked prediction. - **Principle II:** Meanwhile, class information (high-level text semantics) is indeed helpful for masked prediction, thus demonstrating the importance of class relevance (Principle II). - **Principle III:** At last, masking prediction as a token-level dense prediction task helps prevent the shortcut features (Principle III). Thus, the insights of our principles are general and not limited to a certain modality. As the first theoretical work in the field of E-SSL, it can help explain many existing E-SSL-like approaches as well as inspire new designs in multiple domains. --- Hope the explanation above address your concerns! Please let us know if there is more to clarify. --- Rebuttal Comment 1.1: Comment: Thank you for conducting the additional experiments and providing the detailed discussions in response to the question. The authors' rebuttal has addressed and resolved previously raised concerns. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for the prompt response! We are very glad to hear that your concerns are now resolved in our rebuttal. We will be sure to incorporate these discussions in our revision.
Summary: This paper proposes a theoretical and empirical study of the role of invariant and equivariant representations in self-supervised learning. While a number of works have been focused on learning equivariant representations, it remains unclear whether or not equivariance is beneficial in specific tasks. By studying how applying/predicting transformations in latent space is a task that requires class information (through the explaining away effect) the authors derive insights on the behaviour of existing methods and provide directions for future work, notably that invariance may not be necessary at all to obtain good representations. Strengths: The theoretical analysis confirms previous empirical findings, for example that the considered augmentations must be complex/lossy [1] as well as diverse [1] to improve how much class information is present in the representations. The collider structure and explaining away phenomenon are an elegant way to understand the interplay between the original data and its transformations, showing that C and A cannot be considered to be independent. While invariance loss are commonly used to learn equivariant representations[2,3] (in conjunction with an equivariant objective), previous work had found that even with a purely predictive objective, competitive performance can be obtained[1,4]. This is in line with the conjecture line 281 "learning from equivariance alone can achieve competitive performance", and raises an important future line of work. [1]Garrido, Quentin, et al. "Learning and leveraging world models in visual representation learning." arXiv preprint arXiv:2403.00504 (2024). [2] Gupta, Sharut, et al. "Structuring representation geometry with rotationally equivariant contrastive learning." arXiv preprint arXiv:2306.13924 (2023). [3]Devillers, Alexandre, and Mathieu Lefort. "Equimod: An equivariance module to improve self-supervised learning." arXiv preprint arXiv:2211.01244 (2022). [4] Garrido, Quentin, Laurent Najman, and Yann Lecun. "Self-supervised learning of split invariant equivariant representations." arXiv preprint arXiv:2302.10283 (2023). Weaknesses: The paper seems to focus on two distinct families of methods. Methods which are augmentation aware (e.g. RotNet) and methods which are truly equivariant, which are explicitly designed to learn the transformation between transformed data in latent space, bringing more structured representations. While this distinction may not affect the general ideas and theoretical arguments, it may impact empirical results and practical considerations. It would be good to have a discussion/comparison of these distinct approaches. Most of the practical insights stemming from this work are qualitative. A conclusion such as "knowing C is beneficial to predicting A" is a useful general guideline, but doesn't directly translate to practical guidance. The results would be stronger with more quantitative assessments of class relevance (perhaps in another, more computationally friendly domain), i.e. estimating I(A;C|X). This should be seen more as an avenue for future work though. Technical Quality: 3 Clarity: 4 Questions for Authors: In experiments such as figure 1, what is the training loss/method considered ? Theorem 1 puts forward an argument which seems key to the success or not of equivariant SSL (when used without an invariant objective), "How much information about C do you need to know to predict A". Bringing this point (and class relevance in general) forward in section 3.1 would make the motivation from figure 1 clearer. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are addressed in section 6, focusing on the theoretical nature of the work and how it doesn't directly translate into improved methods. It would be good to elaborate further on the limitation of the analysis, such as how different instantiations of an equivariant object may lead to different behaviors, or the difficulty of characterizing quantities such as I(A;C|X) for various transformations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer a2cS for acknowledging our theoretical contributions. Below we address your main concerns. --- **Q1**. Comparing augmentation aware methods (e.g. RotNet) and truly equivariant methods (eg CARE). > While this distinction may not affect the general ideas and theoretical arguments, it may impact **empirical results and practical considerations** > **A1**. Thank you for your insightful question. For a controlled study on the role of true equivariance, we add the equivariant objective proposed in CARE [1], which is known to enforce true equivariance when it attains an optimum. The results are shown below. We find that although the two have similar rotation loss, training with the equivariance regularization leads to further improvement in test accuracy (57.32% → 64.50%), which is quite similar to our findings on the model equivariance. As discussed in Sec 5.3, these pieces of evidence suggest that **enforcing feature equivariance (either through model architecture or feature regularization) does have considerable benefits compared to vanilla predictive loss**. This is a valuable insight and we will add it in the revision. Thanks for bringing it up! | | Rotation ACC | Equi loss (Equivariance) | Test Classification ACC | | --- | --- | --- | --- | | RotNet | 97.71 | 18.8444 | 57.32 | | RotNet + 0.1*Equi Loss | 99.95 | 0 | 64.50 | --- **Q2**. Quantitative estimate of I(A;C|X). **A2**. Indeed, in our analysis, the mutual information I(A;C|X) is the key factor that upper bounds the utility of class features during pretraining and thus determines the downstream transferability of E-SSL. In fact, the performance gap between “rotation” and “rotation+cls” shown in the controlled experiments in Sec 4.1 can serve as a surrogate metric for the MI as a variational estimate. Due to the limit of space, we put the detailed derivation in a following comment that you may read if interested. Please let us know if there is more to clarify. --- **Q3.** In experiments such as figure 1, what is the training loss/method considered? **A3**. For a fair comparison, we follow the standard RotNet training setting and adopt CE loss for predicting discrete variables. We adopt the same training hyperparameters for all settings. More details are described in Appendix B.1. We will make it more clear in revision. --- **Q4**. Theorem 1 puts forward an argument which seems key to the success or not of equivariant SSL (when used without an invariant objective), "How much information about C do you need to know to predict A". Bringing this point (and class relevance in general) forward in section 3.1 would make the motivation from figure 1 clearer. **A4**. Thank you for your suggestion. During our writing, we felt that although important, it was hard to bring out the “how C helps to predict A” perspective earlier in Sec 3.1 without explaining the explaining-away effect (that appears later in Sec 4.1). Though it may sound natural after understanding the explaining-away effect, before that, people might wonder, “why do you care about how C helps to predict A” (since this retrospective understanding is also new). Bearing this in mind, we did not want to introduce puzzles here. We will try to re-organize it for a better balance. Thanks again for sharing your thoughts! --- Thank you again for your insightful questions, which helps make this work more complete. Please do not hesitate to let us know if you have additional concerns. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and experiments with CARE. The experiments with CARE as well as the approximation of $I(A;C | X)$ are welcome as they help relate the theoretical insights to practical E-SSL methods as well as help in designing them. I hope that all discussions can be added in a revised version of the manuscript. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for the prompt response and for appreciating our new experiments and discussions. We concur with you that these results would enhance the theoretical and empirical insights of the proposed understanding. We will be sure to incorporate these results in our revision. --- Rebuttal 2: Title: Detailed elaboration on computing $I(A;C|X)$ Comment: The computation of I(A;C|X) requires knowledge of both the pretraining label $A$ and the class label $C$. When both are available, we can compute it following its decomposition $I(A;C|X)=H(A|X)-H(A|X,C)$. Estimating entropy in high-dimensional space is generally hard, and a common approach is to leverage variational estimates for the two entropy terms $H(A|X)$ and $H(A|X,C)$ using neural networks. **Variational estimates of entropy**. Take $H(A|X)$ as an example. We can learn a NN classifier $P_\theta(A|X)$ optimized by minimizing the cross-entropy $L(\theta)=-E_{P_d(X,Y)} P_{\theta}(A|X)$. The following inequality shows that CE loss is the variational upper bound of $H(A|X)$. When $\theta$ is minimized at the minimum with $P_\theta(A|X)=P_d(A|X)$, we have $L(\theta)=H(A|X)$ as the perfect estimate. Since NNs are generally expressive approximators, we believe that the (converged) CE loss can serve as a good estimate for $H(A|X)$. $L(\theta)=H(A|X)+KL(P_d(A|X)\|P_\theta(A|X))\geq H(A|X)$ **Variational estimates of** $I(A;C|X)$. Combing these two, we notice that a good estimate for $I(A;C|X)$ is the difference between the cross entropy losses of the predictor $P_\theta(A|X)$ and $P_\theta(A|X,C)$, which is exactly what we studied in the verification experiment in Sec 4.1. In other words, the gap between the CE losses (or similarly the accuracies) between “rotation” and “rotation+cls”, can serve as a quantitative measure of the synergy effect $I(A;C|X)$. A more computationally friendly way is to select only a small subset of samples and to train it shortly. As shown in Figure 3, the gap is already exhibited at the early stage of training.
Rebuttal 1: Rebuttal: We thank all reviewers for their positive comments and constructive critics on our work. Taking these valuable insights into consideration, we address these problems carefully in each response. We will also do the following: - **A series of validation experiments on more datasets and models.** We will add a series of new experiments that reproduce our verification experiments on more datasets (CIFAR-100 and Tiny-ImageNet-200) and larger networks (ResNet-50), including *the pretext comparison (Fig 1), the controlled study on class features (Fig 3), the base augmentation comparison (Tab 1) and model equivariance (Tab 2)*. See Figs A & B and Tabs A & B in the **rebuttal PDF** for details. - **Fix typos.** We will improve readability by fixing the typos pointed out by the reviewers. - **Discuss the Role of rotational bias.** Inspired by Reviewer TYZN on the role of intrinsic rotational bias, we will further incorporate the intrinsic equivariance variable $\bar A$ into the causal graph and elaborate on the explaining-away effect of $\bar A$ (elaborated below) for the prediction of class $C$. Notably, **this is a natural extension of our theory and does not change the overall framework, understanding, or message of E-SSL, but makes this understanding more complete**. We sincerely acknowledge Reviewer TYZN for sharing the insight. - **Compare equivariant to augmentation-aware methods**. Inspired by Reviewer a2cS, we will add the comparison between truly equivariant and augmentation-aware methods, showing that true feature equivariance has additional benefits. ## Pdf: /pdf/2d4525405232984604949a3201abfcfa5591bc70.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PTQ4DiT: Post-training Quantization for Diffusion Transformers
Accept (poster)
Summary: This paper presents PTQ4DiT, a quantization method designed for diffusion transformers. The method focuses on addressing quantization challenges due to extreme magnitudes in salient channels and the temporal variability of activations across multiple timesteps. To combat these issues, it incorporates techniques like Channel-wise Salience Balancing (CSB) and Spearmen’s ρ-guided Salience Calibration (SSC). These strategies help in redistributing magnitudes to minimize quantization errors and adapt dynamically across different timesteps, significantly enhancing the performance of quantized DiTs without re-training the original models. Strengths: - The method incorporates temporal information into the calibration process for salience balancing. - The experimental results are robust, covering a wide range of scenarios and effectively demonstrating the method's efficacy across different settings. Weaknesses: - The classifier-free guidance scales used for sampling are not specified, which could impact the reproducibility and evaluation of the model's performance. - Under the W4A8 setting, the method exhibits significant degradation in performance, suggesting limitations in its effectiveness at lower bit-widths. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This work reasonably discussed the limitations and future work Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Weakness 1: Classifier-free guidance scales** Thank you for pointing out this important concern. We set classifier-free guidance scales as 1.5 in all the experiments in our paper, following the origin DiT work [1]. [1] Scalable diffusion models with transformers. In CVPR, 2023. ## **Response to Weakness 2: W4A8 quantization** Thanks for the valuable feedback. Here, we would like to emphasize the challenging W4A8 quantization and the contribution of our PTQ4DiT. **a. Challenge of W4A8 quantization** Prior studies [1, 2] have revealed that W4A8 quantization presents non-trivial challenges due to the severe information loss in low-bit model weights and have attempted to mitigate the degradation of W4A8 U-Net-based diffusion models. However, we observed that these methods result in substantial performance loss when applied to W4A8 Diffusion Transformers (DiTs), indicating the significant difficulty of low-bit quantization for DiT architectures. **b. Contribution of PTQ4DiT** This is the first time a PTQ method has enabled high-quality generation at W4A8 bit-widths for DiTs, paving a promising path for future research in this field. Specifically, our work delves into DiT quantization and identifies two key challenges (Section 3). Then, we develop PTQ4DiT to address these challenges. Encouragingly, we find that PTQ4DiT consistently shows significant performance improvements over mainstream methods, as detailed in Tables 1 and 2 of our paper. Figure 5 further demonstrates that PTQ4DiT facilitates high-quality image generation despite the difficulty inherent in the lower bit-width of W4A8. [1] Q-Diffusion: Quantizing Diffusion Models. In ICCV, 2023. [2] PTQD: Accurate Post-Training Quantization for Diffusion Models. In NIPS, 2023. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal and the comments from other reviewers, I keep my initial rating. --- Rebuttal 2: Title: Thank you for your response Comment: Dear Reviewer a8C3, We sincerely appreciate your prompt response and are grateful that you found our rebuttal beneficial. Thank you once more for your valuable feedback in enhancing our submission.
Summary: - The paper introduces PTQ4DiT, a post-training quantization method for Diffusion Transformers. This approach can facilitate the widespread deployment of DiTs. By investigating the distribution of activation and weight of DiTs, PTQ4DiT designs the Channel-wise Balancing and timestep-aware Salience Calibration. Through these, PTQ4DiT effectively quantizes DiTs to W4A8 and reduce computational costs while maintaining image generation quality. Strengths: - The paper identifies the two key challenges associated with quantizing DiTs and provides simple but effective methodology for effective DiT quantization. - The description and illustration of the paper is clear and easy to follow. Detailed description of the reparamterization process of the quantization parameters are provide Weaknesses: - **Effectiveness of Salience Calibration:** The methodology sections 4.1 and 4.3 of the paper resembles existing work on quantization [1]. The primary novel contribution of this paper is the 'Salience Calibration' technique, which addresses the specific challenge of activation timestep-variance in DiTs. It is critical to verify its importance. However, the evidence supporting the effectiveness of the Salience Calibration (SSC) is currently confined to Table 3, which presents results for ImageNet with 256x256 resolution, using a W4A8 configuration. The improvement observed with SSC is only moderate when compared to the CSB method. To more effectively emphasize the paper's unique contribution, additional evidence demonstrating the effectiveness of CSB would be beneficial." - **Overheads for Salience Calibration:** The 'Salience Calibration' technique introduces a timestep-wise correction coefficient to refine the activation $S(X(t))$ and weight $S(W)$ distributions, resulting in the adjusted binary matrices $B_{\rho}^x$ and $B_{\rho}^w$. This refinement implies that $B_{\rho}^w$ may differ for each timestep t. However, the weights are reparameterized offline prior to quantization. Introducing a time-varying $B_{\rho}^w$ could lead to the generation of distinct quantized integer (INT) weights for each timestep, which may complicate efforts to reduce the model's weight memory cost. This could involve either storing multiple sets of INT weights or repeatedly offloading weights for each timestep. [1] Xiao, Guangxuan, et al. "Smoothquant: Accurate and efficient post-training quantization for large language models." International Conference on Machine Learning. PMLR, 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: - Is the 'Spearman’s ρ-guided Salience Calibration (SSC)' method capable of generalizing across a different timesteps and solvers? It seems that the calibration process depends on the number of timesteps. Does this imply that recalibration is necessary when employing SSC with different models, timesteps, or solvers? - In the context of reparameterization, the paper specifically addresses the 'Post-adaLN' and 'Post-Matrix-Multiplication' scenarios. However, within the Feed-Forward Network (FFN) layers of DiTs, there are instances where two consecutive linear layers do not conform to either of these cases. It raises the question of how to approach reparameterization in such circumstances." - Some unclear details about the quantization process: - Are all layers in the DiTs quantized? e.g., time embedding linear layers, attention matmuls. - In Line 237, the authors state that " the optimization of quantization parameters follows the implementation of Q-Diffusion". Does this mean that the quantization process involves gradient-based optimization of scaling factors, and Adaround of zero-points? What is the cost of the PTQ process. - The statement “the lower the correlation between activation salience s(X(t)) and weight salience s(W), the greater the reduction effect in overall channel salience” in Sec. 4.2 should be more clearly explained. What is the definition of "correlation salience"? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The authors have discussed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Weakness 1: Effectiveness of SSC** We clarify the innovation of CSB and provide additional evidence of the efficacy of SSC. **a. Innovation of CSB** While CSB shares the general concept of distribution re-scaling and re-parameterization with Smoothquant, it has unique innovation. Our CSB is designed for quantizing DiTs, whereas existing methods, such as Smoothquant, focus on quantizing LLMs. The quantization of DiTs presents unique challenges due to the extreme values in **both activations and weights**. In contrast, Smoothquant identifies outliers **only in activations** within LLMs and addresses this by migrating outliers from activations to weights. Fortunately, the extreme values do not occur in the same channels of activation and weight in DiTs. Our CSB is then derived from this complementarity to balance the salience between activation and weight, alleviating quantization errors for both. **b. More SSC ablation** To validate the importance of SSC, we conduct additional experiment on the challenging ImageNet 512x512 with W4A8 using 50 timesteps: | Method | FID ↓ | sFID ↓ | IS ↑ | Precision↑| |---|---|---|---|---| |Baseline|56.94|74.55|39.16|0.4178| |+CSB|25.45|58.05|90.73|0.6390| |+CSB+SSC|19.71|52.27|118.32|0.7336| The results demonstrate the significant impact of SSC, as it further enhances the benefits introduced by CSB. **c. Direct evidence of SSC's effect** Please see **General Response 2: Experiment on direct evidence**. ## **Response to Weakness 2: Overheads** We clarify that $B_\rho^X$ and $B_\rho^W$ are not time-varying. As defined in Eq. (10), $s_\rho(X^{(1:T)})$ aggregates activation saliences across all timesteps. Therefore, the resulting refined salience is a function of the number of timesteps $T$ and does not depend on any specific timestep $t$. We then formulate $B_\rho^X$ and $B_\rho^W$ based on $s_\rho(X^{(1:T)})$ and $s(W)$, which are not time-varying. Thus, we only perform reparameterization **once before quantization, without additional memory cost for model weights**. ## **Response to Question 1: Recalibration** While SSC is orthogonal to solvers and models, it does depend on the number of timesteps ($T$ in Eq. (10)) due to its design of aggregating channel salience across T. Encouragingly, SSC is capable of generalizing across different T without recalibration. To verify its generalizability, we conduct two sets of experiments on ImageNet 256x256 with W8A8: **(I) Downsampling**: We calibrate and quantize the model using PTQ4DiT with 250 timesteps and evaluate this model using 100 and 50 timesteps. **(II) Upsampling**: We calibrate and quantize the model using PTQ4DiT with 50 timesteps and evaluate this model using 100 and 250 timesteps. |Calibration Timesteps|Evaluation Timesteps|FID↓|sFID↓|IS↑|Precision↑| |---|---|---|---|---|---| |250|250|4.63 |17.72|274.86|0.8299| |100|100|4.73|17.83|277.27|0.8270| |50|50|5.45|19.50|250.68|0.7882| |**250**|**100**|4.86|17.76|269.93|0.8221| |**250**|**50**|5.47|19.32|249.01|0.7899| |**50**|**100**|4.99|18.12|261.84|0.8239| |**50**|**250**|4.73|17.89| 266.04|0.8312| The results show that both downsampling and upsampling does not significantly effect the performance, demonstrating the generalizability of SSC. We conjecture that this is due to two factors: **(I) Inherent generalizability of DiTs.** Previous studies [1,2,3] indicated that diffusion models possess strong ability to generalize across timesteps. Consequently, models trained with 250 timesteps can be directly applied to 100 and 50 timesteps with acceptable performance degradation. This interpolation ability is also evidenced by the performance of FP DiTs with 100 and 50 timesteps in Tables 1 and 2 of our paper (note that the original DiT work [4] only released the FP model for 250 timesteps). **(II) Effect of PTQ.** PTQ is formulated as a numerical problem and does not re-train the original models, which will not significantly affect the inherent interpolation ability if the quantization error remains low enough. [1] Diffusion Models Beat GANs on Image Synthesis. NIPS 2021. [2] Improved denoising diffusion probabilistic models. ICML 2021. [3] Post-training quantization on diffusion models. CVPR 2023. [4] Scalable diffusion models with transformers. CVPR 2023. ## **Response to Question 2: Reparameterization** We address 3 types of linear layers exhibiting significant channel salience: FC1, Projection1, and Projection2. We design reparameterization strategies for these layers. Specifically, FC1 follows the 'Post-adaLN' scenario. Thus, the Balancing Matrices can be integrated to the adaLN before Pointwise Feedforward. We further absorb them into MLPs regressing $\gamma_2$ and $\beta_2$, a process detailed in Appendix D. We illustrate the integration strategies in Figure 7 and discuss them in the caption. We apologize for causing any confusion. ## **Response to Question 3: Quantization details** **a. Are all layers in the DiTs quantized?** We quantize all layers in the DiT models. The time embedding linear layers and attention matmuls are quantized. **b. Quantization optimization** Following Q-Diffusion, we involve gradient-based optimization for the quantization parameters, which has similar PTQ cost as that of Q-Diffusion (about 19 hours on a single NVIDIA A6000 GPU) since we do not introduce additional parameters. ## **Response to Question 4: Explaining the statement** Correlation salience refers to the degree of alignment between activation salience $s(X^{(t)})$ and weight salience $s(W)$. A low correlation indicates that channels with extreme values in activations are not the same as those in weights, and vice versa. Leveraging this complementarity, we propose SSC to prioritize timesteps with lower correlation between activation and weight saliences. This approach helps distribute the quantization impact more evenly across channels and timesteps, improving overall quantization performance. --- Rebuttal Comment 1.1: Comment: Thank the authors for the clarification, most of my concerns are addressed. I keep my scoring as acceptance. --- Rebuttal 2: Title: Thanks for your response Comment: Dear Reviewer jrKh, Thank you for your thorough review and constructive suggestion. We are grateful for your acknowledgment of our efforts to address the concerns. Your expertise has significantly contributed to the enhancement of our work.
Summary: This paper proposes the first post-training quantization (PTQ) method for Diffusion Transformer (DiT). It addresses the presence of salient channels with extreme magnitudes and the temporal variability in the distributions of salient activations over multiple timesteps. Experimental results demonstrate comparable performance in low-bit scenarios. Strengths: 1. This paper is well-organized and includes clear illustrations. 2. It presents the first post-training quantization method for Diffusion Transformers. 3. In the W8A8 scenario, PTQ4DiT achieves lossless performance compared to full precision. 4. The paper also includes theoretical analysis. Weaknesses: 1. In line 158, the author proposes an important property: large values do not coincide in the same channels of activation and weight. This property is crucial for the proposed method but is only briefly illustrated by a sketch. We believe that such an important property should be demonstrated with statistical experiments. 2. In line 170, the author attempts to use the geometric mean to balance activation and weight channels. Is this approach well-designed? Could another type of mean be used? 3. PTQ4DiT is designed for Diffusion Transformers and is tested on traditional DiT. However, many new DiT-based models, such as pixart-alpha [1], pixart-sigma [2], SD3 [3], and Lumina [4], have emerged. To verify the generality of the method, it should be tested on a broader range of DiT-based models. 4. This method appears to work only with linear layers. [1] PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis, ICLR24. [2] PixArt-\Sigma: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation, arXiv. [3] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, arXiv. [4] Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers, arXiv. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. The authors have addressed the limitations and broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your acknowledgement of our work and your constructive feedbacks and suggestions. ## **Response to Weakness 1: Statistical experiments for the important property** We validate the important complementarity property by additional statistical experiments on the well-established Jaccard similarity [1]. **a. Jaccard similarity** To quantitatively demonstrate the **complementarity** property (large values do not coincide in the same channels of activation and weight), we measure the **Jaccard similarity** [1] of salient activation and weight channels: $$ Jaccard(S_A,S_W)=\frac{|S_A\cap S_W|}{|S_A\cup S_W|}\in[0,1], $$ where $S_A$ is the set of indices of salient activation channels, $S_W$ is the set of indices of salient weight channels, and $|S|$ calculates the number of elements in set S. In this way, a lower $Jaccard(S_A, S_W)$ reflects stronger complementarity. **b. Detecting salient channels** To accurately construct the sets $S_A$ and $S_W$, we utilize a robust statistical outlier detector, **Interquartile Range (IQR)** [2], to identify the salient channels among all channels. This method identifies data points that lie significantly outside the middle 50% of the distribution as outliers. We perform IQR detection on channels' maximal absolute values to identify the salient channels. **c. Statistical experiment** We randomly select 100 samples for ImageNet 256x256 generation with total timestep T = 250. We evaluate the average Jaccard similarities at t = $\frac{1}{4}T, \frac{1}{2}T,$ and $\frac{3}{4}T$. The results are averaged over 100 samples and linear layers in DiTs: |Timestep t|$\frac{1}{4}T$|$\frac{1}{2}T$|$\frac{3}{4}T$| |---|---|---|---| |**$$Jaccard(S_A, S_W)$$**|0.0675|0.1364|0.1006| Our finding are: **(I)** We obtain relatively low Jaccard similarities (note that $Jaccard \in [0, 1]$), suggesting a significant complementarity between salient activation and weight channels. The complementarity property motivates our Channel-wise Salience Balancing (CSB) to redistribute extreme values among non-overlapping salient channels. **(II)** Different timesteps t exhibit various Jaccard similarities, aligning with the key observation in our paper: the temporal variation in salient channels. Such variability further indicates the necessity of our proposed Salience Calibration method (SSC) for various timesteps. [1] Distance between sets. Nature 1971. [2] Outlier detection: Methods, models, and classification. ACM Computing Surveys (CSUR) 2020. ## **Response to Weakness 2: Is the geometric mean well-designed?** **a. The effectiveness of geometric mean** The geometric mean was selected due to its suitability for the multiplicative relationship between activations and weights (Eq. (3) in our paper) and its ability to balance and minimize the influence of extremely large values, which is critical in our quantization method. **b. The uniqueness of geometric mean** Mathematically, alternative means could include the arithmetic, quadratic, or harmonic mean. However, interestingly, we found that the geometric mean is uniquely capable of ensuring the mathematical equivalence of the balancing transformation (as expressed by Eqs. (12), (15), (16) in our paper). Here, we provide a brief demonstration: $$ \quad\frac{mean(s(X_j), s(W_j))}{s(X_j)}\cdot \frac{mean(s(X_j), s(W_j))}{s(W_j)} = 1 $$ $$ \Leftrightarrow mean(s(X_j), s(W_j)) = (s(X_j) \cdot s(W_j))^{\frac{1}{2}} $$ This essential property offers the feasibility of reparameterization, thereby eliminating extra computation overhead of the balancing transformation during inference. ## **Response to Weakness 3: Generality of PTQ4DiT** Please refer to **General Response 1: Generality of PTQ4DiT**. ## **Response to Weakness 4: Work only with linear layers** **a. Extending PTQ4DiT to other structures** Although our method is derived from the quantization of linear layers, the underlying idea of balancing transformation can be seamlessly extended to other structures, such as convolutional layers. This can be achieved by reformulating the convolution operation into a matrix multiplication between activations and weights [1], which is common in the practical implementation of convolution [2]. **b. The reason for focusing on linear layers** The linear layers in Transformers incur significant computational and memory overhead due to the large matrix multiplications [3, 4, 5, 6], representing the primary efficiency bottleneck in Transformer models including Diffusion Transformers (DiTs). [1] Solving Oscillation Problem in Post-Training Quantization Through a Theoretical Perspective. CVPR 2023. [2] Optimizing hardware accelerated general matrix-matrix multiplication for cnns on fpgas. Transactions on Circuits and Systems 2020. [3] PTQ4ViT: Post-training quantization for vision transformers with twin uniform quantization. ECCV 2022. [4] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers. ICCV 2023. [5] OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models. ICLR 2024. [6] QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models. ICLR 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, which addressed most of my concerns. I keep my initial rating. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Dear Reviewer USPH, We deeply appreciate your response and are grateful that you found our rebuttal beneficial. Thank you once more for your constructive comments.
Summary: This paper proposes a new PTQ method for DiTs. It develops a Channel-wise Salient Balancing method to suppress the outliers of linear layers in transformer blocks when applying activation quantization. Besides, it designs the Spearmen’s ρ-guided Salience Calibration to tackle the timestep dimension’s variety. It can improve the performance of the quantized models, comparing to directly applying the conventional quantization methods for Unet-based diffusion models, ViTs or traditional convolution networks to DiTs. Strengths: +This paper proposes a new method for DiT quantization, which achieves relatively well performance on W8A8 and W4A8. Weaknesses: -Lacks of experiments. This paper does not report the W8A8 results of ImageNet 512x512. Besides, it only conducts the experiment on one DiT model. The proposed method should be validated on more architectures. -Lack of novelty. The idea of Re-Parameterization activation range has already been used in PTQ of ViT and LLM, such as Rep-Q [1] or outlier suppression+ [2]. -Lack of direct evidence. It lacks direct evidence to display the effectiveness of the proposed method, such as the visualization of activation before and after CSB and SSC to verify it actually suppress the salience channel. -Lack of motivation. This paper proposes several modules, but there is no direct evidence to prove that these modules can solve the problems. For example, in RepQ’s, it chooses the mean value of activation range to scale the distribution, but this paper uses the equilibrium between weight and activation without explanation. Besides, it also not explains why using inverse Spear-man’s ρ statistic as the weigh. [1]Li Z, Xiao J, Yang L, et al. Repq-vit: Scale reparameterization for post-training quantization of vision transformers. In ICCV, 2023. [2]Wei X, Zhang Y, Li Y, et al. Outlier Suppression+: Accurate quantization of large language models by equivalent and effective shifting and scaling. In EMNLP, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Please see the weaknesses. 2. Did the authors quantize the post-softmax layer? 4. The SOTA DiT models can achieve a FID around 3 on ImageNet, why are the results reported in this paper not aligned with the origin paper? Why did the authors choose different settings? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Weakness 1: More experiments** **a. W8A8 ImageNet 512x512** We evaluate PTQ4DiT on ImageNet 512x512 with W8A8, compared against strong Diffusion PTQ methods, including Q-Diffusion and PTQD: |Timesteps|Method|FID↓|sFID↓|IS↑|Precision↑| |---|---|---|---|---|---| |250|FP|8.39|36.25|257.06|0.8426| ||Q-Diffusion|39.91|47.91|60.78|0.644| ||PTQD|80.04|70.11|45.67|0.5636| ||**Ours**|**11.45**|**38.34**|**196.21**|**0.8266**| |100|FP|9.06|37.58|239.03|0.83| ||Q-Diffusion|38.77|46.77|63.82|0.6504| ||PTQD|90.77|79.16| 42.47|0.5392| ||**Ours**|**13.10**|**39.92**|**173.37**|**0.7866**| |50|FP|11.28|41.70|213.86|0.81| ||Q-Diffusion|37.51|**44.46**|69.55|0.642| ||PTQD|85.28|74.75|46.42|0.5458| ||**Ours**|**16.56**|45.52|**170.20**|**0.7944**| PTQ4DiT consistently outperforms mainstream methods across various timesteps. Figure 8 provides the images generated by PTQ4DiT in this setting, which closely mirror the FP model. **b. New DiT model** Please refer to **General Response 1: Generality of PTQ4DiT**. ## **Weakness 2: Novelty discussion** **a. The general concept of re-parameterization** The fundamental idea of re-parameterization in quantization is to absorb the extra affine transformation into adjacent layers, thereby enabling the construction of quantization-friendly distributions without incurring extra inference costs. **b. Existing works that use re-parameterization** Several quantization methods benefit from re-parameterization. For example, RepQ-ViT [1] uses scale re-parameterization to avoid the extra costs of channel-wise and $log\sqrt2$ quantizers for post-LayerNorm and post-Softmax activations in ViTs. OS+ [2] shifts and scales activations to address the asymmetry and concentration issues in LLMs and re-parameterizes them to avoid extra costs. **c. Unique innovation and contribution of PTQ4DiT** While PTQ4DiT shares the general concept of scale re-parameterization, its unique innovation lie in 3 aspects: **(I)** We explore the quantization of **DiTs**, while existing works focus on **ViTs** or **LLMs**. **(II)** We delve into the complementarity of **activations and weights**, while existing works [1,2] focus solely on **activations**. Specifically, we find that DiTs exhibit extreme values in both activations and weights, yet these extreme values do not simultaneously occur in the same activation and weight channels. We leverage this property to redistribute salience between activation and weight, alleviating quantization errors for both. **(III)** Unlike the **static** re-parameterization in existing works [1,2], PTQ4DiT **dynamically** adapts to the temporal variability of channel salience, a special characteristic of DiTs. [1] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers. ICCV 2023. [2] Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. EMNLP 2023. ## **Weakness 3&4: Direct evidence and Motivation** **a. Direct evidence** Please refer to **General Response 2: Experiment on direct evidence**. **b. Motivation of equilibrium** Unlike ViTs and LLMs which exhibit outliers **only** in activations, extreme values are presented in **both** activations and weights in DiTs. Fortunately, they do not coincide in the same activation and weight channels. Such complementarity property inspired us to seek a balance between activations and weights, where their extreme values can be mitigated. This balance is the essence of **equilibrium**. To realize this, we adopt geometric mean as it is suitable for the multiplicative relationship between activations and weights. **c. Motivation of inverse ρ** The motivation of inverse Spearman's ρ stems from the need to balance salient weights with salient activations from different timesteps, which have various degrees of complementarity. Spearman's ρ measures the rank correlation between two variables (the statistical dependence between their rankings). A high ρ suggests that two variables exhibit higher values simultaneously, which **inversely** reflects the complementarity. In our context, the high ρ implies that the same channel tends to have extreme values in both activations and weights, hindering the salience balancing. To counteract this, our SSC inversely weights timesteps with high ρ between activation and weight salience, thereby prioritizing timesteps with more significant complementarity. This approach helps distribute the extreme values more evenly across channels and timesteps, improving overall quantization performance. ## **Question 2: Post-softmax layer** We quantize the post-softmax layer. ## **Question 3: Generation setting** **a. Clarification on FID-10K** The origin DiT paper [1] achieves a 2.27 **FID-50K** on ImageNet 256x256. In our work, the only difference is that we use **FID-10K**, following pioneering studies on diffusion such as [2,3,4]. Specifically, the work on IDDPM by OpenAI [3] indicates that while FID-10K may result in a slight increase, it significantly reduces computational resource demands. Thus, we use FID-10K to facilitate more experiments for comprehensive evaluation. For fair comparisons, all baseline methods and our PTQ4DiT are evaluated using the same metrics. **b. Experiment on 50K samples** We perform an additional experiment generating 50K samples on ImageNet 256x256 as in [1], evaluating W8A8 quantization. The results indicate that PTQ4DiT effectively recovers the performance and closely matches the FP model: |Method|FID-50K↓|sFID-50K↓| |---|---|---| |FP|2.27|4.60| |PTQ4DiT|2.31|4.82| In this setting, generating 50K samples takes around 167 hours on an NVIDIA RTX A6000, which is less feasible for extensive comparisons. [1] Scalable diffusion models with transformers. CVPR 2023. [2] Diffusion Models Beat GANs on Image Synthesis. NIPS 2021. [3] Improved denoising diffusion probabilistic models. ICML 2021. [4] Post-training Quantization on Diffusion Models. CVPR 2023.
Rebuttal 1: Rebuttal: # **General Response by Authors** We express our gratitude to all the reviewers for dedicating their time and providing valuable comments. They acknowledged that our work is novel (N2VF, USPH), effective for DiT quantization (USPH, jrKh, a8C3) and well-written (USPH, jrKh). However, the reviewers also raised constructive concerns about the method's generality and statistical evidence to support our observations. To further enhance our paper, we added the corresponding experiments with analysis and presented them as follows. ## **General Response 1: Generality of PTQ4DiT** To verify the generality of PTQ4DiT, we extend our experiment to include PixArt-α [1], an advanced Diffusion Transformer model facilitating text-to-image generation. Consistent with the literature convention [1, 2, 3], we adopt the CLIP score as our metric and perform text-to-image generation on the COCO validation dataset. |Bit-width|Method|CLIP Score↑| |---|---|---| |FP|PixArt-α|31.5305| |W8A8|PTQ4DiT|31.5368| |W4A8|PTQ4DiT|31.5077| The results demonstrate that PTQ4DiT significantly recovers the generation ability and delivers comparable performance to FP PixArt-α, suggesting the general efficacy of our method for quantizing Transformer-based diffusion models. [1] PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis. ICLR 2024. [2] Clipscore: A reference-free evaluation metric for image captioning. EMNLP 2021. [3] Q-Diffusion: Quantizing Diffusion Models. ICCV 2023. ## **General Response 2: Experiments on direct evidence** We analyze the effects of our CSB and SSC on channel salience, providing direct quantitative evidence to support their efficacy. **Experiment design.** For an in-depth evaluation, we design a statistical experiment assessing the percentage of salient channels before and after CSB and SSC. To accurately identify the salient channels, we adopt a robust statistical outlier detector, Interquartile Range (IQR) [1]. This method identifies data points that lie significantly outside the middle 50% of the distribution as outliers. We perform IQR detection on maximal absolute values of channels to identify the salient channels. **Experiment result.** We assess the percentage of salient channels detected by IQR. For a comprehensive assessment, we average the results over 100 random samples on ImageNet 256x256 generation and across all DiT layers: | Model | Salient Activation Channel | Salient Weight Channel | | --- | --- | --- | | Original DiT | 5.37% | 6.10% | | + CSB | 3.53% | 3.41% | | **+ CSB + SSC (PTQ4DiT)** | **1.25%** | **1.32%** | The results demonstrate a significant reduction in the percentage of salient channels when applying CSB and SSC, highlighting the efficacy of our proposed method in suppressing salient channels. [1] Outlier detection: Methods, models, and classification. In ACM Computing Surveys (CSUR), 2020. ***For each reviewer's individual concerns, we would like to address them in the responses separately.***
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain
Accept (poster)
Summary: The paper reports on two new legal-specific LLMs based on Mixtral. The primary contribution is to train larger law LLMs than previously reported, using (1) an extensive dataset of legal materials, and (2) a variety of best practices in pre-training. The results indicate incremental but significant improvements on legal benchmarks. Strengths: The paper reports notable improvments in the state of the art in using LLMs for legal tasks. Most of these advantages are due to greater model size, more extensive training data for legal specialization, and engineering decisions. The results show that these techniques continue to bear fruit and have not yet reached a wall of fundamental architectural limitations. I found section 5 particularly illuminating, because it successful isolates some of the individual technical improvements to analyze which of them contribute to the performance improvement. Weaknesses: The paper's strength is also its weakness. It generates very little generalizable knowledge for understanding legal tasks, only a series of engineering improvements. There are some reasons to think that transformer-based LLMs will not be able to carry out successful human-level work without fundamental improvements in the underlying model architecture. But the research still gets us closer to the frontiers of what current LLM achitectures can do, and it is possible that similar continued improvements are all that are necessary. Technical Quality: 4 Clarity: 3 Questions for Authors: (See above.) Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, the paper is well scoped to describe what it does and doesn't do. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Comment: **Thank you for your detailed and thoughtful feedback on our paper. We are pleased that you found our contributions significant and our paper technically strong.** We understand your concern regarding the limited generalizable knowledge for understanding legal tasks. **This limitation is partly due to the constraints imposed by the available benchmarks.** The current benchmarks are limited because collecting detailed legal benchmarks requires significant investment, **as lawyer hours are quite expensive. We will address this in our paper by adding a section on the limitations of our benchmarks.** Thank you once again for your valuable feedback.
Summary: The paper introduces two large language models specialized in the legal domain with instruction-following capabilities. These models are an extension of previous work, particularly Colombo et al.'s “SaulLM-7B: A Pioneering Large Language Model for Law,” by scaling up the corpus size and the number of trainable model parameters. The model parameters are updated through continued pretraining, instruction-tuning, and alignment (RLHF, DPO, or similar methods). Additionally, the authors gather and preprocess the largest legal dataset for pretraining from various sources. They examine the impact of both model size and dataset size on the effectiveness of adapting large language models (LLMs) to the legal domain. Strengths: - The paper introduces the largest domain-specific legal language model to date, with parameters ranging from approximately 12B to 54B-141B. It curated the largest legal corpus for pretraining. - Their results demonstrate the superiority of their models compared to state-of-the-art general-purpose language models (both open-source and closed), such as Llama 3 and GPT-4. They perform a detailed analysis of their models' performance. - The process of training the model with all the parameters is discussed in detail. Additionally, the preprocessing of the curated corpus is clearly explained. They also depict their improvements through graphs and tables and study the impact of various factors on their results under separate headers. - They introduce the state-of-the-art legal-domain language model, which can provide critical support to lawyers and judicial systems. They also release their model, enabling future research in the field. Weaknesses: - This paper misses ablations on the three training methods (with and without continued pretraining), ± instruct tuning, and ± alignment training. I would have liked to see where the greatest improvement in results is obtained. For example, if we do not include continued pretraining at all but only perform instruct tuning and alignment. - The study does not include detailed results about the compared methods (Fig. 17 only compares Saul med with Saul large). Can these results be shown for the top-performing models for each training method and models of different architecture (e.g., GPT-4 and Saul large)? In the results section, they do not compare their proposed model with existing domain-specific language models like Saul-7B and legal-Flan-T5. - The paper does not explain the process of generating synthetic data clearly. A detailed explanation of what prompts have been used to generate the data, what model has been utilized, and which generation parameters have been used is required. - The paper discusses adding a math portion to the pretraining dataset to enhance reasoning capabilities, but the source of the data is unclear, and they don’t provide any supporting experiments to back up their claim. Technical Quality: 2 Clarity: 2 Questions for Authors: - What is “Score” on the y-axis of Figs. 3, 4, and 5? - Can the authors justify the usage of only “balanced accuracy” as the primary metric used in the results associated with performance (non-energy analysis related)? - Can the authors explain the process of preparing synthetic data for both instruction and preference fine-tuning in detail? Considering that generating synthetic data can lead to low-quality samples, did you quantitatively measure the quality of the generated data? Also, do you define any acceptance criteria for the generated data to avoid adding low-quality samples to the training corpus? - Considering that you included data from various jurisdictions, do you evaluate the performance of the model across different jurisdictions? - Based on the results presented in Figure 6, do you have any explanation or assumption as to why Mixtral-IFT is performing worse than Mixtral? - In the conclusion, you claim that “we have demonstrated substantial improvements compared to GPT-4.” The mean balanced average of GPT-4 and the proposed model are pretty close. Did you perform any significance tests to show that the difference is statistically significant? - The authors clearly indicated that the experiments are limited due to the proprietary nature of the datasets used to train Llama and Mixtral. Have the authors looked into open-source and not just open-weight models? For example, Olmo: https://arxiv.org/abs/2402.00838. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Comment: We thank the reviewer for their review. We are glad they found our paper is well written and our contribution extremely valuable. Below is the response to the concerns/questions: **About the additional ablations**, they are already reported in the paper. See general comments. **Adding comparison to GPT-4.** We acknowledge the request for more detailed comparisons similar to those depicted in Figures 17 and 18, which illustrate the relative improvement from DPO with respect to instruction fine-tuning and the effects of model scaling within the relying on the Mixtral backbone. In the revised version, we extend these with a comparison to GPT-4, although emphasizing that GPT-4's size—1.5 trillion parameters—makes direct comparisons challenging. Our focus remains on domain adaptation of base models, ensuring apple-to-apple comparisons, rather than contrasting models of substantially different sizes. We have included results in Table 3 and Figure 5 and note that our models compare favorably with GPT4. **On the addition of SaulLM-7B and Legal Flan T5:** See general comment **Synthetic data generation:** see General Comment. **Addition of math data:** We performed an ablation study on a 7B model (not reported in the paper). Below are the results from the reviewers: | Models | Size | Results | |----------------------|------|---------| | Mistral 7B + pretraining (without math) + IFT | 7B | 0.612 | | Mistral 7B + pretraining (without math) + IFT | 7B | 0.628 | In the revised version we did report the results in the appendix. **About the reviewer’s questions:** **Score refers to balanced accuracy.** We will improve the label. **Balanced accuracy** was originally used by the Legalbench authors (see section 5.1.3 of their paper). We reused their code for comparizon. **See the general comment for the synthetic data.** **We will add examples of generation to the appendix.** **Comparison on other jurisdictions.** For this study, we focused on LegalBench, which is emerging as a standard benchmark. While we plan to expand this benchmark to include other jurisdictions, such an extension requires significant time and financial resources; therefore, we did not evaluate other jurisdictions in this iteration. However, we conducted manual tests using concepts from European law, such as "imprevision", "basis of the bargain" or "abuse of dominant position", and found that the responses were significantly improved compared to the original model from Mixtral. We intend to explore this further and provide a detailed evaluation in a future paper. **We plan to add these examples in the revised version of the paper.** **IFT v.s. Mixtral:** We believe that the Mixtral models align more closely with DPO, whereas the IFT model does not. This alignment discrepancy may represent a key difference between the models. Due to the limited information available about their training processes, we conducted an ablation study in Section 5.2 to further dissect and understand these different factors. The reviewer may notice that once aligned with legal synthetic data the IFT + DPO outperforms Mixtral. **About the significance of the comparison with GPT4.** LegalBench reports over 90k samples by relying on the methodology of (D Card et al. EMNLP2020), we see a power of the test of over 95%. We have to restate that the goal of the paper is to study domain adaptation and thus the main comparison should be w.r.t. the mixtral models. **About Olmo.** We are excited about the Olmo initiative and plan to reference it in our upcoming paper. Given our interest in working with models larger than 7 billion parameters, the Mixtral family was our preferred choice. **We hope that we have addressed the reviewers’ concerns and hope they will increase their grades.**
Summary: - The paper introduce two LLMs (at different sizes) specialized for law. These models have been adapted for law through continued pretraining, specialized legal instruction following, and a “legal alignment” process - The paper studies the tradeoffs of domain adaptation at this scale and presents results for these models. Strengths: - To the best of my knowledge, this is the first study of domain adaptation for legal LLMs at this scale (model size and training corpora size). This makes it an extremely valuable contribution. - The paper is extremely clear and well-written. - The experiments seem thorough and well motivated. Weaknesses: - The paper is sparse on some key details related to the datasets used to train the model. - For the preference data, what are the synthetic scenarios designed? How were they constructed? What were the Mixtral prompts used to evaluate them? - For the legal instruction data, what were the legal documents initially chosen? What did the conversations look like? - Examples of both instruction and preference data in the Appendix would be very helpful to readers. - The ablations are extremely helpful but also difficult to read, in part because they’re mixed in with baselines. It would be nice if the authors could present a table showing the progression of performance between: Mixtral-54B, + continued pretraining on the legal corpora, + IFT, + legal alignment. I think some–but not all–of these in Figure 6? Technical Quality: 3 Clarity: 3 Questions for Authors: - The LegalBench website asks users of the benchmark to cite a collection of papers, not merely the original LegalBench paper. Since LegalBench-Instruct looks like a small modification to LegalBench, do you plan to cite that original collection of papers as well? Also the LegalBench citation seems wrong, and not to the actual LegalBench paper (https://arxiv.org/abs/2308.11462) - Do the authors plan on releasing these datasets? - Do the authors plan on releasing intermediate checkpoints for model training?] - Once concern in relying on LegalBench-Instruct is that the prompt format chosen favors Mixtral models over other types of models (e.g., Llama-3, GPT, etc.). Do the authors have a sense of how sensitive the Saul models are to the prompt format? - It would be very nice to see some examples of generations in the Appendix! I’ll increase my score if the additional details are added to the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Comment: **We thank the reviewer for their review we are glad they acknowledged that our paper is well written and find that our contribution is extremely valuable.** Below is the response to the concerns/questions: **About the generation of the post-training datasets and examples.** See the general comment. **Ablation:** See general comment on section 5.2 **Missing citations:** have been identified as an oversight on our part and have now been corrected in the revised version. We will include references to LegalBench as well as all pertinent datasets, specifically citing Guha et al., 2023; Koreeda and Manning, 2021; Hendrycks et al., 2021; Wang et al., 2023; Wilson et al., 2016; Zheng et al., 2021; Zimmeck et al., 2019; Ravichander et al., 2019; Holzenberger and Van Durme, 2021; Lippi et al., 2019. **On the prompt format:** In our study, we adhered strictly to the original prompt formats from the LegalBench paper, relying solely on the prompts from the dataset itself for generating responses. It's important to clarify that our primary objective is to examine domain adaptation, which necessitates careful adjustments. While we also compare our results with GPT-4, it's important to note that this comparison may not be entirely equitable; GPT-4's model size is substantially larger—approximately 10 times that of a 140B model and 30 times that of a 54B model. Despite testing slight variations in prompt formatting, we observed minimal impact (less than 0.2%) on our models across approximately 100,000 total samples, leading us to report outcomes using the standard version. **Regarding release artifacts, upon acceptance, we plan to release both the curated training dataset and the checkpoints along with optimizer states for seamless training resumption.** **We hope that we have addressed the reviewers’ concerns and hope they will increase their grades.** --- Rebuttal Comment 1.1: Comment: Thanks for the additional details! I'll raise my score.
Summary: The paper introduces SaulLM-54B and SaulLM-141B, two large language models specifically designed for the legal sector. These models utilize the Mixtral architecture and are developed through extensive domain adaptation strategies, including continued pretraining on a large legal corpus, instruction fine-tuning, and preference alignment. The models incorporate synthetic data to enhance their capabilities in legal text interpretation and achieve state-of-the-art performance on LegalBench-Instruct. The study emphasizes the scalability of domain-specific adaptation and the potential benefits of larger model sizes for legal applications. Strengths: 1. SaulLM models are the first open-source legal LLMs with a larger model size, utilizing continual pretraining, instruction tuning, and RLHF. 2. The release of these models under the MIT License facilitates reuse and collaborative research in legal NLP. Weaknesses: 1. The techniques used, such as continual pretraining, instruction tuning, and preference alignment, are not novel compared to existing studies like SaulLM and LLMs in other domains. 2. There are issues with the license, privacy, and quality of data. The paper should explicitly illustrate the license of each used data source. The quality of synthetic legal instruction tuning data and preference data is uncertain. 3. The evaluation is not comprehensive. There is no ablation study to illustrate the effectiveness of continual pretraining, such as comparing the performance of Mixtral-54B and SaulLM-54B-base. It would be better to include other LLMs in the legal domain as baselines. 4. In section 5.2, the explanation of "How much does continued pretraining help for the legal domain" is confusing. The continual pretraining previously referred to unsupervised continual pretraining with a next-token generation task on large-scale legal data, but in this section, it seems to mean IFT+preference alignment. Technical Quality: 3 Clarity: 3 Questions for Authors: On page 7, it is mentioned: "The results of LLama3-70B and the scalability of our methods suggest that applying the same approach to the LLama3-70B base model could lead to even better performance than our best model, SaulLM-141B." Which figure's results support this illustration? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Comment: We thank the reviewer for their review we are glad they acknowledged the usefulness of our paper and our contribution to the community with these models. Below are the answers to the reviewer's comments: **There are no issues with the licenses:** The licenses permit commercial use for both the pretraining and instruction finetuning data. Below is a breakdown of the licenses: | Source Name | Tokens (B) | License | |---------------------------------------------|------------|--------------------------------| | FreeLaw Subset from The Pile | 15 | MIT license | | EDGAR Database | 5 | Apache 2 | | English MultiLegal Pile | 50 | CC-By-SA-4.0 | | English EuroParl | 6 | Public Domain | | GovInfo Statutes, Opinions & Codes | 11 | Open government license | | Law Stack Exchange | 0.019 | CC-By-SA-4.0 | | Comm Open Australian Legal Corpus | 0.5 | CC-By-4.0 | | EU Legislation | 0.315 | CC-By-4.0 | | UK Legislation | 0.190 | Open government license | | Court Transcripts | 0.350 | CC-By-ND international 4.0 | | UPSTO Database | 4.7 | CC-By-SA-4.0 | | Web Data (legal) | 400 | ODC-BY | **Upon acceptance, we plan to include all the licenses in the appendix.** **The doubt The quality of synthetic legal instruction tuning data and preference data is uncertain.** We believe that the quality of the post-training is demonstrated by the results. **Including other Legal LLMs as a baseline:** See general comment. **Clarification of section 5.2.** See general comment. **About the Claim:** On page 7, it is mentioned: "The results of LLama3-70B and the scalability of our methods suggest that applying the same approach to the LLama3-70B base model could lead to even better performance than our best model, SaulLM-141B." The intention behind this statement is to highlight that the LLama3-Instruct results demonstrate stronger performance compared to Mixtral 7X22B (see Fig. 2). Based on this, we believe that applying the same methods could yield even stronger results. We acknowledge that this sentence may be confusing, and we will remove it in the updated version of the paper to avoid any ambiguity. **We hope that we have addressed the reviewers’ concerns and hope they will increase their grades.**
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Parameter-Inverted Image Pyramid Networks
Accept (spotlight)
Summary: This paper presents a novel method named parameter-inverted image pyramid to handle the issues of high computation overhead of image pyramids. It uses different model sizes to process different resolutions. The method achieves significant results on tasks of object detection, segmentation, and classification by reducing overhead and improving performance. Strengths: 1. The paper is well-organized and effectively presents its content, making for a clear and coherent read. 2. The idea of Parameter-Inverted, e.g., larger models for small images and smaller models for large images, sounds interesting. 3. The method is novel and simple without complicated handcraft structures. The direct use of existing ViT models is especially interesting and can be easily extended to other tasks. 4. The experiments are abundant and convincing. The authors provide sufficient experimental results with tables and scatter plots to verify that the proposed framework is solid and effective. The performance on 6B models is impressive. Weaknesses: 1. The authors did not specify whether the FLOPs and Param in Table 1,3 include a branch merging module. 2. The discussion with other multi-resolution networks is not sufficient, e.g. HRNet [16,45,57] 3. The proposed framework can be verified on stronger pre-training, e.g. ViTDet-L (BEiTv2) vs. PIIP-SBL (BEiTv2), to further enhance the effectiveness of the method. 4. In Lines 180-181, the authors used layer-wise learning rate decay, but the authors do not indicate how to deal with ViT combinations with different numbers of layers, such as PIIP-SBL, ViT-S/B has 12 layers, and ViT-L has 24 layers. 5. The text in Figure 1 could be slightly larger. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. I am just a little curious, for PIIP-TSB, why ViT-T from DeiT but ViT-S/B from DeiT-III? Won't different pre-training methods have some impact? According to Table 5, ViT-T with large resolution input plays the most important role, so the weaker pre-training ViT-T should limit the performance of the framework? 2. (Unimportant) In Figure 4, why are the improvements in detection better than those on instance segmentation? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for detailed comments and suggestions and provide our response below. ### **W1: Whether the #FLOPs and #Param in Tables 1,3 include a branch merging module.** To clarify, the #FLOPs and #Param in Tables 1 and 3 contain the branch merging module, which constitutes only a small proportion (\~1%) of the entire model, similar to the #FLOPs and #Param in Table 13(a). ### **W2: Discussion with HRNet series is insufficient.** The HRNet series [16,45,57] employs a four-branch architecture for pose estimation, semantic segmentation, and object detection. They also use different resolutions for each branch and add fusion layers every few blocks. However, in their architecture, the number of branches gradually increases as the layers deepen. As a result, they cannot utilize pre-trained models for different branches and must train the whole model from scratch. In contrast, our model uses a symmetric architecture for each branch, allowing for the use of pre-trained backbones. Besides, they do not adopt the parameter-inverted design that uses more parameters for processing smaller images. Lastly, the feature fusion in the HRNet series relies on convolutions and up/downsampling, which is less effective than our deformable cross-attention design. ### **W3: Verifying the method with stronger pre-training.** We verify our method using stronger backbones (e.g., DINOv2, BEiTv2) while maintaining the same FLOPs, and the results are consistent with those reported in the paper. | Backbone | Detector | Pretrain | Resolution | #FLOPs | Schedule | Box mAP | Mask mAP | | -------- | ---------- | -------------------------- | ------------ | ------ | ---- | ------- | -------- | | ViTDet-L | Mask R-CNN | BEiTv2 (L) | 1024 | 1542G | 1x | 49.3 | 44.1 | | PIIP-SBL | Mask R-CNN | DeiT III (S) + BEiTv2 (BL) | 1568/896/672 | 1464G | 1x | 51.6 | 45.4 | | ViTDet-L | Mask R-CNN | DINOv2 (L) | 1024 | 1542G | 1x | 46.3 | 41.7 | | PIIP-SBL | Mask R-CNN | DeiT III (S) + DINOv2 (BL) | 1568/896/672 | 1464G | 1x | 50.3 | 44.3 | ### **W4: Layerwise decay for different numbers of layers.** For combinations with an inconsistent number of layers, we will use a larger learning rate decay for the backbone with fewer layers. For example, for ViT-S/B (12 layers) and ViT-L (24 layers), the learning rate decay for ViT-S/B is set to be twice that of ViT-L (24/12=2). ### **W5: Text in Figure 1 can be larger.** Thank you for the suggestion. We will revise it. ### **Q1: Weaker ViT-T Pre-training limits performance.** We agree that initializing the small model with stronger pre-trained weights would yield better results. As we prioritize using backbones from the same family in our experiments, we sometimes face situations where a specific model size is unavailable, forcing us to use a backbone from a different family. For instance, we use ViT-S/B/L from DeiT III and ViT-T from DeiT instead because the stronger pre-trained backbone families (e.g. BEiTv2, DINOv2, DeiT III) do not provide smaller weights (e.g. ViT-T). ### **Q2: Detection is better than instance segmentation.** This appears to be a common phenomenon, as observed in ViT-Adapter-B/L and ViT-CoMer-B/L. We speculate that this is because the instance segmentation task is more complex than the detection task, making performance improvements more challenging, especially on a relatively high benchmark. --- Rebuttal Comment 1.1: Comment: I agree with reviewer H3y6 that the parameter-inverted design is initially counter-intuitive but impressive in the end, given the experimental results. Since my concerns and questions have been addressed by the authors, and considering the high quality of this research has been recognized by all the reviewers, I keep my positive rating.
Summary: This paper proposes two techniques: 1) Models with different parameter sizes to process different resolution levels of the image pyramid. 2) A feature interaction mechanism to integrate information from different spatial scales. Extensive experiment results are used to support the claims. Strengths: Both techniques are sound. Experiments are well documented and extensive. Weaknesses: It is unclear to me if the better accuracy in table 1, 2, 3 and 4 are entirely due to the proposed techniques or it could also be partially explained by the fact that the highest resolution in the pyramid is larger. Take table 1 as an example, the input resolution for baseline is 1024. But the highest resolution for PIIP is 1792. What if you use 1792 for baseline? It is perfectly fine to first hold FLOPS constant and check accuracy, as shown in table-1. But you should also hold input constant and check accuracy. Technical Quality: 3 Clarity: 4 Questions for Authors: See above Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments and provide additional experimental results to address these concerns. ### **Q1: Baseline with higher resolution is needed.** To evaluate the impact of using higher resolutions, we add another baseline, ViTDet-L with 1792 resolution (matching the largest resolution of PIIP-TSBL 1792/1568/1120/448), as shown in the second row of the table below. The first and third rows are from Table 1. We observed that while ViTDet-L with 1792 resolution achieves better performance compared to the 1024 resolution, its FLOPs are approximately 4 times larger. Compared with PIIP-TSBL, ViTDet-L 1792 has a lower box AP (-1.3%) and 4 times larger FLOPs. This experiment well explains that the performance improvement does not entirely come from larger image resolution. The structure of PIIP also plays an important contribution in the calculation amount and performance improvement. | Model | Resolution | #Param | #FLOPs | box AP | | --------- | ------------------ | ------ | ------ | ------ | | ViTDet-L | 1024 | 308M | 1542G | 46.8 | | ViTDet-L | 1792 | 308M | 6458G | 48.3 | | PIIP-TSBL | 1792/1568/1120/448 | 512M | 1535G | 49.6 | We hope that this experimental conclusion can allay your concerns and welcome further discussion. --- Rebuttal Comment 1.1: Comment: Thanks for the new results. I will keep my positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable discussion and positive decision. We will make corresponding revisions in the final manuscript.
Summary: The authors propose a novel vision architecture, Parameter-Inverted Image Pyramid Networks (PIIP), which can be applied to different tasks, including classification, object detection, and instance or semantic segmentation. The authors aim to take advantage of the multi-scale information of image pyramids, without the usual excessive computational demands of processing an image in multiple different resolutions. The core idea is to use a family of networks of different computational requirements, e.g., ViT-S/B/L, and apply the lightest model (least parameters) to the image of the highest resolution in the pyramid, apply the second lighter model to the next bigger image, and so on, ending up to apply the heaviest model (most parameters) to the image of the smallest resolution. The intuition is that stronger models with more parameters can be used to extract semantically rich contextual features from images of coarser resolution, while lighter models can be efficiently applied to extract low-level features from high-resolution images. This way, the computational requirements of the different models are balanced, meaningful features from all scales are extracted, and images can be processed in higher resolution because the lighter and less expensive models are processing them. The models are applied to the different images in the pyramid in parallel, and as features are extracted, the models communicate with each other through a proposed Interaction Unit. This way, features of different semantic meaning extracted from different models complement and inform each other. After the features from each individual model are extracted, they are merged to be used for the task at hand. The authors perform extensive evaluation of PIIP on 4 different tasks, object detection, instance segmentation, semantic segmentation, and image classification. The authors initialize the PIIP backbones with pre-trained ViT models, and for each task they fine-tune appropriate heads. PIIP is compared to baselines of different scale, and in all experiments it demonstrates superior or comparable performance. In addition, the authors perform multiple ablations, offering insights about the behavior of PIIP, and the importance of different design choices. Strengths: Originality: 1. The core idea of using a parameter-inverted pyramid seemed to me counter-intuitive at first, since I would expect heavier models to be applied to the images of higher resolution, which contain more information. However, the authors make a good argument about why the parameter-inverted pyramid design is sensible, and I think it is a novel idea, that in its simplicity offers a valuable contribution to the community. Quality: 1. The authors provide extensive experiments on multiple tasks, and in their experiments they control for the computational requirements of the models, offering meaningful comparisons. 2. The experimental evaluation includes multiple ablations, which shed light on different aspects of the proposed architecture. Clarity: 1. The manuscript is well written, and easy to follow. The authors explain the intuition and the specifics of their method in detail, accompanying the text with clear visualizations. Significance: 1. In addition to the novelty of the proposed idea of the parameter-inverted pyramid, I think a useful conclusion of the paper is the importance of using higher resolution inputs, even if they are processed with models of smaller size. To my understanding, this is the main reason PIIP outperforms the baselines, and is highlighted by the authors in Section 4.5 (ln 260 - 266). The importance of the input resolution is not something new, but the authors offer more evidence about its impact, and importantly, they offer a new paradigm to take advantage of it. Weaknesses: Quality: 1. My main concern about this work is that the authors don’t provide actual timings and memory measurements. Computational requirements are quantified through FLOPs, however, theoretical gains in FLOPs don’t always translate to benefits in practice when the compared architectures have considerably different designs. For example, the PIIP models may be applied in parallel, but the interactions between the features may force the models to wait for each other, adding a sequential element that may add to the latency. I want to clarify that I am not claiming that this is the case, but that actual timings beyond theoretical FLOPs are needed to offer conclusive evidence, especially when efficiency is one of the main claims of this work. 2. About memory requirements, it seems to me that using a whole family of models may be prohibitive in many settings, so, memory should be explicitly measured. In addition, the authors control in their experiments for FLOPs, e.g., in Table 1 they show how PIIP models perform compared to baselines with similar FLOPs. I think it would be useful to control for memory too, e.g., in Table 1 the best PIIP models have more than 50% higher number of parameters compared to the baselines. Based on this, I think a question that naturally arises is what if the baseline was a larger model with equal parameters? If we have an extra memory budget, is PIIP the best way to use it? I would like to clarify that I don’t think the authors should necessarily have additional experiments controlling for memory, but if it is a disadvantage of PIIP, as it seems to be due to the parameter count, I think they should discuss it in the limitations. Clarity: 1. In Table 2, the results for ViTDet-B and ViTDet-L are different compared to Table 1, why is that? Similarly, in Table 2, the results of PIIP-TSB and PIIP-SBL are different compared to Table 1. For the PIIP-SBL model, I see that the scores in Table 2 match the scores of the best model reported in Table 12, where models use higher resolutions, is this the reason for the discrepancy? However, the reported performance of the PIIP-TSB in Table 2 is higher than any performance reported for PIIP-TSB in Table 12, so, how the performance reported in Table 2 is achieved? I think the authors should clarify the configurations of the models they use in their experiments. 2. Some of the Table captions are not informative enough. For example, it is not clear why crop size is underlined in Tables 3, 6, 7. On a similar note, in the ablation about branch merging (ln 267 - 269, and Table 5), the authors don’t mention the dataset they use. I guess it is MS COCO, but I think it should be explicitly mentioned. 3. In the caption of Table 8, it is mentioned “‘PI’ and ‘IP’, ‘Inter.’ represent parameter-inverted, image pyramid and interactions”, I think it should be “‘PI’, ‘IP’, and ‘Inter.’”. 4. In Ln 262 the authors mention “green dashed circle in Fig. 5(a)”, but I think it should be “blue dashed circle”. Significance: 1. PIIP requires a family of backbones for its processing, and the authors use pre-trained models in their experiments, while they also include some experiments on training from scratch in the Appendix. I think the need to train multiple backbones, or the need to find multiple pre-train models, may be an undesirable overhead for the adoption of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 4.1 the authors provide the pre-trained architectures they used for different tasks, how did they decide which models to use? 2. What is the formula the authors use to calculate the FLOPs of the models? 3. Why there are no comparisons with state-of-the-art models on image classification? 4. In Table 8, what is the range of resolutions that the model in the second row is trained on? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors offer a short discussion of the limitations in Section 5, which I think should be expanded to address the memory requirements of PIIP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for providing detailed comments and highlighting our strengths. We hope our response will address the reviewer's concerns. ### **W1: Actual timing not reported; Gains in FLOPs don’t always translate to benefits.** We acknowledge that the reduction of FLOPs does not guarantee the improvement of throughput, which is related to engineering techniques and specific hardware. When using PIIP-LH6B with resolution 1280/1024/640, PIIP can reduce the training time of InternViT-6B from 91 hours to 62 hours while increasing the box AP from 53.8 to 55.7, though the actual increase of speed (\~32%) is less than the reduction of FLOPs (\~43%). Achieving consistent improvements across all cases can be challenging and may require additional engineering optimizations beyond the scope of this research. Previous works such as MobileNets highlighted the reduction in FLOPs as a contribution. However, the initial implementations didn't match the theoretical speed improvements. After hardware-related optimizations, current implementations have become much more efficient. We hope future work will address this challenge of PIIP. ### **W2: Control for memory; Larger baseline model with equal parameters; Discuss memory limitations.** Training a baseline with an equal number of parameters may be difficult. For example, there is no foundation ViT model with a parameter count equivalent to PIIP-TSB (~150M). However, we can derive insights from the 4th and 5th rows in Tab.1. PIIP-TSB obtains a similar box AP as ViTDet-L with ~50% number of parameters. Their memory consumption during training is 9.6GB and 7.9GB, respectively. We appreciate the reviewer for identifying the memory requirements as a limitation of our work and will include a discussion in the limitation section. ### **W3: Different results of ViTDet-B, ViTDet-L, PIIP-TSB and PIIP-SBL in Tables 1 and 2.** We thank the reviewer for highlighting the clarity issue. The ViTDet-B and ViTDet-L results (and other entries) in Table 2 are cited from the paper of ViT-Adapter, while the results in Table 1 are reproduced by ourselves. The discrepancy between PIIP-SBL results in Tables 1 and 2 is indeed from using higher resolutions, as reported in Table 12. For PIIP-TSB in Table 2, higher resolutions (1568/896/672 -> 1792/1344/672) and a larger window size (14 -> 28) are used, compared with the result in Table 1. We will include these explanations in the captions of Tables 1 and 2. ### **W4: Crop size in Tables 3, 6, 7; Dataset used in branch merging ablations.** We underline the crop size to ensure comparability with the baselines. The preprocessing process of semantic segmentation contains cropping from the original image, which is different from object detection that uses the whole image. In our method, the input image of Branch 2 is the cropped image, and the inputs of Branch 1 and Branch 3 are resized from the cropped image. Different initial crop sizes can lead to inconsistencies in the training data, so we annotate this to maintain consistency with the baseline settings. The dataset used in Table 5 and all other ablations are MS COCO. We will revise the captions for improved clarity. ### **W5, W6: Typos.** We shall correct them in the final version. ### **W7: Overhead of finding a family of backbones is undesirable.** This overhead is negligible in most cases, as the open-source community offers pre-trained models of various model sizes and families. If models of different sizes within the same family are not readily available, it is feasible to adopt models from different sources, as we demonstrated in the paper. We also conducted several additional experiments using various pre-training combinations from different sources, mainly because DINOv2 and BEiTv2 do not provide ViT-S, as shown in the table below. #### **Table: PIIP-SBL 1568/1120/672 using Mask R-CNN 1x schedule with different pre-trained models.** | Pretrain | Box mAP | Mask mAP | | ------ | ----- | ----- | | AugReg | 48.3 | 42.6 | | DeiT III | 50.0 | 44.4 | | DeiT III (S) + DINOv2 (BL) | 51.0 | 44.7 | | DeiT III (S) + BEiTv2 (BL) | 51.8 | 45.4 | ### **Q1: How to decide the selection of different pre-trained models for different tasks.** In practice, we do not have strict preferences for selecting pre-trained models for different tasks. We prioritize models from the same family, and if a specific model size is unavailable, we substitute with models from other families. For instance, we use ViT-S/B/L from DeiT III, and since DeiT III does not have Tiny size model, we use ViT-T from DeiT. In the above table, we also use the AugReg pre-trained model for detection as it is used for classification in the paper. ### **Q2: Formula for FLOPs calculation.** We use the FLOPs calculation script from MMDetection, with our modifications to accurately calculate FLOPs of modules like self-attention and deformable attention. We will release this script alongside the training code. We have also manually verified the calculations using formulas, and the results are consistent with those produced by the script. ### **Q3: Comparison with SoTA on image classification is missing.** Due to limited time and resources, this paper does not focus on enhancing the classification performance or comparing it with SoTA. Our primary focus is on detection and segmentation tasks, which benefit more from higher input resolution and are the main target tasks of image pyramids. Besides, to maintain comparable FLOPs with the baseline, the selection space of input resolutions in classification is narrower compared to detection. ### **Q4: Range of resolutions of "MS" in Table 8.** We use AutoAugment [1] for multi-scale training. Please refer to the *attached PDF* in the global response for detailed implementation. [1] Cubuk, Ekin D., et al. "Autoaugment: Learning augmentation policies from data." arXiv preprint arXiv:1805.09501 (2018). --- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I would like to thank the authors for their detailed response. As I mentioned in my review, my main concerns were about actual timings and memory. About timings, the numbers the authors provide are encouraging, and I agree that total agreement between FLOPs and actual runtime is not necessary from the very beginning, when a method is not yet fully optimized. I would suggest the authors include this information in Future Work or Limitations. About memory, I agree that memory is not easy to control with pre-trained models, and some of the existing results allude to favorable comparisons of PIIP with baselines of similar memory. So, I find the authors' response sensible and encouraging, but in the absence of more detailed resource comparisons in the current manuscript, I will maintain my initial score. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable discussion and positive decision. We will make corresponding revisions in the final manuscript.
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for their time and efforts. Please refer to rebuttals under each review of detailed responses. Pdf: /pdf/6194ae9d5bebbd07a1e2403f1b6401065aea0134.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Time-Constrained Robust MDPs
Accept (poster)
Summary: This paper proposed a novel concept, time-constrained robust MDP, to address the conservativeness issue of the rectangular assumption. They assume the transition depends on an underlying parameter, and the parameter can be adversarially chosen from an uncertainty set. Several algorithms are proposed to solve the time-constrained robust MDP. Extensive numerical experiments show the effectiveness of the proposed framework and algorithms. Strengths: Given that the field of robust MDP has limited large-scale experiments due to the intractability of the rectangular assumption, the idea in this paper is very nice. I agree that the rectangular assumption can be too general and lead to conservative policies, due to its inefficiency in leveraging the inherent structure of the uncertainty set. The proposed time constraint robust MDP is interesting, especially since the experiment results seem to be promising. Weaknesses: My concern is that the methods proposed in this paper might be too heuristic, given the non-stationarity and possibly non-existence of the optimal policy. As I am not very familiar with the experiment side, I am not sure if the evaluation in this paper is sufficient. See my questions for details. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In the experiments, the oracle-TC is not always the best method. It seems that the performances are not that stable. Also, the vanilla TC outperforms in almost half cases. It is kind of anti-intuitive. 2. Compared to baseline methods, the TC-based methods show robustness to some extent, but how robust are they? What are the expected best (robust) performances? 3. In algorithm 1, what is "sample a mini-batch of transitions" for? How are the agent and adversary updated? 4. On lines 174-175, the authors claim that the rectangularity assumptions are rarely met in real-world scenarios. I think more evidence or references are required to support this claim. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and the questions you raised regarding our paper. We appreciate your feedback and would like to address your concerns as follows: > My concern is that the methods proposed in this paper might be too heuristic, given the non-stationarity and possibly non-existence of the optimal policy. As I am not very familiar with the experiment side, I am not sure if the evaluation in this paper is sufficient. See my questions for details. Actually, because there is always at least one stationary optimal adversary (Iyengar, 2005), the TC hypothesis preserves optimality. Hence, under classical assumptions, the proposed method is not heuristic: the optimal policy is preserved, as stated at the beginning of Section 6. > Questions: > In the experiments, the oracle-TC is not always the best method. It seems that the performances are not that stable. Also, the vanilla TC outperforms in almost half cases. It is kind of anti-intuitive. We took great care to avoid over-interpretation of empirical results. Quite often in robust RL, overall scores suffer from high variance. Hence, identifying clear domination between two algorithms can be tricky. One could conjecture that vanilla TC policies benefits from having a reduced input space (observation only, while the others use a sequence of observations or the information of $\psi$) which might prevent overfitting, while the observation itself holds enough information for a robust policy. This seems hard to verify though. > Compared to baseline methods, the TC-based methods show robustness to some extent, but how robust are they? What are the expected best (robust) performances? The optimal best robust performance is unknown in most realistic, large robust RL benchmarks. Additionally, non convexity of optimization landscapes prevents from providing optimality certificates for solutions. So the best one can do is rank algorithms based on their average score, and recall the variance. > In algorithm 1, what is "sample a mini-batch of transitions" for? How are the agent and adversary updated? This mini-batch is used in the subsequent UpdatePolicy lines (those updates are classical SGD steps, using this minibatch). Specifically, the update in our experiments is that of TD3, but the SAC update could be dropped-in without any modification to the overall training loop. > On lines 174-175, the authors claim that the rectangularity assumptions are rarely met in real-world scenarios. I think more evidence or references are required to support this claim. It is quite hard to prove a negative, but the essence of the rectanglarity assumption is very un-natural in the first place: it implies a bicycle's parts can change mass from one time step to the other, or that the weather is drastically different a few meters apart along the same road. So although the rectangularity assumption makes theoretical analysis possible, it is commonly admitted in robust MDPs that it makes little sense in practice. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. It addressed most of my concerns. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review. We appreciate your feedback and the opportunity to clarify our work. We hope that our explanations regarding the preservation of the optimal policy and the interpretation of empirical results have helped to address your concerns. We believe these points underscore the value of our contributions. With this in mind, we humbly ask if you might reconsider the scores, given also that the rebuttal seems to have addressed most of your concerns.
Summary: This paper aims to develop a new framework of robust RL addressing the over-conservability issue of rectangularity and dynamic uncertainty set assumption. This framework allows for time-dependent, correlated and multifactorial disturbance to the dynamics. Three distinct algorithms are developed depending on the available information to the policy. Extensive numerical experiments are further provided to demonstrate the performance of the proposed algorithms. Strengths: - The proposed framework addresses the major drawbacks of existing robust RL approach: overly conservative due to the use of dynamic and rectangular uncerainty sets. - Extensive experiments are provided to demonstrate the performance of the three algorithms - The approaches can be used with existing robust value iteration approaches. Weaknesses: - Thm 2.1 only applies to the case where the policy has the exact information of \psi, which in practice is usually unavailable. - The usefulness of this framework remains questionable, as it poses a significant challenge of constructing such complex uncertainty set with time and state-action correlation. - The training algorithm has the same flavor of the adversarial training as those in the literature, e.g., RARL. The novelty seems rather limited as this is mostly an experimental paper. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for this feedback. We are quite surprised by the mismatch between the expressed comments and the overall grade attributed to the paper. To us, the three comments somehow discard important parts of the paper and we warmly welcome further discussion to address them if necessary. > Thm 2.1 only applies to the case where the policy has the exact information of \psi, which in practice is usually unavailable. Indeed. We don't think this is really a major limitation: there is no guarantee that an observation-based optimal policy exists for POMDPs and yet people train regular RL algorithms on real-life environments with partial observability all the time. It is quite the same thing here. What theorem 2.1 states is a general property of the Bellman operator, which induces the existence of an optimal value function when (as you accurately point out) $\psi$ is observable. This does not mean that our algorithms are only applicable in this setting (see answer below). > The usefulness of this framework remains questionable, as it poses a significant challenge of constructing such complex uncertainty set with time and state-action correlation. We beg to differ. When one drives on the highway, the transition models in different geographical conditions are strongly correlated together (by the global weather for instance) and often follow a time-constrained evolution. Similarly, a bicycle's dynamics in different states and actions are coupled by general parameters (friction coefficients for instance) that couple them together. Such uncertainty sets are very easy to construct when designing simulators (one only needs to let the global parameters vary). We believe there may be a misunderstanding here and warmly welcome further discussion with the reviewer to better address this concern, and understand why it seems so undue to us. > The training algorithm has the same flavor of the adversarial training as those in the literature, e.g., RARL. The novelty seems rather limited as this is mostly an experimental paper. Indeed, most algorithms in the literature (most algorithms that scale to large domains at least) are RVI-inspired (like RARL or M2TD3). We believe this is actually a strength of the current proposal: it can be used in any previous RVI algorithm to make it better. Rejecting the paper on this basis is not a good signal for RL research: if only new algorithms were to be published, a large number of generic, important works would never receive attention. Additionally, we prove several properties of the TC framework, including that the optimal value function is not excluded by the TC assumption, hence we feel calling this work mostly experimental somehow discards parts of the contribution we deem important. Here again, we welcome further discussion to clarify things. --- Rebuttal Comment 1.1: Comment: The reviewer thank the authors for the feedback. They addressed my concerns. I raised the score to 5.
Summary: The paper introduces Time-Constrained Robust MDPs (TC-RMDPs) as a novel formulation to address the overly conservative nature of traditional robust RL under sa-rectangularity assumptions. The authors propose three algorithms to handle time-dependent and correlated disturbances in different situations. Extensive evaluations on continuous control benchmarks show that these algorithms outperform traditional robust RL methods in terms of balancing performance and robustness. Strengths: 1. The introduction of TC-RMDPs addresses a significant limitation of overly pessimistic in traditional robust RL, making the approach more applicable to real-world scenarios with time-dependent disturbances, which is interesting and novel. In addition, the paper presents three distinct algorithms with varying levels of information usage, expanding its usage. 2. The authors provide formal theoretical guarantees for the proposed algorithms, which is appreciated. 3. Extensive experiments on continuous control benchmarks demonstrate the efficacy of the proposed methods. Weaknesses: # Major 1. The complexity of the proposed algorithms, especially Oracle-TC, may limit their practical applicability in real-world scenarios where complete environmental information is unavailable. 2. How to select the Lipschitz constant $L$ in practical scenarios? Some ablations studies and analysis regarding $L$ would be beneficial. 3. It would be better if the authors could provide some visualizations of the difference between the traditional independent transitions with the time-constrained parametric transitions. 4. What is the number of worst-case episodes in the experiments? If it is 1, maybe it would be better to display the results as the average of the worst 10% of episodes for better robustness. This is because having only 1 episode might introduce stochasticity. # Minor 1. It is suggested to show the equation number for better illustration and reference. 2. There are some typos in the paper. For instance, in line 102, $B=\mathcal{B}(0_{\Psi},L)$ should be corrected to $B=\mathcal{B}({\Psi}_0,L)$, the right-most column in Table 1 should be Avg. The authors should carefully review the paper for such errors. Technical Quality: 3 Clarity: 3 Questions for Authors: please see above Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The framework assumes that the transition parameter vector $\psi$ is known during training. However, in real-world applications, it could be challenging. Also, the assumption on the uncertainty set $\Psi$ is also not practical in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for these insightful comments. We believe the major concerns raised are actually minor, in the sense that they can all be answered by elements already in the paper, and better explanations and phrasing. We explain why below and welcome further discussion with the reviewer. > 1. The complexity of the proposed algorithms, especially Oracle-TC, may limit their practical applicability in real-world scenarios where complete environmental information is unavailable. The rationale of proposing 3 variations on the same theme (TC, stacked-TC and Oracle-TC) is precisely designed to address your concern. As discussed in the paper, optimality can only be sought for Oracle-TC, since TC only solves a partially observable problem. Conversely, TC is designed to account for the real-world scenarios you mention. Stacked-TC is intended as an in-between solution that retains partial observability, yet recovers optimality when the underlying state can be inferred from the latest observations and actions. We hope this clarifies things. In our opinion, and unless we misunderstood your concern, we believe this identified weakness is already addressed in the paper. > 2. How to select the Lipschitz constant $L$ in practical scenarios? Some ablations studies and analysis regarding $L$ would be beneficial. Indeed, in practical scenarios, it is unrealistic to assume $L$ will be known. We claim TC approaches are actually quite insensitive to $L$'s value. To demonstrate this empirically, we trained our algorithms on the fixed $L=0.001$ setting and then evaluated them on environments with varying $L$ values, with order of magnitude $L=0.1$. This is already reported at line 265. We can stress this out more in the final version, with a particular emphasis on the fact that one needs not know $L$ beforehand and can make a conservative assumption on it, eventually making it a non-critical hyper-parameter. > 3. It would be better if the authors could provide some visualizations of the difference between the traditional independent transitions with the time-constrained parametric transitions. This is a very interesting question. Indeed, in the rectangular case, it is known that the minimum over the simplex of $sa$-local transition probabilities at each time step, is found on the border of this simplex. What the TC hypothesis does, is actually constrain this simplex' radius. So, in environments that respect the rectangularity assumption, we could expect that the difference you are referring to is actually the difference in diameters. But there are two key limitations to this reasoning. First, real-world scenarios are precisely not rectangular, and the worst cases need not be found on the border of the uncertainty set. But (secondly) more importantly, as stated at the beginning of Section 6, there always exists a worst-case adversarial policy (ie. a worst-case adversarial transition model) that is stationary (Iyengar, 2005). Therefore, such a transition kernel respects the TC property and can be found by our algorithms. If the worst-case dynamic transition model is unique, then it is necessarily stationary, and hence both classic RVI algorithms and TC ones should converge to it. In this case, the difference between models should be zero. But this might be unverifiable in practice because a span of dynamic models can actually lead to the optimal robust value function. So the difference between the time-constrained dynamic models and the non-time-constrained ones is eventually not very informative. For this reason, although your remark is very relevant, we believe visualizing such a difference would not be very informative and might be misleading. We propose to add this short discussion to the paper. > 4. What is the number of worst-case episodes in the experiments? If it is 1, maybe it would be better to display the results as the average of the worst 10% of episodes for better robustness. This is because having only 1 episode might introduce stochasticity. We suppose you are referring to the static evaluation case (for the dynamic evaluation case, there is a single adverasary's policy found by the algorithm). We checked the average score across the 10% worst transition models in that case. Although we agree with the statistical robustness argument, we would like to point out that this is eventually a different criterion overall, closer to a CVaR in spirit (or to a DR evaluation on a limited set of transition kernels). The results did not change significantly. We will add them to the appendix. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. It addressed most of my concerns, and I believe the paper meets the acceptance threshold.
Summary: The paper defines a robust training method for MDPs under uncertainty in the environment dynamics. Current methods assume sa-rectangularity, where the transition dynamics from consecutive states are independent of one another. The authors argue that this assumption is unrealistic and leads to overly conservative policies. Alternatively, this paper breaks the independence assumption in the environment dynamics to use a time-step evolution approach. This paper presents a Time-Constrained TC-MDP where the Time-Constrained Parameterized MDP kernels evolve each time-step constrained to be Lipschitz continuous. The model is trained in an adversarial manner, where the adversarial policy is learned to evolve the model dynamics but is limited to be L-close to the previous dynamics by definition. This learning method is tested under different variant conditions (cosine, exponential, linear, and logarithmic) with positive results. Robustly trained agents that learn under the worst TC adversities outperform vanilla robust algorithms that assume time independence in the model dynamics. Strengths: - The evaluation is fair. The model trained under worst-case time-constrained adversarial dynamics is then evaluated under different fixed evolving dynamics (cosine, exponential, linear, and logarithmic) that range in stochastic changes much larger than the used in learning (L=0.1 in evaluation vs L=0.001 in training). - The definition of TC-RMDP is smartly selected to preserve stationarity, defining the state as the tuple ($S \times \Psi$) - The TC strategy can be straightforwardly implemented in previous algorithms by limiting the search space of the adversarial $psi$ at every time-step. Weaknesses: - Reporting that the worst-case TC variants outperform vanilla adversarial models as M2TD3 or RARL is expected. As the decisions of the adversary are rather limited when compared to the vanilla algorithms. Perhaps highlighting the evaluation results (fixed or static setting) in the conclusions would help in emphasizing this. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Algorithm 1, should is be: $b_{t+1} = \bar{\pi}(s_t, a_t, \psi_t)$ ? - Typo in Page 7 Line 276: "Appendix G and G" Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As the authors mention, this method assumes that the environment dynamics can be correctly parametrized in a confident set $Psi$. Additionally, selecting the radius L of the proximal adversarial set of actions is not trivial. Fortunately, the authors tested the environment with a large enough L (100 times larger than the one used during training) with positive results, which validates their claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, their careful analysis of our contributions, and positive assessment of them. > Reporting that the worst-case TC variants outperform vanilla adversarial models as M2TD3 or RARL is expected. As the decisions of the adversary are rather limited when compared to the vanilla algorithms. Perhaps highlighting the evaluation results (fixed or static setting) in the conclusions would help in emphasizing this. Actually, even though it seems counter-intuitive, there is not immediate theoretical reason for TC algorithms to outperform vanilla ones, despite the fact that their action space is smaller. The reason appears quite clearly when one considers that among the optimal adversaries, at least one is stationary. This adversary is reachable by both vanilla algorithms and TC ones. Consequently the key lesson here is that vanilla methods don't find the optimal adversary in practice, and that using the TC formulation preserves optimality while helping the optimization process (as demonstrated by the empirical results). We propose to better emphasize this in the conclusions as you suggest (but with the slight nuance that dominance of TC over vanilla should not be expected in the first place). > In Algorithm 1, should is be: $b_{t+1}= \bar{\pi}(s_t, a_t, \psi_t)$ Both notations seem acceptable, since $\psi_t$ is within the arguments of $\bar{\pi}$. In other words, since $\psi_{t+1}=\psi_t+b_t$, the two notations are equivalent. We chose to drop the $b_t$ notation for readability. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the clarification, It addresses my concerns.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation
Accept (poster)
Summary: This paper aims to comprehend long video streams with multi-modal large language models. The existing works use too few visual tokens to represent the video stream to guarantee efficiency but may sacrifice visual perception performance. The authors propose to keep more visual tokens to represent each video frame but only select the crucial visual tokens to pass the transformer decoder layers to reduce computation. The experiments show that the proposed mixture of depth architecture achieves a trade-off between the number of visual tokens and the computation efficiency. Strengths: 1. The explored problem is meaningful. Handling long videos with large language models is promising and has wide applications. 2. The motivation for using different depths of networks to process visual tokens is clear and makes sense. 3. The proposed method is simple and straightforward, and can effectively speed up the long video context processing without reducing the token number. Weaknesses: 1. The presentation of Section 3.1 and the Figure 6 are quite similar to VideoLLM-online [8]. And in Eq.1, why are the indicator and probability terms multiplied inside the logarithm? 2. The quantitative performance improvement over VideoLLM-online is quite marginal. And even the full computation only results in slight performance gains on some datasets. What is the underlying reason? Does it mean preserving very few visual tokens is sufficient? 3. Some visualizations are redundant, e.g., Figure 1 and Figure 5 are very similar. There lacks the visualization on the selected visual tokens in different layers. It is necessary to show the selection results to show what is crucial in the learned LayerExpert. Technical Quality: 2 Clarity: 3 Questions for Authors: It is better to also compare with some video LLM works that compress each frame into fewer tokens, e.g., LLaMA-VID [33] in terms of both performance and efficiency. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations in the lack of experiments on exocentric data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your insightful feedback and valuable suggestions. **W1: The problem of the indicator.** Thanks for pointing this out. We follow VideoLLM-online and it makes an error. The revised equation is as follows: $$ L = \frac{1}{N}\sum^N_{j=1}\left(-l_{j+1}\log P_j^{\texttt{[Txt$_{j+1}$]}} - \sigma s_j\log P_j^{\texttt{[EOS]}}\right), $$ **W2: The reason of marginal improvements over VideoLLM-online and the similar results of full computation.** The online narration benchmark only requires generating simple time-synchronized narrations, such as “You are riding the bike,” without needing detailed descriptions that require fine-grained visual perception. Therefore, the performance improvement with increased resolution is marginal. However, experiments on other benchmarks, particularly those requiring fine-grained visual perception, demonstrate the necessity and benefit of increased resolution. As shown in Table 5b, for example, Ego Accuracy increased from 40.53% to 44.85% on the EgoExo4D Fine-grained Keystep Recognition benchmarks, indicating the potential of our approach for high-resolution vision tasks. **W3: Some visualizations are redundant, e.g., Figure 1 and Figure 5. There lacks the visualization on the selected visual tokens in different layers. It is necessary to show what is crucial in the learned LayerExpert.** We will reorganize our visualizations in the revised version. We also visualized the selected vision tokens learned by LayerExpert, as shown in Global-Rebuttal Figure 1. LayerExpert effectively focuses on critical vision tokens, such as bike-related tokens in Fig. 1a, tokens related to slicing onions in Fig. 1b, and tokens related to a table saw in Fig. 1c., and the model will attend to different tokens at different layers shown in Fig. 1d. **Q1: Compare with some video LLM works that compress each frame into fewer tokens, e.g., LLaMA-VID [33] in terms of both performance and efficiency.** We compare the performance and efficiency of our approach with LLaMA-VID [1], as shown in Global-Rebuttal Tables 1-4. Our approach achieves comparable performance to LLaMA-VID while requiring only 0.57x FLOPs and 0.2x training time. | Method | TFLOPs | Training Cost (Pretrain + Finetune) | GQA | MME | POPE | SQA | | ------------ | ------ | ----------------------------------- | ---- | ------ | ---- | ---- | | LLaMA-VID | 9.8 | 4hrs + 48.5hrs | 64.3 | 1521.4 | 86.0 | 68.3 | | VideoLLM-MoD | 5.8 | 2hrs + 8.5hrs | 62.8 | 1505.5 | 85.5 | 70.2 | | Method | TFLOPs | Training Cost (Pretrain + Finetune) | MSVD-QA | | MSRVTT-QA | | ActivityNet-QA | | | ------------ | ------ | ----------------------------------- | ------- | ----- | --------- | ----- | -------------- | ----- | | | | | Acc | Score | Acc | Score | Acc | Score | | LLaMA-VID | 40.1 | 9hrs + 30hrs | 69.7 | 3.7 | 57.7 | 3.2 | 47.4 | 3.3 | | VideoLLM-MoD | 23.0 | 4hrs + 5.5hrs | 68.5 | 3.7 | 58.2 | 3.3 | 46.3 | 3.2 | We highlight the advantages of our proposed approach over existing efficient vision modeling methods in the LMMs field: 1. **Utilizing Fewer, Semantic-Aware Tokens:** Semantic-aware token selection typically requires cross-attention-based modeling, which is computationally expensive when processing every frame. As shown in Global-Rebuttal Tables 1-2, LLaMA-VID [1] uses text features extracted from a q-former to select semantic-aware visual tokens via context-attention, resulting in significantly higher training costs (10.5 hours vs. 52.5 hours) with only marginal performance improvement. 2. **Efficient Inference:** High training costs are a significant issue for LMMs, particularly for video-based LMMs, as video consumes the majority of tokens, as indicated in Figure 7 of our paper. Unlike existing methods [2,3] that focus on efficient inference while still requiring high training costs, we successfully reduce both training and inference costs massively while maintaining LMMs’ performance. Our approach provides a new paradigm for other vision-language tasks, especially in training video-based LLMs, and plays a role in democratizing AI by enabling broader access to trained models without the need for large resources. 3. **Offline Spatial-Temporal Token Merging:** Online video LLMs must process every incoming visual token in real time and cannot perform frame sampling or offline token merging as Chat-UniVi [4] does. We are the first to explore efficient vision modeling in context rather than merely offline pruning or merging vision tokens. **L1: The authors have discussed the limitations in the lack of experiments on exocentric data.** In addition to extensive experiments on ego-centric datasets, we have conducted experiments on exo-centric COIN benchmarks, as shown in Paper Table 4. To further validate the effectiveness and generalization of our proposed method, we conducted experiments using the same training recipe as LLaVA/LLaMA-VID on standard image and video benchmarks, as shown in Global-Rebuttal Tables 1-4. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision-language tasks in the LMMs field. [1] Yanwei Li et al. LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models. ECCV 2024. [2] Liang Chen et al. An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models. ECCV 2024. [3] Yuzhang Shang et al. Llava-prumerge: Adaptive token reduction for efficient large multimodal models. arXiv: 2403.15388. [4] Peng Jin et al. Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding. CVPR 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the author response. Some of my concerns still remain. The objective of MOD is to address the challenge of numerous vision tokens in long-term and streaming videos. However, the experiments are not convincing. + On the one hand, the current experiments on streaming long videos only show marginal improvements since they require no fine-grained information, failing to validate the effectiveness of proposed MOD. + On the other hand, the proposed MOD shows higher efficiency but comparable or even worse performance on some short video benchmarks compared to LLaMA-VID, which are not long enough to verify the effectiveness. I suggest the authors to include the results on some longer video benchmarks that require detailed understanding, such as EgoSchema [1], VideoMME [2] for evaluation. [1] Mangalam, Karttikeya, Raiymbek Akshulakov, and Jitendra Malik. "Egoschema: A diagnostic benchmark for very long-form video language understanding." Advances in Neural Information Processing Systems 36 (2024). [2] Fu, Chaoyou, et al. "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis." arXiv preprint arXiv:2405.21075 (2024). --- Reply to Comment 1.1.1: Title: Response to Reviewer x5JZ (1/2) Comment: Thanks for your considerable feedback and suggestions! We argue that our approach **offers a general architecture that significantly reduces computational costs while maintaining performance for both training and inference in vision-language tasks**, *especially in scenarios that require processing a large number of vision tokens*, such as long-term, dense video frame online streaming. > On the one hand, the current experiments on streaming long videos only show marginal improvements since they require no fine-grained information, failing to validate the effectiveness of proposed MOD. 1. We validated the effectiveness of our approach on the *Video-MME[1]* benchmark. Our VideoLLM-MoD was trained using the same recipe as the initial submission paper, excluding the streaming loss, and with the same pretraining and finetuning data as LLaMA-VID[2]. Despite requiring significantly less training time—2.95x less than VideoLLaMA 2—our model still achieves top-tier performance, outperforming it. | Model | Frames | Training Cost | Overall(%) | Short Video(%) | Medium Video(%) | Long Video(%) | | ------------------ | ----------------------------- | ----------------- | ---------- | -------------- | --------------- | ------------- | | VideoLLaMA 2-7B | 32 frames && 32 tokens/frame | 65hrs & **2.95x** | 47.9 | 56.0 | 45.4 | 42.1 | | Chat-UniVi-v1.5-7B | 64 frames && 112 tokens/frame | 53hrs & **2.41x** | 40.6 | 45.7 | 40.3 | 35.8 | | Video-LLaVA-7B | 8 frames && 49 tokens/frame | 60hrs & **2.73x** | 39.9 | 45.3 | 38.0 | 36.2 | | VideoLLM-MoD-8B | 1fps && 10 tokens/frame | 22hrs | **49.2** | **58.4** | **46.6** | **42.4** | Given the limited time available during the discussion period, we will add experiments on EgoSchema[3] once our work is released. 2. Existing experiments on Ego4D narration benchmark can validate the effectiveness of our proposed approach. As *we claimed our core contribution is reduce both the training and inference cost without sacrificing the performance*, **we achieve comparable performance with 1.7x training speedup (24hrs -> 14hrs), 0.6x training FLOPs and 1.7x longer inference context (830s -> 1440s)**. Besides, as we claimed that in Paper Line 231-234, though the Full-computation in Paper Table 1 also shows marginal improvements, we found that larger vision resolution can indeed benefit performance, as shown in Figure 1, 5, and in experiments that demand more detailed visual information as shown in Table 4, 5. 3. Our approach allows for a larger visual budget within the same total computation, leading to significant performance gains from additional visual tokens, as demonstrated in extensive ablations in LLaVA-NEXT[4] and LongVA[5]. **Our method can be seen as a “free lunch” in increasing the vision resolution.** Moreover, **it is non-trivial to reduce computation in context under online video scenarios**, as online VideoLLMs must process every incoming visual token without relying on frame sampling or offline token merging, as done in Chat-UniVi [6]. We are the first to explore efficient vision modeling in context, rather than solely focusing on offline pruning or merging of vision tokens. --- Reply to Comment 1.1.2: Title: Response to Reviewer x5JZ (2/2) Comment: > On the other hand, the proposed MOD shows higher efficiency but comparable or even worse performance on some short video benchmarks compared to LLaMA-VID, which are not long enough to verify the effectiveness. 1. It is worth noting that we used the same training recipe as LLaMA-VID during the previous rebuttal phase for a fair comparison. Our approach achieved comparable performance with significantly less training time and TFLOPs. However, our sparse architecture allows us to utilize far more visual tokens within the same computational budget. Specifically, we increased the number of visual tokens per frame (from CLS token + 1x1 average pooling to CLS token + 3x3 average pooling), **resulting in substantial performance gains due to the higher vision resolution. Remarkably, the total training cost remained at just 0.56x that of LLaMA-VID.** | Method | Training Cost (Pretrain + Finetune) && speedup | MSVD-QA | | MSRVTT-QA | | ActivityNet-QA | | Video-based Generative Performance | | | | | | :------------------------------------- | :--------------------------------------------- | :------- | :------ | :-------- | :------ | :------------- | :------ | ---------------------------------- | -------- | -------- | -------- | ----------- | | | | Acc | Score | Acc | Score | Acc | Score | Correctness | Detail | Context | Temporal | Consistency | | LLaMA-VID-7B | 9hrs + 30hrs | 69.7 | 3.7 | 57.7 | 3.2 | 47.4 | 3.3 | 2.96 | 3.00 | 3.53 | 2.46 | 2.51 | | VideoLLM-MoD-7B (1fps, 2tokens/frame) | 4hrs + 5.5hrs && **0.25x** | 68.5 | 3.7 | 58.2 | 3.3 | 46.3 | 3.2 | 2.88 | 2.98 | 3.41 | **2.51** | 2.50 | | VideoLLM-MoD-8B (1fps, 10tokens/frame) | 8hrs + 14hrs && **0.56x** | **78.5** | **3.9** | **65.3** | **3.6** | **53.4** | **3.4** | **3.12** | **3.16** | **3.75** | 2.44 | **3.65** | 2. We demonstrate that our method generalizes well across extensive benchmarks. Here is a summary: **4 Image Benchmarks:** GQA, MME, POPE, SQA **9 Video Benchmarks:** Ego4D Narration, Ego4D LTA, EgoExo4D Fine-grained Keystep Recognition, COIN, MSVD-QA, MSRVTT-QA, ActivityNet-QA, VideoChatGPT, Video-MME Thanks again for your feedback! We hope that our response can address your questions, and if you still have any concerns, we would be pleased to discuss them further with you. [1] Fu, Chaoyou, et al. "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis." arXiv preprint arXiv:2405.21075 (2024). [2] Li, Yanwei, et al. "LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models." ECCV, 2024 [3] Mangalam, Karttikeya, Raiymbek Akshulakov, and Jitendra Malik. "Egoschema: A diagnostic benchmark for very long-form video language understanding." Neurips 2024. [4] Liu, Haotian, et al. "Llava-next: Improved reasoning, ocr, and world knowledge." (2024). [5] Zhang, Peiyuan, et al. "Long context transfer from language to vision." *arXiv preprint arXiv:2406.16852* (2024). [6] Jin, Peng, et al."Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding." CVPR, 2024. --- Reply to Comment 1.1.3: Comment: Dear reviewer x5JZ: Thank you again for your thoughtful feedback! We hope the rebuttal and additional experiments we provided were helpful. If any residual concerns remain, we would be glad to discuss further. *If no concerns remain, we would appreciate it if you re-evaluate our paper.* Thank you once again for your thorough review of our paper. Best regards, Authors of Paper10329 --- Rebuttal 2: Comment: Dear Reviewer x5JZ, We would like to express our sincere gratitude for the time and effort you spent reviewing our paper. As **the author reviewer discussion stage draws to a close**, we are eager for your response to ascertain if our detailed response has sufficiently addressed your concerns. We would be honored to address any further questions you may have. *We eagerly anticipate and highly value your re-evaluation of our paper.* Thank you once again for your thorough review of our paper. Best regards, Authors of Submission 10329
Summary: The document presents a novel approach called VideoLLM-MoD, which aims to efficiently scale up the vision resolution for online video large language models (VideoLLMs) without incurring high computational costs. The approach is inspired by the "mixture-of-depths" approach and learns to skip the computation for a high proportion of vision tokens. Experiments on several egocentric and instructional dataset show the method could achieve similar or even better performance with significantly less computation effort. Strengths: 1. the proposed method could reduce computational cost and save GPU memory, thus high vision resolution could be used for videoLLM. 2. lightweight LayerExpert is proposed to determine which vision tokens should be processed at certain layers. 3. streamingloss is proposed to ensure model remain silent when it has no necessary response. Weaknesses: 1. this proposal mostly borrow idea from "Mixture-of-Depths", the novel part is applying the idea to visual tokens. The original contributions may not be enough for top-tier conference. 2. experiments are mainly on egocentric and instructional datasets, lack of Diversity. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. could authors summarize the original ideas or insights different from Mixture-of-Depths, except of different modalities? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper. We will address each of your concerns point by point. **W1/Q1: This novelty and contributions compared to Mixture-of-Depths.** We summarize the differences between our proposed framework and Mixture-of-Depths (MoD) as follows: 1. **Validation of Effectiveness on LMMs:** It is non-trivial to validate the effectiveness of the proposed approach in Large Language Models(LLMs) especially in Large Multi-modal Models(LMMs). While MoD demonstrates comparable loss scales on language models smaller than 1B parameters, we explore efficient methods to reduce computation in context for 8B-LMMs in both training and inference, particularly under online video scenarios. 2. **Causal Operations:** While MoD introduces causality through auxiliary loss or auxiliary MLP predictors, we simplify this by using LayerExpert to select vision tokens within individual frames across transformer blocks. The causal attention in the LLM then learns the temporal modeling, seamlessly accommodating our online video scenario. 3. **Extensive Experiments:** Unlike MoD, which validates its approach on language models smaller than 1B parameters on loss scale, we conducted extensive experiments on vision-language benchmarks, including the Ego4D narration benchmark, Ego4D LTA benchmark, EgoExo4D Fine-grained Keystep Recognition task, and COIN benchmarks. Additionally, we performed experiments on general image/video benchmarks, as shown in Global-Rebuttal Tables 1-4, demonstrating the generalization of our proposed approach to vision-language tasks. The topic of sparse vision modeling in context has gained popularity, as evidenced by Google DeepMind’s recent research on MoNE [1], released a few days ago. While they adopt a similar approach, their exploration is limited to traditional vision architectures rather than the popular LMMs field. We believe our idea "the importance of different vision tokens should be considered in context, and can be modeled by computation budget" can provide valuable insights into general LMMs. **W2: Experiments are mainly on egocentric and instructional datasets, lack of Diversity.** To validate the effectiveness and generalization of our proposed method, we conducted experiments using the same training recipe as LLaVA/LLaMA-VID on standard image and video benchmarks, as shown in Global-Rebuttal Tables1-4. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision-language tasks in the LMMs field. [1] Gagan Jain et al. Mixture of Nested Experts: Adaptive Processing of Visual Tokens. arXiv: 2407.19985. --- Rebuttal 2: Comment: Dear Reviewer 2CSB, We would like to express our sincere gratitude for the time and effort you spent reviewing our paper. As **the author reviewer discussion stage draws to a close**, we are eager for your response to ascertain if our detailed response has sufficiently addressed your concerns. We would be honored to address any further questions you may have. *We eagerly anticipate and highly value your re-evaluation of our paper.* Thank you once again for your thorough review of our paper. Best regards, Authors of Submission 10329 --- Rebuttal 3: Comment: Dear reviewer 2CSB: Thank you again for your thoughtful feedback! We hope the rebuttal and additional experiments we provided were helpful. If any residual concerns remain, we would be glad to discuss further. *If no concerns remain, we would appreciate it if you re-evaluate our paper.* Thank you once again for your thorough review of our paper. Best regards, Authors of Paper10329
Summary: This paper proposes a novel layer skipping approach to reduce the computation and memory consumption in modern vision-language models. The overall performance is good on several egocentric video understanding benchmarks. Strengths: 1. The proposed layer skipping strategy enables efficiient attention computation while retaining the performance. 2. Experimental results on mutliple online/offline benchmark datasets demonstrate the effectiveness of the method. Weaknesses: 1. This paper uses videollm-online as the baseline and merely proposes a weighted layer skipping strategy. The technical contribution of this method is relatively limited. 2. The discussion on the choice of LayerExpert is not fully discussed. The authors claimed that the proposed strategy can select critical visual tokens in the layer. It would be better if the authors include visualization results to prove this claim. In Table 3, are there any semantic-aware token selection strategies that can be used for comparison? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Table 3, it is a little bit confusing why increasing the keep ratio r=0.3 results in worse performance? Also, it would be good to see the performance comparison on more keep ratios. 2. Could you please explain why the proposed method targets at online video processing? It seems that LayerExpert can be adapted to a variety of vision transformers and applied to many video understanding tasks, as proved in Table 4. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Please refer to Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and valuable suggestions. **W1: The technical contribution of this method is relatively limited.** Our technical contributions are summarized as follows: 1. **Efficient Vision Modeling in Context:** It is non-trivial to reduce computation in context under online video scenarios, as online VideoLLMs must process every incoming visual token without perform frame sampling or offline token merging, as done in Chat-UniVi [1]. We are the first to explore efficient vision modeling in context, rather than merely offline pruning or merging vision tokens. 2. **Reduction of both Training and Inference Costs:** High training costs are a significant issue for LMMs, particularly for video-based LMMs, since video consumes the majority of tokens, as indicated in Figure 7 of our paper. Unlike existing methods [2,3] that focus on efficient inference while still requiring high training costs, we are the first to successfully reduce both training and inference costs massively while maintaining LMMs’ performance. We believe our approach offers a new paradigm for other vision-language tasks, especially in training video-based LLMs. 3. **Generalization:** To validate the effectiveness and generalization of our proposed method, we conducted experiments using the same training recipe as LLaVA/LLaMA-VID on standard image and video benchmarks, as shown in Global-Rebuttal Tables 1-4. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision-language tasks in the LMMs field. The topic of sparse vision modeling in context has gained popularity, as evidenced by Google DeepMind’s recent research on MoNE [4], released a few days ago. While they adopt a similar approach, their exploration is limited to traditional vision architectures rather than the popular LMMs field. We believe our idea "the importance of different vision tokens should be considered in context, and can be modeled by computation budget" can provide valuable insights into general LMMs. **W2: The discussion on the choice of LayerExpert, the visualization to prove the claim, the semantic-aware token selection strategies for comparison.** Forcing LayerExpert to select only the important visual tokens across each transformer block can be viewed as encouraging LMMs to focus on the most useful regions, while it increases the learning difficulty. As shown in Global-Rebuttal Figure 1, we visualize the tokens selected by LayerExpert and observe that it indeed focuses on important visual regions, such as bike-related tokens in Fig. 1a, tokens related to slicing onions in Fig. 1b, and tokens related to a table saw in Fig. 1c. Semantic-aware token selection typically requires cross-attention-based modeling, which is computationally expensive when processing every frame. As shown in Global-Rebuttal Tables 1-2, LLaMA-VID [5] utilizes text features extracted from a q-former to select semantic-aware visual tokens via context-attention, resulting in significantly higher training costs (10.5 hours vs. 52.5 hours) with only marginal performance improvement. **Q1: Why increasing the keep ratio r=0.3 results in worse performance? Also, it would be good to see the performance comparison on more keep ratios.** First, selecting important visual tokens via LayerExpert increases the learning difficulty, leading to better performance with fewer visual tokens. Second, there may be deviations in the LMMs’ training process. We will conduct additional trials in the revised version and report the standard deviation across all experiments. Further ablation studies on the keep ratio are presented below. | r | LM-PPL | TimeDiff | Fluency | LM-Correctness | | ---- | ------ | -------- | ------- | -------------- | | 0.1 | 2.43 | 2.11 | 44.7% | 48.1% | | 0.2 | 2.41 | 2.04 | 45.2% | 48.9% | | 0.3 | 2.41 | 2.05 | 44.9% | 48.7% | | 0.5 | 2.41 | 2.03 | 45.1% | 48.8% | | 0.7 | 2.40 | 2.05 | 45.2% | 48.9% | | 0.9 | 2.40 | 2.04 | 45.3% | 49.1% | | Full | 2.40 | 2.05 | 45.3% | 49.0% | **Q2: Why the proposed method targets at online video processing?** Processing video in an online scenario differs from offline processing, as LMMs must handle every incoming frame in real-time without frame sampling or token merging. This results in excessive vision token lengths, as shown in Figure 7 of the paper, and the training costs for LMMs increase exponentially with token length. Moreover, online VideoLLMs like GPT-4o have shown great potential in real-world applications, highlighting the urgent need to explore approaches that reduce both training and inference computational costs in online video scenarios. To validate the effectiveness and generalization of our proposed method, we conducted experiments using the same training recipe as LLaVA/LLaMA-VID on standard image and video benchmarks, as shown in Global-Rebuttal Tables 1-4. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision-language tasks in the LMMs field. [1] Liang Chen et al. An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models. ECCV 2024. [2] Yuzhang Shang et al. Llava-prumerge: Adaptive token reduction for efficient large multimodal models. arXiv: 2403.15388. [3] Peng Jin et al. Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding. CVPR 2024. [4] Gagan Jain et al. Mixture of Nested Experts: Adaptive Processing of Visual Tokens. arXiv: 2407.19985. [5] Yanwei Li et al. LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models. ECCV 2024. --- Rebuttal 2: Comment: Dear Reviewer qExK, We would like to express our sincere gratitude for the time and effort you spent reviewing our paper. **As the author reviewer discussion stage draws to a close**, we are eager for your response to ascertain if our detailed response has sufficiently addressed your concerns. We would be honored to address any further questions you may have. Thank you once again for your thorough review of our paper. Best regards, Authors of Submission 10329 --- Rebuttal 3: Comment: Dear reviewer qExK: Thank you again for your thoughtful feedback! We hope the rebuttal and additional experiments we provided were helpful. As the rebuttal period draws to a close, please don't hesitate to contact us if you have any problems. Thank you once again for your thorough review of our paper! Best regards, Authors of Paper10329
Summary: The core idea of this paper is to scaling up vision resolution for online video large language models. Instead of distributing FLOPs uniformly across all vision tokens in every decoder layer, they utilize a learnable module LayerExpert to allocate compute to critical vision tokens within the frame dynamically. Strengths: Video being such a compute heavy workload, anything to do with temporal modeling if can be performed in a streaming fashion can enable several practical applications. Online video-llms are relatively unexplored, this paper proposes sparse vision encoder suitable for enabling streaming applications at the same time retain spatial resolution. The project is timely, the results are good. The results on 3 egocentric benchmarks show encouraging results. Weaknesses: Not a major weakness; but performing experiments on ActivityNet-based training and evaluating on ViDSTG would give an idea of how does this MoD approach for making a sparse vision encoder work on standard benchmarks. Technical Quality: 4 Clarity: 4 Questions for Authors: Will you make your code available for the community? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The method does talk about increasing spatial resolution, but it doesn't consider spatial grounding. how does the author suggest the method can be extended for grounding? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive comments on our work and will address each of the issues you mentioned below. **W1: Performing experiments to show how does this MoD approach for making a sparse vision encoder work on standard benchmarks.** To validate the effectiveness and generalization of our proposed method, we conducted experiments using the same data configuration as LLaVA/LLaMA-VID on standard image and video benchmarks, as shown in Global-Rebuttal Tables 1-4. Specifically, for video benchmarks, we trained our VideoLLM-MoD on the ActivityNet and WebVid-2.5m datasets and further evaluated it on several other benchmarks. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision tasks in the LMMs field. **Q1: Code availability.** The project code is included in the supplementary materials. We will release all the code, data, and checkpoints as soon as possible. **L1: How does the author suggest the method can be extended for grounding?** Our proposed method can process more vision tokens within the same computational budget. For spatial grounding tasks, this capability allows for larger spatial resolutions and denser frame representations. More vision tokens facilitate finer-grained representations of images and videos, capturing more detailed visual features. This is crucial for accurately locating and distinguishing small objects or intricate details within complex scenes, thereby improving the model’s precision in such scenarios. --- Rebuttal 2: Comment: Dear Reviewer HwD3, We would like to express our sincere gratitude for the time and effort you spent reviewing our paper. **As the author reviewer discussion stage draws to a close**, we are eager for your response to ascertain if our detailed response has sufficiently addressed your concerns. We would be honored to address any further questions you may have. Thank you once again for your thorough review of our paper. Best regards, Authors of Submission 10329
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their constructive comments. We appreciate their recognition of our **motivation** (HwD3, x5JZ); the **novelty** of the approach (qExK, 2CSB); the **efficiency** (HwD3, qExK, 2CSB, x5JZ); and **sufficient experiments** (HwD3, qExK, 2CSB). In the uploaded PDF of the Global-Rebuttal, we visualize the selected visual tokens of LayerExpert and the generated response in Figure 1. Specifically, we trained our VideoLLM-MoD on the videollm-online-chat-ego4d-134k [1] dataset. We then used five consecutive frames from the Ego4D test set videos as inputs, aggregated and normalized the vision weights from each LayerExpert, and visualized them with an alpha mask in Figure 1(a,b,c), as well as the visualization on each LayerExpert in Figure1d. Note that more transparent tokens represent larger vision weights. We further conduct more experiments on general image/video benchmarks as shown in Tables 1-4. Our method achieved comparable performance with significantly lower training costs and FLOPs, demonstrating that the proposed sparse vision processing strategy is broadly applicable to vision-language tasks in the LMMs field. For a fair comparison, we implemented our approach in the LLaVA codebase and trained it using the same recipe as LLaVA [2] and LLaMA-VID [3] for image and video benchmarks, respectively. We computed the FLOPs for each method using text input token lengths of 60 and one single image. For effective training, we utilized Deepspeed zero2 and Flash-Attention2. We trained the model using LoRA with a rank of 128 and a scaling factor of 256 on all linear layers of the LLMs. [1] Joya Chen et al. VideoLLM-online: Online Video Large Language Model for Streaming Video. CVPR 2024. [2] Haotian Liu et al. Visual Instruction Tuning. NeurIPS 2023. [3] Yanwei Li et al. LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models. ECCV 2024. In the following, we address each reviewer’s concerns. For each review, we address the **W**eaknesses, **Q**uestions, and **L**imitations point by point. Pdf: /pdf/85ef878bd545ef958f24db727e3b44885b3fdabd.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads
Accept (poster)
Summary: The paper introduces DPH, a new method for pre-trained language models that addresses the limitations of RLHF. Unlike traditional RLHF, which can compromise a model's reasoning abilities and cause hallucinations, DPH employs an auxiliary reward head to learn human preference signals without altering the LM's output distribution. The authors conduct a theoretical analysis linking their objective function to cDPO and demonstrate that DPH can be used in conjunction with existing alignment techniques to improve performance. The experiment results show that models fine-tuned with DPH achieve higher scores compared to those fine-tuned with SFT or DPO alone. Strengths: The authors highlight the side effects of RLHF, such as damage to the model's ability to reasoning and hallucinations, and try to solve them. I believe the research problem is significant and worth investigating. The authors are going to release their code and model weights, which would benefit our community. Weaknesses: 1. The claim that the auxiliary reward head avoids affecting the output distribution of the language modeling head is undermined by the practical implementation. Specifically, this claim does not hold when the backbone language model is updated to learn the preference distribution (line 180) and when the model is updated using a joint loss function (Eq. 7). These aspects of the implementation weaken the validity of their claim. 2. Experiment setup: The baseline setup in the experiments is not reasonable. The authors compare their method with other language models that have distinct training settings and data, rather than comparing with other alignment methods. This makes it difficult to infer the superiority of the proposed method over existing alignment techniques, as the comparisons are not directly relevant. 3. Clarity and organization: The writing is not easy to follow and requires improvement. Specifically, sections 4.3 and 4.4, which are parts of the methodology, are incorrectly placed in the experiment section. Additionally, the evaluation protocol (section 5.1) should be introduced in section 4 but is instead placed in the results section. This misplacement affects the clarity and logical flow of the paper, making it harder for readers to understand the methodology and its evaluation comprehensively. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why does DPH not require an SFT sampling and human labelling stage? In section 4.4, you mentioned SFT in your training pipeline. 2. What is the relationship between the proposed approach and the sentence "Would I Lie To You?" in your title? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors include a discussion regarding the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Addressing Weakness 1:** Although we do jointly train both the reward head and the preference distribution for completions this is realised through cDPO with a large beta penalty (which limits divergence of the output distribution) and a large epsilon (which limits the preferred vs dispreferred margin). So although the model does tend towards the LM head producing a more preferable output distribution this is heavily limited and was included purely to increase the chance of producing more preferable candidates. It is completely possible to swap cDPO with KL divergence to mitigate any change in the output distribution while learning the reward head weights, but - as stated in the paper - we found it beneficial to "slightly" align the model with regularised cDPO. We also do experiments using Qwen models in section 5.2.3 where the model backbone is completely frozen (meaning the output distribution cannot change) and find DPH achieves the highest scores on GLUE across the board, while lagging behind on other tasks which we believe is due to hidden state of the last token in the sequence not containing enough information for the commonsense and reading comprehension tasks - a phenomenon which would be mitigated by additionally training the backbone with either cDPO or KL. Additional ablations using KL divergence rather than cDPO would be an excellent candidate for inclusion in a paper revision either in the main body, or as additional experiments in the appendix. **Addressing Weakness 2:** The baselines we chose are from models of comparable size or capability with tradeoffs such as task-specific fine-tunes for smaller models or using base pre-trained checkpoints for larger models. To obtain baselines for similarly sized models using other alignment methods would have required us to train these models ourselves which requires significant time and compute. Had we compared our models to other publically available aligned checkpoints our models would have performed worse across the board as these publically available models are typically orders of magnitude larger, and even their SFT-only counterparts would outperform even our best in-house models. We aimed to show the efficacy of our method by showing how DPH improves over our LM head baselines and the baselines of other popular models. **Addressing Weakness 3:** We appreciate the reviewer's suggestion on organisation of sections which we will address in further revisions of the paper. **Addressing Question 1:** DPH does not require SFT sampling and human labelling as we do not follow the standard RLHF training pipeline and instead opt for a pipeline similar to that of DPO or ORPO which only requires binarized preference pairs for alignment, where such high quality datasets are readily available. We first must perform SFT to get a baseline policy (for both cDPO and prior regularisation) which is also necessary for other alignment methods like DPO and PPO. SFT sampling and human labelling could be incorporated into training however this is not a necessary step and would add complexity. **Addressing Question 2:** The sentence "Would I Lie To You?" refers to the fact that language models can often produce incorrect output which may be unhelpful or factually incorrect. We used this tag line because the model may sometimes generate "lies," but DPH is intended to rank such candidate outputs lower than more correct candidates regardless of log-probability. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal and other reviews, here are my additional comments (hope other reviewers are also aware of following): TL;DR - The experimental setup is considerably flawed: - Evaluating an **alignment/preference tuning approach** using baselines with only pretraining (Table 2, Pythia & TinyLlama; Table 3 Llama) (some baselines are SFT) - Not including other preference tuning approaches as baselines - Motivation is disconnected the proposed approach - Claiming RLHF compromising an LLM (authors' motivation) but updating the parameters of the backbone model by DPH - Not showing DPH does not compromise an LLM --- ***Experimental setup is considerably flawed*** The experimental design is deeply misguided. The proposed approach is a preference tuning (PT) algorithm that is conducted after SFT and trained by preference data. **A legitimate experimental setup would directly compare this with other preference tuning algorithms (DPO, KTO, IPO, etc.)**. However, the comparisons presented are like: - Proposed approach: Base_Model_A + Pre-training_A + SFT_A + Preposed_PT - Others: Base_Model_B + Pre-training_B (+ SFT_B or no SFT_B + unknown PT or no PT) It is **really confusing when you expect readers to evaluate a preference tuning appraoch by comparing a PT-ed model with a pre-trained model** (Table 2, Pythia & TinyLlama; Table 3 Llama). > To obtain baselines for similarly sized models using other alignment methods would have required us to train these models ourselves which requires significant time and compute. This excuse is unconvincing. I believe you only need to take SFT checkpoint of your model (551M) and conduct other PT baselines like DPO. The computational cost would not exceed your method (cDPO + DPH). Furthermore, the same comparison based on TinyLlama backbone is expected to make it "real" extensive evaluation. These are tiny models and can be done on a single A100 GPU. --- ***Motivation is disconnected the proposed approach*** The authors argue that RLHF compromises an LLM, and thus propose a method to update only the added reward heads, purportedly to prevent any impact on the model's output distribution (line 10). However, their practical implementations (section 4.3) are: - Updating the backbone model, which is shared by language modelling head, by DPH + prior regularization - Updating the backbone model by DPH and cDPO loss These steps clearly alter the model's output distribution, directly contradicting the claimed motivation. Thus, the superiority of the proposed approach should seek strong support from the empirical results, which is unfortunately missing.
Summary: This paper introduces Direct Preference Heads (DPH), a novel method for aligning language models (LLMs) with human preferences at inference time. DPH works by adding an auxiliary reward head to the LLM that learns to predict human preference scores for generated outputs. This allows the model to self-evaluate multiple candidate outputs and select the highest-scoring one, effectively pruning undesirable responses. The authors argue that DPH offers several advantages over traditional Reinforcement Learning from Human Feedback (RLHF) methods like PPO and DPO: * Inference-time alignment: DPH aligns the model at inference time, avoiding the potential degradation of reasoning abilities often observed with RLHF during training. * Lightweight: DPH requires only a single model to produce both responses and rewards, unlike RLHF which typically involves multiple models. The paper presents two objective functions for DPH, separable and contrastive, and demonstrates their connection to Conservative DPO. Experiments on GLUE, GPT4All, and RACE datasets show that DPH consistently outperforms both supervised fine-tuning (SFT) and DPO alone. Strengths: * Novel approach: DPH offers a new perspective on LLM alignment by focusing on inference-time pruning rather than modifying the generation process itself. * Theoretical grounding: The paper provides a theoretical analysis of the objective functions and their relationship to cDPO, demonstrating robustness to label noise. * Strong empirical results: DPH consistently outperforms baselines on various tasks, showcasing its effectiveness. Weaknesses: * Lack of statistical significance: The paper does not report error bars or statistical significance measures, making it difficult to assess the robustness of the results. * Missing the comparison to standard RLHF methods: the paper does not include comparison to the PPO baseline, which is the most commonly used technique. * The authors do not discuss other inference-time alignment techniques that exist in the literature. Technical Quality: 3 Clarity: 2 Questions for Authors: * The paper uses different sampling strategies for SFT and DPH. It would be helpful to discuss the rationale behind these choices and explore the impact of different sampling strategies on DPH performance. * Sampling multiple candidate outputs at inference time can be computationally expensive, especially for larger LLMs. The paper doesn't explicitly address this cost, which could be a practical concern for real-world applications. Can you elaborate more on this in the paper? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper focuses on a 551M parameter model. It would be valuable to see how DPH scales to larger models, as RLHF is known to be more effective for larger models. Comparing DPH to RLHF-aligned larger models would provide a more comprehensive evaluation. Also RLHF baselines on top of DPO need to be added. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Addressing Weakness 1:** The paper does not include statistical significance measures as this would require performing each stage of the training pipeline multiple times which is computationally costly and we opted to use our compute budget to perform ablations with different hyperparameters and using different models. **Addressing Weakness 2:** Comparing to a PPO baseline would have vastly increased the scope of our work as it would also involve steps such as SFT sampling, collecting human labels and training a reward model, or using pre-labelled datasets to train a reward model, and then performing one or more rounds of PPO with rejection sampling. This would add a plethora of choices in the alignment pipeline which would need to be ablated over and further add to the computational cost of the research. Our goal with this paper was to lay the foundations of DPH as a novel method which could be used as a baseline for further research. **Addressing Weakness 3:** There are a variety of inference-time alignment techniques, varying from prompt based approaches to chain of thought based self evaluation to activation steering. All these methods, however, are typically performed on significantly larger language models which on account of their greater size would likely outperform our models even without these ad-hoc alignment methods. An alternative would be reproducing one or more of these methods with smaller models, but that would require significant time and compute. **Addressing Question 1:** To maximise compute throughput we use the Transformer-XL style sampling strategy, which also allows for performing SFT on samples which are larger than the LM context window (such as multi-turn conversations) due to the recurrent memory mechanism. Additionally, sampling a task multiple times was introduced as a way to balance the rate each task was sampled from but ultimately all tasks remained balanced. However, for DPH alignment we switched to typical on task sample per sequence in the batch as the batch needed to be constructed from positive and negative sample pairs. This isn't really possible using the Transformer-XL as we need to exactly match the positive and negative pairs one-to-one which becomes difficult when packing multiple samples of varying lengths into a single sequence. **Addressing Question 2:** Sampling multiple candidate outputs does indeed require more compute, however we frame DPH as a method which is useful for smaller LLMs which often don't saturate compute capabilities of higher end accelerators meaning larger numbers of candidates can be computed in parallel using similar compute cost as a single candidate generated by a larger LLM. There are other situations where DPH may be applicable such as inference on the edge where inference latency may be of less concern and where model weights of larger LLMs may not even fit on device. **Addressing Limitations:** Although our paper focuses on our in house 550M parameter model we do perform ablations on 4 Qwen 1.5 family models in section 5.2.3. See weakness 2 for comments on RLHF comparisons. Such comparisons may be carried out in future works.
Summary: Paper proposes Direct Preference Heads, which learns a reward prediction head using a pretrained model, without affecting the model's output distribution. Strengths: Comprehensive evaluation - paper presented experimental results across a wide range of tasks (NLU, commonsense reasoning, reading comprehension, etc. ), and compared the proposed method against a range of different baselines (Pretrained model, SFT, DPO, etc. ) Good presentation - results are cleanly presented, training objectives are clear Significance - results showed that the proposed method outperforms comparable baselines in most presented tasks Weaknesses: Some missing/confusing parts about the proposed method: - How are the learned reward predictions used? Some parts of the paper seems to suggest that multiple responses are sampled from the model, and the reward head is used to rank the samples? I think this is an important component of how the proposed method would be used, so it would be great to further elaborate on this procedure, e.g. how many samples? - Some parts of the paper seems to suggest that the proposed approach only learns a reward prediction head on top of an existing model without modifying the original model (e.g. in abstract, "... without directly affecting the output distribution of the language modeling head ..."), but other parts of the paper stated that it is better to finetune the entire model when learning to predict rewards (e.g. Section 4.3). However, the stated reason for why DHP might be better than other RLHF techniques like DPO is that further alignment (past SFT) can hurt model performance. If DHP also requires further finetuning, what is the intuition for why DHP performs better than DPO? Related works - Paper introduces PPO, DPO and cDPO. However, there has been a large number of recent works in RLHF beyond these papers. It would be great to present a more complete related works section with a more comprehensive survey of existing work. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Addressing Weakness 1:** The learned reward predictions are to be used to rerank candidate competitions at inference time, similar to rejection sampling. The number of samples to rank, however, is completely situational and can be determined by factors such as compute capability, memory availability and the maximum latency deemed acceptable before serving the highest rank completion to the end user. In general, the more candidates generated the more likely it is for a higher quality response to be produced. It should be pointed out that the time needed to rank the candidates is negligible as the rewards are computed alongside generation, meaning the compute, time and memory requirements are determined by generation procedure alone. This does, however, bring up an excellent direction for further research to determine if and how an early stopping heuristic may be employed to optimise the number of candidates produced. **Addressing Weakness 2:** DPH can indeed be employed without fine-tuning any of the model's backbone weights, and we do include experiments performing this on 4 Qwen models in section 5.2.3. However for DPH to perform best it does require further fine-tuning model weights, and we facilitate this through the usage of both prior regularisation and regularised cDPO with a large beta penalty and epsilon value to discourage the policy model from diverging from the SFT model while learning to produce rewards. It would be possible to replace cDPO with KL divergence, for example, to achieve a similar effect while completely decoupling the learning of the reward head from any form of RLHF or alignment method. We opted to use cDPO because we found it allowed the model to improve further (within the divergence and confidence bounds set by beta and epsilon) without degradation which further increases the likelihood of generating improved candidates for reranking at inference time. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the response! It would be great to incorporate the clarifications in the rebuttal into the paper.
Summary: The authors propose an inference time method to align language models with human preferences without harming the model’s reasoning abilities. The method creates an auxiliary reward head that operates during inference to score potential outputs without changing the output distribution directly. The author validates approach by comprehensive evaluations on NLU, commonsense reasoning, and reading comprehension datasets. Strengths: 1. Theoretical Insight: The paper provides a theoretical analysis, connecting DPH to cDPO, and providing proofs to support the convexity and effectiveness of the proposed loss functions. 2. Novel Approach: The paper introduces a novel approach allowing for preference-aligned model fine-tuning without directly affecting the output distribution, potentially reducing negative side effects like hallucination and preserving the model’s original reasoning capabilities. Weaknesses: 1. Inference Time Overhead: The Direct Preference Heads method involves using language models to generate multiple candidate responses, which must then be evaluated for selection. This process is less efficient compared to traditional fine-tuning methods due to the additional computational steps required. 2. Dependence on Initial Sample Quality: The effectiveness of the generation-then-reranking approach highly relies on the quality of the initial responses generated. If the sampled responses are of low quality, DPH is unable to enhance the output, as it can only rerank the given candidates. 3. Limited Evaluation Scope: The evaluation of the DPH method is restricted to a single model of 550M parameters. It is not clear how well the method would perform when scaled to larger models, as the results might not be consistent across different model sizes. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The paper would be more convincing if the author applied the proposed method to a larger-sized model and demonstrated a scaling trend. The use of only one model makes the effectiveness of the approach a bit underdetermined 2. How does the proposed DPH method compare with other inference time algorithm such as rejection sampling? 3. How do you choose baselines when reporting performance? For example, Table 1 reports the model performance on BERT, an early-stage masked language model, instead of other competitive baselines. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed all the limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Addressing Weakness 1:** DPH does indeed have a higher inference-time overhead than other alignment methods which aim to produce optimal completions in a zero-shot manner. However we frame DPH as being a suitable method for smaller language models which, as cited in the paper, may be harmed by other RLHF methods. The inference overhead of such smaller language models often do not not saturate the available compute or memory meaning multiple candidates can often be produced in parallel with minimal increase in latency. **Addressing Weakness 2:** Although the quality of produced responses depends on initial sample quality, we make use of high quality training datasets which have been used to train SOTA open-weight open-dataset language models. It is, of course, possible that all candidates generated by the language model are low quality, however through tuning generation hyper-parameters and making use of advanced sampling techniques such as typical sampling or contrastive search it is possible to coerce the model into producing a candidates with high variation from which it is likely some candidates will indeed be high quality. It is possible for DPH to be combined with other alignment methods if desired to further increase the quality of the most probable completions, but this paper primarily aims at providing the foundations for DPH as an inference time reranking method. **Addressing Weakness 3:** See below. **Addressing Question 1:** We do evaluate the proposed method with larger sized models by training a reward head and pooling function on 4 frozen Qwen 1.5 models (0.5B, 0.5B-Chat, 1.8B, and 1.8B-Chat) and report the results in section 5.2.3 and Table 6. **Addressing Question 2:** Rejection sampling is actually very similar to reranking with DPH, with the only difference being rejection sampling typically uses a separate reward model to evaluate the candidate completions whereas our method combines the generation and reward modelling into a single LM. Additionally, rejection sampling is often used during training to generate new candidates to improve the policy through methods such as PPO, while DPH is only intended for inference time reranking. If we were to use rejection sampling during training we would run into "reward hacking" issues where the model becomes overconfident in its reward assignments on self-generated completions which would be analogous to mode collapse in GANs; this does not mean that rejection sampling cannot be used with DPH models as the reward hacking issue will be much less prevalent if a separate reward model is used for ranking. This is, however, antithetical to the lightweight nature of our method and the intention to minimise the number of concurrent models needed to perform fine-tuning and alignment, and as such we did not explore rejection sampling and other similar RLHF methods as part of our training pipeline. **Addressing Question 3:** Our choice of baselines were partially dictated by the tasks themselves, and partially by the availability of results. For example with GLUE we picked BERT as it is still common to finetune a BERT model for NLU tasks which makes it an excellent choice for comparison. Additionally, although BERT has fewer parameters, the results were produced by task-specific fine-tunes which we believe makes it a valid and competitive baseline to compare against our 551M multi-task fine-tunes. We also included GPT-1 to compare our model with task-specific fine-tunes of a causal language model, which we chose due to the availability of results and its inclusion in the original BERT paper. GLUE also imposes rate-limiting on the evaluation server which adds time constraints to re-evaluating newer models on the test set. For the commonsense reasoning tasks we followed the evaluation used by TinyLlama and included TinyLlama and two Pythia models, all of which had significantly more parameters and pre-training albeit without fine-tuning. And for the reading comprehension tasks we included the results from GPT-1 to compare our model with a smaller but task-specific fine-tuned model, and the results from LLaMa 7B and 13B to compare our model with larger but non fine-tuned models. These choices may seem odd, but we wanted to include verified results from popular models which were performed externally and posted publicly to reduce evaluation bias due to factors such as prompt selection, or quirks in the evaluation pipeline which may favour certain models or token vocabularies when computing log probabilities. --- Rebuttal 2: Comment: Thanks a lot for the additional details you provided. However, I’m still not fully convinced about the choice of baselines included in the paper and the experimental setup. Specifically, I’m curious why larger-scale state-of-the-art models, such as LLaMA-2 7B, weren’t used for the main experimental results and compared with all baselines at that scale. The choice of smaller models makes it challenging to determine if the improvements would hold at a larger scale. I’m already leaning towards accepting the paper and will keep my score unchanged, but I hope future versions will include these considerations. --- Rebuttal Comment 2.1: Comment: Thank you for your response. It would indeed be valuable to include results from models such as Llama 2, however as stated in my previous response we elected to use verified results which were not collected by ourselves to reduce any potential bias in our testing pipeline (such as from prompt selection, or aggregation method for log-probabilities). However, for some benchmarks (the GPT4All suite and RACE) it may be possible to use a standardised evaluation suite such as Eleuther AI's LM Eval Harness to obtain reasonably unbiased results, and we would have the compute necessary to do so for a future revision. Evaluating GLUE, however, would be a bit more tricky, especially since STSB is a regression tasks and we had to compromise our own model's evaluation by forcing it to chose from integer predictions.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Discrete Dictionary-based Decomposition Layer for Structured Representation Learning
Accept (poster)
Summary: The authors propose the Discrete Dictionary-based Decomposition method, a discrete representation learning for TPRs. It encodes the roles and unbinding queries (and potentially also the fillers) using a learned dictionary, which serves as a codebook. The roles and then unbindings share the codebook, encouraging correct, noise-free retrieval. The layer is a drop-in replacement for any TPR-based layer. The authors demonstrate strong performance on tasks requiring systematic generalization, matching, or outperforming AID without using an iterative attention mechanism. Strengths: The authors present an interesting, novel, discrete representation of learning. Learning such representations is notoriously difficult. Their method is the only one that can solve the systematic split on the SAR task. The evaluation is solid. The authors have analyzed the learned roles and unbinding operators and showed that they are learning orthogonal representations without explicit regularization. Weaknesses: Given that there is a residual connection around the codebook, it is unclear what component of the roles and unbindings are generated by the codebook and which part of them comes from the residual. It would be nice to see ablations on the model without residual. If the model is not able to learn without it, an alternative is to replace the output of the residual by their mean over the dataset and see the effect on the performance. Additionally, I would like to see analysis like Figures 3 and 4, but for the baselines without D3. Currently, in 282, the authors claim "indicating the effectiveness of D3", but it is unclear how orthogonal representations are the baselines' representations. I believe that this additional analysis could deepen the understanding of why and how D3 works and significantly strengthen the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: In line 115, the authors write: "This distinctive characteristic allows D3 to leverage the learned symbolic features to decompose unseen data after training.". Could the authors provide an additional explanation for why discretization aids the decomposition of unseen data, as opposed to denoising the representations? On Fig 5 (b), the authors show an ablation over the number of keys in the codebook. Even with the smallest key size, the model works well. I would like to see a similar plot going down in the number of keys as long as the performance doesn't suffer significantly or the number of keys doesn't reach 1. This would help to understand better the effect of the codebook. Calling WikiText 103 a large-scale language modeling dataset in 2024 is a stretch. Please just call it "language modeling" dataset. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of their method in the paper. As an additional limitation, as with all TPR papers, it is unclear how the improved generalization on toy tasks would transfer to real-world scenarios, like improving the generalization of LLMs. However, this should not be a reason to downvalue a contribution of the paper, given that it aims to improve an important shortcoming with a novel idea. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for their constructive feedback and insightful suggestions. We will ensure that we reflect on our responses in our revised manuscript in the future. > W1: Ablation study for the effect of the residual connection **The *components* represent the vector representations of the symbolic components of TPR**, including roles, fillers, and unbinding operators, which are essential for TPR operations in TPR-based models. Specifically, D3 uses *queries* to access the codebook and generates *codes* based on the values in the codebook, as detailed in Eq. 3. Subsequently, it produces the *components* by residually connecting the *queries* to the *codes*, as described in Eq. 4. This mechanism implies that without the residual connection, the *components* are derived solely from the codebook values. In response to your suggestion, **we conducted an ablation study on the SAR and sys-bAbI tasks** to examine the impact of removing the residual connection. Please see the global response. **Fig. 13 shows that the residual connection is crucial for effectively training the D3 layer**. *** > W2: Additional orthogonal analysis on baseline models Thank you for your valuable input. In response, **we have conducted an orthogonal analysis for the baseline models** (FWM and AID) similar to the analysis presented in Section 4.3.1. Please see the global response. **Figs. 14 and 15 indicate that the D3 model generates more structured and orthogonal representations than the baseline models**, FWM and AID, demonstrating its effectiveness. *** > Q1: Could the authors provide an additional explanation for why discretization aids the decomposition of unseen data, as opposed to denoising the representations? Thank you for your insightful comment. Our work focuses on enhancing the systematic generalization of TPR-based approaches. Since systematic generalization aims at generalizing unseen data composed of known components observed during the training phase, the decomposition operations of TPR can be thought of as mapping unseen data to observed TPR components during training. The discretization technique enhances this decomposition operation by preserving the discrete features learned during training and enabling the mapping of given data to discrete representations. Within the D3 layer, **these discrete representations facilitate the generation of structured TPR representations that satisfy the TPR conditions**, as demonstrated in Figs. 3 and 4. Moreover, Figs. 3 and 4 illustrate that **the D3 layer also plays a role in denoising representations** by summing the *codes* to the *queries*, **effectively reducing the noise of the *queries* in the *components***. These characteristics allow the D3 layer to enhance the systematic generalization of TPR-based models by leveraging the discrete features to their decomposition operations of TPR. *** > Q2: Ablation study for the effect of varying the number of keys Thank you for your valuable input. In response to your comment, **we have expanded the range of our ablation study** to include additional experimental results for varying the number of keys in the codebook. Please see the global response. **Fig. 16 shows that even with a significantly reduced number of keys, the model with D3 maintains high accuracy on the SAR task**. This observation highlights the significance of the architectural inductive bias introduced by the D3 layer, effectively addressing the decomposition problem inherent in TPR-based models. These findings enhance our understanding of the codebook's impact on the model's performance. *** > Q3: The term "large-scale language modeling" in the manuscript **We will update the terminology to "language modeling"** to reflect its current standing better. *** > L1: As an additional limitation, as with all TPR papers, it is unclear how the improved generalization on toy tasks would transfer to real-world scenarios, like improving the generalization of LLMs. We appreciate the opportunity to address the concern regarding the transferability of improved generalization from toy tasks to real-world scenarios, a common limitation in TPR-related research. Previous work [1] has demonstrated the equivalence between TPR (or fast weight) and the linear attention mechanism, highlighting the potential for TPR to enhance linear attention. Additionally, recent works [2, 3] have proposed hybrid quadratic-linear attention methods that mitigate the computational complexity of self-attention while improving the generalization capabilities of LLMs. Given these findings, **we believe that the techniques developed in prior TPR-related research could address the limitations of linear attention within these hybrid methods, potentially improving the generalization of LLMs**. We plan to explore this promising direction in our future work. [1] Linear Transformers Are Secretly Fast Weight Programmers, ICML’21. [2] Transformer quality in linear time, ICML’22. [3] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models, ICML’24 --- Rebuttal Comment 1.1: Title: Concerns about the usefulness codebook Comment: We are thankful to the authors for their efforts in the rebuttal, and they answered most of my questions positively. However, I find the results of "Q2: Ablation study for the effect of varying the number of keys *extremely concerning*. The performance of the network doesn't change with even as low as 2 keys in the codebook. This begs the question of whether the codebook, which is the main component of the model, is doing anything, or are there other differences in the training pipeline or the model that cause the observed difference in Fig 2 compared to the baselines? This makes me question the validity of the approach. Do the authors have an explanation of how this is possible? Can you please run additional ablations with (1) the codebook removed, but otherwise use the same code and pipeline, (2) the number of codebook elements set to 1, which is equivalent to having a linear layer down-projecting to 1 elements and then projecting it back. I would appreciate any evidence that shows that the codebook is crucial, like the one I described above, or one that show that codebooks with very small number of entries do not work on other tasks. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed feedback and valuable suggestions. To address your concerns, **we conducted additional ablation studies on the SAR and sys-bAbI tasks**. Specifically, we investigated **the D3 layer without incorporating the codebook** (referred to as "*w/o codebook*"), where *components* are generated solely using the shared feed-forward networks (layer$\_\text{residual}$ and layer$\_\text{final}$). In this configuration, the *component* is calculated as "layer$\_\text{residual}$(*query*)" instead of the "*component* = *code* + layer$\_\text{residual}$(*query*)", which is described in Eq. 4. As shown in Table A1, even without the codebook, the D3 layer improves the generalization performance of the baseline model on the SAR task. This result indicates that **the shared feed-forward networks significantly contribute to performance enhancement on the SAR task**, which may explain why the model maintains robust performance even with fewer keys. However, it is important to note that without the codebook, the D3 layer does not achieve near-perfect accuracy on the SAR task (as shown in Table A1) and fails to significantly enhance the systematic generalization of the baseline on the sys-bAbI task (as shown in Table A2). **These results demonstrate that the codebook plays a crucial role in enhancing the model's overall performance and generalization capabilities**, especially in tasks requiring systematic generalization. Additionally, we experimented with $N_\text{code} = 1$ on the SAR task, where the codebook may act as a bias term. The results in Table A1 show that using a single codebook element results in degraded generalization performance compared to the "*w/o codebook*" configuration, indicating that multiple codebook elements are essential for achieving optimal results. | **Table A1.** SAR task | $D_\text{code}$ | $N_\text{code}$ | top-$k$ | Accuracy | | --- | :---: | :---: | :---: | :---: | | FWM | - | - | - | 44.9$_{\pm31.5}$ | | --- | | D3 | 32 | 4 | 2 | 87.38$_{\pm11.10}$ | | D3 | 32 | 64 | 8 | 99.27$_{\pm0.88}$ | | D3 *w/o codebook* | 32 | - | - | 89.02$_{\pm4.56}$ | | --- | | D3 | 64 | 1 | 1 | 89.10$_{\pm7.99}$ | | D3 | 64 | 4 | 2 | 94.47$_{\pm2.35}$ | | D3 | 64 | 64 | 8 | 94.29$_{\pm8.06}$ | | D3 *w/o codebook* | 64 | - | - | 91.65$_{\pm3.66}$ | | **Table A2.** sys-bAbI task | *w/o sys diff* | *w/ sys diff* | | --- | :---: | :---: | | FWM | 0.79$_{\pm0.14}$ | 2.85$_{\pm1.61}$ | | --- | | D3 | 0.75$_{\pm0.17}$ | 1.96$_{\pm0.88}$ | | D3 *w/o codebook* | 1.19$_{\pm0.41}$ | 3.55$_{\pm1.04}$ | In summary, **combining of the codebook and the shared residual networks is crucial for achieving systematic generalization in TPR-based models**. We hope these additional experiments provide clarity regarding our approach.
Summary: The paper presents a discrete dictionary-based decomposition (D3) layer for tensor product representation (TPR). The purpose is to enhance the decomposition capabilities of TPR so that it can perform downstream tasks more effectively. D3 uses the discrete, trainable key-value dictionaries to map input data to “discrete” features within each dictionary. The dictionaries correspond to the TPR components, which are roles, fillers, and unbinding operators. More specifically, one dictionary is trained for the roles and unbinding operators and an optional one for the fillers. Experiments were conducted by using D3 in three TPR-based models. The paper shows that the use of D3 in these models outperforms the ones without using it. Strengths: - The use of discrete dictionary-based method in TPR models seems novel. I am not working in the area. This novelty is claimed in the paper. - The related work section is well structured and clearly mentions the difference between the proposed method and existing ones. - The experiments were conducted with multiple TPR-based models on multiple tasks and the proposed method achieved the best results on most cases (except for one case in Table 1 in which FWM+AID is the best). Weaknesses: - The paper assumes that the readers are familiar with terminologies used in the paper. It would be better if the paper provided definitions of some key concepts frequently used in the paper (such as roles, fillers, etc.). I had to look into references to fully understand the concepts. - D3 is evaluated with multiple TPR-based models on multiple tasks (such as text understanding and reasoning, visual relational reasoning and large language modeling). But it is unclear how these TPR-with-D3-based models compares with the SOTA models for these individual tasks. Without such a comparison, the significance of the work is unclear. - The paper mentioned at the beginning that using discrete representation can help with the interpretability of the models. However, there is no discussion or evaluation of this aspect of the model as the result of using D3. - I am not sure if the resulting “discrete” representation of the input data can be called “symbolic” features (which the paper uses a few times). The representations are still in the vector form, which is numeric. They may be considered “discrete”, but they are not symbols. - It is not clear how the dictionaries trained with D3 correspond to three components (roles, fillers, and unbinding operators) in TPR. Each component seems to be trained in the same way. There do not seem to be distinctions among them according to the descriptions in Section 3.1. In the experiments, only two or one dictionary is used, with one dictionary corresponding to roles and unbinding operators and an optional one for the fillers. If only the number of dictionaries matters, then the “w/o F” option could be considered “w/o roles/unbinding operators but with fillers”. More explanation on this would be beneficial. - There are some small errors in the presentation. For example, Subfigues (b) and (c) in Figure 6 should be switched to match their references in the text. Technical Quality: 2 Clarity: 2 Questions for Authors: - How is the TPR-with-D3-based models compared with the SOTA method on their individual tasks? - Does the use of the D3 layer increase the interpretability of the models? - How do the dictionaries trained with D3 correspond to three components (roles, fillers, and unbinding operators) in TPR? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for their constructive feedback and insightful suggestions. We will ensure that we reflect on our responses in our revised manuscript in the future. > W1: More explanation for some key concepts We acknowledge that the initial description may have been challenging for readers unfamiliar with the TPR framework. We provide more detailed explanations of the key terms to enhance the comprehensibility of our paper as follows. The TPR is a general method for representing the symbolic structure of data using distributed representations. **This framework operates by explicitly decomposing data at the representation level into distinct symbols, such as *role-filler* pairs, which are then encoded through the tensor product of *role* vectors and *filler* vectors**, $T= filler \otimes\textit{role}$. By doing so, this encoding method preserves the symbolic structure of the data. The *roles and fillers* are dependent on the task at hand. For instance, in a tree structure, the *role* corresponds to the position within the tree, while the *filler* represents the associated label with that position [1]. In associative memory, the *role* is analogous to an associative key (or address), and the *filler* corresponds to the associative value (or contents) [2, 3]. During the decoding phase, **the TPR framework retrieves specific *fillers*—essential for solving the given task—**from the encoded TPR representation via matrix multiplication using *unbinding operators* associated with particular *roles*, $filler = T \cdot \textit{unbind}$. In TPR-based neural networks, during training, **these TPR characteristics force the models to learn to generate structured representations satisfying TPR conditions through supervised training**. This process ensures the models can perform correct TPR operations to solve tasks accurately. [1] Differentiable tree operations promote compositional generalization, ICML’23. [2] TPR-RNN, NeurIPS’18. [3] FWM, ICLR’21. *** > W2 & Q1: Comparison with the SOTA models Our primary objective was to evaluate **how D3 enhances the systematic generalization capabilities of TPR-based models**. Therefore, **our initial experiments compared D3 with several TPR-based baselines across multiple tasks**. However, we acknowledge the importance of comparing our D3-enhanced TPR models with SOTA methods for a comprehensive evaluation. In response to your inquiry, we have expanded our comparisons to include a broader range of state-of-the-art methods. Please see the global response. *** > W3 & Q2: Interpretability of D3 The TPR framework decomposes data at the representation level into distinct symbols, such as *role-filler* pairs for encoding and unbinding operators for decoding. **This characteristic of TPR improves the interpretability of models** because the relationship between *the roles and the unbinding operators* explains which parts of the input the model focuses on to predict the answer. However, **this interpretability is accurate only when the generated structured representations satisfy the TPR conditions**. In this context, **the D3 layer enhances the interpretability of the models by providing structured representations that more effectively satisfy the TPR conditions compared to baseline models**. In response to Reviewer FjyM, we have included experimental results of the orthogonal analysis for the baseline models. Please see the global response. Figs. 14 and 15 demonstrate that the generated representations by the D3 better confirm the TPR conditions than other baseline models, supporting our claim that the D3 layer contributes to increased interpretability. *** > W4: The term "symbolic" features in the manuscript Thank you for your valuable input. Although we intended to convey that the codebook features within the dictionaries capture information that could be associated with symbolic components, it is more accurate to refer to these features as "discrete" rather than "symbolic" since they remain in vector form and are numeric. **We will revise the term "symbolic feature" to "discrete feature"** to prevent any potential misunderstanding (specifically on lines 13, 60, 61, 71, and 116). *** > W5 & Q3: How do the dictionaries trained with D3 correspond to TPR components? We appreciate the opportunity to clarify the correspondence between the dictionaries trained with D3 and the TPR components in TPR. As discussed in response to W1, during the training phase, the decomposition module in TPR-based models learns to generate TPR components to perform TPR operations. **Each dictionary in D3 is explicitly linked to a specific TPR component, ensuring that each dictionary is responsible only for generating its corresponding component**. The generated components are then utilized in pre-defined TPR operations of the TPR-based models. **This setup ensures that each dictionary is trained to be specialized to a specific TPR component**. Moreover, the number of dictionaries directly correlates with the TPR operations of the baseline models. For instance, the TPR operations of the FWM require two types of roles and one filler for encoding ($T= filler \otimes\textit{role}_1 \otimes \textit{role}_2$) and two types of unbinding operators for decoding ($filler = T \cdot \textit{unbind}_2 \cdot \textit{unbind}_1$). We thus set up three distinct dictionaries when integrating D3 with FWM (one for $filler$, another for $role_1/unbind_1$, and the other for $role_2/unbind_2$). We opted for the configuration "*with roles/unbinding operators*" to allow the D3 layer to contribute to generating structured representations that satisfy the TPR conditions, regardless of the number of dictionaries. Your suggested option, "*w/o roles/unbinding operators but with fillers*," presents an interesting alternative for examining the design of D3 and could be considered for future studies. *** > W6: Small errors in Figs. 6(b) and (c) **We will correct the captions**. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I've adjusted my rating accordingly. --- Reply to Comment 1.1.1: Comment: We thank Reviewer 3Nfz for your time and your constructive comments during the rebuttal period.
Summary: Drawing inspiration from discrete representation learning with dictionaries, the author introduced a novel Tensor Product Representation (TPR) framework, a Discrete Dictionary-based Decomposition (D3) layer designed to retain the learned symbolic features during training and apply them effectively to address decomposition challenges in previously unseen data. Owing to its architectural properties, this method is defined as a drop-in layer that maps input to pre-learned symbolic features, facilitating smooth integration with existing TPR methods. Comprehensive experimental results of this novel approach showcase superior or comparable performance to that of other baseline methods. Strengths: - S1: Generating components in the proposed method is analogous to transformer blocks with Top-K selection, which is straightforward and intuitive. - S2: Experimental results and ablation studies are well-organized and highlight the effectiveness of the proposed method. Weaknesses: - W1: The proposed method's technical novelty is a slight improvement over the previous work, AID. It appears that the proposed method's performance in more complex cases, such as Sort-of-CLEVR and WiKiText, is due to the introduction of additional configuration complexity and more learnable parameters. This may cause scalability issues in cases where more complex TRP operations are required since the number of dictionaries a model directly corresponds to the number of roles/unbind operators. - W2: Although the proposed method is a drop-in layer applicable to other existing TPR methods, it raises the question of how such an attention key-query-value mechanism can meet the TPR operation conditions. In particular, how can you ensure that the shared dictionary satisfies these properties during training to satisfy near orthogonality between roles or between unbinding operators? Technical Quality: 2 Clarity: 3 Questions for Authors: Please check out the Weakness section first. I listed additional questions as follows: - Q1: In Fig 2, the experimental results of the SAR task appear to show a huge difference in performance between the proposed method and AID. In the original AID paper, it achieved about 90% accuracy for the SAR task. The original experiment setup that AID ran on appears to be more difficult than yours, so why did AID perform significantly worse on your setup? - Q2 (related to W2): In Fig. 5(a), there are some strong similarities among specific keys, which indicates some redundancy of the learned keys. How can these redundant properties of keys not violate the TPR operation conditions? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please check out the Weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for their constructive feedback. We will ensure that we reflect on our responses in our revised manuscript in the future. > W1: Technical novelty over AID and scalability issue While the prior work introduced an iterative competitive attention-based decomposition module, called AID, which refines structured representations iteratively, it has limitations in generalizing to unseen compositions of known symbols even in simple synthetic tasks, as shown in Fig. 2. AID lacks an explicit mechanism to leverage observed structural information during training for decomposition of TPR. To address these limitations, **we propose a novel discrete representation-based decomposition method for TPR-based models, which stores discrete features during training and maps given data to these learned discrete features via sparse activation**. Our experimental results in Tables 2 and 3 demonstrate that D3 achieves comparable performance to AID with a similar parameter setting ($D_\text{code}$ of 128 on the CLEVR and 32 on the Wiki). Moreover, D3 performs better with increased parameters (256 on the CLEVR task and 64 on the Wiki). **These results highlight our method's technical novelty and effectiveness compared to AID**. The scalability of our approach is inherently linked to TPR operations of baseline models since the number of dictionaries in the D3 layer aligns with the number of TPR components required for their operations. As TPR operations require increasing components to handle large datasets, our method also requires a proportional increase in dictionaries, resulting in significant computational and memory overhead. As explored in prior work, one potential solution to mitigate this issue is distributing shared dictionaries across multiple heads or layers [1]. However, this approach requires further investigation and experimentation, which we plan to research in future work. [1] Large memory layers with product keys, NeurIPS’19. *** > W2: How can you ensure that the shared dictionary satisfies these properties during training to satisfy near orthogonality between roles or between unbinding operators? The TPR framework operates by explicitly decomposing data at the representation level into distinct symbols, such as *role-filler* pairs, which are then encoded through the tensor product of *role* vectors and *filler* vectors, $T= filler \otimes\textit{role}$. By doing so, this encoding method preserves the symbolic structure of the data. During decoding, the framework retrieves specific *fillers*—essential for solving the given task—via matrix multiplication using *unbinding operators* associated with particular *roles*, $filler = T \cdot \textit{unbind}$. These TPR characteristics force TPR-based models to learn to generate structured representations satisfying TPR conditions through supervised training. **When integrating with TPR-based models, our D3 layer is trained to produce structured representations that satisfy these TPR conditions**. D3 employs a sparse key access mechanism during the generation of these representations. This mechanism ensures that individual discrete features within dictionaries become specialized to specific latent data features. Consequently, this specialization allows each discrete feature to learn and indirectly satisfy the near orthogonality requirements of TPR properties, thereby maintaining accurate TPR operations. *** > Q1: Difference in experimental settings of the SAR task between our work and the AID paper In the original AID paper, the SAR task performance was evaluated with varying levels of the task parameter $p$ with different values (0.0, 0.5, and 1.0), which adjusts the combinations of symbols observed during training. The AID achieved about 90% accuracy for $p$ values of 0.5 and 1.0, which are less challenging settings. **Our experiment adopts the SAR task's most challenging setting ($p=0.0$)**. This setting results in a more rigid evaluation environment than the other configurations. Consequently, the AID method does not achieve the same level of performance in our setup as it did in the original paper. To ensure clarity and prevent potential misinterpretation, we will include a detailed description of these differences between our work and the AID paper in the revised manuscript. This will help readers understand the context and rationale behind the observed performance disparities. *** > Q2: How can these redundant properties of keys not violate the TPR operation conditions? The D3 layer generates structured TPR representations by mapping input data to pre-learned discrete features within dictionaries through three steps: (1) *query* generation, (2) sparse key access, and (3) aggregation of code values. Therefore, the keys within dictionaries are intermediate features that assist the D3 layer in generating structured representations that satisfy the TPR conditions rather than needing to satisfy those conditions themselves. While some codebook keys in Fig. 5(a) show strong similarities, their corresponding codebook values in Fig. 5(b) show near-orthogonal patterns. This implies that **when similar *queries* access the dictionaries via these codebook keys, the resultant codes (read from the dictionaries) are orthogonal**. Figs. 3 and 4 provide further evidence supporting this claim. **As illustrated in Figs. 3 and 4, even though the *queries* are similar, the *codes* (retrieved from dictionaries via the *queries*) show more orthogonal patterns than the *queries* themselves**. Additionally, Figs. 3(c) and 4(c) demonstrate that D3 successfully generates *components* (which are our main focus) that meet the TPR conditions using these intermediate features (*queries and codes*). We believe this comprehensive approach ensures that the redundant properties of the keys do not violate TPR operation conditions. --- Rebuttal Comment 1.1: Title: Response to the reviewers' rebuttal Comment: Thank you for the authors' clarification and for addressing my concerns. I have decided to increase the score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer QJJM for your time and your constructive comments during the rebuttal period.
Summary: This paper address the difficulty of decomposing input data into Tensor Product Representation (TPR) components, namely, roles, fillers, and unbinding operators. The proposal called D3 includes the use of learnable dictionaries for these components, and the mapping of input data into intermediate features (code, query, component), in order to generate TPR components from input data. The authors demonstrate that D3 can be easily integrated into existing TPR-based models and improves their systematic generalization performance across various tasks, including synthetic associative recall, text/visual question-answering, and language modeling. Strengths: * Originality: The D3 method presents a novel approach to addressing the decomposition problem in TPR-based models. The use of discrete, learnable dictionaries for this purpose is innovative. * Quality: The paper demonstrates thorough experimentation across multiple tasks and provides detailed ablation studies to support its claims. * Clarity: The method is explained clearly, with helpful visualizations and step-by-step descriptions of the D3 layer. * Significance: Improving the decomposition capabilities of TPR-based models has potential implications for enhancing systematic generalization in neural networks, which is an important goal in AI research. Weaknesses: * The paper is an alternative to the recently introduced AID method (Ref #22), and the goal/settings are nearly identical. It is helpful to clearly differentiate the contributions of this work to AID. * There are design choices in the D3: dictionaries, query, code, components and the ways they are connected. There should be clear motivations for each of these choice. * While the paper provides some analysis of the learned representations, a deeper theoretical analysis of why D3 works well could enhance the contribution. * The experiments are primarily focused on synthetic or relatively simple tasks. Testing on more complex, real-world tasks would strengthen the paper's claims about generalization. The comparison to baselines could be expanded to include a wider range of state-of-the-art methods in compositional generalization. Technical Quality: 3 Clarity: 3 Questions for Authors: The questions naturally arise from the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss some limitations of their approach, including the need for task-specific configuration and added computational overhead. They could expand on potential limitations in terms of scalability to very large models/datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for their constructive feedback and insightful suggestions. We will ensure that we reflect on our responses in our revised manuscript in the future. > W1: Comparison to AID While our work and AID focus on enhancing the systematic generalization of TPR-based models by addressing a decomposition problem, our contributions differ from AID's in several important aspects. Our main contribution is **introducing a novel decomposition method for the decomposition of TPR**, which stores discrete features during training and maps given data to these learned discrete features via sparse activation. In detail, AID employs a competitive attention mechanism between input features and intermediate TPR representations, refining these representations iteratively. However, as shown in Fig. 2, AID has difficulties generalizing to unseen compositions of known symbols. Additionally, AID does not explicitly leverage observed structural information during training. To address these limitations, we propose the D3 layer, which maps input data to the nearest pre-learned discrete features within dictionaries, generating structured TPR representations. Our comprehensive experiments demonstrate that the D3 layer significantly improves the systematic generalization of TPR-based models. *** > W2: Design choices in the D3 Our work focuses on enhancing the systematic generalization of TPR-based models. Since systematic generalization aims at generalizing unseen data composed of known components observed during training, the decomposition of TPR can be thought of as mapping input data to observed TPR features during training. This insight inspired our design choices for the D3 layer. **Our primary design choice was to employ a discrete representation-based decomposition module, which maps input data to discrete features learned during training**. Inspired by a prior key-value architecture [1], we introduced separate key-value-based dictionaries, each linked to specific TPR components (*role, filler, and unbinding operator*). This design ensures that each dictionary captures the symbolic information of the specific TPR components. The D3 layer generates *queries* based on the input data, which are then used to search and identify discrete features associated with the input data. Once relevant discrete features are identified, the codebook values are aggregated to generate *codes*. These *codes and queries* are then utilized as components for TPR operations to solve downstream tasks. In summary, each design choice in the D3 layer—from the multiple discrete dictionaries to the generation and utilization of *queries and codes*—was motivated by **the goal of mapping input data to discrete, learned features that facilitate systematic generalization in the decomposition operation of TPR**. [1] Large memory layers with product keys, NeurIPS’19. *** > W3: A deeper theoretical analysis We appreciate the suggestion to provide a deeper theoretical analysis of why D3 works well. Our D3 employs a separate key-value-based discretization mechanism, the robustness of which against distributional shifts was theoretically investigated in prior work [2]. **The D3 layer enables TPR-based models to mitigate errors in generating structured representations by mapping input data to pre-learned discrete features**. Figs. 3 and 4 empirically demonstrate how D3 mitigates errors while generating structured representations. Initially, generated *queries* fail to satisfy the TPR conditions, potentially causing inaccuracies in TPR operations. D3 addresses this by leveraging the *codes* derived through the discretization and generates near-ideal structured representations. These results imply that D3 improves the robustness of TPR-based models to unseen data comprising known symbols, thus enhancing the model's overall performance. Furthermore, in response to Reviewer FjyM, we have included further orthogonal analysis for the baseline models. Please see the global response. Figs. 14 and 15 show the superior quality of representations produced by D3 compared to the baseline models, further demonstrating the effectiveness of D3. [2] Discrete key-value bottleneck, ICML’23 *** > W4: Additional experimental comparison Our experiments evaluated the enhancement of systematic generalization in TPR-based models achieved by D3. We compared D3 with several TPR-based baselines across various systematic generalization tasks. Additionally, to assess the effectiveness of D3 in more complex scenarios, we extended our evaluation to the WikiText-103 task. **While those tasks considered in our study are relatively simple, the results consistently demonstrate that D3 significantly improves the generalization performance of TPR-based models**. We acknowledge the comment regarding the need for testing on more complex, real-world tasks. Although our current evaluation provides strong evidence of D3's benefits in systematic generalization tasks, further investigation is needed to confirm its effectiveness in real-world scenarios. We plan to address this limitation in future work. In response to your suggestion, we have expanded our comparisons to include a broader range of state-of-the-art methods. Please see the global response. *** > L1: Scalability of D3 The scalability of our approach is inherently linked to TPR operations of baseline models since the number of dictionaries in the D3 layer aligns with the number of TPR components required for their operations. As TPR operations require increasing components to handle large datasets, our method also requires a proportional increase in dictionaries, resulting in significant computational and memory overhead. As explored in prior work, one potential solution to mitigate this issue is distributing shared dictionaries across multiple heads or layers [1]. However, this approach requires further investigation and experimentation, which we plan to research in future work. --- Rebuttal Comment 1.1: Comment: Thank you for detailed response!
Rebuttal 1: Rebuttal: ## Global response We thank all the reviewers for their constructive feedback and insightful suggestions. We believe that the additional ablation studies and comparisons they recommended provide a clearer understanding of D3's strengths and limitations. Our revised manuscript will thoroughly reflect these improvements. We have attached a one-page PDF that includes the additional ablation studies suggested by reviewer FjyM, which are as follows: - Ablation study on the effect of the residual connection (Figure 13) - Additional orthogonal analysis on baseline models (Figures 14 and 15) - Ablation study on the effect of varying the number of keys (Figure 16) Moreover, as reviewers KCxS and 3Nfz suggested, we have expanded our comparisons to include a broader range of state-of-the-art methods, as detailed below. **sys-bAbI task**: We compared D3 to state-of-the-art methods (DAM [1] and STM [2]) on the original bAbI task. The results in the table below show that existing memory networks struggle with the sys-bAbI task, highlighting the efficacy of D3 compared to these state-of-the-art memory networks. | **sys-bAbI task** | *w/o sys diff* | *w/ sys diff* | Gap | | --- | :---: | :---: | :---: | | TPR-RNN | 0.79$_{\pm0.16}$ | 8.74$_{\pm3.74}$ | 7.95 | | TPR-RNN (+AID) | 0.69$_{\pm0.08}$ | 5.61$_{\pm1.78}$ | 4.92 | | TPR-RNN (+D3) | 0.65$_{\pm0.25}$ | 3.50$_{\pm2.07}$ | 2.85 | | FWM | 0.79$_{\pm0.14}$ | 2.85$_{\pm1.61}$ | 2.06 | | FWM (+AID) | 0.45$_{\pm0.16}$ | 1.21$_{\pm0.66}$ | 0.76 | | FWM (+D3 *w/ F*) | 0.75$_{\pm0.17}$ | 1.96$_{\pm0.88}$ | 1.21 | | --- | | DAM [1] | 0.48$_{\pm0.20}$ | 5.25$_{\pm1.64}$ | 4.77 | | STM [2] | 0.49$_{\pm0.16}$ | 4.19$_{\pm1.53}$ | 3.7 | **Sort-of-CLEVR task**: We included the Compositional Transformer [3], designed to enhance the systematic generalization capabilities of multi-head self-attention methods. (our experiment was performed under identical experimental settings as this prior work [3].) The results show that the Linear Transformer significantly degrades systematic generalization performance compared to the Transformer. While D3 improves the performance of the Linear Transformer from a TPR perspective, it still shows limited performance in reasoning the relationships between multiple objects (*Binary* and *Ternary*) compared to the Transformer and Compositional Transformer. | **Sort-of-CLEVR task** | $D_\text{code}$ | *Unary* | *Binary* | *Ternary* | | --- | :---: | :---: | :---: | :---: | | Linear Transformer | - | 69.3$_{\pm14.8}$ | 75.5$_{\pm1.3}$ | 56.4$_{\pm4.3}$ | | Linear Transformer (+AID) | - | 98.9$_{\pm0.2}$ | 78.6$_{\pm0.3}$ | 63.7$_{\pm1.2}$ | | Linear Transformer (+D3 *w/ F*)| 128 | 98.9$_{\pm0.2}$ | 79.5$_{\pm0.8}$ | 63.1$_{\pm1.9}$ | | Linear Transformer (+D3 *w/ F*) | 256 | 99.0$_{\pm0.3}$ | 82.1$_{\pm2.4}$ | 68.8$_{\pm1.2}$ | | --- | | Transformer | - | 97.4$_{\pm3.5}$ | 84.3$_{\pm4.3}$ | 62.7$_{\pm3.9}$ | | Compositional Transformer [3] | - | 98.9$_{\pm0.2}$ | 88.4$_{\pm1.4}$ | 66.5$_{\pm1.9}$ | **WikiText-103 task**: We compared D3 to the Delta Network [4], which introduced a delta updating rule instead of the additive outer product-based updating rule in the Linear Transformer. (our experiment was performed under identical experimental settings as this prior work [4].) The results indicate that although D3 improves the performance of the Linear Transformer in language modeling tasks, the choice of updating rules has a more substantial impact on performance for tasks involving the comprehension of lengthy corpora than the decomposition operation. | **WikiText-103 task** | $D_\text{code}$ | Valid | Test | | --- | :---: | :---: | :---: | | Linear Transfomer | - | 36.473 | 37.533 | | Linear Transfomer (+AID) | - | 36.159 | 37.151 | | Linear Transfomer (+D3 *w/o F*) | 32 | 36.061 | 37.220 | | Linear Transfomer (+D3 *w/o F*) | 64 | 35.975 | 37.009 | | --- | | Delta Network [4] | - | 35.640 | 36.659 | [1] Distributed associative memory network with memory refreshing loss. *Neural Networks*, *144*, 33-48. [2] Self-attentive associative memory, ICML’20. [3] Compositional Attention: Disentangling Search and Retrieval, ICLR’22. [4] Linear transformers are secretly fast weight programmers, ICML’21. Pdf: /pdf/aa8d8a4d524505cab25bfd0041fe8dfc17ba495f.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
Accept (poster)
Summary: This paper addresses the well-known issue of OOD (Out-of-Distribution) actions in offline reinforcement learning by proposing the QDQ method, which penalizes the Q-values in regions with high uncertainty. To better estimate uncertainty, QDQ first constructs a truncated Q-value dataset for the behavior policy and learns the distribution of Q-values induced by the behavior policy using a consistency model, allowing subsequent estimation of the uncertainty for each state-action pair through one-step sampling. With this uncertainty estimation method, QDQ can perform pessimistic value estimation for samples, thus avoiding the overestimation of Q-values for OOD actions. The paper provides a series of theoretical results to support the rationality of the QDQ method and demonstrates the advantages of the QDQ method through a series of experiments. Strengths: 1. This paper proposes an innovative approach to estimate the sample uncertainty in the dataset by learning a consistency model and employing one-step sampling. This scheme is novel as it does not require maintaining multiple Q-networks to estimate uncertainty, as traditional methods do, thus incurring less computational overhead. Moreover, unlike the diffusion model, which requires multi-step denoising to obtain uncertainty, this approach ensures higher fidelity in uncertainty estimation without the need for iterative processes. Consequently, this scheme is practical in terms of implementation. 2. The paper has a solid theoretical foundation, demonstrating the feasibility and convergence of the algorithm. 3. The paper is well-written with clear expression, distinct structure, and logical flow. Weaknesses: 1. First and foremost, the fundamental assumption of this paper is that a consistency model can learn a distribution of Q-values with high fidelity. However, why do Q-values need to satisfy the property of consistency? If we are learning the distribution of trajectories, it is reasonable to require the trajectory distribution to be consistent, but is this assumption also reasonable for the distribution of Q-values? This is the starting point of the paper, yet it does not argue for its necessity. Therefore, I suspect that for any type of generative model, even Variational Autoencoders (VAEs), the uncertainty of Q(s,a) could be calculated, thereby allowing the subsequent methods of this paper to be applied for learning. 2. Based on my understanding, Equation (6) actually calculates the emperical version $V_{\epsilon}(s,a)$ of uncertainty $Var(Q^{\pi_{\beta}})$ corresponding to the behavior policy $\pi_{\beta}$, rather than the uncertainty $Var(Q^{\pi})$ of corresponding to the current policy $\pi$. However, it seems to me that in the subsequent use of $V_{\epsilon}(s,a)$ and in the proof of the theorems, $V_{\epsilon}(s,a)$ is directly used as $Var(Q^{\pi})$, which could be mathematically problematic. The authors need to provide an explanation for this point. If indeed $Var(Q^{\pi_{\beta}})$ is used in place of $Var(Q^{\pi})$, please justify the rationale. 3. This paper lacks some comparisons with related work. It could be compared with methods that use ensembles to estimate uncertainty [1] and methods that make pessimistic estimates for Out-of-Distribution (OOD) actions [2]. [1] Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. [2] Supported value regularization for offline reinforcement learning. Technical Quality: 3 Clarity: 3 Questions for Authors: The same as Weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper does not mention any potential negative impacts that may arise from its work. It is recommended that this be supplemented. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For weakness: 1. We apologize for any ambiguity caused by our description of the consistency model. Consistency is a feature of the consistency model, not a requirement for the Q-value. The consistency model ensures a consistent relationship between the prior sample and the target sample during generation. This consistency is beneficial for computing Q-value uncertainty, as analyzed in Theorem 4.2, which shows that the Q-value distribution learned with the consistency model can ensure the absolute effect of action changes on the variance of the final bootstrap sample. This makes Q-value uncertainty more sensitive to OOD actions compared to the diffusion model. Additionally, the fast-sampling process of the consistency model significantly enhances QDQ's efficiency. Although one-step sampling may slightly reduce sample quality, QDQ only calculates uncertainty by the variance of the bootstrap sample, so this loss is negligible. Overall, the consistency model is highly suitable for uncertainty estimation due to its consistency, high fidelity, easier training, and faster sampling. Regarding the use of VAE and GAN for learning Q-value distributions, any distribution learner can be applied, but we prioritize high fidelity and efficiency. VAE often assigns excessive probability mass to less important regions by covering mode [1,2], leading to inaccuracies, especially in complex or multimodal Q-value distributions. GANs face instability and training difficulties due to their adversarial nature and the Nash equilibrium is not easy to reach, and GAN often resulting in mode collapse [3,4]. Thus, we prefer the consistency model for its stability and effectiveness. 2. Please refer to the 1. of the “author rebuttal” at the top for the reason and rationale for estimating uncertainty by using the Q-value distribution of the behavior policy. Moreover, while the estimated uncertainty influences the Q target penalty in the learning policy, it only affects the Q-value in the OOD region and does not impact the updating or optimization of the learning policy in the in-distribution region. Additionally, the uncertainty term has minimal effect on our theoretical proofs; for instance, in the proof of Theorem 4.3 (Appendix D.2), the uncertainty term is ultimately canceled out (see Eq. D.9 to Eq. D.10). 3. Thank you very much for your reminder, we apologize for our oversight and we will take your suggestion to add [5] and [6] to our related work. About limitation: We apologize that we have not been very clear about QDQ's limitation. Uncertainty estimation is challenging, but high-precision uncertainty estimation is crucial for the success of Q constraint kind method in offline RL [7]. We hope QDQ can serve as an uncertainty-based RL algorithm to facilitate the successful implementation Q value constraint method in offline RL. However, QDQ has limitations, particularly in using estimated uncertainty to adjust the learning policy's Q target value. Currently, QDQ employs a pessimistic adjustment by dividing the uncertainty value for OOD actions, but this approach may not be optimal. We will continue researching how to combine uncertainty estimation with the LCB of the learning policy, providing theoretical and experimental insights. We apologize for omitting this limitation and will add it to section 7. [1] Ghasemipour et al. EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL. [2] Chen et al. OFFLINE REINFORCEMENT LEARNING VIA HIGHFIDELITY GENERATIVE BEHAVIOR MODELING. [3] Martin et al. Wasserstein generative adversarial networks. [4] Li et al. MMD GAN: Towards deeper understanding of moment matching network. [5] Bai et al. Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. [6] Mao et al. Supported value regularization for offline reinforcement learning. [7] Levine et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. --- Rebuttal Comment 1.1: Title: Follow-up Questions. Comment: Thank you very much for your response. Most of my questions have been answered, especially as you pointed out, replacing the uncertainty of \(Q\) with that of \(Q^{\beta}\) does indeed seem reasonable. However, I still need to confirm a few points with you. Specifically, do Theorems 4.2 and 4.3 strictly require that \(V(X|(s',a')\) be the uncertainty of \(Q\), or is it permissible to replace it with the uncertainty of \(Q^{\beta}\) in a rigorous sense? --- Reply to Comment 1.1.1: Comment: Thank you very much for your quick response. In QDQ, the uncertainty estimation of $H_Q(a'|s') = \sqrt{V(X_\epsilon|(s',a'))}$ is only used to penalize the Q-value of actions with high uncertainty($a' \in U(Q)$). Its primary purpose is to make the Q target more pessimistic by adjusting the Q target value to $\frac{1}{H_Q(a'|s')}Q(s',a')1_{(a' \in U(Q))}$. The use of $H_Q(a'|s') = \sqrt{V(X_\epsilon|(s',a'))}$ does not affect the proofs and conclusions of Theorems 4.2 and 4.3, as it only serves to underestimate the Q target value in high uncertainty (OOD) regions. As long as the Q target value in OOD regions is pessimistic, the theorem holds. Indeed, $H_Q(a'|s') = \sqrt{V(X_\epsilon|(s',a'))}$ is just a penalty factor, and as long as $\frac{1}{H_Q(a'|s')}Q(s',a')1_{(a' \in U(Q))} < Q(s',a') 1_{(a' \in U(Q))}$, Theorems 4.2 and 4.3 are hold, regardless of whether it is the uncertainty estimation of the Q value of the learning policy or the behavior policy. As shown in Figure G.4 (Appendix G.4), the distribution of $H_Q(a'|s') = \sqrt{V(X_\epsilon|(s',a'))}$ confirms that $\frac{1}{H_Q(a'|s')}Q(s',a')1_{(a' \in U(Q))} < Q(s',a') 1_{(a' \in U(Q))}$ always holds because the value of $H_Q(a'|s') = \sqrt{V(X_\epsilon|(s',a'))}$ is relatively large in the high uncertainty regions defined by QDQ.
Summary: This paper proposes a new offline RL method, Q-Distribution guided Q-learning (QDQ), which uses a consistency model to model the distribution of Q-value for uncertainty estimation and then introduces an uncertainty-aware optimization objective for pessimistic Q-learning. This method has theoretical guarantees and exhibits strong performance in the D4RL benchmark. Strengths: - This paper is clearly written, allowing readers to follow the main arguments. - It introduces a novel approach for estimating uncertainty of Q-value via a consistency model. - It provides detailed training curves in the appendix, making the experimental results credible. Weaknesses: - Personally, there are some confusions regarding the method. The consistency model is trained to fit the distribution of $Q^{\pi{\beta}}$, but it is used to estimate the uncertainty of the updating Q function (as shown in Eq. 8). So, I wonder why the variance of $Q^{\pi{\beta}}$ can estimate the uncertainty of Q. Another confusion is why there is still a policy constraint in Eq. 9. Normally, for pessimistic value offline RL methods like CQL[1], EDAC[2], and PBRL[3], they only impose conservatism on Q-value learning and do not add any constraint on policy learning. I noticed the ablation study on $\gamma$ shows that too small $\gamma$ leads to poor performance on some tasks like hopper-medium-v2. Does this mean that the proposed uncertainty-aware learning objective (Eq. 7) is too weak for some tasks? - I think some baselines are missing in the main evaluation experiment, like EDAC and PBRL. To the best of my knowledge, EDAC is the SOTA model-free uncertainty-based method so far. So, I suggest the authors include these two algorithms in Table 1 & 2. - The authors point out that previous uncertainty-based offline RL methods like bootstrap or ensemble impose significant computational burdens (line 181). Although their proposed method does not need ensemble Q-value networks, it additionally brings a consistency model whose training relies on a pretrained diffusion model. So, my concern is whether this will cost more computation compared with ensemble Q-value networks. [1] Kumar et al. "Conservative Q-Learning for Offline Reinforcement Learning" [2] An et al. "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble" [3] Bai et al. "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning" Technical Quality: 3 Clarity: 3 Questions for Authors: To summarize the main points/questions raised in the weaknesses section: - Could the authors provide some explanations for the confusions mentioned in the first point of weaknesses? - Could the authors include EDAC and PBRL in Table 1 & 2? - Could the authors compare the computational costs (like runtime and GPU memory usage) of their method with other methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For weakness: 1. We apologize for the ambiguity. Please see 1. of the “author rebuttal” at the top for the rationale behind using Q-value data from the behavior policy. Although estimated uncertainty penalizes the Q target of the learning policy, it only pessimistic the Q-value in the OOD region and does not affect the learning policy's Q-value updates and optimization in the in-distribution region. We will provide a clearer explanation in Section 3.1. 2. The gamma term in Eq. 9 stabilizes the learning of a simple Gaussian policy, particularly for action-sensitive and narrower distribution tasks like hopper-medium etc. For these tasks, a simple Gaussian policy can easily sample risky actions as it fits only a single-mode policy. Indeed, the inclusion of a gamma term in QDQ depends on the task. For instance, the optimal gamma for QDQ on wide distribution data (e.g., halfcheetah-medium) is 0. Estimating uncertainty is inherently challenging. QDQ aims to estimate uncertainty directly while applying minimal control over the Q-value and using a less robust Gaussian policy. This combination increases the risk of unstable Q-value training. In fact, the gamma in Eq. 9 is very small relative to the Q-value, primarily to stabilize training and avoid instability in Gaussian policy action sampling. CQL [1], although not explicitly controlling the policy, requires the leaning the behavior policy's distribution and another policy to cover the action space in the dataset, this is some implicit action intervene. IQL [2] uses AWR policy loss to control the learning policy as well as the expectile regression to underestimate the Q value function, and PBRL [3] mentions in the penultimate paragraph of the introduction that simply using ensemble to estimate uncertainty to control Q-value is not very useful, and highlights the need for OOD data in the learning policy's action space to the success of Q value ensemble kinds methods. MCQ follows a similar approach like PBRL and not just constrain the Q value function by behavior policy’s support. To verify the impact of the uncertainty-aware Q-value optimization in QDQ, we compared the performance of Q-values without uncertainty control (Bellman optimization for the Q value function like in online RL setting) to the QDQ algorithm on hopper-medium data, using identical gamma terms setting. Figure 2 in the attached PDF shows the learning curve. The figure illustrates that introducing uncertainty-based constraint for the Q value function in QDQ significantly improves training stability, convergence speed, and learning strategy performance. This supports the effectiveness of QDQ's uncertainty-aware Q-value optimization. We believe stronger learning policy will further reduce the need for this stabilizing term. 3. Thank you for your reminder. We apologize for the oversight and will add EDAC[4] and PBRL[3] to our baseline(Table 2) and related work. 4. Regarding the training cost of the consistency model, we believe it is almost negligible. Training a diffusion model on a 4090 GPU takes about 5.2 minutes, while training a consistency model with this pretrained diffusion model takes about 16 minutes. Additionally, the consistency model is stored and can be reused for subsequent RL experiments, eliminating the need for retraining. For the comparison of computational costs, see Table 1 in the “author rebuttal” at the top. The results of other methods are taken from Table 3 of EDAC [4]. QDQ aims not only to achieve SOTA performance but also to fill gaps in uncertainty research and advance methods in Q-value constraints. For Questions: Please refer to our previous reply on weaknesses. Table 2: Comparison of QDQ and the other baselines on the three Gym-MuJoCo tasks. All the experiment are performed on the MuJoCo "-v2" dataset. The results are calculated over 5 random seeds.med = medium, r = replay, e = expert, ha = halfcheetah, wa = walker2d, ho=hopper | Dataset | BC | AWAC | DT | TD3+BC | CQL | IQL | UWAC | MCQ | EDAC | PBRL | QDQ(Ours) | |----------|-------|-------|-------|--------|-------|-------|-------|-------|-------------|-------|-------------| | ha-med | 42.6 | 43.5 | 42.6 | 48.3 | 44.0 | 47.4 | 42.2 | 64.3 | 65.9 | 57.9 | **74.1** | | ho-med | 52.9 | 57.0 | 67.6 | 59.3 | 58.5 | 66.2 | 50.9 | 78.4 | **101.6** | 75.3 | **99.0** | | wa-med | 75.3 | 72.4 | 74.0 | 83.7 | 72.5 | 78.3 | 75.4 | 91.0 | **92.5** | 89.6 | 86.9 | | ha-med-r | 36.6 | 40.5 | 36.6 | 44.6 | 45.5 | 44.2 | 35.9 | 56.8 | 61.3 | 45.1 | **63.7** | | ho-med-r | 18.1 | 37.2 | 82.7 | 60.9 | 95.0 | 94.7 | 25.3 | 101.6 | 101.0 | 100.6 | **102.4** | | wa-med-r | 26.0 | 27.0 | 66.6 | 81.8 | 77.2 | 73.8 | 23.6 | 91.3 | 87.1 | 77.7 | **93.2** | | ha-med-e | 55.2 | 42.8 | 86.8 | 90.7 | 91.6 | 86.7 | 42.7 | 87.5 | **106.3** | 92.3 | 99.3 | | ho-med-e | 52.5 | 55.8 | 107.6 | 98.0 | 105.4 | 91.5 | 44.9 | 112.3 | 110.7 | 110.8 | **113.5** | | wa-med-e | 107.5 | 74.5 | 108.1 | 110.1 | 108.8 | 109.6 | 96.5 | 114.2 | 114.7 | 110.1 | **115.9** | | Total | 466.7 | 450.7 | 672.6 | 684.6 | 677.4 | 698.5 | 437.4 | 797.4 | 841.1 | 759.4 | **848.0** | [1] Kumar et al. "Conservative Q-Learning for Offline Reinforcement Learning". [2] Kostrikov et al. Offline reinforcement learning with implicit q-learning. [3] Bai et al. Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. [4] An et al. Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble. --- Rebuttal Comment 1.1: Comment: Thank you for your reply, my concerns have been mostly resolved. I have already improved my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt response and for raising the score! Your questions and valuable suggestions play a crucial role in improving our paper, and we will incorporate all the content from the rebuttal into the manuscript. Once again, we sincerely appreciate your feedback and your assistance in enhancing the quality of our paper!
Summary: The paper proposes a method for estimating Q values in the offline/batch setting by leveraging consistency models. Via these, the uncertainty over the Q function can be estimated and used as a robust penalty to prevent distribution shift in offline RL. The authors provide both experimental validation on the standard D4RL dataset, as well as theoretical insight into the performance of their algorithm. Strengths: The paper proposes an intuitive and well supported idea to improve the robustness of offline RL. Following in the well established path of using uncertainty instead of pessimism to regularize Q value estimation in the offline regime, they present a solid approach to estimate this uncertainty using consistency models. As far as I can tell, the addition and evaluation of the consistency model as the measure of uncertainty in a Q function is novel and a solid contribution to the literature. The empirical results place the method at the top of comparable methods. Weaknesses: The main weakness at the moment lies in the slightly confusing presentation. While I think that all the ideas are in principle well supported, it is very hard to follow the exact setup and intuition throughout. In addition, I am not fully convinced that the theoretical results are fully informative for the empirical implementation. The introduction of the consistency model is very terse. For readers who are less deep in the diffusion literature, this presents a large barrier for understanding the rest of the paper. I would recommend that the authors provide a slightly more thorough introduction here. The sliding window approach to Q estimation seems to rely heavily on a relatively dense informative reward. This should be discussed in the paper. While Theorem 4.1 does (somewhat trivially) hold, it does not suggest that partial reward sums will necessarily be informative. In cases with 0 reward across a chunk (which would happen frequently with sparse rewards) the consistency model would always predict near 0 uncertainty. This issues is currently the main reason I am leaning towards recommending rejection and I am happy to discuss it in the rebuttal in case I am overestimating the importance of the problem. In general, the theoretical analysis is not tied strongly to the rest of the paper. It is not clear if the authors draw conclusions from it for their method, or simply present it as a justification for its validity (in which case I would expect a slightly more thorough discussion). The method requires 3 additional hyperparameters, which the authors discuss. Given the enormous difficulty of model selection / hyperparameter tuning in offline RL and the communities lack of coherent standards here, I would like for the authors to discuss how the hyperparameters were tuned (and whether this constitutes test set training). I do concede that this is a wider issue in the community and that I cannot fully fault the authors to adhering to standards here, but I think the issue should be discussed in offline RL papers. The empirical results do not seem vastly better than MCQ, so i would tone done the writing a small bit. It is unclear whether the authors for example allowed themselves a larger hyperparameter tuning budget. Technical Quality: 2 Clarity: 2 Questions for Authors: Why is the set of baseline methods different on the two different benchmarks? Especially the strongest method on D4RL seems to be missing from the Ant-maze comparison. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For weakness: Firstly, we apologize for any ambiguities. In offline RL, which aims to train policy without interacting with the environment, "distribution shift" is the main obstacle. A learning policy may take out-of-distribution (OOD) actions, leading to overestimated Q-values. One suggested way to tackle this overestimation in Q value function is by its high uncertainty property in the OOD regions [3]. Following this idea, QDQ targets high uncertainty of the Q-values in OOD regions. Please see more details in the motivation of the "author rebuttal" at the top. While our theoretical framework is detailed in Section 4, we reference these results in Section 3 to provide context. For instance, Theorem 4.1, mentioned in line 166 of Section 3.1, shows that our sliding window-based truncated Q-value distribution converges to the true Q-value distribution, which guarantee accurate uncertainty estimation. Each theoretical result is briefly introduced in Section 4 before being detailed, such as in lines 232-235, where Theorem 4.2 is discussed. This theorem shows that the consistency model is suitable for uncertainty estimation because it guarantees the effect of actions on the variance change of the Q-value. Theorems 4.3 and 4.4 illustrate that QDQ penalizes the OOD region by uncertainty while ensuring that the Q value function in the in-distribution region is close to the optimal Q-value, which is the goal of offline RL. Detailed descriptions, proofs, and implications of these theorems are provided in Appendices B-F. We will revise the statements further for clarity. 1. For the consistency model: we use the consistency model [1] for the Q-value distribution learner, which addresses the inconsistency between the diffusion model's prior information and the target distribution. It also improves sampling speed through one-step sampling, outperforming the diffusion model. In essence, the consistency model is an enhanced generative model compared to the diffusion model. The diffusion model gradually adds noise to transform the target distribution into a Gaussian distribution and by estimating the random noise to achieve the reverse process, i.e., sampling a priori samples from a Gaussian distribution (forms the sample generation trajectory) and denoise to the target sample. And the consistency model ensures each step in a sample generation trajectory of the diffusion process aligns with the target sample (we call consistency), enhancing uncertainty estimation. The consistency feature, as discussed in Theorem 4.2, ensures the accurate impact of action changes on the variance of the final bootstrap samples, making Q-value uncertainty more sensitive to OOD actions compared to the diffusion model. Additionally, the fast-sampling process of the consistency model enhances QDQ's efficiency. Despite some quality loss in restoring real samples, this loss is negligible for QDQ, as it only calculates uncertainty from the variance of the bootstrap sample, not use the absolute Q-value of the sampled samples. Overall, the consistency model is an ideal distribution learner for uncertainty estimation due to its consistency, high fidelity, ease of training, and faster sampling. 2. For the sliding window: in fact, a good Q-value dataset for uncertainty estimation should span a broad state and action space of the dataset to accurately characterize the Q distribution and detect high uncertainty in the OOD action region. This coverage of the state and action allows us to identify actions with high Q-value uncertainty in OOD regions. Even if the variance of Q-values in the in-distribution region is small, it makes uncertainty detection more sensitive, as Q-value functions typically exhibit high uncertainty in OOD regions [3], aiding in identifying OOD actions by comparison. We also give some rational on how to choose the sliding window size, please see the line 814-821 in Appendix G.1. For sparse reward tasks, we apply adjustments such as those in IQL [2] to prevent all rewards from being zero and maintain some level of variance of the bootstrap samples even in the in-distribution region. Although a wider sliding window theoretically provides richer information as suggested by the reviewer, experiments (e.g., Fig. 1 in the attached PDF) show that it has not influence the shape of distribution of the Q-value data much. Conversely, larger window width reduces the Q-value dataset size, affecting coverage of the Q-value dataset, then harm the uncertainty estimation accuracy. 3. For parameter tuning, please refer to 2. in the” author rebuttal” at the top. 4. We have updated our results during parameter tuning, as shown in Table 2. QDQ outperforms MCQ by 50 points, with notable improvements in tasks like HalfCheetah-Medium and Hopper-Medium. MCQ addresses Q-value overestimation by identifying in-distribution and OOD actions from an estimated behavior policy. While both QDQ and MCQ apply mildly constraints on the Q value, they follow different approaches. QDQ focuses on Q-value constraints introduced in [3], whereas MCQ is more aligned with the method used in policy control method. Our goal with QDQ is not merely to achieve SOTA results but to advance research on uncertainty-guided pessimistic adjustment of Q-values in offline RL. QDQ provides both theoretical support and experimental validation, aiming to improve offline RL methodology and address research gaps beyond SOTA performance. For the question: In footnote 1, we explain why we did not include the same baselines as in Table 1 of Section 5, specifically the results of UWAC and MCQ. They do not provide experiment results for the Antmaze task, and we lack information on how to set their hyperparameters. [1] Song et al. Consistency models. [2] Kostrikov et al. Offline reinforcement learning with implicit q-learning. [3] Levine et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. --- Rebuttal Comment 1.1: Title: Reviewer reply Comment: Thanks for your thorough comments. I am satisfied with the answer and I will update my score, but I truly believe that this paper deserves another very thorough editorial pass to make everything more clear to the readers. I do not mean to offend, but the presentation feels very rushed and it might be a good idea to consider re-submission to a future conference after polishing the presentation so that you do not lose potential impact due to low clarity. As I am only a reviewer, this is of course not my final decision and between the AC/SAC and the authors, but I think it might be a good idea to forgo publication now to give yourselves the time to make your work truly shine. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response! We sincerely appreciate your ongoing recognition of our work and its contribution to the field of offline RL. As we mentioned earlier, we hope that QDQ can provide evidence and insights for research on uncertainty estimation in offline reinforcement learning, and on how to effectively use uncertainty to more accurately control the overestimation bias of Q values in areas beyond the knowledge of the dataset. We hope our work can offer some insights and support for future research in offline RL, and even more, shed a light on the usage of uncertainty estimation in the study of exploration in online RL. We sincerely apologize for the issues with the presentation of our work, which negatively impacted your review experience and took up your valuable time. We are taking this feedback very seriously and will thoroughly revise all presentation-related aspects of the paper. In fact, we have already begun revising our manuscript, and we will incorporate all the issues you mentioned, as well as the content from the rebuttal, into the latest version of our paper. We will refine the motivation, the introduction of relevant background (especially the consistency model), the description of the algorithm, and particularly the integration of theory and algorithm. We are confident that, with the additional time available and the valuable suggestions provided by you and the other reviewers, we can complete these revisions effectively. We are committed to refining our paper and are confident that the next version will meet your expectations. Once again, thank you for recognizing our work, and we sincerely appreciate your review efforts! We will address and correct all the issues you mentioned, making the entire work clearer and more precise. We also hope that our work can meet your expectations and play a positive role in advancing research in reinforcement learning.
null
null
Rebuttal 1: Rebuttal: We appreciate the valuable comments from our three reviewers, which have helped us improve our manuscript. We have provided detailed answers to each question and included additional experiments in the attached PDF. To address any remaining confusion, we would like to introduce the motivation behind our work (QDQ). QDQ aims to directly control overestimation in offline Reinforcement Learning (RL) for Q value functions by estimating the uncertainty of Q-values concerning the action during training [1]. Estimating Q-value uncertainty is challenging [1], and to our best knowledge, most methods address it indirectly by using ensembles of Q-value functions rather than bootstrapping. These ensemble methods suffer from lacking diversity in the Q-value[2] and fail to accurately represent the true Q-value distribution, often requiring tens or hundreds of Q-values to improve accuracy, which is computationally inefficient [2,4]. Other methods, like CQL[6] and MCQ, focus on underestimating Q-values in the OOD region by first identifying the OOD region, rather than exploiting Q-value uncertainty in that region. QDQ aims to solve the problem of estimating Q-value uncertainty by directly computing this uncertainty through bootstrap sampling from the distribution of Q-values of the behavior policy. By approximating the behavior policy's Q-values from the dataset, we train a high-fidelity, efficient distribution learner-consistency model. This ensures the quality of the learned Q-value distribution. Since the behavior and learning policy share the same high uncertainty action set [4], we can sample from the learned Q-value distribution to estimate uncertainty, identify risky actions, and make Q target values for these actions more pessimistic. Additionally, QDQ proposes an uncertainty-aware Q-value optimization objective to avoid excessively penalizing Q-values, ensuring the Q value function’s exploratory in the in-distribution region. QDQ seeks to find the optimal Q-value that exceeds the behavioral policy's optimal Q-value while being as pessimistic as possible in the OOD region. QDQ aims not only to achieve state-of-the-art experimental results but also to explore and advance uncertainty estimation in RL, an area with limited research. By promoting the role of uncertainty estimation in offline RL, QDQ seeks to enhance the methodology completeness of Q-value constraint. It is designed to be concise while maintaining competitive experimental performance and efficiency (Table 1). These features make QDQ flexible for integration into more complex elements, such as, enabling the incorporation of more powerful policies and enhancing exploration in online RL [1]. Next, we provide a brief description of some confusing details for QDQs: 1. We chose to estimate the Q-value distribution of the behavior policy rather than the learning policy because they share almost the same high-uncertainty action set [4]. Using the behavior policy’s Q-value distribution offers several advantages. Firstly, the behavior policy's Q-value dataset is derived from the true dataset, ensuring high-quality distribution learning. Conversely, the learning policy's Q-value is unknown, counterfactually learned, and often noisy and biased. Poor data quality would lead to biased distribution learning. Secondly, using the behavior policy's Q-value distribution to identify high-uncertainty actions does not constrain the learning policy's target Q-value to match the behavior policy's. This aligns with our goal of a mildly constrained Q-value. Theorems 4.3 and 4.4 demonstrate that our uncertainty-aware Q-value optimization objective can train a Q-value that closely approximates the optimal Q-value in the in-distribution region, outperforming the behavior policy, as confirmed by our experimental results. We will integrate these points into the paper to clarify the QDQ algorithm. 2. Although QDQ has three hyperparameters ($\alpha$, $\beta$, and $\gamma$) to achieve more flexible functions, the tuning process is straightforward. Take $\alpha$ as example, as discussed in Theorem 4.4 (Appendix E), theoretically $(1-\alpha)(1-\beta)$ should be small. Since beta controls the size of the uncertainty set and needs flexibility across different tasks, we typically set $\alpha$ closer to 1, tuning it between 0.9 and 0.995, which requires only a few experiments to find the optimal value. Our tuning process involves sequentially fixing parameters while choosing the best $\alpha$, then $\beta$, and finally $\gamma$. Considering the characteristics of different datasets (Section 5.2 and Appendix G.3), we can set each parameter a confidence initial value that close to its optimal value. QDQ provides evidence-based guidelines for hyperparameter ranges, making tuning manageable. Additionally, QDQ's speed (Table 1) reduces the tuning burden. This analysis as well as the tuning detail of $\beta$ and $\gamma$ will be detailed in Section 5 and Appendix G.3. Table 1: Computational performance of QDQ and other SOTA methods | | Runtime(s/epoch) | GPU Mem.(GB) | |-----------|------------------|--------------| | SAC | 21.4 | 1.3 | | CQL | 38.2 | 1.4 | | EDAC | 30.8 | 1.8 | | **QDQ** | **0.028** | **0.74** | [1] Levine et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. [2] An et al. Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble. [3] Agarwal et al. An Optimistic Perspective on Offline Reinforcement Learning. [4] Kumar et al. Stabilizing off-policy Q-learning via bootstrapping error reduction. [5] Kostrikov et al. Offline reinforcement learning with implicit q-learning. [6] Kumar et al. "Conservative Q-Learning for Offline Reinforcement Learning". Pdf: /pdf/227fedcc1d186aa4c86c0450d63945f4bb19df55.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Enhancing LLM Reasoning via Vision-Augmented Prompting
Accept (spotlight)
Summary: Traditional large language models (LLMs) struggle with tasks requiring visual and spatial interpretation based solely on text. This study introduces a visual-augmented prompting (VAP) strategy, using an external image generation tool to iteratively create intermediate visual representations that aid reasoning. VAP's effectiveness is validated in four tasks: (1) Geometry Intersection Counting, (2) Sudoku Puzzles, (3) Time Series Prediction, and (4) the Traveling Salesman Problem. Strengths: (1) The VAP method is simple yet effective and well-motivated, aligning with human cognitive processes. It promisingly integrates both intermediate textual and visual results to enhance the accuracy and interpretability of the reasoning process. (2) Results clearly show that VAP significantly outperforms all baselines across four tasks. (3) The manuscript is well-written and easy to follow. Weaknesses: (1) Efficiency of Iterative Reasoning: The author does not mention the time consumption of the iterative reasoning process of VAP. Additionally, I am curious about the trade-off between the number of images that need to be drawn and model performance. (2) More LLM models (such as LLaMA 3, GPT4) need to be incorporated for more comprehensive comparison. (3) Effect of Different Figure Drawing Tools: I am interested in understanding how different graphic rendering tools for image drawing affect the final reasoning results. (4) Method Scalability on Complex Geometry Problems: Figure 4 shows a significant accuracy drop for VAP when the number of shapes exceeds three. This raises concerns about the scalability of VAP in handling complex geometry problems. (5) Although VAP performs better than the baselines, its overall performance is still poor and has a noticeable gap compared to traditional methods. Technical Quality: 2 Clarity: 3 Questions for Authors: For weakness (2), the author claimed that using GPT-4V as the foundational model for the baseline methods ensures fairness. However, VAP requires an MLLM to handle visual-text input, whereas all other baselines only require textual input. I believe traditional LLMs should outperform MLLMs for text-only tasks. Therefore, the author is expected to incorporate a wider range of LLM models (such as LLaMA 3, GPT4) for comparison. I am curious about the potential of VAP in solving real-world reasoning problems, where the model needs to generate more photorealistic images (not just diagrams) to benefit reasoning. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The author has a section that addresses most of the limitations of the VAP method. However, I suggest that if the VAP method is time-consuming in terms of the iterative figure drawing process, the author should discuss the efficiency problem in the limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive comments on our work, which have helped us to enhance the paper. The following are our detailed responses to each comments. ### W1.1:Efficiency of iterative reasoning This is a good point. We added an efficiency experiment for VAP in our revision. The results for the geometry and Sudoku tasks are shown below: | | Geometry | | Sudoku | | | ------------- | -------- | -------- | ------ | ------------ | | | Time | Accuracy | Time | Correct rate | | Standard | 0.2 s | 8.5% | 0.3 s | 18.0% | | CoT | 0.5 s | 10.0% | 0.8 s | 17.3% | | CoT-SC (k=5) | 2.3 s | 11.0% | 4.1 s | 20.6% | | CoT-SC (k=10) | 4.5 s | 11.5% | 8.8 s | 20.6% | | ToT | - | - | 9.0 s | 22.6% | | VAP | 4.1 s | 16.5% | 9.5 s | 35.5% | While VAP is indeed time-consuming, it achieves comparable time usage to ToT and CoT-SC (k=10) in the Sudoku task. The reason is that, although VAP's context is quite large (tool instructions, thought trajectory, image encoding), its output is concise (API calls and immediate thoughts), leading to fast inference during each step (time usage is mainly affected by the number of decoded tokens). We believe the time usage is acceptable given VAP's superior effectiveness. ### W1.2:Trade-off between the number of images and model performance This is an inspiring comment. Currently, the number of images is determined by the planner, and we don't provide access to modify it. To address the reviewer's concern, we add an experiment of controlling the number of images. The results of this experiment can be found in our response to all reviewers. ### W2/Q1:More LLM models need to be added for comprehensive comparison We follow the reviewer's advice and add GPT4 and LLaMA 3 8B as additional LLMs for more fairer comparison in our revision. The results can be found in our response to all reviewers. Based on results, we believe that the superiority of VAP over other baselines remains valid, even when utilizing different LLMs. ### W3:Effect of different figure drawing tools We sincerely thank your comment and it actually pushes us to think deeply about the relationship between tool selection and performance. First, we analyze the distribution of drawing tools selection: | | Geometry | Sudoku | Time Series | TSP | | ---------- | -------- | ------ | ----------- | ---- | | Matplotlib | 86.0% | 91.3% | 100% | 78% | | Turtle | 14.0% | 8.7% | 0.0% | 22% | | DALLE3 | 0.0% | 0.0% | 0.0% | 0.0% | Matplotlib was used more frequently, especially for time series prediction, due to its ability to construct coordinate systems efficiently. DALLE3 is never selected by planner as the images of four tasks are all diagrams (we have explored another use case of DALLE3, detailed in Q2). Second, we are also interested in which tool perform better for these tasks. We conduct an experiment where we forced VAP to choose a specific tool. The results are as follows: | | Geometry↑ | Sudoku↑ | Time Series↓ | TSP(N=10)↓ | | --------------- | --------- | --------- | ------------ | ---------- | | Original | 16.5% | 35.5% | 556 | 312.4 | | Matplotlib only | 16.5% | **36.6%** | **556** | 312.6 | | Turtle only | 16.5% | 19.0% | 611 | **312.1** | We find that the impact of different tools depends on the task. For geometry and TSP, using Matplotlib or Turtle makes no significant difference, while in Sudoku and time series forecasting, Matplotlib is better than Turtle. Because drawing grids and coordinates are too complicated for Turtle (draws by pen movement). The results also reflect the rationale behind the LLM's tool selection, as it automatically avoids choosing tools that are not suitable for a given task. ### W4: Scalability of VAP Solving complex reasoning problems has been a well-known weakness of LLMs and this domain has attracted significant attention because once the challanges can be overcome, many applications can well benefit from the power of LLMs. So, it is not surprising to find that the accuracy drops when facing challenging reasoning problems. Compared with other LLM-based reasoning frameworks, our VAP still demonstrates the best performance. The performance gap is even widened when the problem becomes more challenging. ### W5: Although VAP performs better than the baselines, its overall performance is still poor and has a noticeable gap compared to traditional methods We totally agree with the reviewer that the current performance of LLMs is still not comparable to traditional methods in many tasks. Its strength lies in generality, with one model to support vairious decision and prediction problems. Thus, how to exploit the potential of LLMs and improve its capabilities in reasoning and prediction has become a hot research topic in recent years. Our work falls in this category and we believe with more efforts devoted in this domain, the performance gap may continue to shrink. ### Q2: The potential of VAP in solving real-world reasoning problems, where the model generates more photorealistic images We've actually explored VAP's potential in a real-world sotrytelling task. The task setting is to write a story based on the input of text prompts. In the implementation of VAP, DALLE3 is always selected as the external image tool. Then at each iteration, VAP generates a photorealistic image to render a scene. This image is then sent to GPT-4v to enhance the model's creativity in story generation. Our previous findings show that, based on our subjective judgement, VAP can significantly benefit the task. We did not put this task in the manuscript due to the lack of a convincing performance metric. --- Rebuttal Comment 1.1: Comment: Thank you for the author's detailed response. While most of my concerns have been addressed, I still have reservations regarding the scalability and generalizability of the VAP to more complex tasks. Since this is not my area of expertise, I recognize that VAP might represent an important step if it is indeed the first to achieve visual CoT by explicitly generating intermediate visual results. This could potentially influence my final rating, which I will determine during the reviewer discussion session. Otherwise, I encourage the author to provide more details about the current status of visual CoT. --- Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: Thank you for your feedback and we understand your reservations about the scalability of VAP to complex tasks. First, we notice that solving complex reasoning problems is a well-known challenging for current LLMs-based methods. In our experiments, the compared LLM-based methods failed to solve the hard version of geometry intersection counting problems with shape = 6. | Method | CoT | CoT-SC(k = 5) | CoT-SC(k = 10) | CoT-SC(k = 20) | VAP | | ----------- | ---- | ------------- | -------------- | -------------- | -------- | | Performance | 0.0% | 0.0% | 0.0% | 0.0% | **7.5%** | While VAP's performance of 7.5% may seem low, all other baselines fail to solve any of these problems (with success rate 0%). It is noteworthy that these instances are quite challenging for humans as well. In this context, VAP's performance represents an improvement over existing LLMs-based methods. Second, we follow the reviewer's advice and provide more details about the current status of visual CoT. Here is a comparison with recent related work: | Method | training-free ? | Main problem | Use self-systhetic image ? | Potential to solve complex numerical problems ? | | ---------- | --------------- | ------------------------------------------------------------ | -------------------------- | ----------------------------------------------- | | MMCoT | 𐄂 | Visual question answer | 𐄂 | 𐄂 | | DDCoT | 𐄂 | Visual question answer | *𐄂* | 𐄂 | | Cantor | *✓* | Visual question answer | 𐄂 | 𐄂 | | ViperGPT | *✓* | Visual question answer | 𐄂 | *✓* | | VisProg | *✓* | Visual question answer | 𐄂 | *✓* | | LLava-Plus | 𐄂 | Visual question answer | 𐄂 | 𐄂 | | CoI | 𐄂 | Numerical Reasoning problem (Geometry, chess) | *✓* | *✓* | | Visual CoT | *✓* | stotytelling, summarization | *✓* | 𐄂 | | VAP | *✓* | Numerical reasoning problems (Geometry, Sudoku, time series, TSP) | *✓* | *✓* | While many works (e.g. MMCoT, ViperGPT) aim to use visual CoT for visual question answering, the input for those tasks consists of an image and a text question. **So** **these VQA-oriented approaches** **are unable to handle text-only problems by self-synthesizing images.** Another work, Visual CoT, uses generated photos to enrich the context for handling creative tasks such as storytelling and summarization, but it is not designed to solve complex numerical problems. CoI, on the other hand, is not a training-free method and requires task-specific training to learn the patterns of each task, making it unsuitable as a general reasoning method. We hope this additional context could help address your concerns.
Summary: This paper targeting an intresting topi in VL research: Can VLM understand the organized prompts as LLM? The authors proposed a method called VAP(visual augmented prompt) to improve the prompting learning methods for VLM. The authors argue that human have two specialized subsystems that process verbal inforation and visual-spatial information respectively. Thus, to mimic human's decision making capability, the proposed method will synthesize images and organized a chain-of-thought in both modalities. This method comprises three steps: 1. selecting an anppropriate drawing toolkit and creating an initial image; 2. Iteratively perform reasoning on the synthesized images, and generates a paragraph of accompaning text for the generated image; 3. Finally, feed all the intermediate thoughts and images to the model as input to formulate the COT process. This method is tested on 4 different tasks to show its general capability. Strengths: 1. Though COT and prompt learning is not a novel topic in LLM. How to organize and build a CoT process for VLM to effectively activate the reasoning skill for VLM. While most recent work focusing on leveraging the language side to generate the rationale process. This work focusing on how to coordinately generate both visual and language rationale. This is a quite good idea. 2. The proposed method reflect the human decision process and thus sounds quite reasonable and novel. 3. The proposed method increase the results on different tasks by a large margin, even compare to many advance prompting and CoT styles. These solid increment prove the effectiveness of this method. Weaknesses: 1. As mentioned by the authors. The image generation process not always controllable and could generate error-pone content to mislead the thinking process. Therefore I wonder, is generating a explicit image the best practice for this process. As the generation of images could result in unwanted features, we could also keep the representation in the latent space as a prompt. 2. The method relies on a planner to plan what to draw for the whole process. However, a planner itself can be incorrect and we simply have no control on this planner and the generated plan. Is there a more formulated way to generate the plan, or do we have somewhat of tools to detect the possible problem? Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Since the drawing steps and numbers are decided by the planner. What if the plan is quite long, and we are out of tokens? Do we have a control on the granularity on the intermediate drawing? 2. How frequent the generated images will content incorrect information? And how well the self-alignment module detect some very specific errors? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive comments from the reviewer! We provide detailed responses to each concern in the following. ### W1: "I wonder, is generating an explicit image the best practice for this process. As the generation of images could result in unwanted features, we could also keep the representation in the latent space as a prompt." This is a really inspiring idea! In the current stage, commercial VLMs we used in this paper have not provided the API interface that allows us to use encoded image feature in the latent space as part of the input. We consider it as our future work to examine whether we can implement the idea on other open-sourced VLMs. ### W2: "Is there a more formulated way to generate the plan, or do we have somewhat of tools to detect the possible problem?" We appreciate your constructive feedback. We totally agree that a more formalized approach to generate the plan would likely improve the performance and robustness of our method. The ideas we currently have include: - Format control: we can utilize the open-source repository lm-format-enforcer (https://github.com/noamgat/lm-format-enforcer) to strictly formulate the output JSON schema. This tool uses a prefix tree decoder during the streaming output of the language model to enforce the output format. - Reflection mechanism: recent works, such as Reflexion, have shown that LLMs can self-reflect on previously generated answers and produce more reasonable responses. This mechanism can be applied on the planning stage, which could help detect potential problems. - Controllable parameters in planning prompt: we can introduce controllable parameters in the prompt of planning step, which would provide users with more control over the generated plan and would also contribute to performance (also proved in the experiment in Q1.2). - Planning decomposition: for complex tasks, decomposition can be an effective strategy. Techniques like ToT or ReAct could be applied to generate a more reliable plan. ### Q1.1: "Since the drawing steps and numbers are decided by the planner. What if the plan is quite long, and we are out of tokens?" Token length is indeed a common issue in LLM-based reasoning tasks. So far, the maximum number of tokens required by the planner of VAP is 2841, which is safely below the 8k token context limit in the default GPT-4 setting. ### Q1.2: "Do we have a control on the granularity on the intermediate drawing?" Thank you for your insightful comment. Currently, the granularity on drawing is determined by planner and we don't provide access to modify it. Inspired by your feedback, we explored the possibility of controlling granularity to enhance drawing efficiency. We conducted an experiment on Sudoku task by controlling the number of iterative steps (which is equivalent to controlling the granularity of drawing). Specifically, we inject the prompt "You must finish within `n_iterations` by drawing multiple rows in parallel" into the prompt of iterative reasoning. `n_iterations` is set to 8, 4, 2, and 1 to investigate performance. The results are as follows: | | Time usage | Correct rate | | ---------------------- | ---------- | ------------ | | VAP (original) | 19.0 s | 35.5% | | VAP (n_iterations = 8) | 17.7 s | 35.3% | | VAP (n_iterations = 4) | 15.9 s | **37.3%** | | VAP (n_iterations = 2) | 12.3 s | 26.6% | | VAP (n_iterations = 1) | 4.4 s | 19.9% | It is interesting to find that set `n_iterations` to 4 improve performance over the original version and also enhancing efficiency. This suggests that more iterations do not necessarily lead to higher accuracy. When `n_iterations` is set to 1, the iterative reasoning process is almost skipped, resulting in poor performance. These findings support your previous comment (W2) that a more formulated generation approach is beneficial and worth further exploration. We appreciate the reviewer's comment for pointing out this. ### Q2: "How frequent the generated images will content incorrect information? And how well the self-alignment module detect some very specific errors?" We analyzed the frequency of generated images containing incorrect information and report the results in the following table: | | Correct rate without self-alignment | Correct rate with self-alignment | | ----------- | ----------------------------------- | -------------------------------- | | Geometry | 77.0% | 83.5% | | Sudoku | 72.0% | 89.3% | | Time Series | 100.0% | 100.0% | | TSP | 98.0% | 98.0% | We define an image as "correct" if it contains the integral information described in the text. From the result, we observe that the geometry and Sudoku benefited significantly from self-alignment. In geometry task, LLM would occasionally initialize coordinates with the wrong range, causing the shapes outside of the view. In Sudoku, LLM may have difficulty translating position descriptions to image coordinates. For example, in a 9x9 board, the correct API call grid position "1 2 4" should be translated to `plt.text("1", x=1.5, y=3.5)`(placing the element in the center of the grid). However, the LLM occasionally make mistakes, such as: - `plt.text("1", x=2, y=4)`, placing at the bottom right corner - `plt.text("1", x=1, y=3)`, placing at the top left corner - Missing this element These mistakes can be detected by self-alignment because the LLM will describe the content of the image and check it against the original input, which is proved to be crucial in our ablation study. --- Rebuttal Comment 1.1: Comment: Thank you for the authors response. I find this response very satisfying as they include new experiments results as I required. Since I give an "8" as my initial rating, which is a very high score, I would not change it anymore. But I still recommend this paper to be accepted and enrouge other reviewers to raise their ratings as this paper really addressed some under studied topic of Vision-Language models. I wish the authors could include the discussion in this response into their main content later to make this work more complete. Wish you good luck.
Summary: This paper proposes a new prompting technique, vision-augmented prompting (VAP), to improve the reasoning capabilities of large language models (LLMs). Different from the mainstream chain-of-thought (CoT) frameworks that only involve textual reasoning steps, the proposed VAP framework automatically generates images from visual and spatial clues via external tools. In addition, the VAP framework feeds a chain of thought with both textual and visual context into LLMs to solve the original problems. Evaluations on four tasks (i.e., geometry, sudoku, time series prediction, travelling salesman problem) demonstrate that the proposed VAP outperforms prior CoT frameworks. Strengths: 1. The proposed VAP framework improves the traditional chain-of-thought (CoT) prompting via augmenting visual context. Specifically, the VAP generates high-level drawing planning from the textual question, and then iteratively outputs instructions to draw the stepwise visual context and provide related textual thoughts. The visual contexts with textual context (i.e., thoughts) provide richer information than the text-only context in the CoT framework and helps the reasoning capability of the LLM (MLLM) models. 2. As the iteratively generated visual and textual contexts may contain errors and lead to wrong reasoning, the authors propose a self-alignment mechanism to check if the visual and textual contexts align with the initial high-level drawing planning. If not aligned, the iterative reasoning procedure will be restarted accordingly. This mechanism helps to improve the iterative reasoning and results in a more accurate conclusion. 3. In the evaluation section, the authors conduct a comprehensive comparison between the proposed models and multiple CoT frameworks on 4 versatile benchmarks. In addition, the authors also introduce several task-specific baselines during the comparison and it further validates the effectiveness of the proposed VAP framework. Moreover, the authors conduct human analysis on some generated drawing and observe an impressive integrity rate. These evaluations are helpful to understanding the strengths of the VAP framework. 4. The paper is well written and easy to read. Weaknesses: 1. In the VAP framework, it chooses one of three tools (matplotlib, turtle, and dalle3), but the manuscript does not provide much details about how these tools are used - For matplotlib and turtle, code generation is needed. However, how can the framework guarantee that the code has the right syntax to properly generate the image? - For dalle3, how is the prompt generated for the image drawing? - What is the distribution of calls on these three different tools? - In each iteration of iteratively reasoning, does the selected tool draw a complete image, or overlay the new drawing on image from previous iteration? - Can the framework use a mixture of different tools in resolving one problem? 2. In the evaluation, the questions from all benchmarks are provided in text format. I wonder if the authors can include benchmarks with both text and image in question (e.g., VQA benchmarks). This is a fairer setting since the current baselines ignore the visual capability of the GPT4v model. 3. In ablation studies, there is one experiment that removes the planning step. I wonder if the authors can provide more details. Does it mean that a different set of iterative reasoning prompt is used? 4. Typos: Line 294: is plays -> plays / "Self-alignment plays an important role to the task of Geometry Intersection Problems" Shouldn't it be "Sudoku" since self-alignment achieves the largest improvement (25.1% -> 35.5%) on Sudoku? Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weaknesses section for detailed questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mentioned and attempted to address several limitations in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the in-depth review! Below, we respond to the weaknesses raised in the review. ### W1.1:"For matplotlib and turtle, how can the framework guarantee the syntax correctness of generated code?" We understand the reviewer's concern. We add an experiment for code generation assessment and find the LLM-based code generation is sufficiently robust to guarantee syntax correctness by itself. The following table presents the ratio of syntax error and runtime error: | | Syntax Error | Runtime Error | | ----------- | ------------ | ------------- | | Geometry | 0% | 0% | | Sudoku | 0% | 2.7% | | Time Series | 0% | 0% | | TSP | 0% | 0% | Here, LLM produces no syntax errors. Because the code generation for API call is not challenging for LLM, without involving complex logic. For example, "a circle center at (1,3) with radius 2" is translated to `plt.circle(c=(1,3),r=2)`. We can observe slight runtime error in the Sudoku task. These errors occurred when translating Sudoku positions to actual coordinates in the image. For example, in a 9x9 board, the grid position "1 9 1" should be translated to `plt.text("1",x=0.5,y=8.5)`(placed in the center of the grid). However, the LLM occasionally made mistakes like `plt.text("1",x=1,y=9)`, causing out-of-bounds placement. ### W1.2: "For dalle3, how is the prompt generated for the image drawing?" For DALLE3, LLM also generates an API call to create the image(OpenAI offers a package for DALLE3). As DALLE3 is not selected in the tasks presented in our experiment(detailed in W1.3), we take another task we have previously tried as the example. This task is creative writing task, where LLM is asked to write storied based on some keywords.(We did't put the task in manuscript due to the lack of a convincing performance metric) Example of this task: Input: "Write a story according to given key words: Mage, Warriors, Priest" Output: "A light breeze swept the ground, ..." In this example, VAP first make a plan like: ```json { "tool": "DALLE3", "initialization": "dalle3 = OpenAI().images", ... } ``` Each iteration, VAP will generate an image using API call. For example, `img = dalle3.generate(prompt="The Mage dressed in a dark cloak, holding a staff and surrounded by a magical aura.", size="1024x1024")`. ### W1.3: "What is the distribution of calls on these three different tools?" As responded in W1.2, DALLE3 is not selected in the four tasks. So, we report the distribution of calls on the tools of Matplotlib and Turtle in the following: | | Geometry | Sudoku | Time Series | TSP | | ---------- | -------- | ------ | ----------- | ---- | | Matplotlib | 86.0% | 91.3% | 100% | 78% | | Turtle | 14.0% | 8.7% | 0.0% | 22% | Matplotlib is used more frequently, especially for time series prediction, due to its ability to construct coordinate systems efficiently. We've also explored the relation between selected tool and performance, which can be found in response to Reviewer gmDj. ### W1.4: "In iterative reasoning, the selected tool draw a complete image, or overlay the new drawings from previous iteration?" In each iteration, our method uses the drawing tool to overlay the new drawing on the image from the previous iterations. For example, consider a geometry problem with two shapes, as follows: ```Plain There's a circle centered at (3, 2) with radius 1 There's a circle centered at (6, 1) with radius 4 How many intersection points are there? ``` In the second iteration, the LLM will use the API call `plt.circle(c=(6,1),r=4)` to draw an additional circle on top of the previous image incrementally. ### W1.5: "Can the framework use a mixture of different tools in resolving one problem?" Unfortunately, our method does not support using a mixture of different tools during iterative reasoning. The drawing tool is determined in the planning step by LLM and remains fixed to ensure consistency. ### W2: "I wonder if the authors can include benchmarks with both text and image in question (e.g., VQA benchmarks). This is a fairer setting since the current baselines ignore the visual capability of the GPT4v model." We understand the reviewer's concern regarding the fairness of baseline comparision. It can be challenging to create an absolutely fair setting when baselines only require text ability of VLLM. Our ablation study may provides valuable insights. When the iterative reasoning stage is removed, it downgrade VAP to standard prompting with visual ability. And the results show a clear superiority of VAP over such a baseline. We also take the reviewer’s advice to investigate VQA benchmarks. However, we find our task setting differs from VQA. Our work focuses on enhancing text-only problems using the additional image channel, while VQA typically involves answering questions based on provided images. This represents a distinct area of research. ### W3: "In ablation studies, there is one experiment that removes the planning step. I wonder if the authors can provide more details. Does it mean that a different set of iterative reasoning prompt is used?" We apologize for the lack of details. When the planning step is removed, we lose access to meta-information like selected tool, which is used to fill into iterative reasoning prompt template. Therefore, we use an alternative prompt template for iterative reasoning. The key changes to this new prompt will be detailed in the Appendix in revision. The key changes of this prompt include: - No specific draw content and thought content given > ..., provide your thoughts according to this problem. - No specific tool given > ... update the image content using Python API calls. ### W4: "Typos at Line 294" Thank you for pointing these out. We have fixed the typo and it's indeed that Sudoku benefits most from self-alignment. We have revised the statements accordingly. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the detailed rebuttal. I think most of my concerns have been addressed.
Summary: This paper proposes visual-augmented prompting (VAP) for large language models (LLMs) in reasoning tasks. Specifically, VAP translates textual questions into a sequence of self-synthesized images using API calls (Python Turtle, Matplotlib, DALL-E3). These images are then fed back to Vision-LLM (GPT-4o) in a step-by-step manner as deduction steps. The detailed VAP process includes (1) planning, (2) iterative reasoning, and (3) conclusive reasoning. Experiments on several math tasks, such as geometry intersection counting, Sudoku puzzles, time series prediction, and the traveling salesperson problem, demonstrate that VAP helps LLMs perform better than chain-of-thought (CoT) and tree-of-thought (ToT) methods. Strengths: 1. The paper extends Chain-of-Thought (CoT) with visual prompt information. In addition to the step-by-step textual deduction in CoT, it uses drawing APIs (e.g., Python Turtle, Matplotlib, DALL-E) to synthesize pictures in the intermediate steps, helping to derive the final answer for mathematical problems. This approach is interesting and has not been explored before. 2. Experiments on math-related problems, such as geometry intersection counting, Sudoku puzzles, time series prediction, and the traveling salesperson problem, demonstrate that VAP outperforms CoT and ToT methods. Weaknesses: 1. When the LLM generates Python API calls, there is a chance that the generated code may not run successfully (bugs in the code). What is the probability of this phenomenon occurring in the experiments? 2. Generalization: CoT and ToT are more generalized to different LLM tasks, while VAP is limited to a few geometry problems. Does it generalize to normal Visual QA tasks? 3. There is a missing reference to relevant works on tool usage ability for LLMs, such as ViperGPT [1], VisProg [2], and LLava-Plus[3]. [1]. ViperGPT: Visual Inference via Python Execution for Reasoning [2].Visual Programming for Compositional Visual Reasoning [3]. LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the chance of visual drawing code failing? How to reduce this probability. 2. Does the model require customized instructional prompts for LLM on different tasks? 3. Can the model generalize to more general Visual QA tasks? 4. Is it possible to compare with VCoT mentioned in Section 2.3 (line 76)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss and address some of the limitations of VAP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive comments from the reviewer. We provide detailed responses to each concern in the following. ### W1&Q1: "When the LLM generates Python API calls, there is a chance that the generated code may not run successfully (bugs in the code). What is the probability of this phenomenon occurring in the experiments?" We understand the reviewer's concern and have added an experiment to assess the code generation error rate. The following table presents the ratio of syntax error and runtime error: | | Syntax Error Rate | Runtime Error Rate | | ----------- | ----------------- | ------------------ | | Geometry | 0% | 0% | | Sudoku | 0% | 2.7% | | Time Series | 0% | 0% | | TSP | 0% | 0% | The results show that the componnet of LLM-based code generation is sufficiently robust, with no syntax error. This is because the task of code generation for API call is not challenging for LLM, without involving complex logic. For example, "a circle center at (1, -1) with radius 2" is translated to `plt.circle(c=(1, -1), r=2)`. We can observe slight runtime error in the Sudoku task. These errors occurred when translating Sudoku positions to actual coordinates in the image. For example, in a 9x9 board, the grid position "1 9 1" should be translated to `plt.text("1", x=0.5, y=8.5)`(placing the text in the center of the grid). However, the LLM occasionally made mistakes like `plt.text("1", x=1, y=9)`, causing out-of-bounds placement. ### W2&Q3: "Generalization: CoT and ToT are more generalized to different LLM tasks, while VAP is limited to a few geometry problems. Does it generalize to normal Visual QA tasks?" Yes, we agree with the reviewer that CoT and ToT are more generalized, but their performances are inferior to our VAP in the four diversified reasoning tasks that can benefit from dual-modality reasoning. In the current stage, we are unable to support normal visual QA tasks for two main reasons. First, the problem setting is different. The input of visual QA tasks consists of an image and text question. In our setting, the input is a text question. The image is automatically synthesized according to the text input. Second, VAP is designed to improve numeric reasoning, whereas normal visual QA tasks focus on semantic understanding of the input image. Nonetheless, we agree that it would be very impactful if we can extend VAP to be more general to support VQA. ### W3: "There is a missing reference to relevant works on tool usage ability for LLMs, such as ViperGPT, VisProg, and LLava-Plus." We thank the reviewer for providing these relevant works. In revision, we incorporated these works in related work and add necessary explanations and discussions. ### Q2: "Does the model require customized instructional prompts for LLM on different tasks?" In fact, VAP does not require task-specific prompts for different tasks, which we think is a desirable feature. This ows to the planning step, in which the prompt specifies the role, drawing workflow, external image tools, and output format. These elements are identical for different tasks. The prompt also contains the input problem text, which triggers LLM to understand the task and generate different plans accordingly. The key components of the prompt template for the planning include: > Your role is to visualize a problem by creating an image... // Role play > > The drawing will be executed through an iterative process... // Workflow clarification > > Here are drawing tools available:... // Introduction of the drawing tools > > You must strictly follow the JSON format... // Format control > > Problem Description:... // Problem input More details on this prompt can be found in Appendix A.1.1. ### Q4: "Is it possible to compare with VCoT mentioned in Section 2.3 (line 76)?" VCoT was designed for tasks like storytelling and summarization, instead of the numeric reasoning tasks presented in our work. Furthermore, we found that its source code is not available, with an empty page in the provided github link. To address the reviewer's concern, we came up with an idea to apply our VAP to the sotrytelling task as presented in the VCoT paper. The task setting is to write a story based on the input of text prompts. In the implementation of VAP, we incorporate DALLE3 as one of the external image tools. Then at each iterative step, VAP generates a photorealistic image using DALLE3 to render a scene. This image is then sent to GPT-4v to enhance the model's creativity in story generation. As to performace metric, VCoT adopts human evaluation to judge creativity etc, which we think are subjective. Therefore, for reviewer's reference, we simply provided an example below to demonstrate the results of two approaches. | Input short prompt | Me and my son decided to take a nice walk on the beach.We both were admiring the scenery. Afterwards, we started driving home. | | ------------------------------------------- | ------------------------------------------------------------ | | Result of VCoT (reported in original paper) | On the way home from school, my son and I enjoyed the beautiful view of the sky and flowers.We had picked up a presentation of flowers from school, which we admired as we drove home.The sky was blue and white over the car, and we enjoyed the scenic view of the flowers and the sky. | | Result of VAP | The sun dipped below the horizon as my son and I strolled along the sandy shore, admiring the beautiful colors of the sky and the calming sound of the waves.We were both quiet, watching the seagulls fly above us, the interesting shapes in the sand, and the boats far out on the water.As the day ended, we walked back to our car, feeling happy and closer than ever after our special time together on the beach. |
Rebuttal 1: Rebuttal: We thank all detailed feedbacks provided by the reviewers! We address a few points in this response. All other questions are addressed in reviewer specific responses. ### Trade-off between number of iterations and performance We add an experiment of controlling the number of iterations in Sudoku task. Specifically, we injected the prompt "You must finish within `n_iterations` by drawing multiple rows in parallel" into the prompt of iterative reasoning. `n_iterations` is set to 8,4,2 and 1. The results are as follows: | | Time | Correct rate | | ------------------ | ------ | ------------ | | VAP (original) | 19.0 s | 35.5% | | VAP (iterations=8) | 17.7 s | 35.3% | | VAP (iterations=4) | 15.9 s | **37.3%** | | VAP (iterations=2) | 12.3 s | 26.6% | | VAP (iterations=1) | 4.4 s | 19.9% | It is interesting to find that set `n_iterations` to 4 improve performance over the original version and also enhancing efficiency. This suggests that more iterations do not necessarily lead to higher accuracy. When `n_iterations` is set to 1, the iterative reasoning process is almost skipped, resulting in poor performance. ### Baselines comparasion with different LLMs Our experimental setup employs a unified VLLM. However, we observed that VAP necessitates a MLLM to process visual-text input, whereas other baselines solely require textual input. Considering that traditional LLMs are expected to outperform MLLMs in text-only tasks, we introduced GPT-4 and LLaMA 3 8B as additional LLMs to ensure a more comprehensive comparison. Accuracy on the geometry task is presented as follows: | | GPT-4v | GPT4 | LLaMA3 | | ------------- | --------- | ----- | ------ | | Standard | 8.5% | 10.0% | 7.0% | | CoT | 10.0% | 11.0% | 8.0% | | CoT-SC (k=5) | 11.0% | 11.5% | 8.0% | | CoT-SC (k=10) | 11.5% | 11.5% | 8.0% | | CoT-SC (k=20) | 11.5% | 11.5% | 8.0% | | VAP | **16.5%** | - | - | Here, we can get these conclusions: - GPT-4 slightly improved baseline performance compared to GPT-4v, but still significantly lower than VAP. - LLaMA3 8B shows decreased accuracy, likely due to its small size limiting generalizability. Note that currently we are unable to run larger versions of LLaMA3 due to machine constrain. - Simpler methods (standard prompting) benefit more from model changes than complex methods (CoT-SC). From the results, we maintain that the superiority of VAP over other baselines remains valid.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
UniAudio 1.5: Large Language Model-Driven Audio Codec is A Few-Shot Audio Task Learner
Accept (poster)
Summary: The paper proposes a LLM-codec module that can plug into an existing LLM, i.e., LLAMA-2, to perform few-shot in-context learning for tasks including classification (emotion & sound event), and text-to-speech synthesis. The proposed module takes the raw audio waveform as an input, and encodes it into latent space where the corresponding features are mapped into VQ codebooks in the LLM dictionary space. A multi-layer alignment designs are considered in the LLM-codec, where the shallow layer is responsible for semantic information, and deeper layers are responsible for more fine-grained information. Four losses are consider in aligning the features with the LLM pretrained embedding space, semantic loss, consistency loss, reconstruction loss and discriminator loss. For semantic loss and consistency loss, the goal is to guide the learned embeddings to have semantic meaning as well as align with the pretrained audio features for stability reason. Experiments demonstrate that the proposed module can be plugged into the pretrained LLAMA-2 for in-context few-shot learning for simple audio understanding tasks and TTS task. Strengths: 1. The paper proposes an interesting way of solving few-shot audio-related tasks using frozen LLMs. Different from previous methods, this work designs a plug-in module for LLM in-context learning for audio modality to avoid LLM training or fine-tuning. The plug-in module is designed to be efficient, with only 160M parameters. 2. The tasks can cover both audio understanding and simple text-to-speech synthesis, which is flexible enough considering the limitations of in-context learning. 3. The paper is mostly well-written and presentation is clear enough to understand the motivation and the proposed method. Weaknesses: 1. The task seems to be really simple, I am wondering how could the model performs under the more challenging scenarios as N-way goes larger? Also how the TTS performance degrades as the scripts become more complicated? 2. In table 2, for 2-way speech emotion classification, why is random guess only 40% accuracy rather than a number close to 50%. 59% acc is also not high enough for binary classification task. What is missing here in order to achieve better acc? 3. In Table 4, what is ACC for GT and FastSpeech 2, is the number not be able to compute? Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses above. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contributions. We appreciate the constructive comments the reviewer provided to us to improve our paper further. We are delighted to have the following discussion with the reviewer. **Q1:** The task seems to be really simple, I am wondering how could the model performs under the more challenging scenarios as N-way goes larger? **A:** Thank you for your comments. We set different settings $N=2,3,4,5,6$ for the sound event classification task. The results are as follows. We can see that in the more complex scenarios, our proposed model can also get better performance than the baseline BLSP [1]. Furthermore, inspired by reviewer Gqom, we also find that using a larger LLM (e.g. LLAMA 2 13B) can further improve the performance. | Model / task | 2-way-1-shot | 3-way-1-shot | 4-way-1shot | 5-way-1-shot | 6-way-1-shot | |:----------------:|:-------------:|:------------:|:-----------:|:------------:|:------------:| | Ours (LLAMA 7B) | 60 | 41 | 36 | 33 | 17 | | Ours (LLAMA 13B) | **62** | **42** | **41** | **43** | **31** | | BLSP | 47 | 26 | 15 | 12 | 10 | **Q2:** Also how the TTS performance degrades as the scripts become more complicated? **A:** Thank you for your comment. Inspired by your suggestion, we set three difficulty levels to test the simple text-to-speech. The first level includes addition, subtraction, multiplication, and division, such as the speech of *2x2*. The second level includes simple reasoning, such as *what is the next for the sequence 0,1,2,3*. The third level includes complex reasoning, such as *if 2 times a number plus 1 equals 1, what is the number*. We ask ChatGPT to help construct the questions for each level. For each level, we test 20 utterances. As a result, we find that the accuracy for each level is 85\%, 65\%, and 60\%. **Q3:** In table 2, for 2-way speech emotion classification, why is random guess only 40 \% accuracy rather than a number close to 50 \%. **A:** Thank you for your comment. As we introduced in the Table 2 caption, for the Random guess, we follow the previous work Dynamic-SUPERB [2], and we calculate the average score based 5 times evaluation. We agree with the reviewer's view that when we run the random guess enough times, the final accuracy will close to 50\%. In order to give the reader a better understanding, we will update the random guess score as the theoretical probability. **Q4:** 59\% acc is also not high enough for binary classification task. What is missing here in order to achieve better acc? **A:** Thank you for your comment. We have two reasons: 1) out-of-domain data leads to lower performance; 2) low understandablity of LLM to audio modality. We have following potential solutions to address these two issues: 1) providing high-diversity audio dataset for model training; 2) fine-tuning the LLM. **Q5:** In Table 4, what is ACC for GT and FastSpeech 2, is the number not be able to compute?. **A:** For FastSpeech 2, we directly input the ground truth answer text to the TTS model, so the generated speech is always right. When comparing with FastSpeech 2, we want to show that our proposed method can generate high-quality speech. [1] Wang C, Liao M, Huang Z, et al. Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing[J]. arXiv preprint arXiv:2309.00916, 2023. [2] Huang C, Lu K H, Wang S H, et al. Dynamic-superb: Towards a dynamic, collaborative, and comprehensive instruction-tuning benchmark for speech[C]//ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024: 12136-12140. --- Rebuttal Comment 1.1: Comment: I have read through the authors’ response and it addresses all of my questions clearly by providing sufficient further experiment results. I keep my original rating and would lean towards the acceptance of the paper due to its novelty of design in LLM-Codec, albeit needing some improvement in presentation. --- Reply to Comment 1.1.1: Comment: We thank the reviewers for the time and effort. We will further improve our presentation in the final version.
Summary: The paper introduces LLM-Codec, which enables frozen LLMs to perform various audio tasks in a few-shot manner without fine-tuning LLMs. LLM-Codec operates in a RVQ-manner, hierarchically converting audio tokens into words or sub-words in the LLM vocabulary to compress the audio modality into the text space. Strengths: The approach is validated through experiments on tasks such as speech emotion classification, audio classification, text-to-speech generation, and speech enhancement, demonstrating its feasibility and effectiveness. Weaknesses: It’s hard to understand the meaning of the model setup. The authors train the system with distillation from T5 and Whisper instead of mapping them to the text-LLM like BLIP does. What is the motivation for building a system in this pipeline? Why is the decoder needed? What is the advantage of this approach compared to using external TTS or speech enhancement modules? These questions are difficult to answer with the current version of the presentation and authors should compare with BLIP-like approach to show its benefits. From a presentation perspective, there are many grammatical errors and misleading sentences throughout the manuscript. The authors should put more effort into clarifying their claims and making the manuscript easier to comprehend by providing solid details. - The captions of Figures 1 and 2 are hard to understand and contain grammatical errors. - The details of the experimental setup are also insufficient to fully understand the setting. - The purpose of Figure 4 is very unclear. There is no apparent correlation between the outputs of the semantic layer and the given audio. The purpose of this analysis is not evident, and the information provided is incomplete. The authors conducted various downstream tasks, yet, the setup of experiments are insufficient to validate the system’s capacity. I’ve pointed out questionable setups at the Questions section. Technical Quality: 1 Clarity: 1 Questions for Authors: - Why is RVQ setting adopted as the system pipeline? Have the authors try comparing with just performing downstream tasks using the outputs from T5 and Whisper? - for Table 1, - What is the evaluation dataset being used? Does it only contain speech data? - How about performance comparisons with other metrics such as me reconstruction loss or SI-SDR? - How about using better configurations of the baseline models? (e.g., 44K DAC) - I didn’t fully get what 3 Vanilla RVQ means for Encodec_24k and DAC_16k. - for Table 3, why is accuracy the only metric? How about other metrics such as AUC? - for Table 4, - How is ACC being computed? How about WER? - What is the intuition of proposed method having better DNSMOS than the GT? Why not compute on subjective MOS? Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 3 Limitations: Authors included their potential limitations in the Appendix section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's time and patience with our paper. We do find these suggestions constructive and helpful. **Q1:** It’s hard to understand the meaning of the model setup ... What is the motivation for building a system in this pipeline? **A:** We apologize that our presentation cannot make the reviewer better understand our model setup and design. We are happy to discuss this with the reviewer. From the high-level, we aim to turn the powerful LLM into a universal audio understanding and generation model to tackle infinite audio tasks without training. Due to the length limitation, we give the detailed explanation on global response part. **Q2:** Why is the decoder needed? **A:** Thanks for this comments. As we discussed, we expect the LLMs can generate audio directly. So we first use our LLM-Codec to quantize the audio data into the LLM's token space (word or sub-word). Then LLMs will predict the corresponding tokens based on the instruction, these tokens can be recovered into waveform by using the codec decoder. Such a strategy has been widely used in audio generation, such as AudioLM [5], researchers first use a codec model to compress the audio into discrete tokens, then use a transformer model to predict the corresponding tokens. Lastly, the codec decoder is used to recover waveform from the predicted tokens. **Q3:** From a presentation perspective, there are many grammatical errors and misleading sentences throughout the manuscript. .... **A:** We appreciate this comment. We are glad to revise our paper for better writing clarity. **Q4:** Why is RVQ setting adopted as the system pipeline? **A:** Thank you for this comment. The reason includes (1) we need to quantize audio into discrete tokens (2) RVQ is the mainstream approach for audio quantization. Furthermore, we use the LLM's vocabulary as the RVQ codebook, which means that we quantize the audio modality into the LLM's vocabulary. The benefits include: (1) with the help of LLM-Codec, LLMs can directly generate audio; (2) we do not need to fine-tune the LLMs. **Q5:** Have the authors try comparing with just performing downstream tasks using the outputs from T5 and Whisper? **A:** Yes, one of our baseline BLSP [1], which using a Whisper encoder to extract speech representation, and fine-tuning the LLM with a learnable adaptor. Due to BLSP only output text, we only compare with it on audio understanding tasks, such as speech emotion classification and sound event classification. **Q6:** What is the evaluation dataset being used? Does it only contain speech data? **A:** We appreciate this comment. As we introduced in Table 1 caption, we evaluate the reconstruction performance on the VCTK dataset. We randomly chose 200 utterances. Inspired by your suggestion, we also chose 200 utterances from the ESC50 dataset to evaluate the reconstruction performance on sound data. The results as Table 2 shows (refer to global response PDF). **Q7:** How about performance comparisons with other metrics such as mel reconstruction loss or SI-SDR? **A:** Based on your suggestion, we add the Mel reconstruction loss metric as one of the evaluation metrics. The performance as Table 3 shows. We will update this into our final version. **Q8:** How about using better configurations of the baseline models? (e.g., 44K DAC) **A:** We agree that the 44K DAC model has good reconstruction performance for high-sampling rate audio (e.g. 44.1k hz). However, in our study, we train all of the codec models on 16khz audio data. Thus, we choose 16K DAC-codec as one of the baselines. Due to the 16K Encodec model is not released, we downsampling the generated audio by Encodec_24k into 16k. **Q9:** I didn’t fully get what 3 Vanilla RVQ means for Encodec_24k and DAC_16k. **A:** We thank this important comment and appreciate the reviewer. In our study, we propose a multi-scale residual vector quantization, which is different from previous commonly used RVQ in DAC and Encodec, so we name previous RVQ as Vanilla RVQ. We apologize for this misunderstanding, we will add an explanation to the Table 1 caption part. **Q10:** for Table 3, why is accuracy the only metric? How about other metrics such as AUC? **A:** For the metric, we follow the baseline Dynamic-superb to use accuracy as the metric. We agree that AUC is a good metric to evaluate a classifier. We want to point out that AUC is calculated by setting different threshold values to get a group of FPR and TPR. However, we use the LLMs to directly predict the text label, it is hard to set a 'threshold' like a traditional classification model. In other words, AUC is more suitable to evaluate the performance of traditional classifiers, in which humans can decide the threshold. If the reviewer can provide better metrics, we are happy to add them to the paper. **Q11:** for Table 4, How is ACC being computed? How about WER? **A:** We calculate the ACC by comparing the content of the generated speech with the ground truth value. Actually, due to the generated speech only includes digits, using ACC and WER has the same meaning. **Q12:** What is the intuition of proposed method having better DNSMOS than the GT? Why not compute on subjective MOS? **A:** The reason is that the GT from the Free Spoken Digit Dataset (FSDD) is low-quality. FSDD is an old speech digit dataset, which may include some noise. Instead, our audio codec model is trained on a clean and high-quality speech dataset, so the noise details will not be modeled by the codec model. Actually, such a phenomenon widely exists in speech generation, nowadays, many TTS models can synthesize better speech than the original one, e.g. NaturalSpeech 3 [8] shows it can generate better speech than LibriSpeech. Inspired by your valuable suggestion, we conducted a subjective evaluation as Table 4 shows. Due to the length limitation, we put the remaining question into global response part. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in their rebuttal and have carefully read through all the reviews and additional clarifications provided. This has given me a clearer understanding of the paper's contributions. However, my primary concern remains with the paper's presentation. The current version lacks clarity, which makes it difficult to fully appreciate the work's contributions. For this reason, I remain hesitant to recommend a solid acceptance. It is crucial that the final version significantly improves the clarity and presentation to effectively communicate the findings. That said, I do acknowledge the interesting empirical findings presented within the existing pipeline, which has led me to slightly raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 4YYB, Thank you again for your tremendous efforts and valuable comments. We sincerely appreciate your recognition of our contributions and your constructive feedback. We are committed to improving our presentation based on your comments and suggestions. We are currently revising the manuscript according to your feedback. We hope this will address your concerns. We will continue refining our presentation in the coming days to ensure a polished final version. In this version, we have mainly updated the following sections: **1. We carefully reviewed our abstract and introduction to ensure proper grammar and enhance readability.** **2. We updated the captions of Figures 1, 2, and 4. We hope the revised captions provide better clarity and address your concerns.** The details as follows: **Figure1:** This figure illustrates the framework of the proposed approach for performing speech emotion classification and simple text-to-speech generation tasks. For each task, we prepare the instruction, demonstrations (e.g., x_1, y_1, x_2, y_2 ), and the query x_q. The LLAMA 2 model is then asked to predict the corresponding result y_q. Here, y_q can be either text or audio. **Figure 2:** This figure provides a high-level overview of LLM-Codec, including an encoder, a decoder, a multi-scale discriminator, and a multi-scale residual VQ layers. Here, ‘sub’ denotes feature subtraction. Note that the modules marked with a snowflake are frozen during training. **Figure 4:** The token visualization of the semantic layer of LLM-Codec is shown. We present two groups of samples, each containing two audio recordings with the same sound event label. In each group, we use the same color to highlight potentially similar patterns in the two audio recordings, such as identical token sub-sequences or token repeating frequencies. We speculate that these patterns can be easily recognized by LLMs, allowing them to learn new sound events quickly with just a few demonstrations. **We rewrote the Experimental Setting section to improve clarity. This section is now divided into two subsections: The first subsection provides detailed information about LLM-Codec, including the training data, codec model configuration, evaluation data, evaluation metrics, and the corresponding audio codec model baselines. The second subsection details the integration of LLM-Codec with pre-trained LLMs for downstream tasks (e.g., emotion classification, sound event classification, text-to-speech, etc.), including the evaluation data for each downstream task and the compared baselines..** Once again, we greatly appreciate that you raised the score and believe your valuable comments have significantly improved the paper, offering more precise explanations and presentations. We sincerely thank you for your time, effort, and patience during this peer review process. We are always happy to have a further discussion and answer more questions raised by you.
Summary: The paper introduces LLM-Codec, a novel audio codec model that leverages Large Language Models (LLMs) to perform various audio tasks with minimal training examples. By translating audio signals into the token space of LLMs, it enables these models to understand and generate audio content. The model uses a multi-scale residual vector quantization approach to maintain audio quality while reducing token sequence length. Strengths: - This paper shows a novel codec model for audio compression and can be generated by a frozen LLM for in-context learning. - The task setting is challenging, and the semantic codec is a challenging task, while the paper shows good exploration to it. - The ablation study is sufficient, showing the elements in the RVQ codec model's usages. Weaknesses: Experiments. Considering the paper proposes an encodec model, the most important result is the reconstruction performance. Providing Table 1, the necessary explaination of the results is lacking. I suggest this paper adds additional demonstration to the experimental main results to improve the readability. Although the paper does a lot of experiments, showing promising results, the audio generation-related tasks still lack enough experiment results. In both introduction and related work section, the paper claims that previous codec models do not support audio generation tasks, while the text-to-audio evaluation results are not shown. Considering the strong claim and the AudioCaps training data, it is strongly recommended to show its performance comapred to other text-to-audio models. Technical Quality: 3 Clarity: 4 Questions for Authors: N/A Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: This paper provides sufficient discussion in this field. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contributions. We appreciate the constructive comments the reviewer provided to us to improve our paper. We are delighted to have the following discussion with the reviewer. **Q1:** Considering the paper proposes an encodec model, the most important result is the reconstruction performance. Providing Table 1, the necessary explaination of the results is lacking. I suggest this paper adds additional demonstration to the experimental main results to improve the readability. **A:** We thank the reviewer for his/her valuable suggestions. We are glad to revise our paper to improve its readability. Specifically, we would: *(1)* Provide more explanation for reconstruction performance results in Section 4.2, including the reconstruction performance comparison, the influence of down-sampling steps and tokens per second. Furthermore, inspired by reviewer 4YYB, we add a new metric (the Mel reconstruction loss) into Table 1. *(2)* Highlight the advantages of our proposed codec. **Q2:** Considering the strong claim and the AudioCaps training data, it is strongly recommended to show its performance comapred to other text-to-audio models. **A:** We appreciate this suggestion. We add a text-to-audio evaluation. Specifically, we choose previous SOTA AudioGen [1] as one of the baselines, because AudioGen is also an autoregression model based on audio codec models. Furthermore, we also choose some diffusion-based audio generation models, including AudioLDM [2] and Tango [3], as the other baselines. For AudioLDM and Tango, we use their official checkpoints, and we set 200 diffusion steps for the inference. We conduct experiments on the ESC 50 [4] validation set. AudioGen, AudioLDM, Tango, and our model do not see the ESC 50 dataset in the training stage. We use the event label to construct the text description, e.g. if the event label is 'clapping', we will construct the caption as 'this is the sound of clapping.'. For the evaluation metrics, we follow previous works to use FAD and KL as the metrics. The results are shown in the following table. Furthermore, we also add a visualization of generated samples in Figure 1. We will update this into our final version. | model | FAD | KL | |:--------:|:-----:|:----:| | AudioGen | 20.4 | **1.94** | | AudioLDM | 15.6 | 3.52 | | Tango | **12.7** | 3.01 | | ours | 17.16 | 3.05 | [1] Kreuk F, Synnaeve G, Polyak A, et al. Audiogen: Textually guided audio generation[J]. ICLR 2023. [2] Liu H, Chen Z, Yuan Y, et al. Audioldm: Text-to-audio generation with latent diffusion models[J]. ICML, 2023. [3] Ghosal D, Majumder N, Mehrish A, et al. Text-to-audio generation using instruction-tuned llm and latent diffusion model[J]. ACM-MM, 2023. [4] Piczak K J. ESC: Dataset for environmental sound classification[C]//Proceedings of the 23rd ACM international conference on Multimedia. 2015: 1015-1018. --- Rebuttal Comment 1.1: Comment: Thanks for your explanation. I appreciate your efforts to add experiments comparing with the existing audio generation models. It is because that only when the related experiments are conducted, we can say if the paper overclaims its generation ability, which is states in the introduction section. From the results, we can find that the current model's performance is worse than a common baseline, AudioLDM 1. Considering the results, I personally suggest that the paper may use the term "support audio generation tasks" more carefully. Hence, I will not improve or reduce the current score. Overall it is a very good paper discussing a useful topic. --- Reply to Comment 1.1.1: Comment: Dear Reviewer oXws, Thank you again for your tremendous efforts and valuable comments. We sincerely appreciate your recognition of our contributions and your constructive feedback. We agree that the current model’s performance in the text-to-audio task is still below that of previous specialized models, such as AudioGen and AudioLDM. We believe one reason for this is the difference in data coverage. In our study, we utilized only the AudioCaps dataset, whereas other specialized models have leveraged more extensive data sources, such as AudioSet. Therefore, a potential direction for improvement would be to scale up the data coverage. Once again, we greatly appreciate that you recognize our contributions. We sincerely thank you for your time, effort, and patience during this peer review process. We are always happy to have a further discussion and answer more questions raised by you.
Summary: The authors introduce a three-step audio-to-discrete codec to encode continuous acoustic information into a form suitable for large language model-based audio and speech understanding. Overall, this method is novel and represents an important step in audio modeling. The architecture targets different levels of acoustic information, from semantics to acoustic representation, using only discrete codecs. The results are empirically strong and solid. However, there are fewer theoretical connections to justify the meaning of the lexical representation of the "trainable new (pseudo) language" of speech, which can be improved in future work. For example, former works on word-level model reprogramming, learning equivalent pseudo tokens [A] from random embeddings, and theoretical bounds [B] on connecting different layers for latent alignment (e.g., when and how to align these three RVQ adapters) could help justify the uniqueness of the latent distance compared to embedding injection-based methods. This analysis could strengthen the theoretical foundations of the paper. Despite these points, the overall contributions remain high-quality. - A few grammatical and formatting issues need to be very fixed in the final draft for a much ready version. In sum, I highly recommend accepting this paper and suggest the authors address these issues in the final version. Strengths: 1. the model architecture design is overall new and effective 2. the designs on the different concept of audio representation is interesting Weaknesses: 1. there are less discussion on the representation difference of proposed codec-based method to the embedding injection based works. 2. some minor grammar and formation issues 3. there are less theoretical discussion on the representation related to how many RVQ layers needed eventually Technical Quality: 4 Clarity: 3 Questions for Authors: See the weakness (to be improved) 1. Are there any streaming or token merging limitation? 2. Is there any scaling effects of the backbone LM selection? 3. what would be the semantic alignment from the codec to the lexical level representations? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: minor, in terms of performance, the gap between cascaded LM for ASR and translation tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contributions. We appreciate the constructive comments the reviewer provided to us to improve our paper further. We are delighted to have the following discussion with the reviewer. **Q1:** A few grammatical and formatting issues need to be very fixed in the final draft for a much-ready version. **A:** We appreciate this comment. We are glad to revise our paper for better writing clarity. **Q2:** there are fewer theoretical connections to justify the meaning of the lexical representation of the "trainable new (pseudo) language" of speech, which can be improved in future work. For example, former works on word-level model reprogramming, learning equivalent pseudo tokens [1] from random embeddings and theoretical bounds [2] ... **A:** We appreciate and agree with the reviewer's suggestion. We also want to build a theoretical foundation for this study. We plan to refer to your mentioned reference paper pseudo tokens [1] and theoretical bounds [2], and try to learn how to build a theoretical foundation in the future. Can you give a detailed paper name or URL for [1] and [2]? We very much appreciate your help. **Q3:** There is less discussion on the representation difference between the proposed codec-based method and the embedding injection-based works. **A:** We appreciate and agree with the reviewer's comment: we will add a discussion section to show the difference between our proposed codec-based method and previous embedding injection-based works, e.g. previous works (e.g. BLSP [1] or Qwen-Audio [2]) use Whisper model extracts continuous embedding, and then fine-tuning pre-trained LLM on audio understanding tasks. We will add the following content to show the difference: *(1) Motivation:* the embedding injection-based works expect that LLMs can understand audio embedding by fine-tuning LLMs so that their model supports input audio modality and output text content. Instead, our proposed method tries to transfer audio modality into LLM's token space, so that LLM can directly understand audio modality without additional fine-tuning. *(2) Formulation:* the embedding injection-based works need a fine-tuning LLMs stage. In general, they need to train an adaptor or fine-turn part of the parameter by the LORA strategy. Instead, our proposed method does not need any parameter updating for LLMs. *(3) Target:* most of the embedding injection-based works focus on audio understanding tasks. Instead, we expect to build a universal audio understanding and generation framework with the help of LLM-Codec. **Q4:** there are less theoretical discussion on the representation related to how many RVQ layers needed eventually **A:** We appreciate and agree with the reviewer's comment. We are happy to discuss this issue: From a reconstruction performance perspective, using more RVQ layers will bring better performance. From a generation perspective, using more RVQ layers brings the long sequence problem for LM, furthermore, it also increases the inference costs. Thus, we seek a compact but complete audio representation, preserving sufficient semantic and acoustic information with few tokens. **Q5:** Are there any streaming or token merging limitation? **A:** We are happy to discuss this issue with the reviewer. Previous codec models, such as Encodec and SoundStream, are both supporting streaming. One of the reasons is that these works adopt a causal convolution block in the Encoder and Decoder parts. In our codec, we also use similar convolution blocks with Encodec. So our codec also supports streaming. For the token merging limitation, we adopt the multi-scale RVQ strategy, which results in different VQ layers producing different numbers of tokens, which may bring challenges in token merging. **Q6:** Is there any scaling effects of the backbone LM selection? **A:** We appreciate the reviewer's comment. Inspired by your suggestion, we added an experiment to explore the influence of scaling effects of the backbone LM. Specifically, we compare the performance of different LM selections: LLAMA2 7B and LLAMA 2 13B. We conduct experiments on N-way-1-shot sound event classification, The performance comparison as following Table shows. We can see that scaling the backbone LM can also bring improvement for audio tasks. | Model / task | 2-way-1-shot | 3-way-1-shot | 4-way-1shot | 5-way-1-shot | 6-way-1-shot | |:----------------:|:-------------:|:------------:|:-----------:|:------------:|:------------:| | Ours (LLAMA 7B) | 60 | 41 | 36 | 33 | 17 | | Ours (LLAMA 13B) | **62** | **42** | **41** | **43** | **31** | **Q7:** what would be the semantic alignment from the codec to the lexical level representations? **A:** We appreciate the reviewer's comment. In our view, the semantic token should have a strong connection or high correlation with the lexical-level representations. For instance, the same semantic information in two audios should be mapped into a similar lexical sequence, so that the LLMs can learn the the pattern with few demonstrations. [1] Wang C, Liao M, Huang Z, et al. Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing[J]. arXiv preprint arXiv:2309.00916, 2023. [2] Chu Y, Xu J, Zhou X, et al. Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models[J]. arXiv preprint arXiv:2311.07919, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ response. I think the original suggested references are missing due to some open review formatting. On the token level exploration, the most representative work is WRAP in ACL 2021 and the first speech model prompting work, Voice2Series in ICML 2021 has provided a general population risk bound for 1d vector discrete matching via measurement. The authors could strength their work with wider audiences based on more in-depth connections on these two well known works. But I think the current version is acceptable for my evaluation for Neurips; although with relatively shallow theoretical findings. I recommend to accept this work and please add these extra discussion in a final version. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Gqom, Thank you again for your great efforts and the valuable comments. Your suggestion significantly improves our work. We will add the extra discussion in the final version.
Rebuttal 1: Rebuttal: We thank the meta-reviewer for organizing this helpful peer review stage. We thank all reviewers for their time, patience, and constructive comments to help us improve our paper. We specifically address the concerns raised by reviewers **4YYB** regarding the motivation of our paper. We are eager to engage in the following discussions: **What is the motivation for building a system in this pipeline?** As we discussed in the Introduction part. The success of LLMs inspires many researchers to build multi-modal LLMs to solve audio-related tasks in the audio domain. For instance, BLSP [1], and Qwen-Audio [2] typically use a pre-trained Whisper encoder to extract speech embedding and then use a learnable adaptor module (e.g. Linear layer or Q-former [3]) and LORA to map the speech embedding into the representation space of LLM. Such a strategy has been widely discussed and recognized in the research community. However, previous works (1) focus more on expanding LLMs to solve specific audio tasks, without considering the in-context-learning ability to unseen audio tasks, e.g. Quen-Audio collects over 30 tasks to conduct multi-task training. Their motivation is using the strong ability of LLMs to help improve the performance of audio understanding tasks. (2) do not support audio generation tasks, which limits its application scenarios. In general, audio tasks can be divided into audio understanding and audio generation. We refer to the audio understanding task as the model's input can be text and audio, but the model's output is only text, e.g. Automatic speech recognition (ASR), Spoken language identification, Emotion recognition, and so on. Similarly, if the model's output is audio, we call these tasks as audio generation tasks, e.g. text-to-speech (TTS), text-to-sound, speech enhancement, and so on. We claim that **one of our motivations is building a universal audio understanding and generation tasks solver with the help of LLMs and the proposed LLM-Codec**. We highlight that making LLMs to generate audio is not an easy thing. Some pioneer works, such as SpeechGPT [4], try to make a pre-trained LLAMA2 model generate speech. To realize this target, SpeechGPT expands its vocabulary with speech tokens and uses large-scale datasets and GPUs to learn the alignment between speech and text. In contrast, we propose to train an audio codec that can quantize the audio modality into LLM's token space, so that we do not need to expand the LLM's vocabulary like SpeechGPT. **In summary, we aim to turn the powerful LLM into a universal audio understanding and generation model to tackle infinite audio tasks without training.** To realize this target, we propose to map the audio modality into LLM's token space. As a result, we only use 2 GPUs to train the codec model, and one GPU to conduct the inference with pre-trained LLMs. **What is the advantage of this approach compared to using external TTS or speech enhancement modules?** We thank this important comment and appreciate the reviewer. We agree that using external TTS or speech enhancement modules can also build a cascade system. However, our motivation is to build a universal end-to-end system, which supports audio and text as input and output. We expect our model can handle multiple tasks. We understand and agree with the reviewer's opinion: combining multiple external modules can also solve many tasks, such as HuggingGPT [6]. But we want to reach a consensus with the reviewer: combining multiple external modules or building a universal model (such as GPT4-o) are both potential paths toward artificial general intelligence (AGI). As a research work, we expect to explore more potential possibilities and inspire more work. **Furthermore, we also summarize all of the response Table and Figure as one PDF file** [1] Wang C, Liao M, Huang Z, et al. Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing[J]. arXiv preprint arXiv:2309.00916, 2023. [2] Chu Y, Xu J, Zhou X, et al. Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models[J]. arXiv preprint arXiv:2311.07919, 2023. [3] Li J, Li D, Xiong C, et al. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation[C]//International conference on machine learning. PMLR, 2022: 12888-12900. [4] Zhang D, Li S, Zhang X, et al. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities[J]. EMNLP, 2023. [5] Borsos Z, Marinier R, Vincent D, et al. Audiolm: a language modeling approach to audio generation[J]. IEEE/ACM transactions on audio, speech, and language processing, 2023, 31: 2523-2533. [6] Shen Y, Song K, Tan X, et al. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face[J]. Advances in Neural Information Processing Systems, 2024, 36. [7] Huang C, Lu K H, Wang S H, et al. Dynamic-superb: Towards a dynamic, collaborative, and comprehensive instruction-tuning benchmark for speech[C]//ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024: 12136-12140. [8] Ju Z, Wang Y, Shen K, et al. Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models[J]. ICML, 2024. Pdf: /pdf/89d6d64078ad62f40256c3357a31c312ed4c5b30.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems
Accept (poster)
Summary: The authors proposed a new framework (hhk-MARL) that integrates human abstract knowledge with hierarchical reinforcement learning to address the learning challenges in large-scale multi-agent systems. The framework employs fuzzy logic to represent human knowledge and a graph-based group controller to enhance agent coordination. Experimental results in the StarCraft Multi-agent Challenge demonstrate that the proposed approach significantly accelerates the training process and improves the final performance of the agents, even with imperfect human prior knowledge. Strengths: * Overall, the proposed hhk-MARL framework effectively integrates human knowledge with hierarchical reinforcement learning, providing a flexible and adaptive approach for multi-agent systems. * Methodologically, the authors use fuzzy logic and hypernetworks to dynamically generate weights based on agent observations allowing for the seamless combination of human and agent preferences. * In the experimental results, the comprehensive experiments demonstrate the framework's efficacy and robustness, showing significant improvements over baseline methods in various scenarios. Weaknesses: While the paper is well-written overall, it lacks detailed comparisons with prior research and thorough explanations of the human knowledge used. Specifically, section 3.1 does not clearly justify the choice of fuzzy logic over other approaches, which could also handle uncertainty and abstract knowledge effectively. Additionally, the paper can explain how its approach differs from previous works such as KoGuN and Shi 2021, to clarify its unique contributions. In section 3.2, the new aspects of using hypernetworks for knowledge integration should be more distinctly highlighted in comparison to similar methodologies. For the details, see the following Questions. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The paper employs fuzzy logic in the hhk-MARL framework for integrating human knowledge, but it does not provide a clear explanation in section 3.1 for choosing fuzzy logic over other methods such as Bayesian networks, a combination of reinforcement learning and heuristics, or neural network-based knowledge representation. These alternative methods could also handle uncertainty and abstract knowledge effectively. A more detailed justification for selecting fuzzy logic would enhance the understanding of its advantages in this context. 2. The idea of incorporating fuzzy logic into hierarchical RL has been introduced in prior works such as KoGuN [20] and in Shi 2021 [a]. However, it is essential to explain the differences between these approaches and the proposed method in section 3.1. Highlighting these distinctions will clarify the unique contributions of this work and demonstrate how it advances beyond existing methodologies. 3. In section 3.2 on Knowledge Integration, it is important to highlight the novel aspects of the proposed method. The integration of human knowledge using hypernetworks to dynamically generate weights based on agent observations presents a unique approach. However, the methodology bears similarities to existing works such as HAMXCS [21] which also incorporates heuristic knowledge into reinforcement learning, Hierarchical RL for Self-Driving [19] which uses hierarchical reinforcement learning for decision-making, and KoGuN, which refines fuzzy logic rules with hypernetworks. A clearer distinction of the innovative elements in comparison to these related works would strengthen the contribution of this paper. 4. Appendix A.6 presents the suboptimal human knowledge rules, but it is unclear who determined these rules and to what extent they are incomplete or reasonable. Given that they are labeled as suboptimal, a clearer explanation of their limitations and adequacy is necessary, especially for readers unfamiliar with Starcraft. This information is crucial for understanding the potential applicability and adaptation of these rules to other tasks. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your kind review. We are pleased that you thought we have provided a flexible and adaptive approach for multi-agent systems. We have addressed your comments below. We hope that this clarifies any concerns that you had and strengthens your support for the paper. >Question1:...it does not provide a clear explanation in section 3.1 for choosing fuzzy logic...A more detailed justification for selecting fuzzy logic would enhance the understanding Thank you for your comment. We will revise Section 3.1 to further depict our motivation for selecting fuzzy logic. We consider fuzzy logic because it is closer to the human perceptions and knowledge structures. Compared to the mentioned alternative methods, fuzzy logic is more interpretable, giving humans more freedom and control over knowledge design and representation. Moreover, as the intermediary to control agents, using fuzzy logic can reduce complexity, making it more suitable for training large-scale multi-agent systems. Furthermore, the framework can benefit from the advantage of fuzzy logic in generalization. Since previous research has revealed these advantages of fuzzy logic on knowledge representation [20, 38], it motivated us to leverage fuzzy logic for human prior knowledge representation. We will revise as: *Section 3.1: “Compared to other knowledge representation methods, fuzzy logic is closer to the structure of human knowledge, making it more interpretable. Furthermore, it has been proven that fuzzy logic is more suitable for training large-scale multi-agent systems with the advantage of generalization [38]. Inspired by previous works on knowledge representation with fuzzy logic [20, 38], we leverage fuzzy logic to abstract human prior knowledge in this work. The general form…”* >Question2:The idea of incorporating fuzzy logic into hierarchical RL has been introduced in prior works such as KoGuN [20] and in Shi 2021[a]. it is essential to explain the differences between these approaches and the proposed method in section 3.1... Thank you for your comment. The comparison between our work and previous research is proposed in Appendix A.2. We will further clarify the difference in the related work section. KoGuN [20] applies fuzzy logic in the single-agent scenario where the agent only learns how to leverage human knowledge without self-policy development ability. In Shi 2021 [13], the authors consider an all-purpose cross-task transfer that transfers knowledge among agents based on features extracted from neural networks. Different from these two works, we leverage fuzzy logic to connect agents and humans in multi-agent systems. We will revise as: *Appendix A.2: “…of MARL [11, 12]. On the one hand, the most straightforward implementation is to repurpose solutions from previous tasks obtained by agents [13]. On the other hand, various studies also emphasize the reuse of knowledge from auxiliary sources, such as human expert demonstrations [32]. ……Fuzzy logic has been applied in previous work for knowledge representation [20], while their focus is on single-agent scenarios and the agent does not have self-policy development ability. As far as we…”* >Question3:In section 3.2 on Knowledge Integration, it is important to highlight the novel aspects... Thank you for your comment. Some discussion about knowledge transfer methods is given in Appendix A.2 to exhibit the novelty of our approach. We will detail the motivation for applying hyper-networks in Section 3.2 and further clarify the distinction in Appendix A.2. In general, through the hyper-networks based Knowledge Integration module, agents can still maintain learning ability from the local Q network, and the human prior knowledge is not distorted. In comparison, KoGuN [20] requires the global state information to refine the compressed human prior knowledge, and the agent action preference is not considered. HAMXCS [21] also requires global information for the two-player competitive game, and a neural network is applied to construct an opponent model. In the approach for self-driving [19], their focus is more on decomposing challenging long-horizon tasks into simpler subtasks. Although it mitigates the reliance on labelled driving data, the human demonstration still needs to be step-by-step samples. We will revise as: *Section 3.2: “Although applying a concatenated neural network as the knowledge integration is straightforward, it is difficult to capture the dynamic knowledge requirements in different states. To allow agents to automatically adapt to human guidance, motivated by previous research [20], we propose a hyper-networks based knowledge integration that allows agents to refine the proposed prior knowledge based on the local observation. As shown in Figure 2… ”; Appendix A.2: “…large-scale MAS. Based on the hyper-networks in knowledge integration, we are able to combine human preference with agent preference to empower agents with more knowledge selection freedom.”* >Question4:...it is unclear who determined these rules and to what extent they are incomplete...explanation of their limitations and adequacy... Thank you for your comment. In this work, these human knowledge rules are specifically designed for SMAC, and the design of knowledge is correlated with the applied domain. As declared in Section 4.1, the proposed knowledge for SMAC is suboptimal, resulting in a 0% win rate when agents are solely manipulated by the proposed knowledge. Furthermore, our approach does not rely heavily on the selection of the rules, and we give users the freedom to define these fuzzy logic rules based on their domain knowledge. As shown in our ablation study (Section 4.3.2), agents can selectively adapt to prior knowledge through the Knowledge Integration module, allowing humans to propose any knowledge they consider useful for agents. In the future, we will investigate which kinds of knowledge are more appropriate and how to design effective knowledge. --- Rebuttal 2: Title: Seek open dialogue Comment: We are grateful to your earlier constructive comments, and we hope our rebuttal has addressed the questions raised. If you need further clarifications, we are very happy to follow up to improve our final version to meet the high standard of this conference. --- Rebuttal Comment 2.1: Title: Thank you for the rebuttals Comment: Thank you for the detailed responses and clarifications. These revisions will certainly enhance the clarity and understanding of your paper. I have no further concerns at this time. --- Reply to Comment 2.1.1: Title: Thank you for your response Comment: Thank you very much for taking the time to respond to our rebuttal and your effort in engaging with us. We are glad that we were able to address your concerns.
Summary: This paper integrates an human in the loop to provide knowledge that improves learning in marl. This is done through a hierarchical structure, but ultimately it is up to the agents the final decision of accepting the human suggestions (hierarchy comes from human knowledge to agents). Overall, there is an integration of human knowledge with what the agents learn. Strengths: The paper is well organized and it is interesting. The idea of integrating human knowledge is this kind of tasks is interesting and it makes sense. Mostly, the paper is easy to follow and understand, with some exception that I outline below. Weaknesses: While I find this paper interesting, I have some remarks, as noted below: * in line 131 it is stated that $M_L^I$ corresponds to a fuzzy set; however, it is not clear what $M$ is exactly, and how it relates to fuzzy logic in this context; it is not easy to understand how the relation between $O$ and $M$ is calculated and how is the membership function $\mu$ implemented * from my understanding, when integrating the proposed approach with IQL, there is not a mixer as in the other value function factorization methods like QMIX; this leads me to think that the lack of a mixer could be the reason behind the huge improvements seen in Figure 4 for IQL when combined with this method; can it mean that the mixer can be slightly detrimental in the overall framework? Technical Quality: 3 Clarity: 3 Questions for Authors: In addition to the points above, I have some specific questions that I would like the authors to comment on: * what are the limitations of considering only a small set of human opinions, like the 8 ones defined in appendix 6? * if a certain human knowledge concerns only one specific agent, is it still given to the others as well? if yes, how does it affect learning? do the others get confused by that knowledge? * how do the agents decide if they accept a certain human suggestion? is it a joint decision? if yes, can they reason accurately if the decision concerns a specific agent, but has nothing to do with the others? * is the knowledge controller deterministic? or was it trained a priori with human knowledge? * it is stated in lines 179-180 that the group controller is used only during training to not violate CTDE; what about the knowledge integration, is it also used only during training? or also during execution? since it receives human knowledge from all agents too * the knowledge integration module seems to follow an interesting approach, since the weights of the second network are generated by an initial network based on the observations of the agents given as inputs; could the authors elaborate on the motivations for this approach? why are the weights generated from a first network and not only using one network? * i can see the authors focused in environments involving marines only; is there a specific reason for that? is it because of the availability of prior human knowledge? or how hard it would be to create a knowledge controller network for other cases? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your review and your kind words about our paper. We are pleased that you found our idea of integrating human knowledge to be interesting. Below we have addressed your questions. We hope that this strengthens your support for the paper. >Weakness1: ...what $M$ is exactly, ...$\mu$ implemented Thank you for your comment. The fuzzy set $M$ and membership function $\mu$ are the components of fuzzy logic rules that are designed by humans. $M$ denotes the fuzzy set and $\mu$ represents the relationship between $O$ and $M$ as follows: $\mu_M(o):O\rightarrow [0,1]$. An example of the fuzzy logic rule is proposed in Section 2.3: ’IF $O$ is *high*, THEN *action* is *read*’ regarding the knowledge of ’Read paper with high citation score’. Here, $O$ is the observation of the citation score, and *high* is a fuzzy set $M$ whose membership function could simply be, $\mu_{high}(o): clip[0.05 \cdot o,0,1]$. It is worth mentioning that we give the users freedom to define these fuzzy logic rules and our approach does not rely heavily on the selection of the rules. We are deeply sorry that our Section 2 and Table 1 may not be clear enough, and we will modify the structure of our paper to further clarify it. >Weakness2:...can it mean that the mixer can be slightly detrimental... Thank you for your comment. From our understanding, mixers can influence the cooperation among agents and different types of mixers may have diverse benefits, as even baseline algorithms exhibit different performances in different scenarios. Still, as shown in Figure 4, our framework can easily be combined with various MARL algorithms and enhance their performance. As in this work we just consider a general approach for MARL algorithms, we do not further discuss the influence of mixer and answer the question of what kinds of mixers are more suitable. We will investigate this in our future work. >Question1:What are the limitations of considering only a small set of human opinions... Thank you for your comment. In this work, those human opinions are specifically designed for SMAC, and the design of knowledge is correlated with the applied domain. As proven by imitation learning and inverse reinforcement learning [33, 35], more comprehensive guidance should be more beneficial. As shown in our ablation study (Section 4.3.2), more comprehensive knowledge can improve the learning speed and final performance. However, because of the complexity of multi-agent systems, it is challenging to propose the overall demonstrations, while our approach allows the use of suboptimal human guidance. Even if a small set of human opinions is proposed, the Knowledge Integration module can allow agents to selectively adapt to the prior knowledge while maintaining self-learning ability. Based on the domain, if it is possible, then a larger set of human opinions can be more instructive, while a small set is still acceptable for our framework. >Question2:If a certain human knowledge concerns only one specific agent ... do the others get confused by that knowledge? Thank you for your comment. In this work, we focus on the homogeneous agents where agents share similar goals, observations, etc. The proposed human prior knowledge is shared among all agents. As agents can selectively adapt to the proposed human knowledge based on the Knowledge Integration module, it should not affect the learning and other agents should not get confused. >Question3:How do the agents decide if they accept a certain human suggestion? is it a joint decision?... Thank you for your comment. Agents use local observation to decide the utilization of the proposed human suggestions. This is not a joint decision, as each agent can use its Knowledge Integration to decide whether to accept a certain suggestion. >Question4:Is the knowledge controller deterministic... Thank you for your comment. The Knowledge Controller module is deterministic and is set up by humans before the training and then adjusted by agents through the reinforcement learning process. >Question5:...about the knowledge integration, is it also used only during training? or also during execution... Thank you for your comment. The Knowledge Integration module is applied during both training and execution. Each agent selectively adapts to the proposed knowledge based on its Knowledge Integration, which is shared among agents. We will emphasize this in our paper. >Question6:...could the authors elaborate on the motivations for hyper-network... Thank you for your comment. The hyper-network structure can offer more advantages than a concatenated neural network. Even though using a neural network is straightforward, it is hard to capture the dynamic knowledge requirements in different states. Moreover, using a feed-forward network to generate weights for another network, hyper-networks are more in line with the semantics of the Knowledge Integration module. It allows agents to selectively adapt to human guidance through local observation, which is hard to achieve in a single neural network. As hyper-networks have been proven to be more beneficial for knowledge refining in previous work [20], we apply such a structure in this work motivated by its advantages. We will emphasize the importance of hyper-networks in our paper. >Question7:...availability of prior human knowledge? or how hard it would be to create a knowledge controller... Thank you for your comment. In this work, we focus on the homogeneous agent setting. As the marine is the most common unit in SMAC, we deploy our approach in scenarios with marines involved. It is worth mentioning that our method is not limited to a single scenario. To reduce human burden, we allow the proposed human knowledge to be suboptimal and give users the freedom to design the transferred knowledge. As empirical results show (Section 4.1 and Figure 6), our Knowledge Integration module can greatly reduce the knowledge design requirements. --- Rebuttal 2: Title: Seek open dialogue Comment: We are grateful to your earlier constructive comments, and we hope our rebuttal has addressed the questions raised. If you need further clarifications, we are very happy to follow up to improve our final version to meet the high standard of this conference.
Summary: In this paper, the authors propose a novel method to tackle the multi-agent reinforcement learning problem. They do so by combining human abstract knowledge with hierarchical reinforcement learning. Specifically, human knowledge in the form of fuzzy logic rules is combined, at the top level, with each individual agent’s decisions, learned at the bottom level. Then, a graph-based group controller performs agent coordination to decide what the action at each step should be. The authors evaluate the proposed method on the StarCraft multi-agent Challenge, combined with three algorithms (IQL, QMIX, and Qatten). The results indicate that the proposed approach is capable of improving the overall performance. Strengths: The authors tackle the multi-agent reinforcement learning problem in a creative and novel way, combining human feedback with hierarchical techniques. Moreover, it is a very interesting idea to provide a general algorithm that can be coupled with any existing MARL technique. In this way, it is possible to take advantage of the benefits of previously proposed algorithms while also incorporating new ideas to improve overall performance. The paper is well-written, and all the high-level ideas behind the different components are well-explained. Weaknesses: The empirical evaluation is thorough, and it indicates that the claims are correct. However, only three samples are rarely enough to draw strong conclusions. I believe some crucial parts of the algorithm were not very detailed. In particular, what are the learning rules/loss functions for each of the components? That is, how is $\beta$ trained? How is the knowledge integration component trained? With the current description, I believe it would be very hard for someone to replicate the method. Technical Quality: 3 Clarity: 3 Questions for Authors: For the ablation studies of the impact of each component on the final performance, it would also be interesting to see how much of the agent’s Q is used vs how much of the human knowledge Q is used for the final Q prediction. That is, what is the magnitude of the impact that the human knowledge component has over the Q predictions? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of the work. No major concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. We are pleased that you found our method to be creative and novel and that you thought we had set a general approach for MARL algorithms. We address your concerns below and hope this will strengthen your support for the paper. > The empirical evaluation is thorough, and it indicates that the claims are correct. However, only three samples are rarely enough to draw strong conclusions. Thank you for this comment. In our experiments, our results are based on three separate runs, with the number of the test episodes during each run being 32 (shown in Table 2). Therefore, the number of samples to draw the conclusions is actually 96. The reason we deploy three different trials is to avoid random initialization deviation, which is a common strategy in previous works [Böhmer et al., 2020 (4 trials); Zhou et al., 2022 (4 trials); Zhong et al., 2024 (3 trials)]. As the standard deviation shown in Figure 5 is relatively small, it confirms the consistency of our results. We believe such results are sufficiently strong to draw a conclusion. We will clarify it in our paper to avoid this misunderstanding. We will revise the paper as follows: *Section 4.1: “Furthermore, all experimental results are derived across three separate trials with different random seeds, with 32 test episodes in each trial. The shaded region…”* * Böhmer, W., Kurin, V., & Whiteson, S. (2020, November). Deep coordination graphs. In International Conference on Machine Learning (pp. 980-991). PMLR. * Zhou, H., Lan, T., & Aggarwal, V. (2022). Pac: Assisted value factorization with counterfactual predictions in multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 35, 15757-15769. * Zhong, Y., Kuba, J. G., Feng, X., Hu, S., Ji, J., & Yang, Y. (2024). Heterogeneous-agent reinforcement learning. Journal of Machine Learning Research, 25(1-67), 1. > I believe some crucial parts of the algorithm were not very detailed. In particular, what are the learning rules/loss functions for each of the components? That is, how is $\beta$ trained? How is the knowledge integration component trained? With the current description, I believe it would be very hard for someone to replicate the method. Thank you for your comment. We will add some details to the paper to further clarify these questions (in Section 3.1, 3.2, 3.4, and Algorithm 1). In general, all the components (including $\beta$ and Knowledge Integration) follow the traditional Q learning process, and the overall loss function is proposed in Equation 15: $\mathcal{L} _{tot} = \mathbb{E} _{[ o_t^i,o _{t+1}^i,u_t^i,u _{t+1}^i ] _{i=1}^N }[Q _{tot}([ o_t^i,u_t^i ] _{i=1}^N)-y_t]^2$ 1. For $\beta$, they are initialized to 1 and backpropagated from $Q_F$ in the Knowledge Controller, which is similar to a neural network. 2. For the Knowledge Integration, the reward signal is backpropagated from the $Q_i$ to update the parameters of the integration $k_θ (\cdot)$ and then further update the parameters of the hyper-network $h_α (\cdot)$. We will revise the paper as follows: *Section 3.4: “This learning framework is end-to-end and can be combined with various MARL algorithms where the training of the knowledge controller, knowledge integration, and group controller module is based on the traditional Q learning process. To clarify this process…”* *Section 3.1: “These trainable weights are initialized at 1 to avoid disturbing the prior knowledge, and then adjusted through the reinforcement learning based on the reward signal. However, it…”* *Section 3.2: “…in knowledge adjustment. Similar to the knowledge controller, the knowledge integration is also trained based on the reinforcement learning process and this module is also shared among all agents. ”* > For the ablation studies of the impact of each component on the final performance, it would also be interesting to see how much of the agent’s Q is used vs how much of the human knowledge Q is used for the final Q prediction. That is, what is the magnitude of the impact that the human knowledge component has over the Q predictions? Thank you for noticing this. This is a very interesting question that is strongly related to designing beneficial human prior knowledge. To evaluate this, we can extract the weight of hyper-networks in the Knowledge Integration module to reveal the components of these two parts. As shown in our second ablation study (Figure 6(b)), the inappropriate knowledge will be automatically filtered out by agents, which may partially answer this question. Since our focus here is on connecting humans and agents in a hierarchical structure to boost the learning process, we do not consider this human knowledge representation aspect. We will address this question in our future work to guide users on how to design more advantageous human prior knowledge for better behavior guidance. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, and I am sorry for the late reply. This definitely clarified my questions/misunderstandings and I will update my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you very much for taking the time to respond to our rebuttal and for updating your score! We are glad that we were able to address your concerns. We will definitely incorporate the clarifications from the rebuttal to the revised version of the manuscript. --- Rebuttal 2: Title: Seek open dialogue Comment: We are grateful to your earlier constructive comments, and we hope our rebuttal has addressed the questions raised. If you need further clarifications, we are very happy to follow up to improve our final version to meet the high standard of this conference.
null
null
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their constructive comments. We are encouraged that all the reviewers think our paper is well organized and our approach to integrating human knowledge with multi-agent reinforcement learning is flexible and effective (Reviewer JxHm, kCt8, and 8Cur). It is a great honor that the reviewers find our idea interesting (Reviewer JxHm and kCt8). We are excited that you find our method to be novel (Reviewer JxHm) and technically sound (Reviewer kCt8 and 8Cur), with experiments that are thorough and comprehensive (Reviewer JxHm and 8Cur). We answer the comments from reviewers based on part 6 (weaknesses) and part 7 (questions), to address your concerns: 1. We further clarify the training process of our approach with more details added. 2. We further clarify our motivation for choosing fuzzy logic and using hyper-networks. 3. We further clarify the difference between our approach and previous works. 4. We further clarify the technical details about designing and leveraging fuzzy logic rules. 5. We explain the reason for our experimental setting. 6. We answer the specific comments from each reviewer. We will ensure that all the concerns are addressed in the revised paper. Our detailed response to the reviewers’ comments is shown below and hope this will strengthen your support for the paper.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Understanding Bias in Large-Scale Visual Datasets
Accept (poster)
Summary: This paper explores dataset biases and introduces a framework to identify the unique visual attributes that differentiate various datasets. The method involves applying a range of transformations to extract semantic, structural, boundary, color, and frequency information from the datasets, evaluating how each type of information contributes to their distinct characteristics. Strengths: 1. A diverse range of transformations was considered for this study. 2. Provides a comprehensive study to identify various factors that allow the identification of various biases that help to distinguish between datasets. 3. The paper is written well. Weaknesses: 1. Even though the paper identifies various factors that help in distinguishing datasets, it does not provide any information/ takeaways on how this information could be used to 'build more diverse and representative datasets in the future'.I consider the investigative analysis as a good and significant contribution but not commenting on how to utilise these results in a way that would be useful for the community is a significant shortcoming of this paper. The authors could have moved some of the transformations to the appendix and utilise the space for describing 'key takeaways and explaining how &what dataset curators should keep in mind while building datasets. 2. No information is provided on how the accuracy of pre-trained models affects the inferences or findings. 3. There is no discussion on how the format of data transformation affects the findings. For instance, converting the dataset into segmentation masks or object detection boundaries could capture more information in the segmentation results. The authors have not addressed the impact of these different transformation formats on their observed findings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. how does the accuracy of pre-trained models affect the inferences or findings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful review and the constructive comments. We would like to address your concerns below. >w1: Even though the paper identifies various factors that help in distinguishing datasets, it does not provide any information/ takeaways on how this information could be used to 'build more diverse and representative datasets in the future'. Thank you for your suggestion. We are happy to include more discussion about takeaways for dataset curation. Our study provides a general framework for identifying the concrete form of low-level and semantic bias in large-scale datasets. **The identified dataset bias can be used retrospectively to analyze the dataset curation procedure.** For example, on YCD: - YFCC contains predominantly outdoor scenes and human interactions. YFCC samples images solely from Flickr, a platform for user-uploaded photos. As Flickr users primarily share personal photos, landscapes, and social interactions, YFCC images predominantly feature natural scenes and human activities. Moreover, YFCC excludes photos labeled as “screenshots” [1], reinforcing the focus on human-related and natural imagery. - DataComp has the lowest number of unique objects per image. DataComp filters for images with high embedding similarity to ImageNet training examples [2], most of which feature object-centric images. While this empirically leads to higher zero-shot performance of downstream CLIP models, it biases the dataset toward images with lower per-image object diversity. - CC and DataComp are significantly brighter and contain more digital graphics and object showcase. These datasets are collected from the Internet and feature results from search engines [2, 3], which prioritize professionally created content like advertisements, infographics, and digital media. This results in a higher prevalence of digital graphics and brighter images, optimized for visual engagement and online presentation. >w2: No information is provided on how the accuracy of pre-trained models affects the inferences or findings. To address your concern, we reran some of the experiments in Sections 3 and 4 using different pre-trained models. **While the accuracy of the pre-trained models slightly affects dataset classification accuracy on the transformed datasets, our results and insights remain unchanged.** For Section 3, we use (1) a weaker VitDet-Base [4] model to extract bounding boxes, (2) a weaker SAM (ViT-B) [5] model to extract object contours, and (3) a stronger generative model SD-XL [6] to text-conditionally generate images. **Even when using different pre-trained models for transformations, the dataset classification model still achieves high accuracy**, suggesting that semantic differences and object shape variations are important contributors to the bias among YCD. ||VitDet-Base (new)|VitDet-Huge| |-|-|-| |#Parameters|145M|695M| |Box AP (LVIS) ↑|43.0%|51.5%| |Dataset Classification Acc.|60.8%|61.5%| ||SAM (ViT-Base) (new)|SAM (ViT-Large)| |-|-|-| |#Encoder Parameters|91M|308M| |Dataset Classification Acc.|65.9%|67.3%| (Due to resource constraints, the SAM experiments are on 300K training samples.) ||SD-2.1|SD-XL (new)| |-|-|-| |#Parameters|983M|3500M| |Generation Performance|low|high| |Dataset Classification Acc.|55.1%|57.8%| For Section 4, we use Claude 3.5-Sonnet [7] and Llama 3.1-8B [8] to derive patterns for each dataset as in Figure 16. **The dataset features summarized from these two LLMs closely resemble those of the original GPT-4o**. Specifically, for YFCC, the emphasis is on "people," "outdoor," and "nature," while for DataComp, the focus is on "white background" and "object focus." Claude 3.5-Sonnet: |YFCC|CC|DataComp| |-|-|-| |Outdoor Settings|Detailed Focus|Object Focus| |Group Dynamics|Artistic Representation|White Background| |Action and Movement|Clothing and Accessories|Branding| |Wildlife and Nature|Setting Variety|Detail Emphasis| |Visual Details|Text and Branding|Content Variety| Llama 3.1-8B: |YFCC|CC|DataComp| |-|-|-| |Natural Environments|Detailed Compositions|Focus on Color and Texture| |Everyday Life and Human Connection|Urban Landscapes and Settings|Minimalist Backgrounds| |Vibrant Colors and Dynamic Compositions|Stylized and Artistic Visuals|Object-Centric Compositions| |Focus on People and Relationships|Emphasis on Objects and Concepts|Attention to Detail| |Realistic and Detailed Scenes|Visual Arrangement and Composition|Simple yet Effective Composition| We will conduct more experiments with other segmentation models and object detection models and add the results and discussion to our draft. >w3: There is no discussion on how the format of data transformation affects the findings. Thank you for pointing this out. We acknowledge that the impact of the transformed images’ format could benefit from further elaboration and a more focused discussion in Section 3: - Semantic segmentation and object detection: Semantic segmentation provides fine-grained per-pixel semantic annotation, whereas object detection only captures coarse-grained spatial information through bounding boxes. The lack of detailed spatial information contributes to the lower dataset classification accuracy on bounding boxes (61.5%) compared to segmentation masks (67.5%). - Caption: Caption discards all spatial information, creating more discriminative semantic representations through natural language. This textual representation is less affected by the low-level and spatial cues in the images. - Edge detection and SAM contour: Object contour delineates fine-grained object shape and spatial information, while lacking the rich object semantic information present in semantic segmentation masks and bounding boxes. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the feedback. I think the authors misunderstood my initial review, I didn't mean how "the identified dataset bias can be used retrospectively to analyze the dataset curation procedure". I asked **how these findings can be used to 'build more diverse and representative datasets in the future'?** . --- Reply to Comment 1.1.1: Title: Reply to "build more diverse and representative datasets in the future" Comment: Thank you for your clarifications. Indeed, our previous response focused more on past efforts. **Here we provide several ways our framework can be used to help build more diverse and representative datasets in the future:** - When considering adding a new set of images (e.g., from another website) into a data collection, we can first treat new images as a separate dataset, use the transformation and classification framework to tell where and how much they differ from the existing image collections. This can help us decide whether to join them. If the goal is to enhance diversity of the dataset in a certain aspect (e.g., object types), we should only join them when they are sufficiently different. - Our language-based analysis provides textual descriptions for any new dataset. This text description directly gives the data curators the intuition on the gist of the dataset, especially when compared with reference datasets. It can help curators refine text terms for image search and tag filtering, in search engines or other platforms. - Our framework can identify bias and distribution imbalance in the image statistics (e.g., colors and object distribution). This can help guide adding/removing images with desired/undesired statistics for more balance. - The result of a dataset classifier trained on the transformed datasets can serve as a measure of the image's "typicality" within the dataset. For example, images in dataset A that are misclassified as images in dataset B are considered "not typical" within A, for that transformation/attribute. If needed, images with less typical attributes can be then oversampled in collecting data and/or training to enhance representation. **In addition to how to use the framework, we list a few direct lessons we learned from our analysis on YCD, that can also help build more diverse and representative datasets in the future:** - Filtering by embedding similarity to images of a reference dataset could inherit bias of that dataset. We observe that DataComp has the lowest number of unique objects per image (Fig 12). This potentially resulted from DataComp filtering for images with high embedding similarity to ImageNet training examples, most of which feature object-centric images [1]. To mitigate this, dataset curators should be mindful of the inherent biases in their reference datasets (e.g., ImageNet). Concretely, to enhance the object diversity within DataComp, one could consider using a reference dataset with a higher per-image object diversity (e.g., COCO) during filtering. - The source website's image collection mechanism can introduce bias. We also noted that YFCC is heavily skewed towards outdoor scenes and human interactions (Sec 4.2). This bias likely stems from its reliance on a single data source, Flickr, where user-generated content often focuses on personal photos, landscapes, and social interactions. Dataset curators should recognize that the collection methods (e.g., user uploads) of data sources (e.g., Flickr) can introduce biases into the resulting dataset (e.g., YFCC). - Web-scraped images would naturally contain more digital graphics. Since CC and DataComp are crawled from the Internet, they feature results from search engines [2, 3]. This prioritizes professionally created content like advertisements, infographics, and digital media. Curators should evaluate whether this composition aligns with their downstream goals. Thank you for your quick reply/clarification. We will add a discussion on this to the paper and see it as an important improvement. We also hear your suggestion on moving experiments to the appendix for the space. Given that NeurIPS policy allows an additional page for accepted publications, we would be able to add the discussion while maintaining the current experiments. We are happy to address any further concerns. References:\ [1] Barbu et al, ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models\ [2] Changpinyo et al, Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts\ [3] Gadre et al, DataComp: In search of the next generation of multimodal datasets --- Rebuttal 2: Title: Rebuttal Comment: We thank you again for your valuable feedback and we hope our response can address your questions. If you have any further questions or concerns, we are very happy to answer. [1] Thomee et al, YFCC100M: The New Data in Multimedia Research\ [2] Gadre et al, DataComp: In search of the next generation of multimodal datasets\ [3] Changpinyo et al, Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts\ [4] Li et al, Exploring Plain Vision Transformer Backbones for Object Detection\ [5] Kirillov et al, Segment Anything\ [6] Podell et al, SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis\ [7] Anthropic, Claude 3.5-Sonnet\ [8] Meta, Llama 3.1 --- Rebuttal 3: Title: Reply Comment: My concerns are mostly addressed, it would be interesting to see how these suggestions could be incorporated to build a new dataset curation method in future work. I increase my score to 6 after reading the rebuttal. --- Rebuttal 4: Title: Thank you Comment: Many thanks for your valuable feedback and discussion!
Summary: This work theorizes and investigates various concrete forms of inter-dataset biases, namely those among YFCC, CC, and DataComp (YCD). The authors analyze such inter-dataset biases in pure visual attributes as well as in semantic attributes using LLM-generated descriptions. Strengths: The paper is a fluent read composed of a clear progressive structure. The authors have shown meticulous effort in constructing the investigations as comprehensive as possible. Weaknesses: This work extends the scope of [1] and aims to provide specific insights for constructing less biased data collection in future. However, I feel there are several missing evidence that undermine the contributions of this work in its current state: 1. **The authors have only examined the factors of biases individually.** However, I notice that none of the single attributes listed contributes to a higher prediction accuracy over the baseline of using original visual features. Could it be possible that a combination of multiple visual/semantic attributes jointly contributes to the large overall bias? This needs to be verified. 2. **The authors have only investigated one classification model.** According to [1], the inter-dataset overall bias (high prediction accuracy) is observed with multiple classification models as well. So if we use the smaller ResNet-50 model, can we still observe the high accuracy over each individual structural or semantic attribute? I haven't seen the authors ruling out the confounding factor of the classification model size. 3. **The proposed framework consists of indicative bias metrics only relative to YCD.** However, YCD all have their own intra-dataset biases. So when we are attempting to create a debiased dataset in future, *how can we make sure our newly collected data are truly diversified and fair?* Can the authors provide similar evidence over 'Memorization vs. Generalization' as in [1] Sec. 4.2., and verify individual inter-dataset bias attributes with a truly unbiased pseudo-dataset? [1] A Decade’s Battle on Dataset Bias: Are We There Yet? Liu et al. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: Please find my major questions in the Weakness section. Another slight concern is about the presentation. Since an ideal debiased dataset should enjoy an equal likelihood to be predicted as one of the anchor datasets (e.g. YCD), maybe the authors should explicitly clarify this important task setting as early as possible in the paper. (Aug 9th): Updating my overall ratings thanks to the quality response. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and the insightful questions. We would like to address your concerns below. >w1: Could it be possible that a combination of multiple visual/semantic attributes jointly contributes to the large overall bias? This needs to be verified. Thank you for this suggestion. To verify this, we consider different combinations of 3 transformations with lower dataset classification accuracies: object detection (61.5%), random-order pixel shuffle (52.2%), and SAM contour (73.1%). For each pair of transformations, we combine them by concatenating the resulting transformed images on the channel dimension. ||Transformation 1 Acc.|Transformation 2 Acc.|Combined Acc.| |-|-|-|-| |Obj. Det. + Pixel shuffle|61.5%|52.2%|68.9%| |Pixel Shuffle + SAM contour|52.2%|73.1%|74.2%| |Obj. Det. + SAM contour|61.5%|73.1%|73.1%| We observed that **combining semantic and structural attributes resulted in higher dataset classification accuracy compared to using a single attribute alone.** >w2: So if we use the smaller ResNet-50 model, can we still observe the high accuracy over each individual structural or semantic attribute? We are happy to provide more results on this. We’ve expanded our main experiments to employ smaller vision architectures: ResNet-50 (23M parameters) [1] and ConvNeXT-femto (5M parameters) [2]: ||baseline|Obj. Det.|SAM Contour|Patch Shuf.|High-pass| |-|-|-|-|-|-| |ConvNeXt-Tiny (28M)|81.7%|61.5%|73.1%|80.3%|79.4%| |ResNet-50 (25M)|82.0%|61.6%|73.6%|80.2%|81.9%| |ConvNeXt-Femto (5M)|79.9%|59.8%|70.8%|78.7%|76.7%| Additionally, we use two weaker sentence embedding models (MPNet-Base [3] and Sentence-BERT-Base [4]) for caption classification: ||Short Cap.|Long Cap.| |-|-|-| |Sentence-T5|63.7%|66.0%| |MPNet|63.2%|64.5%| |Sentence-BERT|63.2%|65.3%| **Across model sizes and architectures, the classification accuracy over each individual structural or semantic attribute remains high.** We’ve added this result to our Appendix. >w3: So when we are attempting to create a debiased dataset in future, how can we make sure our newly collected data are truly diversified and fair? Can the authors provide similar evidence over 'Memorization vs. Generalization' as in [1] Sec. 4.2., and verify individual inter-dataset bias attributes with a truly unbiased pseudo-dataset? We agree that our framework can only identify bias relative to the given combination of datasets (e.g., YCD) rather than their bias to the universal vision world. **Nevertheless, identifying inter-dataset bias can still be helpful in creating a debiased dataset, which is 'truly diversified and fair', in the future.** For example, in the paper, we used LLM to summarize specific characteristics of each dataset. These characteristics can be used retrospectively to analyze the dataset curation procedure. On YCD: - YFCC contains predominantly outdoor scenes and human interactions. YFCC samples images solely from Flickr, a platform for user-uploaded photos. As Flickr users primarily share personal photos, landscapes, and social interactions, YFCC images predominantly feature natural scenes and human activities. Moreover, YFCC excludes photos labeled as “screenshots” [5], reinforcing the focus on human-related and natural imagery. - DataComp has the lowest number of unique objects per image. DataComp filters for images with high embedding similarity to ImageNet training examples [6], most of which feature object-centric images. While this empirically leads to higher zero-shot performance of downstream CLIP models, it biases the dataset toward images with lower per-image object diversity. - CC and DataComp are significantly brighter and contain more digital graphics and object showcase. These datasets are collected from the Internet and feature results from search engines [6, 7], which prioritize professionally created content like advertisements, infographics, and digital media. This results in a higher prevalence of digital graphics and brighter images, optimized for visual engagement and online presentation. **We observe similar trends over 'Memorization vs. Generalization' as in Sec. 4.2 of [8] on our transformed datasets.** Specifically, we create 3 pseudo-datasets, all of which are sampled without replacement from the same transformed YFCC dataset (e.g., images of Canny-detected edges or images of object bounding boxes). We perform dataset classification on these pseudo-datasets. The tables below present the pseudo-dataset classification _training_ accuracy on YFCC bounding boxes and YFCC Canny-edges. As the number of training images from each dataset increases, the task becomes harder. _All of the models trained on this pseudo-dataset classification have a chance-level accuracy of 33% in the validation set_. This is because they merely memorize the dataset origin of each training image rather than learning any generalizable patterns. |#Training Images per YFCC Bounding Box Pseudo-Dataset|without augmentation|with augmentation| |-|-|-| |100|100%|100%| |1K|100%|100%| |10K|100%|fail| |100K|fail|fail| |#Training Images per YFCC Canny edge Pseudo-Dataset|without augmentation|with augmentation| |-|-|-| |100|100%|100%| |1K|100%|100%| |10K|100%|fail| |100K|fail|fail| >q1: Another slight concern is about the presentation. Since an ideal debiased dataset should enjoy an equal likelihood to be predicted as one of the anchor datasets (e.g. YCD), maybe the authors should explicitly clarify this important task setting as early as possible in the paper. We appreciate your suggestion. We’ve added a clarification that “An ideal unbiased dataset should have a chance-level probability of being predicted as any of the anchor datasets.” in our introduction. --- Rebuttal 2: Title: Rebuttal Comment: We thank you again for your valuable feedback and we hope our response can address your questions. If you have any further questions or concerns, we are very happy to answer. [1] He et al, Deep Residual Learning for Image Recognition\ [2] Liu et al, A ConvNet for the 2020s\ [3] Song et al, MPNet: Masked and Permuted Pre-training for Language Understanding\ [4] Reimers & Gurevych, Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks\ [5] Thomee et al, YFCC100M: The New Data in Multimedia Research\ [6] Gadre et al, DataComp: In search of the next generation of multimodal datasets\ [7] Changpinyo et al, Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts\ [8] Liu et al, A Decade’s Battle on Dataset Bias: Are We There Yet? --- Rebuttal 3: Title: This is some impressive response. Comment: I greatly appreciate the authors effort in putting up the response. In fact, it's beyond my expectation. I find my 3 major concerns have been adequately addressed thanks to the additional information - I trust the authors will incorporate them into the final version. In retrospect, I believe what hindered my initial impression was the title of the paper, which led me to believe this was some incremental work over investigating *'intra-dataset biases'*, such as imbalanced distribution of object classes or visual features and etc. I highly recommend, if applicable, that the authors explicitly open up with **'INTER-dataset biases'** in the title or somewhere early-on in the revised paper. Overall, I am willing to update my rating owning to this thorough rebuttal. --- Rebuttal 4: Title: Thank you Comment: Thank you again for your helpful comments and for reviewing our response! We are glad to hear that the concerns have been addressed. We will incorporate the additional results discussed in our rebuttal into the next version of the paper and emphasize that we're studying the inter-dataset bias in our introduction to better reflect the focus of our work.
Summary: This paper studies the problem of dataset bias prevalent in current multimodal datasets. It revisits the dataset classification experiment from Torralba et al, recently studied again by Liu and He, and deconstructs their findings to understand what aspects of datasets (structural, semantic, color, object-level, caption-level etc) contribute most to the bias prevalent in visual datasets. With abundant empirical evidence, the paper provides an interesting and important analysis on the fundamental gaps in our understanding of large-scale image-text datasets. Strengths: - The paper is very well written and presented, the research question is concisely stated, and all the experimental results clearly tie back with the main research question. - The research question studied in itself is on that is pivotal in the current age of foundation models. Understanding datasets is key for understand models, and this paper takes a few steps to further our understanding of these giant datasets. Weaknesses: I have a few technical concerns that might undermine the significance of this paper's findings, I note them down here. Further, I also have a few suggested additional experiments that might help boost the significance of this paper's results. - All the results rely on a three class classification problem at heart. How should one interpret these results under the argument that neural networks in general can learn noisy labels, and can fit arbitrary distributions [1]? How does the baseline 81.7% performance change if you assigned random labels to the original set of 3M images? This is a very important ablation to verify the significance of all the findings. - One issue with the semantic segmentation classification experiment (sec 3.2) is that it removes the pixel information from the task, and converts it into a potentially much simpler task where the input features are only of size 150 (if I understand the experiment correctly). Here is a simple suggestion for removing that confounder. What happens if you train directly on the semantic masks itself? Of-course there would be some bias coming in from the colour applied to the segmentation masks for each of the different object classes, but this could be mitigated by training two different models using two different mask colour palettes, and looking at the variance. If the accuracy of this particular task still remains similarly high, that would further boost the significance of the results. - For the caption classification task (sec 3.2), could you also use another sentence embedding model that is potentially weaker in its initial MTEB [2] performance, for reproducing the results of that experiment. Again, this would help boost the significance of the findings. - Would the main results be consistent across two completely different subsets of the YCD classificaiton task? Could you sample an entirely different set of 3M images and 30K validation images, rerun the experiments, and cross-validate on the two different validation sets? - For figure 11, could you redo that analysis on another subset of 3M samples as well? I am skeptical that this result would hold exactly true for a given dataset, especially given this prior result that most datasets curated from the web have similar concept distributions [3]. Can the authors provide a reconciliation between this prior result and their results? - Another interesting analysis that can be done would be to use the VisDiff tool [4] to perform an analysis on the visual distribution differences between the pairs of YCD datasets. It might provide an interesting sub-analysis and would help further verify the results in sec 4.2 and fig. 16 in the paper. - The paper has a lot of very interesting insights about bias across visual datasets. However, there are no concrete suggestions/discussion on how one could go about mitigating these biases. A discussion regarding the source of these datasets, different data curation / filtering mechanisms, and how they might potentially impact these biases, would make for a great and required addition to this paper. [1] Zhang et al, Understanding deep learning requires rethinking generalization [2] Muenighoff et al, MTEB: Massive Text Embedding Benchmark [3] Udandarao et al, No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance [4] Dunlap et al, Describing Differences in Image Sets with Natural Language Technical Quality: 4 Clarity: 4 Questions for Authors: All my questions are also mentioned in the weaknesses section above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes the authors have mentioned that their main weakness is with respect to using pretrained models which themselves might be biased. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive comment. We are encouraged that you find our paper provides helpful insights about bias in large-scale datasets. We address your concerns below: >w1: How should one interpret these results under the argument that neural networks in general can learn noisy labels, and can fit arbitrary distributions? How does the baseline 81.7% performance change if you assign random labels to the original set of 3M images? All the reported accuracies in our work are evaluated on 30k validation samples. On the other hand, **the findings in [1] about high model accuracy on random labels through memorization are only on the _training_ set.** When we assign random labels to our data, the model only achieves a chance-level accuracy of 33.3% on the validation set. This is expected since models trained on random labels cannot learn dataset-specific patterns that is generalizable to validation sets. >w2: What happens if you train directly on the semantic masks itself? **We would like to clarify that we converted each pixel, rather than the full image, to a 150-channel binary array** (Line 79). The transformed image maintains the original width and height. Additionally, training on the _RGB_ segmentation masks with 2 different color palettes results in stable accuracies of 70.1% and 70.0%. >w3: For the caption classification task (sec 3.2), could you also use another sentence embedding model that is potentially weaker in its initial MTEB performance? We used weaker models MPNet-Base [2] and Sentence-BERT-Base [3] for caption classification: ||MPNet|Sentence-BERT|Sentence-T5| |-|-|-|-| |Short Cap.|63.2%|63.2%|63.7%| |Long Cap.|64.5%|65.3%|66.0%| **The accuracy is consistent on all 3 sentence embedding models.** >w4: Could you sample an entirely different set of 3M images and 30K validation images, rerun the experiments, and cross-validate on the two different validation sets? We reran the experiments on new samples (3M training, 30K validation). Due to time constraints, caption and segmentation experiments are on 300k new training samples. **Accuracies vary minimally across two samples.** ||baseline|Short Cap. (300k)|Long Cap. (300k)|Sem. Seg. (300k)|Obj. Det.|Patch Shuf.|Canny|High-pass| |-|-|-|-|-|-|-|-|-| |train (ori.)>val (ori.)|81.7%|61.5%|63.1%|54.6%|61.5%|80.3%|70.84%|79.4%| |train (ori.)>val (new)|82.0%|61.8%|63.5%|55.3%|62.2%|80.3%|70.8%|79.3%| |train (new)>val (ori.)|82.1%|61.3%|63.1%|54.6%|61.0%|79.8%|70.4%|80.1%| |train (new)>val (new)|82.1%|61.6%|63.6%|55.1%|62.2%|79.9%|71.12%|80.3%| >w5: For Fig 11, could you redo that analysis on another subset of 3M samples as well? … Can the authors provide a reconciliation between this prior result and their results? We observe **the number of overlapped top 10 object categories (as in Fig 11) is high between the original 3M and new 3M images**: |#Overlaps|YFCC|CC|DataComp| |-|-|-|-| |ImageNet|9|8|8| |LVIS|10|10|10| |ADE20k|10|9|9| This does not contradict [4]. Fig 11 shows the top 10 object classes with the highest proportion of their images from a particular dataset, rather than the most frequent object classes within each dataset. Despite certain object categories being overrepresented in certain datasets, the overall object distribution vectors [4] are highly correlated: |ADE20k Corr |YFCC|CC|DataComp| |-|-|-|-| |YFCC|1|0.92 |0.82| |CC||1|0.97| |DataComp|||1| |LVIS Corr|YFCC|CC|DataComp| |-|-|-|-| |YFCC|1|0.90|0.81| |CC||1|0.93| |DataComp|||1| >w6: Another interesting analysis ... would be to use the VisDiff tool to perform an analysis on the visual distribution differences between the pairs of YCD datasets. Thank you for recommending VisDiff [5]. We generated caption-based set differences for each dataset pair (**Figure 2 in the attached pdf**): ||Y|C|D| |-|-|-|-| |(compared to) Y||unique home decor|Product Images| |(compared to) C|outdoor sports activities||furniture and appliances| |(compared to) D|People involved in activities|people at gatherings|| **Results from VisDiff highly overlap with LDA- and LLM-extracted dataset characteristics**, emphasizing “people” and “outdoor activities” for YFCC and “product” for DataComp. We’ve added this to our Appendix. >w7: There are no concrete suggestions/discussion on how one could go about mitigating these biases. Our study provides a general framework for identifying the concrete form of low-level and semantic bias in large-scale datasets. **The identified dataset bias can be used retrospectively to analyze the dataset curation procedure.** For example, on YCD: - YFCC contains predominantly outdoor scenes and human interactions. YFCC samples images solely from Flickr, a platform for user-uploaded photos. As Flickr users primarily share personal photos, landscapes, and social interactions, YFCC images predominantly feature natural scenes and human activities. Moreover, YFCC excludes photos labeled as “screenshots” [6], reinforcing the focus on human-related and natural imagery. - DataComp has the lowest number of unique objects per image. DataComp filters for images with high embedding similarity to ImageNet training examples [7], most of which feature object-centric images. This biases the dataset toward images with lower per-image object diversity. - CC and DataComp are significantly brighter and contain more digital graphics and object showcase. These datasets are collected from the Internet and feature results from search engines, which prioritize professionally created content like advertisements, infographics, and digital media. This results in a higher prevalence of digital graphics and brighter images, optimized for visual engagement and online presentation. We have consolidated the above as a discussion paragraph in our draft. --- Rebuttal 2: Title: Rebuttal Comment: We thank you again for your valuable feedback and we hope our response can address your questions. If you have any further questions or concerns, we are very happy to answer. [1] Zhang et al, Understanding deep learning requires rethinking generalization\ [2] Song et al, MPNet: Masked and Permuted Pre-training for Language Understanding\ [3] Reimers & Gurevych, Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks\ [4] Udandarao et al, No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance \ [5] Dunlap et al, Describing Differences in Image Sets with Natural Language\ [6] Thomee et al, YFCC100M: The New Data in Multimedia Research\ [7] Gadre et al, DataComp: In search of the next generation of multimodal datasets\ [8] Changpinyo et al, Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts --- Rebuttal 3: Title: Response to rebuttal Comment: I thank the authors for their added experimental results and comments. (1) The random labels experiment sufficiently answers my question regarding the specificity of these dataset-level biases. (2) Thanks for running the sentence embeddings and segmentation mask experiments, those high stable accuracies clarify my concern. (3) The cross-validation results are very strong. I would encourage the authors to somehow incorporate these results into the main paper perhaps as a mean +/- std or something similar, I think this would significantly increase the confidence in the paper's findings. (4) Thanks for the clarifying discussion b/w your work and the No Zero-shot paper. You being able to reproduce the findings of that paper are very interesting. (5) Great that the VisDiff results corroborate your analysis. (6) Your response to w7 is very insightful, thank you for adding this. I would encourage to expand on this discussion and if possible add some sample images from each of the datasets to that section in the appendix. Would be super useful! Overall, the author rebuttal has significantly increased my confidence in the paper and the main results and insights provided in the paper are very useful. I am updating my score to an 8. --- Rebuttal 4: Title: Thank you Comment: Thank you for your time and feedback on the paper! We’re glad that your concerns have been resolved. We will make sure to include the additional results from the rebuttal in the next version of our paper. The review has been very helpful in enhancing our paper.
null
null
Rebuttal 1: Rebuttal: Dear Reviewers: Thanks for all your constructive comments! We hope our response and additional results can address your concerns. Please let us know if you have further questions or comments and we would be more than happy to discuss. For reviewer yVzd, please note the PDF file attached contains figures in response to weakness 5 and weakness 6. Best,\ Authors Pdf: /pdf/b6fca804b526fb37a80b82acbafd2315c8f78c74.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Neural Residual Diffusion Models for Deep Scalable Vision Generation
Accept (poster)
Summary: The paper describes the gradual change in the state z_t of a diffusion-based neural network as an ODE, and links the flow-architecture and UNet-architecture to the parameters of the ODE. Then, it describes a way to better train this network based on the dynamics of this ODE, and proposes an alternative loss function. It then performs experiments on image and video generation with this loss function, and several variants of the architecture. Strengths: The paper does a great job of explaining how the dynamics of z_t through the network can be unified across the UNet and Flow architectures. It then derives a loss function based on the ODE of those dynamics. The derivation itself is involved and highly useful. The paper also performs several experiments on multiple modalities to validate the proposed hypothesis of the new loss function / perspective. In particular, there are experiments in both image and video generation, each being quite strenuous in terms of academic effort and computational requirements. --- Update: I increased my score after the authors' response. Weaknesses: Some parts of the paper are not clearly written. In Figure 2, the caption does not sufficiently explain the figure. In particular, (a) and (c) look the same, the difference needs to be elaborated on (corresponding to the main text). What dotted lines mean, what coloured lines mean, etc. has to be explained, in Figures 1 and 2. Sections 2.2 and 2.3 need to be written more clearly. In particular, the dimensions of the involved variables need to be mentioned. It is unclear whether the network is used to completely or partially denoise the input. Most image-based experiments seem to be finetuned experiments, which are dependent on the biases of the original network used. Section 3.5 contains experiments on models trained from scratch, which is great. It is unclear though why Variant 0 performs so much better than Variant 4, since they only have a scalar "beta" difference. Technical Quality: 3 Clarity: 2 Questions for Authors: Is the final loss function as stated in lines 162-163? In which case, does the loss apply to each layer separately? Do you sum the losses per layer? How was the value of gamma as 0.35 decided? It is unclear whether the network is used to completely or partially denoise the input. It seems as though it is assumed that the network is completely denoising from noise to data (instead of transforming partially noised to slightly less noised). Because the dynamics of F mention L -> infinite. If, however, F is assumed to be the full sampling procedure of multiple denoising steps, then Figure 2 doesn't make sense. In brief, it is unclear how the full dynamics are connected with each forward pass of the network. This is especially confusing from the perspective of the UNet. The difference between Variant 0 and Variant 4 seems to be just a scalar beta value. Why does this 1 parameter affect performance so much? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper states where the model works better than prior works and where it doesn't. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Explanation of some details in Fig.1 & 2**\ **R1:** Thanks. We will explain these details below: 1) In Fig.2, (a), (b) and (c) respectively represent **Flow-shaped Networks** (e.g., Transformer), **U-shaped Networks** (e.g., U-Net) and **Unified Stacking Network (our Neural-RDM)**. As depicted in line 98-118, (a) and (b) are both special cases of (c), i.e., $f_{\theta_i}$ in (a) corresponds to $F_{\theta_i}$in (c), whereas $f_{\theta_i^{(l)}}$ and $f_{\theta_i^{(r)}}$ in (b) jointly correspond to $F_{\theta_i}$ in (c). Note that (c) covers both data streams mentioned earlier with a unified dynamics formula, which is a significant difference from (a). 2) The dotted lines in Fig.1 & 2 indicate the residual connections. 3) The coloured arrows (i.e., blue arrows) denote the data streams of the stacked networks. We will carefully supplement these details in the final version. **Q2: Dimensions of the variables**\ **R2:** We first want to clarify that this paper is actually a foundational work applicable to a variety of DMs' deep-scalable training, e.g., image, video and others, thus we do not specifically emphasize the dimensions of the variables, but they are actually easy to specify, e.g., $\boldsymbol{z}\in\mathbb{R}^{B \times F \times C \times H \times W}$($B$ is bach size, $F$ is video frames, $C$ is channels, $H$ and $W$ denote height and width respectively), $\alpha\in\mathbb{R}^{L \times D}$ and $\beta\in\mathbb{R}^{L \times D}$ ($L$ is the number of network layers, $D$ is dimension of the hidden layer) in video tasks. Note that the image tasks are slightly different from the videos in frame dimensions. If necessary, we'll add them in the final version. **Q3: Whether to denoise partially or completely**\ **R3:** We suspect this may be a serious misunderstanding, and we want to clarify that although the Neural-RDM network ($F_{\theta}$) is composed of a series of mrs-unit $F_{\theta_i}$, but it is still performed for single-step denoising, i.e., $\boldsymbol{z}(t)\rightarrow\boldsymbol{z}(t-1)$ for time t, as illustrated in Fig.2(d). Note that this point is completely consistent with the previous baseline LDM, DiT and Latte. Moreover, we want to further highlight: we introduce continuous-time **_Residual-Sensitivity ODE_** into the neural networks of depth L, **NOT** into the diffusion process (e.g., $\boldsymbol{z}(0)\rightarrow\boldsymbol{z}(T)$). Moreover, for U-shaped stacking networks, we want to specify $\mathcal{F} _ {\theta _ {i}}$ stands for both $f_{\theta_i^{(l)}}$ and $f_{\theta_i^{(r)}}$ in the i-th residual unit, i.e., the skip-connection $\boldsymbol{z} _ {i+1}\rightarrow\boldsymbol{z}_{2L-2-i}$ in UNet or U-ViT. The goal is to build a unified **_Denoising-Dynamics ODE_** under consistent dynamics formula, so as to facilitate the improvement and optimization of all stacked networks (including U-shaped and Flow-shaped) for better deep scalable training. We will make these points clearer in the final revision and hope the above clarifications can help understand our work better. **Q4: Explanation for the difference in performance between variant 0 and variant 4**\ **R4:** We understand the reviewer's concern, but that's actually unnecessary. Firstly, we want to clarify that the $\hat{\beta} _ {t,\phi}$ is not a simple scalar, but rather a series of learnable time-dependent parameters that are used to fit the mean-related scheduler (i.e., $\mu(z_t,t)$) in our **_Denoising-Dynamics ODE_**, thus the adjustment of $\hat{\beta} _ {t,\phi}$ directly impacts the denoising performance (i.e. the FVD score). More broadly, the $\hat{\alpha} _ {t,\phi}$ and $\hat{\beta} _ {t,\phi}$ respectively parameterized the variance- and mean-related scheduler $-\frac{1}{2}\sigma(t)^2$ and $\mu(\boldsymbol{z} _ t,t)$, which replaces human design in previous works as, $\frac{d\boldsymbol{z} _ t}{dt} = \mu(\boldsymbol{z} _ t,t)-\frac{1}{2}\sigma(t)^2\cdot\Big[\nabla _ {z} \log p_t(\boldsymbol{z}_t)\Big] = \hat{\alpha} _ {t,\phi}\cdot\mathcal{F} _ \theta(\boldsymbol{z} _ t,t) + \hat{\beta} _ {t,\phi}.$ Whereas variant 4 contains only a scaling tensor $\hat{\alpha}_{t,\phi}$, making it difficult to fit the **_Denoising-Dynamics ODE_**, thus resulting in worse results than variant 0. **Q5: Details of the Loss Function**\ **R5:** Yes, the final loss function with sensitivity control is as stated in lines 162-163, which can be expressed as follows, $\mathcal{L} _ s = ||\mathcal{F} _ \theta(\boldsymbol{z} _ {t}, t) - \nabla_z \log p _ t(\boldsymbol{z} _ t)||^2_2 + \gamma \cdot \sum _ {L}|| \hat{\alpha}_ {t,\phi} \cdot\frac{\partial f _ \theta(\hat{\boldsymbol{z}} _ {t},t)}{\partial\hat{\boldsymbol{z}} _ {t}}-\hat{\beta}_{t,\phi}||^2_2.$ The total loss is obtained by summing the loss per layer, with sensitivity of each mrs-unit layer $F_ {\theta_i}$ adaptively modulated and updated by $\hat{\alpha} _ {t,\phi}$ and $\hat{\beta} _ {t,\phi}$. **Q6: Determination of hyper-parameter $\gamma$**\ **R6:** The value of the $\gamma$ is determined via hyper-parameter search experiments. As shown in the table below, we conducted extensive experiments with different values of $\gamma$ on C2I & T2I tasks, benchmarking on ImageNet and JourneyDB datasets respectively. Based on the observation from experimental results, we select $\gamma=0.35$ as the optimal value. | $\gamma$ | | 0.00 | 0.20 | 0.30 | 0.35* | 0.40 | 0.60 | 0.80 | 1.00 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | C2I (IS$\uparrow$) | Neural-RDM-U | 79.03 | 140.24 | 241.61 | **256.55** | 228.00 | 173.52 | 152.41 | 139.92 | | | Neural-RDM-F | 224.75 | 247.07 | 273.09 | **295.32** | 266.91 | 263.06 | 247.71 | 235.79 | | T2I (IS$\uparrow$) | Neural-RDM-U | 64.50 | 182.52 | 269.98 | **235.35** | 208.04 | 192.67 | 189.27 | 146.35 | | | Neural-RDM-F | 195.03 | 202.98 | 204.82 | **206.32** | 193.57 | 182.01 | 170.76 | 162.19 | --- Rebuttal Comment 1.1: Comment: Dear Reviewer 6rnE, We would like to extend our appreciation for your time and valuable comments. We are eagerly looking forward to receiving your valuable feedback and comments on the points we addressed in the rebuttal. Ensuring that the rebuttal aligns with your suggestions is of utmost importance. Thank you for your dedication to the review process. Sincerely, Authors --- Rebuttal Comment 1.2: Title: Thanks! Comment: Thank you for your point-by-point response, and the clarifications. 1. Those details are very helpful, please add them to the figure in the final version! 2. While the dimensions are indeed easy to specify, it is important to specify nonetheless for completion. It helps quickly clarify several confusion, such as which dimensions scalar variables are being added to, etc. Please add them to the final version. 3. There was indeed some confusion about this because of the reasons previously stated. Your response is quite helpful, please add it to the final version. Since the same variable names such as F_theta are being specified for different architectures, it is critical to clarify such confusions in the main text. 4. Good to know, thanks for the clarification. 5. Thanks for the clarification. 6. Good to know that \gamma was calculated empirically, would be helpful to add this maybe to the supplementary. In light of the authors' response, I increase my score. --- Reply to Comment 1.2.1: Comment: We greatly appreciate you sparing time to read our rebuttal and give us an equally careful point-by-point response with an improved rating, and we will supplement these details to the final manuscript.
Summary: This work presents a framework for visual generative diffusion models, aiming at addressing the challenges associated with deep stacked networks in terms of numerical propagation errors and scalability. Strengths: 1. Clear motivation 2. The authors provide a theoretical analysis for their approach, including the use of continuous-time neural ODEs to demonstrate the relationship between residual-style network structures and generative denoising abilities. 3. Sufficient experiments: The paper is supported by extensive experimental evidence, which show that the proposed models achieve state-of-the-art performance on various generative tasks, including image and video generation. Weaknesses: 1. While the introduction of gated residual parameters is innovative, it may also add complexity to the model, which could potentially make it harder to train or less intuitive for practitioners to understand. 2. How about the computation requirements for this scalability model? 3. Model parameters , GFLOPs should be mentioned. Technical Quality: 3 Clarity: 3 Questions for Authors: Listed in weakness. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** **Complexity of the model after introducing gated residual parameters**\ **R1:** We understand the reviewer's concern, but that's actually unnecessary. Firstly, we want to highlight the introduction of these gated residual parameters is a _**simple yet meaningful**_ change to the common architecture of deep generative networks (e.g., _**U-Net**_ or _**Transformer**_), which only requires adding learnable scaling and bias tensors in each vanilla residual connection layer to fit the mean-variance scheduling dynamics of the proposed _**Denoising-Dynamics ODE**_. Secondly, we designed an error correction loss and integrated it into the conventional score matching loss at a smaller proportion (i.e., $\gamma=0.35$), balancing the model's denoising ability and error correction performance while ensuring that the model is easy to train. **Q2: Computation requirements**\ **R2:** Neural-RDM focuses on helping existing or brand-new generative backbone networks support large-scale and deep-scalable training, which supports both full parameter training and partial parameter fine-tuning. In our experiments, _**2/4\*A100 80G GPUs**_ are respectively adopted to fine-tune the learnable gated parameters for image tasks (C2I & T2I) and video tasks (N2V & C2V) and _**8\*A100 80G GPUs**_ for the full parameter training from scratch. **Q3: Model parameters and GFLOPs**\ **R3:** Thanks for this helpful suggestion, we have added the details of model parameters and GFLOPs in the table below, which will be included in the final version. | Task | Method | Parameters (M) | FLOPs (G) | | --- | --- | --- | --- | | Image Generation | Baseline (LDM-4) | 264.00 | 2,759.97 | | | Neural-RDM-U (Ours) | 293.11 (**+11.02%**) | 2,931.91 (**+6.63%**) | | | Baseline (Latte-XL/2) | 673.68 | 428.56 | | | Neural-RDM-F (Ours) | 748.06 (**+11.04%**) | 457.26 (**+6.67%**) | | Video Generation | Baseline (Latte-XL/2) | 673.68 | 5,572.69 | | | Neural-RDM (Ours) | 748.06 (**+11.04%**) | 5,603.80 (**+0.55%**) | --- Rebuttal Comment 1.1: Comment: Thanks for your reply. You have solved my concerns. It seems that the computational burden is not heavy. I will keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for sparing time and efforts reading our rebuttal and giving a response, thank you.
Summary: This paper addresses the problem of the numerical propagation errors of progressively deeper stacked neural networks for generative models. It proposes Neural Residual Diffusion Models (Neural-RDM), which introduced a series of learnable gated residual parameters to the common architectures of deep generative networks. Evaluation of the proposed method on image generation and video generation benchmarks demonstrate its superior performance over state-of-the-art methods. Strengths: - The proposed method is simple and well-motivated. - The experiments on generation benchmarks are comprehensive, and the great results demonstrate the efficacy of the proposed method. - The paper is clearly written. Weaknesses: - It is not clear what the specific definition of the "Scalability" columns in Table 1 and Table 2 is. Could you clarify the metrics represented by the cross, tick, and double tick symbols? - Since the paper is targeting on addressing the scalability of deep generative models, it would be better if the scalability experiments could be enhanced. Figure 7 shows that the performance of the proposed method can be improved as the depth of the network increases. However, how does its scalability compare with baseline architectures? Also, does it show similar scalability on other tasks and datasets? - As the authors have discussed in the limitation section, the numerical propagation errors caused by stacking network layers cannot be completely avoided. Therefore, it would be more accurate not to claim "enabling the networks to be *infinitely* stacked". Technical Quality: 3 Clarity: 3 Questions for Authors: See Weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed clearly about the potential limitations and social impacts in A.4 and A.5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Definitions and clarifications on scalability metrics** \ **R1:** Thanks for the helpful comment. We first want to clarify that the "Scalability" columns in Tab.1 & Tab.2 indicates the scaling capability (i.e., parameter scale and architecture stackability) of the evaluated models, meanwhile ensuring that errors do not accumulate as the network deepens, to facilitate the deep scalable training of the generative models (i.e., scaling law). Moreover, we want to re-clarify the meaning of the three metrics (i.e., _**cross**_, _**tick**_ and **_double tick_**): 1) GAN: Non-scalability (_**cross**_), limited by the instability of deep network training between the generator and the discriminator; 2) U/F-shaped: Low-scalability (_**tick**_), supporting a certain deep-scalable training, but error accumulation will occur as the network deepens during gradient update; 3) Ours: High-scalability (**_double tick_**), supporting deep-scalable training with the integration of error correction mechanism. We will make these points clearer in the final revision and hope the above clarifications can help understand our work better. **Q2: Supplements of scalability experiments**\ **R2:** Thanks for this valuable suggestion, we have supplemented the scalability experiments versus the baseline (i.e., **_Latte-XL_**) in following table, benchmarking on more tasks (**_T2I_**, **_N2V_** & **_C2V_**) and more datasets (**_JourneyDB_**, **_Taichi-HD_** & **_UCF-101_**). As shown in the table, in the T2I task, as the network depth increases from 12 to 32, the IS-score of our Neural-RDM shows a significant improvement compared to the baseline, which implys the superiority of the error control mechanism in supporting deep scalable training. A similar trend can be observed in the N2V and C2V tasks, which consistently validate the deep scalability advantage of our Neural-RDM. We will supplement the above results to the final version to better illustrate the performance of the proposed method. | Depths | T2I in JourneyDB (IS$\uparrow$) | | N2V in Taichi-HD (FVD$\downarrow$) | | C2V in UCF-101 (FVD$\downarrow$) | | | --- | --- | --- | --- | --- | --- | --- | | | Baseline | Neural-RDM (Ours) | Baseline | Neural-RDM (Ours) | Baseline | Neural-RDM (Ours) | | 12 | **103.18** | 94.20 | 236.83 | **191.04** | 923.38 | **896.33** | | 24 | 134.62 | **167.36** | 171.91 | **128.91** | 681.31 | **625.46** | | 28 | 195.03 | **206.32** | 159.60 | **91.22** | 477.95 | **461.01** | | 32 | 211.97 | **231.06** | 121.89 | **59.71** | 376.46 | **338.08** | **Q3: Regarding the imprecise claim.** \ **R3:** Thanks for this very thoughtful comment. We will refine this claim to make it more precise in the final version. --- Rebuttal Comment 1.1: Comment: Dear Reviewer zq4Z, We would like to extend our appreciation for your time and valuable comments. We are eagerly looking forward to receiving your valuable feedback and comments on the points we addressed in the rebuttal. Ensuring that the rebuttal aligns with your suggestions is of utmost importance. Thank you for your dedication to the review process. Sincerely, Authors
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for sparing their time and efforts reading our paper and giving many insightful comments. We notice that all reviewers hold a positive view regarding the _**well motivation**_ and _**superior performance**_ of our model. The major questions are summarized below and addressed point by point, to help understand our work better. Pdf: /pdf/162e31eb0a78d4cdb1380287fe32ea540e94f936.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DiPEx: Dispersing Prompt Expansion for Class-Agnostic Object Detection
Accept (poster)
Summary: The paper proposed a prompt expansion method to produce a diverse set of text queries for class-agnostic object detection. Sequentially performing inference using each text query and collating the prediction results often achieves high recall but incurs significant computational cost. Merging all text queries into one prompt reduces the cost significantly but results in much lower performance. The paper argues that this is due to the semantic overlap amongst the queries. To address this, the paper starts with a learnable parent prompt. After training the prompt embeddings with the standard object detection losses, the parent prompt is expanded into numerous child prompts by applying random rotations. The child prompts are then further learned with additional loss terms to decrease the similarity amongst the child prompt embeddings. This was shown to increase the maximum angular coverage, measured by the largest angle between the embeddings of two child prompts. Experiments demonstrate significant improvement of average recall and average precision on MS COCO and LVIST datasets. Strengths: 1. The paper aims to tackle the task of class-agnostic object detection, which is essential for various applications such as OOD detection. The task moves away from the classic close-vocabulary object detection problem, and has significant practical values as the real world is much less constrained. 2. The paper proposed an interesting approach to grow the set of text queries that are used to prompt a grounding model. The approach is geometrically motivated and is rather intuitive. 3. The paper provided extensive experimental results on benchmark datasets like MS-COCO and LVIS. The improvements in average recall (AR) and average precision (AP) over existing methods demonstrates great potential of the method. Weaknesses: 1. The semantic overlap amongst words, as the motivation behind the paper, seems more like a hypothesis and was not sufficiently investigated. Lines 53-55 of the paper pointed out that merging all queries into one text prompt results in inferior performance compared to running inference on each query sequentially and combing the predictions. The paper then directly jumped to the conclusion that this was the result of "semantic overlap" between the queries, without any investigation or reference to prior investigations. This undermines the fluidity of the paper. Even though the proposed method does improve the performance, it may not be the solving the problem the paper is claiming to solve. The hypothesis should not be too hard to test. Semantically similar and dissimilar queries can be manually selected to compare their detection performance. 2. As I understand, the motivation behind merging queries as opposed to just merging predictions is to lower the computational cost. The paper would be more convincing if there is a inference cost comparison between the proposed method, the naive query-merging method and the prediction-merging method. 3. Some technical details of the paper were not stated very clearly and were somewhat hard to follow. There are also numerous typos and inconsistent use of notations. For instance, in section 3.1, subscripts of v denote the text embeddings of different words in the same prompt, but in section 3.2 and other subsequent sections, the subscripts seem to denote different text prompts. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Figure 3 (right), the average precision seems of the proposed method seems to drop first before increasing again. Is this due to random noise? Overall, it appears that increasing the number of prompts to 9 yields significant improvements but the trend is not clear before it. 2. In line 269, the paper claims that a higher MAC correlates with a broader spectrum of vocabularies. Aside from intuitions, is the range of vocabularies measured in some way? How did paper come to this conclusion? 3. How exactly is the prompt logit activation computed? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors discussed some technical limitations regarding the need of self-supervised learning and hyper-parameter tuning, both of which contribute to additional computational cost during training. Flag For Ethics Review: ['Ethics review needed: Human rights (including surveillance)'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comment and constructive feedback! Please find our detailed response below: > **R4.1** Clarification on "semantic overlap". We appreciate the reviewer’s detailed feedback and understand that the section on “semantic overlaps” may seem disjointed. We conducted a **pilot study** using UNIVERSAL-query and CLASS-WIDE query sourced from ChatGPT and WordNet, respectively, to validate the efficacy of GroundingDINO in recognizing semantically similar and dissimilar words. These results, shown in Table 1, **support our hypothesis** that semantic overlap negatively impacts detection performance. Additionally, a case study provided in Appendix A.2 reveals **diminished confidence** in GroundingDINO when presented with semantically overlapping queries, as exemplified by the contrast between "plates . cups ." (semantically different) and "plates . dishes ." (semantically similar), as illustrated in Figure 7. Due to page limits, we regret any confusion caused by the placement of this crucial information in the supplementary materials, which might have been overlooked. To improve the clarity, we will incorporate this case study into the main body of the revised version. > **R4.2** Inference cost comparison Great suggestion! We have provided an additional inference cost comparison between our proposed method and two handcrafted baselines. Our results, detailed in Table 2, demonstrate that while the naive query-merging method reduces the computational cost, it significantly **compromises** detection accuracy due to overlapping semantics. In contrast, our proposed method strikes a balance, achieving superior detection performance. Specifically, our Dipex approach is 6.69% slower compared to the naive query-merging method, but it shows an impressive **89.34%** reduction in inference time compared to the prediction-merging method. > **R4.3** Technical details clarification Thank you for your feedback! We apologize for the confusion regarding the notations. To clarify: - In Section 3.1, the contextual embeddings $\mathbf{v}$ are denoted as $\mathbf{v} =\lbrace\mathbf{v}_i\rbrace ^{M} _{i=1} \in\mathbb{R}^{M\times d}$, where $M$ represents the number of $d$-dimensional learnable tokens appended to a text query $\mathbf{c}$. This follows the conventional way of introducing learnable context vectors [A]. - In Section 3.2, we introduce a novel prompt expansion method. Here, the prompts hierarchy is denoted as $\mathbf{v}_{l,k}$. The subscript $l$ indicates the layer in the tree or training round, while $k$ represents the number of learnable tokens in the $l$-th layer, similar to the notation in Section 3.1. We hope this clarification clears up any confusion regarding the notations. [A] Kaiyang Zhou et al. Learning to Prompt for Vision-Language Models, in IJCV. > **R4.4** Concerns on Potential Random Noise Thanks! The COCOEval metric is particularly strict in evaluating detection performance, as it only matches detections with the **highest Intersection over Union (IoU)**, which can penalize minor localization errors. Additionally, the MS-COCO dataset is **not comprehensively annotated**, which can even lower the average precision (AP) as correct predictions for unannotated instances are **mistakenly counted** as **false positives**. At the very initial training stage (l=1), the learnable tokens can be highly uncertain. Optimizing a small number of prompts can lead to instability, given the insufficient parameters to capture the diverse pseudo-labels. Selecting only one parent prompt with high uncertainty for expansion in the first iteration may not provide enough stability. However, as the number of prompts progressively increases, the model becomes **more mature**, and we observe a **steady improvement** in performance. This trend illustrates how increasing the number of prompts helps manage uncertainties and improves overall performance, thereby validating our approach. > **R4.5** In line 269, the paper claims that a higher MAC correlates with a broader spectrum of vocabularies. Aside from intuitions, is the range of vocabularies measured in some way? How did paper come to this conclusion? We appreciate the reviewer's thoughtful question! The concept of MAC (Mean Angular Coverage) is inspired by the WordNet hierarchy, where a larger MAC indicates the discovery of more high-level semantics. While MAC serves as an approximation rather than a direct measure of vocabulary range, it provides a **practical criterion** for determining when to stop the training process. The main motivation behind MAC is to capture the **breadth** of concept coverage without the need for direct interpretation of the learned tokens, which is inherently complex. Although MAC does not explicitly measure the vocabulary range, its correlation with high-level semantic discovery offers a useful approximation for our purposes. We hope this clarifies our rationale. > **R4.6** How exactly is the prompt logit activation computed? Thanks for raising this question. As stated in line 285, prompt logit activation refers to the number of activated prompts based on the confidence threshold. Figure 5 illustrates the activation frequency (in log scale) of each expanded child prompt, providing a clear visualization of how different prompts contribute to the detection process. **Thanks again for reviewing our paper! We are more than willing to have a follow-up discussion with you if you still have any further concerns!** --- Rebuttal Comment 1.1: Title: Follow-Up on Submission Responses Comment: Thank you once again for your thoughtful feedback on our submission. As we get closer to the end of the discussion period on August 13th, we wanted there are any additional questions or comments regarding our responses that you would like to discuss further. Apologies for reaching out over the weekend—we know it’s not the most ideal time. However, your feedback is very important to us, and we aim to address any outstanding concerns before next Wednesday. Thanks. Authors.
Summary: This paper proposes a novel Dispersing Prompt Expansion (DiPEx) approach to enhance class-agnostic object detection (OD) using vision-language models (VLMs). The authors observe that manually crafted text queries often result in undetected objects due to semantic overlap, and address this by progressively learning a set of distinct, non-overlapping hyperspherical prompts. DiPEx starts with a generic parent prompt, selects the one with the highest semantic uncertainty for further expansion, and generates child prompts that inherit semantics from the parent while capturing more fine-grained details. Dispersion losses are employed to maintain high inter-class discrepancy among child prompts while preserving parent-child consistency. The method utilizes the maximum angular coverage (MAC) of the semantic space as a criterion for early termination to prevent excessive prompt growth. Experiments on MS-COCO and LVIS datasets demonstrate that DiPEx outperforms other prompting methods by up to 20.1% in average recall (AR) and achieves a 21.3% AP improvement over SAM for out-of-distribution OD. Strengths: - The proposed DiPEx approach seems novel and innovative to the class-agnostic object detection. - The paper is well-written and structured, with clear explanations of the concepts, techniques, and experimental setup. - The proposed DiPEx method has the potential to significantly advance the state-of-the-art in class-agnostic OD and out-of-distribution detection. - Some important works about class-agnostic learning could be appended: [1] In ECCV 2022. Pose for everything: Towards category-agnostic pose estimation. [2] In CVPR 2023. Matching is not enough: A two-stage framework for category-agnostic pose estimation. [3] In CVPR 2024. Meta-Point Learning and Refining for Category-Agnostic Pose Estimation. Weaknesses: - The Abstract is not abstract enough. - The evaluation should be detailedly described. One intuitive way is to cast all categories into single category, but not all class-agnostic objects in dataset are annotated. How about the evaluation for these unannotated objects? - How about the efficiency of the proposed model in training/inference? Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses* Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See Weaknesses* Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions, which we address below: > **R3.1** Discussion on more related works Thanks for bringing the literature to our attention! [1] proposed POMNet, which leverages a transformer-based Keypoint Interaction Module (KIM) to capture interactions between keypoints and the relationship between support and query images. [2] introduced CapeFormer, a two-stage framework where keypoints are matched and treated as similarity-aware position proposals in the first stage, addressing noisy matching results due to the open-set nature of the problem. [3] introduces a meta-learning approach combined with iterative point refinement techniques for class-agnostic pose estimation. Please note these methods are specifically designed for **pose estimation**, which involves predicting key-points on objects. Object detection, on the other hand, focuses on identifying and localizing entire objects within an image. The granularity and nature of the tasks are different, making direct application challenging. Though we find these methods relatively out of scope to our current work, we acknowledge the importance of this area. We will **add a detailed discussion** accordingly in our revised version to broaden its relevance. > **R3.2** The Abstract is not abstract enough. Thanks for your constructive comment! Our intent was to provide a comprehensive summary of our work, outlining the goals and motivation clearly. While we believe it accurately reflects our research, we will endeavor to make it more concise. > **R3.3** Clarification on evaluation. Thanks for your suggestion! We confirm that our evaluation approach involves casting all existing categories from MS-COCO and LVIS into **a single class**, and we will provide a detailed explanation in our revised manuscript. As highlighted in our introduction, the primary goal of our paper is to enhance **Average Recall (AR)**, which measures how comprehensive objects are captured. Through visual inspections of ground truth (GT) against our results, we observed that COCO and LVIS are **not** densely annotated (see Figure 9-10 in the manuscript for more visualization). This is the primary reason our AP appears low; **correct predictions** for unannotated instances are **mistakenly counted as FP** in COCOEval. To address this, we plan to develop a fairer and more comprehensive benchmark in the future. > **R3.4** How about the efficiency of the proposed model in training/inference? Thanks for raising this important question! Regarding the efficiency of our model, we have noted this as a limitation in the conclusion section of our paper. We have provided an additional inference cost comparison between our proposed method and two handcrafted baselines in the attached PDF. Our results, detailed in Table 2, demonstrate that while the naive query-merging method reduces the computational cost, it significantly **compromises** detection accuracy due to overlapping semantics. In contrast, our proposed method strikes a balance, achieving superior detection performance. Specifically, our Dipex approach, is 6.69% slower compared to the naive query-merging method, but it shows an impressive **89.34%** reduction in inference time compared to the prediction-merging method. We will incorporate the efficiency analysis into the main body of the revised version! **Thanks again for reviewing our paper! We are more than willing to have a follow-up discussion with you if you still have any further concerns!** --- Rebuttal Comment 1.1: Title: Follow-Up on Submission Responses Comment: Thank you once again for your thoughtful feedback on our submission. As we get closer to the end of the discussion period on August 13th, we wanted there are any additional questions or comments regarding our responses that you would like to discuss further. Apologies for reaching out over the weekend—we know it’s not the most ideal time. However, your feedback is very important to us, and we aim to address any outstanding concerns before next Wednesday. Thanks. Authors. --- Rebuttal Comment 1.2: Title: Response to the author Comment: Thanks for the author's response which addresses most of my concerns. --- Reply to Comment 1.2.1: Title: Thank You for Reading and Consideration Comment: Dear UZCf, Thank you for your positive response and for raising your score! We are glad that our clarifications addressed your concerns. Best, Authors
Summary: This work identifies that the "semantic overlaps" may contribute to the diminished class-agnostic object detection performance for previous works utilizing VLMs, which is evidenced by the pre-experiments on had-crafted text queries on the MS COCO dataset. Furthermore, the authors derive a self-supervised prompt learning strategy to iteratively expand the (soft) prompt set in a tree hierarchy to ensure the diversity and coverage of class-agnostic object textual descriptions. A maximum angular coverage metric is provided for expansion determination to balance the prompt diversity and number cost. Experiments on MS COCO and LVIS datasets have proved the method's effectiveness. Strengths: 1. The research topic is interesting and valuable. Class-agnostic may be one of the cornerstones for large foundation vision models. 2. The proposed prompt set expansion method is novel. The experiments have proved its efficacy. 3. The paper is well-presented and easy-to-follow. Weaknesses: 1. My major concern lies in the author's claim that the "semantic overlap" among input words in the prompt declines object detection by visual-language matching. In Table 1, after the query-merging operation, the encoded text features are derived through the complicated self-attention mechanism across the input words (embeddings). Therefore, I prefer to owe the detection degradation to the disturbed attention compared to one input word in a single inference pass, rather than the so-called semantic overlap. In other words, Section 2 and Appendix 2 failed to relate semantic overlap with empirical results for me. 2. The overall framework is reasonable, but there are still some points to clarify: - For the children prompt initialization, it may not be sufficient to choose only one parent prompt with the high uncertainty to expand, especially for the first iteration when all of the root prompts are highly abstractive and therefore uncertain. In addition, Eq(2) may also be added to the children prompts with all of the prompts in previous generations, rather than only the chosen parent prompt. - For the MAC metric for expansion determination, it is weird to use the maximum angle to evaluate the diversity/coverage of ALL learned prompts. How about using the mean angular or KNN metrics? - Adding dispersion loss as well as the MAC metric at the input prompt embeddings is a little bit confusing, where the constraints may change after the encoding process. I wonder what will happen when adding the constraints to the encoded text features. - How to ensure the quality of pseudo labels for class-agnostic object detection? 3. The comparison with SAM on LVIS is unfair, as DiPEx is fine-tuned on the training set (even without box annotations) while SAM is a pre-trained model. Besides, is there some comparability between DiPEx and UniDetector[1]? [1] Z Wang et al. "Detecting Everything in the Open World: Towards Universal Object Detection", CVPR 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The detailed operations in Table 1 should be specified, eg., does prediction-merging means NMS? In addition, it would be better to report prediction-merging at the top line and query-merging at the bottom, which helps to identify the performance decline. 2. There seems some misuse of notations. For instance, is the definition of $P$ in line 134 and line 137? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The application of current work is limited by the training data sources, which only evaluate MS COCO and LVIS separately. Joint training with larger data sources (e.g., Object365) will improve the application value. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and constructive feedback. > **R2.1** I prefer to owe the detection degradation to the disturbed attention, rather than the so-called semantic overlap. We appreciate the insightful feedback. To clarify our "semantic overlap" hypothesis, we refer to the example in Appendix A.2. Our empirical study on "semantic overlap" is illustrated in Figure 7, where detection confidences are maintained with less similar words ("plates . cup ."), but performance drops with similar words ("plates . dishes ."). We conjecture that this is highly attributable to the dataset that was used during pre-training - there is a **low chance** of an image containing similar words [C], which may explain the reason why Grounding DINO was structured to **favor specific categories** over generic ones. Owing to this observation, we gained inspiration to **hierarchically** discover more fine-grained concepts from higher-level semantics. [C] Shuai Shao et al. Objects365: A Large-Scale, High-Quality Dataset for Object Detection, in ICCV. > **R2.2** Clarification & Additional Experiments Great suggestion! We have run an additional experiment based on your suggestion, where we add additional prompts at the initialization stage. Please refer to Table 1 in the attached PDF for your reference. By increasing the number of initial prompts from ($k=1$ to $k=5$), we observe an improvement in the detection performance. > **R2.3** Why MAC metric for expansion determination? How about using the mean angular or KNN metrics? We appreciate the reviewer's suggestion regarding the MAC metric for expansion determination. Our choice of **maximum angular coverage (MAC)** is inspired by the observation of semantically dissimilar words have larger angular discrepancies as hypothesized in our case study in Appendix A.2. We aim to capture semantics as comprehensively as possible to improve recall. The angular discrepancy between prompts is enforced by dispersion loss, where we intend to push child pairs further, and maximum angular deviation represents the comprehensiveness of the vocabs. While using mean angular metrics is plausible, it tends to find fine-grained concepts such as "apple" & "oranges", which can be easily **dominated** by the abundance of fine-grained words. This contrasts with our goal of promoting diversity among learned prompts from a global perspective. KNN on the other hand may signify how **detailed** for a particular pair and the results can be very restricted to a particular concept. Due to limited time during rebuttal, we will add comparisons in the revised version. > **R2.4** I wonder what will happen when adding the constraints to the encoded text features. Thank you for your feedback! As we are unsure about your concern, we interpret your questions as "*how dispersion loss can affect the encoded text features*". Firstly, we would like to **clarify** that MAC is the criterion for early stopping and it is not directly used in our optimization process. Dispersion loss, on the other hand, is our **training objective** which ensures that child prompts are sufficiently distinct from each other while not straying too far from their parent prompts. This combination ensures that our approach effectively captures diverse and unique concepts while maintaining coherence with the broader parent prompts. The learnable prompts are directly appended to the encoded text. The encoded *textual query* (*e.g.,* "generic") is **only used** as a guide to acquiring **pseudo-labels** and it is not optimized as part of the learning objective. > **R2.5** How to ensure the quality of pseudo labels for class-agnostic object detection? Thanks! To re-iterate, we iteratively refine our pseudo-labels to ensure their quality during the training process. We start with predictions from an off-the-shelf GroundingDINO model, removing low-confidence detections and excessively large bounding boxes. At each self-training stage, we update the pseudo-labels by performing inference using the learned prompts. To avoid duplication, we eliminate boxes with an IoU greater than 0.5 compared to the previous "ground-truth" boxes and apply SoftNMS. > **R2.6** Comparability with UniDetector Thanks for the great suggestion! It is important to clarify that UniDetector is an Open-Vocabulary detector (similar to what GroundingDINO was originally designed for), which **differs** from our class-agnostic settings. UniDetector requires comprehensive class labels for training, whereas our approach **only relies** on pseudo-box supervision. Notably, UniDetector also proposes a class-agnostic detector (CLN), which combines both the RPN and RoI head to generate proposals for universal object detection. This highlights the significance of class-agnostic object detection in Open-World scenarios, aligning with our focus on detecting every object in the scene without relying on predefined class labels. > **Q2.1** Detailed operations in Table 1 Thank you for your suggestion! As described in lines 52-57, query-merging involves concatenating all queries into a single string (*e.g.,* "objects . generic . entities ."), whereas prediction-merging entails combining the results from separate inferences using individual text prompts (*e.g.,* "objects", "generic", "entities"). We apologize for any confusion and will clarify this distinction in the revised manuscript. > **Q2.2** Misuse of Notations We appreciate your feedback! In this paper, we have used the terms "prompt" and "word" interchangeably, which may have caused some confusion. The notation in lines 134 and 137 is accurate; however, we will revise the manuscript to ensure that $P$ consistently refers to "prompt embeddings" for clarity. **Thanks again for reviewing our paper! We are more than willing to have a follow-up discussion with you if you still have any further concerns!** --- Rebuttal Comment 1.1: Title: Follow-Up on Submission Responses Comment: Thank you once again for your thoughtful feedback on our submission. As we get closer to the end of the discussion period on August 13th, we wanted there are any additional questions or comments regarding our responses that you would like to discuss further. Apologies for reaching out over the weekend—we know it’s not the most ideal time. However, your feedback is very important to us, and we aim to address any outstanding concerns before next Wednesday. Thanks. Authors. --- Rebuttal Comment 1.2: Comment: I appreciate the author's responses. I have raised my score to 6. Good luck. --- Rebuttal 2: Title: Thank You for Reading and Consideration Comment: Dear Reviewer 86g8, Thank you for your positive response and for raising your score! We are glad that our clarifications addressed your concerns. Best, Authors
Summary: This study investigates the use of visual-language models to improve class-agnostic object detection through a self-supervised prompt learning strategy. Diverse Prompt Expansion (DipEx) is proposed to enhance downstream task performance by learning to expand a set of diverse, non-overlapping prompts that boost recall rates of object detection. Strengths: 1) The authors find a new setting, class-agnostic object detection, which is practical and universal in real-world scenarios. 2) The proposed method achieves performance improvements compared to its baselines. 3) The figures are rich and vivid, and the writing is good. Weaknesses: - I'm mainly concerned about the novelty. The primary contribution of this paper lies in the prompt expansion method, which uses contrastive loss for optimization to acquire diversified prompts. However, this kind of optimization-based prompt diversification is not new art [1,2], especially as the previous work [1] presents a rather similar concept of optimizing prompts on the hypersphere. [1] Promptstyler: Prompt-driven style generation for source-free domain generalization. ICCV, 2023. [2] Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification. ACM MM, 2023. - The proposed prompt expansion method seems not essentially related to object detection task, since it may also suitable to other vision tasks. I thins it's may be not proper to sell the class-agnostic object detection as a major contribution in this paper. - As the authors claim that this paper aims to solve class-agnostic task, I am highly curious about how to obtain the pseudo labels during training. Does the pseudo labels get updated during prompt-based optimization? If updated, how to do this? - What does the learned prompt resemble? More visualization results, such as the attention visualization, of the learned prompt are expected. - Some presentations leave me confused. For example, in Line 117-118, the authors state that, ``applying query-merging to UNIVERSAL words results in a 52.46% reduction in AR compared to prediction-merging, whereas CLASS-WIDE queries (e.g., from WordNet) achieve a smaller decrease in AR of only 23.64%''. However, I fail to understand how the 52.46% and 23.64% reductions are computed by the results in Table 1. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weakness. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have stated the limitations in the conclusion part, and I hope they can do some further validations to explore the effecitveness of the proposed method on open-vocabulary and open-world detection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions, which we address below: > **R1.1** Novelty and comparisons with [1,2] **Contributions.** We appreciate your feedback and would like to **clarify** the major contributions of our work: (1) **General Impact**: Our work presents an **early analysis** of the bottleneck of current class-generalized detection tasks such as OOD detection. The challenge of locating *every possible* object remains a **fundamental challenge** before being able to improve AP for classes of interest (*line 28-32*) has been **rarely studied**. Further, prior OOD works are incomparable due to inconsistent evaluation metrics (*e.g.,* open-set benchmark uses FPR95, and unsupervised class discovery uses CorLoc). DiPEx is the **first work** to comprehensively **benchmark** existing OOD detection (open-set/open-world detection) and unsupervised object discovery against class-agnostic OD setting, providing valuable guidance for future research. (2) **Technical Contributions**: We would like to point out that instead of using off-the-shelf **contrastive loss** ([1]&[2]), all our design is motivated by **angular perspective** to discover unrevealed semantics by pushing children prompts away while maintaining coherence with the parent prompts. The rationale behind this was based on our observation in pilot study (Appendix A.2 & Figure 7), where two text tokens that are semantically dissimilar exhibit large angular distances lead to more confident box predictions. This motivates us to hierarchically decompose high-level semantics into finer-grained ones, aiming to uncover more semantics. **Compared to SOTA.** We appreciate the reviewer for highlighting [1, 2]. **Key differences** include: (1) **Task.** Both [1] and [2] focus on domain adaptation for style diversification with **classes available**, or in other words, diversifying the suffixes prior to the *class* token. Additionally, both methods are constrained to **a fixed number** of **style prompts**. On the contrary, our design aims to disentangle **arbitrary numbers** of fine-grained classes, followed by a MAC early stopping strategy to prevent the excessive growth of class prompts. Notably, we are the **first** one to introduce progressive prompt expansion, proving its effectiveness. Our task, without class vocabulary available, is much more challenging. (2) **Objectives**. Both [1] & [2] use off-the-shelf contrastive loss, which significantly differs from our **dispersion loss**. Contrastive loss separates **inter-class** samples; this contradicts our motivation of revealing classes in a tree hierarchy fashion. Additionally, orthogonal constraint in [1] is **unsuitable**; recall our ablation study (see Appendix A.2) - semantically similar words can have small angular discrepancy (e.g., "plates" & "dishes" have angular distance $\theta$ of 53.73$^\circ$). This means that the orthogonal ($\theta=90^\circ$) constraint is too much and will push prompts **too far** and **distort** underlying representation. Therefore, we have opted to use **dispersion loss** as a **softer constraint** to enforce separability between child prompts while maintaining child-parent coherence. > **R1.2** The proposed method may also be suitable to other vision tasks. We appreciate the reviewer's feedback. Adapting existing prompting methods (*e.g.,* CoOp), designed primarily for classification tasks, to detection yields suboptimal performance (Table 2, 3), especially with a large number of classes (Figure 3). This is because Grounding DINO uses class and query confidence to find top-k proposals, and if class prompts are not accurately learned, many correct boxes will be **missed** due to low confidence. Our hierarchical expansion allows us to find **accurate fine-grained** semantics, specifically designed to help detection-based foundation models identify more objects and **improve recall**. Furthermore, detection serves as a basis for downstream applications, such as multi-object tracking, which we plan to explore in future work. > **R1.3** Does the pseudo labels get updated during prompt-based optimization? If updated, how to do this? Yes, our pseudo-labels are iteratively refined with each training process during the training process. (1) We start with inference using an off-the-shelf GroundingDINO model with "generic" and preprocess the predictions by removing low-confidence detections and excessively large bounding boxes. (2) At each self-training stage, we perform inference again with the expanded prompts to obtain updated pseudo labels. (3) For quality control, we remove boxes with an IoU greater than 0.5 with the previous "ground-truth" boxes and apply SoftNMS. This ensures the continual improvement of box accuracy and promotes the model to discover emerging concepts. > **R1.4** What does the learned prompt resemble? Great question! The interpretability of context tokens remains as an **open question** [A, B]. Prior arts [A] have attempted to use NN words from pre-trained embeddings to interpret the learned vectors. However, since text features operate within a continuous embedding space, they likely carry more abstract meanings that are not readily interpretable. Regardless, we have provided a visualization of the attention features from each distinct prompt in the **attached PDF** for your reference. [A] Kaiyang Zhou et al. Learning to Prompt for Vision-Language Models, in IJCV. [B] Brian Lester et al. The Power of Scale for Parameter-Efficient Prompt Tuning, in EMNLP. > **R1.5** However, I fail to understand how the 52.46% and 23.64% reductions are computed by the results in Table 1. Thanks! The percentage reduction was calculated using (query_merged-prediction_merged)/query_merged. We will provide more clarification in the revised version. **Thanks again for reviewing our paper! We are more than willing to have a follow-up discussion with you if you still have any further concerns!** --- Rebuttal Comment 1.1: Title: Follow-Up on Submission Responses Comment: Thank you once again for your thoughtful feedback on our submission. As we get closer to the end of the discussion period on August 13th, we wanted there are any additional questions or comments regarding our responses that you would like to discuss further. Apologies for reaching out over the weekend—we know it’s not the most ideal time. However, your feedback is very important to us, and we aim to address any outstanding concerns before next Wednesday. Thanks. Authors. --- Rebuttal 2: Title: Thank You for Reading and Consideration Comment: Dear Reviewer Jt2M, Thank you for your positive response and for raising your score! We are glad that our clarifications addressed your concerns. In our revised manuscript, we will make sure to include a discussion on the contrasts between our work and the literature you referenced [1, 2]. We will also further elaborate on the additional concerns you have raised in your review. [1] Promptstyler: Prompt-driven style generation for source-free domain generalization. ICCV, 2023. [2] Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification. ACM MM, 2023. Best, Authors.
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to extend our sincere gratitude for your thoughtful and encouraging feedback. We are pleased to see that our exploration into class-agnostic object detection was recognized as **practical and universal** in real-world scenarios, with performance improvements over baseline approaches (Reviewer Jt2M, 86g8, fTsG). Your acknowledgment of the foundational potential of class-agnostic detection for large vision models and the **novelty** of our prompt set expansion method is greatly appreciated (Reviewer 86g8, fTsG). We are grateful for the positive feedback on the **clarity**, **structure**, and **quality** of our writing and figures (Reviewer Jt2M, 86g8, UZCf). The recognition of the potential **impact** of our DiPEx approach on the state-of-the-art in both class-agnostic and out-of-distribution detection is particularly gratifying (Reviewer UZCf, fTsG). The appreciation for our geometrically motivated and intuitive approach to expanding text queries for grounding models, along with the comprehensive experimental results on benchmark datasets like MS-COCO and LVIS, further encourages us (Reviewer fTsG)! The attached PDF contains additional experiments for reference: - Visualization of the attended areas activated by the learned prompts - The impact of different lengths of prompts for initialization - Inference cost comparison in terms of time and memory consumption **Thank you once again for your valuable feedback. We will incorporate the revisions accordingly to improve the quality of this work. Please let us know if you have any additional questions, and we are more than happy to address them!** Pdf: /pdf/da485b5b0254a7e4933a43990b1ee3b75554df90.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking
Accept (poster)
Summary: This paper considers the task of estimating the degree of stochastic dominance between two multivariate distributions. In the univariate setting, stochastic dominance is a useful tool for tasks such as benchmarking LLMs, where a practitioner may have estimates of some quality metric for responses. An efficient way to approximately estimate multivariate stochastic dominance could thus be helpful in setting where models may be evaluated on many metrics, rather than just a single value. The authors introduce a natural definition of near-stochastic dominance based on an existing definition in the univariate case and a connection to optimal transport, and show that while this notion may not allow for efficient estimation, adding an entropic regularization term allows for better guarantees. Through experiments, the authors demonstrate that their proposed metric converges to the true unregularized value in synthetic experiments, and for an LLM dataset, their approach outputs a ranking of models far more correlated with a "ground-truth" ranking. Strengths: - The paper is clearly written, and does a good job of explaining how their results rest on prior results in univariate stochastic domination and regularization methods. - The goal of obtaining good methods for estimating multivariate stochastic domination is very natural, and seems very relevant to many areas of practice, especially evaluating LLMs on multiple axes, as the authors mention. Weaknesses: - The experiments section is a bit confusing, in particular the explanations of the various approaches tested in the LLM experiment. The reasoning behind the choice of the ground-truth ranking is also a bit unclear to me. Technical Quality: 3 Clarity: 3 Questions for Authors: - Does your method also improve estimates of the degree of stochastic dominance (e.g. the first-ranked model is much better than the second-ranked model, but the second- and third-ranked models are comparable)? Is this something that can be demonstrated empirically? - If GPT-generated prompt comparisons are viewed as good enough to use as a ground-truth ranking, why not use this as the gold-standard benchmarking method? If not, is there a better baseline to compare the multivariate FSD approach to? - Moreover, is there a reason why we should expect a "human" ranking to match the true stochastic order induced by some set of metrics? It seems that in some cases we might actually want the order induced by a set of metrics to look somewhat different than what the average human might output as a ranking. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments regarding the experiments. We will work to clarify the experimental setup and address the comments regarding comparison to ChatGPT. We believe that both of these are important to further improve the quality of our submission. ___ **The experiments section is a bit confusing, in particular the explanations of the various approaches tested in the LLM experiment. The reasoning behind the choice of the ground-truth ranking is also a bit unclear to me. If GPT-generated prompt comparisons are viewed as good enough to use as a ground-truth ranking, why not use this as the gold-standard benchmarking method? If not, is there a better baseline to compare the multivariate FSD approach to?** ___ We thank the reviewer for pointing out that the LLM benchmarking experiment should be more clearly explained to avoid ambiguities. We will amend this in the text. The experiment is conducted as follows: 1. We use the dataset from [1], which compiles a training set of 100K prompts along with the resulting response from 12 different LLMs. 2. For each prompt, the responses from the 12 different LLMs are evaluated according to 9 automatic metrics (e.g. BLEU, ROUGE, BERTScore, BARTScore, etc) and are stored as 12 vectors in $\mathbb R^9$ $(x^{(i)})_{i=1}^{12}$, where $x^{(1)}$ is the vector containing the outputs of the 9 metrics for the 1st LLM and so on. $x^{(1)}$ is then viewed as a sample from the distribution of all possible responses from the 1st LLM evaluated according to these 9 metrics. 3. This process is repeated for each of the 100K prompts so that one can construct the empirical measure $\hat \mu_N^{(1)}=\frac{1}{N}\sum_{i=1}^N\delta_{x^{(1)}_i}$ for $N = 100K$ (here $\delta_x$ is the Dirac measure at the point $x\in\mathbb R^9$). 4. The empirical measures $\hat \mu_n^{(i)}$ and $\hat \mu_n^{(j)}$ for some $n\leq 100K$ and $i\neq j$ are then compared by computing the normalized index $\varepsilon_{h,\lambda}(\hat \mu_n^{(i)},\hat \mu_n^{(j)})=\varepsilon_{ij}^{(h,\lambda)}$. 5. Given the pairwise ratios $\varepsilon_{ij}^{(h,\lambda)}$, we rank the 12 LLMS according to the relative testing procedure described in Section 4.2. The resampling procedure (bootstrap) is used to construct the confidence intervals. 6. The results thus obtained are compared (via Kendall Tau Similarity) to a univariate ranking based on ChatGPT scoring i.e. ChatGPT is presented by the instruction and the response of each LLM as described in point 1 and produces a score that judges the quality of the response in terms of following the given instruction; the different models are then ranked according to their ChatGPT score (a univariate ranking) and compared to the multivariate ranking described above. In the experiments, ChatGPT is being used for the purposes of evaluation only, motivated by its use as a human proxy in the context of evaluating the quality of generated natural language [1-4] . We show that the ranking resulting from multivariate stochastic dominance on automatic metrics correlates well with the ChatGPT ranking which, in turn, correlates well with human evaluation. The advantage of our multivariate stochastic dominance over ChatGPT or human evaluation is threefold. First, as discussed previously, it requires significantly less computational overhead. Second, there are no upfront monetary costs associated with evaluating the ratio statistic, this can be an important consideration for large-scale comparison tasks. Third, our approach can be run locally thereby eliminating the privacy concerns of exposing sensitive data on APIs running LLM as a judge, such as ChatGPT. ___ **Moreover, is there a reason why we should expect a "human" ranking to match the true stochastic order induced by some set of metrics? It seems that in some cases we might actually want the order induced by a set of metrics to look somewhat different than what the average human might output as a ranking.** ___ This is a great and subtle point. The notion of an “ideal” ranking according to a given set of metrics depends largely on the preferences of the practitioner. One limitation with the approach discussed in the text is that all dimensions are treated equally, as the same function $h$ is used to compare each dimension. This may limit the utility of this methodology in applications where violations of the order in one particular dimension should be severely penalized (e.g. unsafe responses from LLMs). In such a case, one may modify the proposed framework to use a cost function of the form $c(x,y)=\sum_{i=1}^d h_i(y_i-x_i)$ (i.e. we prescribed a different cost to violations in each dimension). Provided that each of the $h_i$ are nonnegative functions satisfying the smoothness condition, all of the theoretical results derived in the paper still go through. As such, a practitioner that wishes to formulate a domain-specific notion of almost stochastic dominance can adopt a data-driven approach to learn a new stochastic order tailored to the user’s preferences by letting the $h_i$ be parametrized costs (e.g. the logistic function) and optimizing over the parameters (in the previous example, the gain). --- Rebuttal 2: Title: Rebuttal continued (1/1) Comment: ___ **Does your method also improve estimates of the degree of stochastic dominance (e.g. the first-ranked model is much better than the second-ranked model, but the second- and third-ranked models are comparable)? Is this something that can be demonstrated empirically?** ___ This is an interesting question. Currently, the paper only addresses the question of establishing a ranking of different models according to the notion of multivariate stochastic dominance by computing the value of the ratio between each pair of models (values closer to $0$ indicate stronger dominance). Therefore, the magnitude of the ratio is indicative of how strong or weak the dominance is with 0 indicating perfect dominance and 1 indicating the opposite; these values thus give a relative sense of dominance strength. We have included a table of the one-versus-all violation ratios computed in the application to LLM benchmarking in the rebuttal pdf which indicates how these models compare in terms of the average pairwise ratio value. However, we may consider going beyond computing just the numerical value of the index to enable a more refined comparison between two models. Indeed, when computing the numerator via Sinkhorn’s algorithm, we obtain not only the value of the entropic optimal transport, but also an optimal plan $\pi$ for that problem which, in the discrete case considered here, characterizes how much mass should be sent from $x^{(i)}$ to $y^{(j)}$ for every pair of support points for $\mu$ and $\nu$ respectively to achieve the optimal transport. With this, we may characterize the points at which $\mu$ fails to dominate $\nu$ (i.e. points where $\pi(\{x^{(i)},y^{(j)}\})>0$ but $x^{(i)}$ does not dominate $y^{(j)}$ componentwise). This allows us to define regions where the stochastic domination of model A over model B is violated. Given such a pair $x^{(i)},y^{(j)}$, we can discern which metrics model B outperformed model A in as well as the magnitude in terms of difference of metric values by which the responses differed. This can enable us to identify which prompts lead to unsatisfactory responses from model A which can serve as a starting point for improving the overall quality of its responses. [1] Hada, R., Gumma, V., de Wynter, A., Diddee, H., Ahmed, M., Choudhury, M., ... & Sitaram, S. (2023). Are large language model-based evaluators the solution to scaling up multilingual evaluation?. arXiv preprint arXiv:2309.07462. [2] Jiang, D., Ren, X., & Lin, B. Y. (2023). Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561. [3] Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., & Zhu, C. (2023). G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634. [4] Zheng, L., Chiang, W.L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E. and Zhang, H. (2024). Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36. --- Rebuttal Comment 2.1: Comment: Thanks for your detailed response to my review. I will maintain my score as I think the main results are timely and of use to the LLM community, but the paper could probably benefit from a more thorough experiments section and improved clarity of writing throughout.
Summary: Insipred by the uni-dimensional case of First order Stochastical Dominance (FSD), the authors indtroduce multidimensional FSD which does away making approximations using aggregations and reductions of multi dimensional metrics and reduce the orderings to unidimensional case. This is done vi Optimal Transport framework but this comes with a caveat since, using empirical Optimal Transport framework in multi dimensional case suffers from the curse of dimensionality. This is dealt with using approximations in form of entropic regularization using ideas from Cuturi 2013, to get Entropic Optimal Transport framework. Theorem 1 in the paper shows that the approximation error has a computable upper bound. The framework is tested on a simulated data and LLM metric evaluation case study. Strengths: 1. It is great to see a method which can help LLM research community regarding proper evaluation and benchmarking. There is clearly a need for such metrics and methods, since most LLM's responses are highly stochastic, even when using same prompt and same parameters. 2. It is now high time for LLM research community to turn to rigorous statistical analysis of their research methodology and this paper contributes to that so it is both timely and useful. 3. The authors have applied the framework to two different domains: financial domain with portfolio allocation study and LLM evaluation. 4. Clear comparison between OT and EOT frameworks, absoulute and relative tests. 5. Figures are clear and support the claims made. Weaknesses: - Unclear writing in many parts of the paper which makes such a difficult technical paper harder to read and understand. I strongly advise the authors to take couple of polishing passes. - Line 287-290 unclear writing - Line 314-319 unclear - predicts linearly p, indicating that it is captures well - Figure 1 caption and title don't match. - Maybe in experiments use multiple LLMs and not just Chat GPT Technical Quality: 3 Clarity: 3 Questions for Authors: Some typos and writing mistakes 1. Some references are repeated, - Del Barrio 2018 2. How to tune the hyperparameters: $\beta$ and $\lambda$ in Sec 5.2 ? 3. It will be helpful to clearly state that for Kendall tau statistic, lower values are better Fig 2. 4. How reasonable it is to take consensus with ChatGPT as the ground truth ? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I am not an expert in theory and this paper is heavy on OT theory, and the readibility of the paper is low in many important places. It would have been great if the authors would have listed clearly what limitations do these methods bring, one can ofcourse be related to computation when using multidimenional statistics over aggregation to uni-dimensional case, at what dimension could the cost become too prohibitive ? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their support of this work. We take your feedback regarding unclear writing very seriously and will further polish the paper to minimize any ambiguities. We have corrected all of the typos you have identified and reworked the sections that were identified as unclear. ___ **How to tune the hyperparameters: $\beta$ and $\lambda$ in Sec 5.2?** ___ This is indeed an important point when implementing this methodology, we will expand on the discussion in the main text according to the following points. The parameter $\lambda$ corresponds to the regularization strength in the entropic optimal transport distance, which is employed as a statistically efficient and computationally tractable proxy for the standard optimal transport distance. Given Theorem 1, it is desirable to set $\lambda$ as small as possible in order to approximate the true optimal transport distance well, for example $\lambda = 0.1$ as used in the numerical experiments. We remark that for practical purposes, $\lambda$ cannot be chosen arbitrarily small due to possible underflow in the matrix $e^{-C/\lambda}$, where $C$ is the matrix of pairwise costs, used when computing the entropic distance using Sinkhorn’s algorithm. Notably, Sinkhorn’s algorithm is a method tailored to positive matrices and cannot cope with zero entries. As such, a practitioner can start with a small value for $\lambda$ and increase it gradually if numerical instability is encountered. As for the choice of $\beta$, it is mentioned in Example 1 and demonstrated empirically in Section 5.1 that the larger the gain, $\beta$, of the logistic function is, the closer the logistic function is to being compatible with standard multivariate FSD as described in Definition 1. For the matrix $e^{-C/\lambda}$ from the previous paragraph to be well-conditioned, there is a tradeoff between $\beta$ and $\lambda$. In the experiments, these hyperparameters were set by first fixing the entropic parameter to $\lambda = 0.1$, then $\beta$ was set as large as possible subject to a computational budget constraint. Indeed, as described in the final point of this response, the number of iterations required for Sinkhorn’s algorithm to converge scales as $||C||_{\infty}/\lambda$. ___ **How reasonable it is to take consensus with ChatGPT as the ground truth ?** ___ In the experiments, the usage of ChatGPT for the purposes of evaluation is motivated by its use as a human proxy in the context of evaluating the quality of generated natural language [2-5]. Such evaluations are common in the literature and we do not anticipate significant differences by using another equivalently sized LLM such as Claude, LLama2 70b etc. for the purpose of scoring. We underscore that the advantage of our multivariate stochastic dominance over the ChatGPT-based scoring method or human evaluation is three-fold. First, it requires significantly less computational overhead as discussed in the following point. Second, there are no upfront monetary costs associated with evaluating the ratio statistic, this can be an important consideration for large-scale comparison tasks. Third, our approach can be run locally thereby eliminating the privacy concerns of exposing sensitive data on APIs running LLM as a judge, such as ChatGPT. --- Rebuttal 2: Title: Rebuttal continued (1/1) Comment: ___ **It would have been great if the authors would have listed clearly what limitations do these methods bring, one can ofcourse be related to computation when using multidimenional statistics over aggregation to uni-dimensional case, at what dimension could the cost become too prohibitive ?** ___ This is another good point which deserves additional discussion in the appendix. Starting from the question of computational complexity, the dimension of the statistic is not a big concern given that the entropic distances defining the ratio are computed using Sinkhorn’s algorithm. Precisely, to solve an entropic optimal transport distances with regularization strength $\lambda$ between two distributions on $\mathbb R^d$ supported on $N$ points $x^{(i)},y^{(i)}$, $i=1,\dots, N$ using the Sinkhorn scaling algorithm as implemented in the Python OT package, we first construct a matrix of pairwise distances $C\in\mathbb R^{N\times N}$ where $C_{ij}=c(x^{(i)},y^{(j)})$, which requires a one-time cost of $N^2K(d)$ operations, where $K(d)$ is the complexity of computing the cost between two $d$-dimensional vectors, then an iterative scaling procedure is performed, which consists of iterated products of the fixed matrix $e^{-C/\lambda}$ with two vector iterates so that each iteration runs in $O(N^2)$ time. Following [1] Theorem 1, we see that the algorithm reaches its termination condition to a given precision $\eta$ (the default value in the package used is 1e-9) in a number of iterations, $K$, bounded as $$ K\leq 2 +\frac{-4\log(e^{-\|C\|_{\infty}/\lambda }\kappa)}{\eta}, $$ where $\kappa$ is the smallest value in the input probability vectors (remark that this quantity is at most $\frac 1 N$ with equality when considering empirical distributions on distinct points). Given that the ratio requires two evaluations of the entropic distance, the overall complexity is twice that of the above procedure. Moreover, if the numerator is computed first we can reuse the matrix C when computing the denominator’s pairwise cost matrix. Another limitation of our method as formulated in the paper currently is that violations in each dimension are penalized equally. This may limit the utility of this methodology in applications where violations of the order in one particular dimension should be severely penalized (e.g. unsafe responses from LLMs). In such a case, one may modify the proposed framework to use a cost function of the form $c(x,y)=\sum_{i=1}^d h_i(y_i-x_i)$ (i.e. we prescribed a different cost to violations in each dimension). Provided that each of the $h_i$ are nonnegative functions satisfying the smoothness condition, all of the theoretical results derived in the paper still go through. As such, a practitioner that wishes to formulate a domain-specific notion of almost stochastic dominance can adopt a data-driven approach to learn a new stochastic order tailored to the user’s preferences by letting the $h_i$ be parametrized costs (e.g. the logistic function) and optimizing over the parameters (in the previous example, the gain). [1] Dvurechensky, P., Gasnikov, A., & Kroshnin, A. (2018, July). Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm. In International conference on machine learning (pp. 1367-1376). PMLR. [2] Hada, R., Gumma, V., de Wynter, A., Diddee, H., Ahmed, M., Choudhury, M., ... & Sitaram, S. (2023). Are large language model-based evaluators the solution to scaling up multilingual evaluation?. arXiv preprint arXiv:2309.07462. [3] Jiang, D., Ren, X., & Lin, B. Y. (2023). Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561. [4] Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., & Zhu, C. (2023). G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634. [5] Zheng, L., Chiang, W.L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E. and Zhang, H. (2024). Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36.
Summary: In this paper authors propose a testing framework for First order Stochastic Dominance (FSD) for multivariate random variables. To achieve this, the authors use ideas from optimal transport using Entropic Regularization to derive a hypothesis testing procedure for testing multivariate FSD. The proposed methodology includes theoretical results showing the distributional convergence of the test statistic. Strengths: - The paper considers an important problem, which is well motivated. - Theoretical results back the methodology - The paper is well-presented Weaknesses: 1. The experimental results do not present the Type I and II errors for the hypothesis testing methodology proposed. The authors should consider adding these results (with a synthetic setup if needed). 2. The results should also investigate how these errors change with increasing $N$? And increasing $d$? Overall, I think more extensive empirical investigation is needed to show how fast the convergence takes place, and how well the methodology scales with increasing $d$ and $N$ (in terms of computational complexity) **Other comments:** Lines 166-167: - Notation should be $OT_{h, 0}$ instead of $OT_{0, \lambda}$ - Kendall tau rank has not been defined - From the definition of $\mathcal{E}_{\mathcal{W}_2}$ below line 73, we should have that $\mathcal{E}_{\mathcal{W}_2}= 1$ when $F_Y^{-1}(t) \leq F_X^{-1}(t)$ for a.e. t. However, line 77 states that in this case $\mathcal{E}_{\mathcal{W}_2} = 0$. I'm not sure if this is correct. Technical Quality: 3 Clarity: 3 Questions for Authors: See above Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their support of this paper and for highlighting some additional experiments which would further clarify the performance of the method. We have run the proposed experiments and will add them to the appendix. We have also addressed the points from your other comments in the text. ___ **The experimental results do not present the Type I and II errors for the hypothesis testing methodology proposed. The authors should consider adding these results (with a synthetic setup if needed). The results should also investigate how these errors change with increasing $N$? And increasing $d$? Overall, I think more extensive empirical investigation is needed to show how fast the convergence takes place.** ___ We agree that additional experiments would help to better illustrate the strengths and limitations of this approach. In line with your recommendations, we have computed the Type I and II errors for the synthetic experiment in the paper and conducted an empirical study of how these errors vary as a function of $N$ and $d$. The results of these experiments are compiled in the attached pdf and demonstrate that the proposed testing methodology is sample efficient and scales well even in moderate dimensions. As mentioned, these additional experiments will be added to the appendix. ___ **How well [does] the methodology scales with increasing $d$ and $N$ (in terms of computational complexity)** ___ Thank you for bringing up this important point, we agree that the computational complexity of our approach should be more carefully discussed. We will add the following discussion to the appendix. The ratio requires computing two entropic optimal transport distances with regularization strength $\lambda$ between two distributions on $\mathbb R^d$ supported on $N$ points $x^{(i)},y^{(i)}$, $i=1,\dots,N$. To solve this problem numerically, we utilize the popular Sinkhorn scaling algorithm as implemented in the Python OT package. To compute one distance, we first construct a matrix of pairwise distances $C\in\mathbb R^{N\times N}$ where $C_{ij}=c(x^{(i)},y^{(j)})$, which requires a one-time cost of $N^2K(d)$ operations, where $K(d)$ is the complexity of computing the cost between two $d$-dimensional vectors), then an iterative scaling procedure is performed, which consists of iterated products of the fixed matrix $e^{-C/\lambda}$ with two vector iterates so that each iteration runs in $O(N^2)$ time. Following [1] Theorem 1, we see that the algorithm reaches its termination condition to a given precision $\eta$ (the default value in the package used is 1e-9) in a number of iterations, $K$, bounded as $$ K\leq 2 +\frac{-4\log(e^{-||C||_{\infty}/\lambda }\kappa)}{\eta}, $$ where $\kappa$ is the smallest value in the input probability vectors (remark that this quantity is at most $\frac 1 N$ with equality when considering empirical distributions on distinct points). Given that the ratio requires two evaluations of the entropic distance, the overall complexity is twice that of the above procedure. Moreover, if the numerator is computed first we can reuse the matrix C when computing the denominator’s pairwise cost matrix. --- Rebuttal Comment 1.1: Comment: I would like to thank the reviewers for their response, which addresses the questions raised. I am satisfied with the response and will keep my score unchanged.
Summary: The paper studies the testing of multivariate stochastic dominance, i.e., deciding an order between two multivariate random variables. The authors generalized the notion of index of almost stochastic dominance, which is for uni-variate rv. The new index is based on regularized value of optimla transport problems. Convergence of plug-in estimate of that index, and its bootstrap theory is developed. As a main application, it is applied to LLM Benchmarking, where an LLM is evaluated on many metrics. Strengths: 1. This generalized notion of stochastic order index for multivariate random variables is interesting. Its application to LLM benchmarking is also interesting. 2. Many advanced tools from probability is used in the proof section. It is interesting to see that these theories find application in LLM evaluation. Weaknesses: 1. There are two hyperparameters that need to be chosen to define the ratio statistics. The author should provide guidance on how to choose them. 2. Can the authors discuss the computation complexity of computing the ratio statistics? 3. The new notion of stochastic dominance is interesting. I think it would explain this new concept more if the authors could compile a list of examples that showcase this notion. For example the example in Sec 5.1 is intuitive. But it is too simplified. Since "dominance" is a complicated concept for multivariate distributions, explaining what kind of dominance is being captured and what not could be useful when practitioners apply the method. Technical Quality: 3 Clarity: 3 Questions for Authors: - What does $\hat \mu$ $\hat \nu$ mean in the context of LLM benchmarking? Do we divide the dataset into multiple smaller datasets and evaluate LLM on them? - Typo at line 185. Should be "whenever $OT_{\bar h, \lambda } = 0$" Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of the paper and for identifying sections which could use further clarification. This feedback is invaluable to us in improving the quality of our submission. We have corrected the typos identified and will further polish the paper to improve its readability. ___ **There are two hyperparameters that need to be chosen to define the ratio statistics. The author should provide guidance on how to choose them.** ___ For the choice of hyperparameters, we recall that the parameter $\lambda$ corresponds to the regularization strength in the entropic optimal transport distance, which is employed as a statistically efficient and computationally tractable proxy for the standard optimal transport distance. Given Theorem 1, it is desirable to set $\lambda$ as small as possible in order to approximate the true optimal transport distance well, for example $\lambda = 0.1$ as used in the numerical experiments. We remark that for practical purposes, $\lambda$ cannot be chosen arbitrarily small due to possible underflow in the matrix $e^{-C/\lambda}$, where $C$ is the matrix of pairwise costs, used when computing the entropic distance using Sinkhorn’s algorithm (see the following point for more details). Notably, Sinkhorn’s algorithm is a method tailored to positive matrices and cannot cope with zero entries. As such, a practitioner can start with a small value for $\lambda$ and increase it gradually if numerical instability is encountered. As for the choice of function, $h$, three examples are discussed in Example 1. For practical applications, the logistic function is preferred for ease of computation and since it satisfies the desired smoothness properties. As mentioned in Example 1 and demonstrated empirically in Section 5.1, the larger the gain, $\beta$, of the logistic function is, the closer the logistic function is to being compatible with standard multivariate FSD as described in Definition 1. For the matrix $e^{-C/\lambda}$ from the previous point to be well-conditioned, there is a tradeoff between $\beta$ and $\lambda$. In the experiments, these hyperparameters were set by first fixing the entropic parameter to $\lambda = 0.1$, then $\beta$ was set as large as possible subject to a computational budget constraint. Indeed, as described in the following point of this response, the number of iterations required for Sinkhorn’s algorithm to converge scales as $||C||_{\infty}/\lambda$. We also highlight that the hyperparameter $\varepsilon_0$ used when defining the notion of entropic multivariate almost FSD does not need to be set when performing relative testing; a discussion of this approach is included on lines 252 onwards. Notably, the empirical study we performed on ranking LLMs utilizes this relative testing framework to perform the rankings using the proposed ratio statistic and does not require one to set the parameter $\varepsilon_0$. We will add these clarifications on the choice of hyperparameters to the appendix. ___ **Can the authors discuss the computation complexity of computing the ratio statistics?** ___ This is indeed an important question, we acknowledge that a more complete discussion is warranted. The ratio requires computing two entropic optimal transport distances with regularization strength $\lambda$ between two distributions on $\mathbb R^d$ supported on $N$ points $x^{(i)},$$y^{(i)}$ $i=1,\dots, N$. To solve this problem numerically, we utilize the popular Sinkhorn scaling algorithm as implemented in the Python OT package. To compute one distance, we first construct a matrix of pairwise distances $C\in\mathbb R^{N\times N}$ where $C_{ij}=c(x^{(i)},y^{(j)})$, which requires a one-time cost of $N^2K(d)$ operations, where $K(d)$ is the complexity of computing the cost between two $d$-dimensional vectors), then an iterative scaling procedure is performed, which consists of iterated products of the fixed matrix $e^{-C/\lambda}$ with two vector iterates so that each iteration runs in $O(N^2)$ time. Following [1] Theorem 1, we see that the algorithm reaches its termination condition to a given precision $\eta$ (the default value in the package used is 1e-9) in a number of iterations, $K$, bounded as $$ K\leq 2 +\frac{-4\log(e^{-\|C\|_{\infty}/\lambda }\kappa)}{\eta}, $$ where $\kappa$ is the smallest value in the input probability vectors (remark that this quantity is at most $\frac 1 N$ with equality when considering empirical distributions on distinct points). Given that the ratio requires two evaluations of the entropic distance, the overall complexity is twice that of the above procedure. Moreover, if the numerator is computed first we can reuse the matrix C when computing the denominator’s pairwise cost matrix. We will add a sentence to this effect in the main text and provide a thorough discussion in the appendix. --- Rebuttal 2: Title: Rebuttal continued (1/2) Comment: ___ **The new notion of stochastic dominance is interesting. I think it would explain this new concept more if the authors could compile a list of examples that showcase this notion. For example the example in Sec 5.1 is intuitive. But it is too simplified. Since "dominance" is a complicated concept for multivariate distributions, explaining what kind of dominance is being captured and what not could be useful when practitioners apply the method.** ___ We agree that this notion of dominance should be more clearly demonstrated in terms of a more complex example. To amend this, we propose to add the following discussion to the paper. > Suppose that an agent must choose between multivariate (financial) portfolios, e.g. a list of assets from $k$ different companies, with the aim of maximizing the return. For the standard notion of stochastic dominance, it is required that each individual asset from company $i$ generally achieves a higher value than that of company $j$; in particular, the expected return of each asset is higher. The notion of almost FSD relaxes this notion to allow for the possibility that some of the assets from company $i$ underform those of company $j$ on average, but only by a small amount. One limitation with the approach discussed in the text is that all dimensions are treated equally, as the same function $h$ is used to compare each dimension. This may limit the utility of this methodology in applications where violations of the order in one particular dimension should be severely penalized (e.g. unsafe responses from LLMs). In such a case, one may modify the proposed framework to use a cost function of the form $c(x,y)=\sum_{i=1}^d h_i(y_i-x_i)$ (i.e. we prescribed a different cost to violations in each dimension). Provided that each of the $h_i$ are nonnegative functions satisfying the smoothness condition, all of the theoretical results derived in the paper still go through. As such, a practitioner that wishes to formulate a domain-specific notion of almost stochastic dominance can adopt a data-driven approach to learn a new stochastic order tailored to the user’s preferences by letting the $h_i$ be parametrized costs (e.g. the logistic function) and optimizing over the parameters (in the previous example, the gain). We will add a sentence explaining this extension in the revised paper. ___ **What does $\hat \mu,\hat \nu$ mean in the context of LLM benchmarking? Do we divide the dataset into multiple smaller datasets and evaluate LLM on them?** ___ We acknowledge that the LLM benchmarking experiment should be more clearly explained to avoid ambiguities. The experiment is conducted as follows: 1. We use the dataset from [2], which compiles a training set of 100K prompts along with the resulting response from 12 different LLMs. 2. For each prompt, the responses from the 12 different LLMs are evaluated according to 9 automatic metrics (e.g. BLEU, ROUGE, BERTScore, BARTScore, etc) and are stored as 12 vectors in $\mathbb R^9$ $(x^{(i)})_{i=1}^{12}$, where $x^{(1)}$ is the vector containing the outputs of the 9 metrics for the 1st LLM and so on. $x^{(1)}$ is then viewed as a sample from the distribution of all possible responses from the 1st LLM evaluated according to these 9 metrics. Note that we normalize these metrics so they are all in same [0,1] range. 3. This process is repeated for each of the 100K prompts so that one can construct the empirical measure $\hat \mu_N^{(1)}=\frac{1}{N}\sum_{i=1}^N\delta_{x^{(1)}_i}$ for $N = 100K$ (here $\delta_x$ is the Dirac measure at the point $x\in\mathbb R^9$). 4. The empirical measures $\hat \mu_n^{(i)}$ and $\hat \mu_n^{(j)}$ for some $n\leq 100K$ and $i\neq j$ are then compared by computing the normalized index $\varepsilon_{h,\lambda}(\hat \mu_n^{(i)},\hat \mu_n^{(j)})=\varepsilon_{ij}^{(h,\lambda)}$. 5. Given the pairwise ratios $\varepsilon_{ij}^{(h,\lambda)}$, we rank the 12 LLMS according to the relative testing procedure described in Section 4.2. The resampling procedure (bootstrap) is used to construct the confidence intervals. 6. The results thus obtained are compared (via Kendall Tau Similarity) to a univariate FSD ranking based on ChatGPT scoring i.e. ChatGPT is presented by the instruction and the response of each LLM as described in point 1 and produces a score that judges the quality of the response in terms of following the given instruction; the different models are then ranked according to their ChatGPT score (a univariate ranking) and compared to the multivariate ranking described above. --- Rebuttal 3: Title: Rebuttal continued (2/2) Comment: We underscore that the advantage of our multivariate stochastic dominance over the ChatGPT-based scoring method or human evaluation is threefold. First, as discussed previously, it requires significantly less computational overhead. Second, there are no high upfront monetary costs associated with evaluating the ratio statistic, this can be an important consideration for large-scale comparison tasks. Third, our approach can be run locally thereby eliminating the privacy concerns of exposing sensitive data on APIs running LLM as a judge, such as ChatGPT. ___ [1] Dvurechensky, P., Gasnikov, A., & Kroshnin, A. (2018, July). Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm. In International conference on machine learning (pp. 1367-1376). PMLR. [2] Jiang, D., Ren, X., & Lin, B. Y. (2023). Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading of our submission. Their questions and comments have proved invaluable in improving the quality of the paper and helped us in identifying passages which were unclear and confusing. We briefly summarize the main comments brought up in the reviewers here and attach point by point detailed responses to each official review. * Q: **The complexity of computing the violation ratio is not stated explicitly. In particular, what is its dependence on the dimension $d$ when comparing two distributions on $N$ points in $\mathbb R^d$?** A: The entropic optimal transport distances which are used to define the ratio are computed using Sinkhorn’s algorithm. The only step of this algorithm which depends on the dimension is the construction of the pairwise cost matrix $C$ which requires a one time cost of $N^2K(d)$ operations, where $K(d)$ is the complexity of computing the cost between two $d$-dimensional vectors. The main loop of the algorithm consists of matrix-vector products between a fixed $N\times N$ matrix and $N$-dimensional vector iterates; the number of steps required for the loop to terminate is also characterized and independent of $d$. * Q: **How should a practitioner set the various hyperparameters, $\lambda,\beta,\varepsilon_0$?** A: The parameter $\lambda$ serves as the regularization strength for the entropic optimal transport distance. Given that the entropic distance is used as a computationally and statistically efficient proxy for the standard optimal transport distance, it is desirable to set $\lambda$ as small as possible. As Sinkhorn’s algorithm utilizes the matrix $e^{-C/\lambda}$, the ratio $(\max_{ij} C_{ij})/\lambda$ cannot be too large, otherwise numerical underflow will be encountered. If underflow occurs, Sinkhorn’s algorithm will fail, as it can only cope with positive matrices. As for $\beta$, the gain in the logistic cost function, it is discussed in the text and demonstrated in the experiments that $\beta$ should be taken as large as possible to best capture the notion of stochastic dominance. Given the discussion in the previous paragraph, there is a tradeoff regarding how large $\beta/\lambda$ can be for the purposes of numerical estimation. This ratio also figures in the number of Sinkhorn iterations required for convergence. As such, a practitioner may first fix a desired value of $\lambda$ (say $0.1$) and set $\beta$ to be large (but not so large as to cause underflow). If Sinkhorn iterations take too long to converge, the user may consider decreasing $\beta$ or increasing $\lambda$. Finally, we underscore that, for the purpose of relative testing of models, the threshold for multivariate entropic FSD, $\varepsilon_0$ does not need to be set. * Q: **How reasonable is it to take consensus with ChatGPT as the ground truth?** A: In the experiments, ChatGPT is being used for the purposes of evaluation as motivated by its use as a human proxy in the context of evaluating the quality of generated natural language [1-4] . It is shown that the ranking obtained via multivariate stochastic dominance on automatic metrics correlates well with the ChatGPT ranking which, in turn, correlates well with human evaluation. We underscore that our approach based on multivariate stochastic dominance is preferable to ChatGPT or human evaluation with regard to the following factors. First, as discussed previously, it requires significantly less computational overhead. Second, there are no upfront monetary costs associated with evaluating the ratio statistic, this can be an important consideration for large-scale comparison tasks. Third, our approach can be run locally thereby eliminating the privacy concerns of exposing sensitive data on APIs running LLM as a judge, such as ChatGPT. * Q: **What notions of dominance can be captured using this methodology? If a practitioner wishes to implement a notion of stochastic ordering tailored to a particular application, can they do so using this methodology?** One limitation of our approach is that all dimensions are treated equally as the same function $h$ is used to compare each dimension. This may limit the utility of this methodology in applications where violations of the order in one particular dimension should be severely penalized (e.g. unsafe responses from LLMs). However, the proposed framework can be adapted to use a cost function of the form $c(x,y)=\sum_{i=1}^d h_i(y_i-x_i)$ (i.e. we prescribed a different cost to violations in each dimension) provided that the $h_i$ satisfy the technical conditions described in the paper. With this, a practitioner aiming to formulate a domain-specific notion of almost stochastic dominance can adopt a data-driven approach to learn a new stochastic order tailored to the user’s preferences by letting the $h_i$ be parametrized costs (e.g. the logistic function) and optimizing over the parameters (in the previous example, the gain). [1] Hada, R., Gumma, V., de Wynter, A., Diddee, H., Ahmed, M., Choudhury, M., ... & Sitaram, S. (2023). Are large language model-based evaluators the solution to scaling up multilingual evaluation?. arXiv preprint arXiv:2309.07462. [2] Jiang, D., Ren, X., & Lin, B. Y. (2023). Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561. [3] Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., & Zhu, C. (2023). G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634. [4] Zheng, L., Chiang, W.L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E. and Zhang, H. (2024). Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36. Pdf: /pdf/6640c893ca552cd96f376a386a07d427abd0c885.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Chinese Inertial GAN for Writing Signal Generation and Recognition
Reject
Summary: The paper presents a novel Chinese inertial generative adversarial network (CI-GAN) designed to generate high-quality training samples for Chinese writing recognition using inertial sensors. The CI-GAN integrates Chinese Glyph Encoding (CGE), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA) to enhance the quality and authenticity of generated inertial signals. The approach addresses the challenge of collecting diverse and extensive training data for Chinese character recognition, showing significant improvements in classifier performance. Strengths: The paper introduces innovative methods in the form of CGE, FOT, and SRA, contributing significantly to the field of inertial writing recognition. The release of a new dataset further enriches the community's resources. Weaknesses: 1.Lack of Detailed Baseline Configuration: The paper compares CI-GAN with a traditional GAN in the appendix, but fails to provide detailed settings for the baseline method. This lack of information hinders the ability to fully understand and replicate the comparative effectiveness reported. 2.Insufficient Comparison with Other Augmentation Techniques: The study does not compare CI-GAN with other data augmentation methods, such as random perturbations. It remains unexplored whether applying random disturbances to the data could also substantially improve classifier performance. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you provide more information about the baseline model and the CI-GAN model used in your study, such as the number of parameters and other configuration settings? 2. Have you considered evaluating how simple data augmentation methods, such as random perturbations, might also significantly improve classifier performance? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed limitations related to the variability of writing styles and the potential impact of environmental factors on sensor data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For weaknesses 1 and Question 1:** The input consists of a 100-dimensional random noise vector and the devised Chinese Glyph Encoding representing the character class, concatenated together to form an input vector. This combined vector passes through a fully connected layer, producing an output of size 256, followed by ReLU activation and batch normalization. The output is then reshaped into a tensor of shape (256, 1), which undergoes a series of 1D transposed convolutional layers. The first transposed convolutional layer uses 512 filters with a kernel size of 7, applying 'same' padding, and is followed by ReLU activation and batch normalization. Subsequent layers reduce the number of filters to 256, 128, 64, and 32, each with a kernel size of 5, maintaining batch normalization and ReLU activation. The final layer consists of 6 filters, corresponding to the six output channels (three for accelerometer and three for gyroscope data). The input to the discriminator consists of six-channel signals (three for accelerometer data and three for gyroscope data). The signal data undergoes processing through four 1D convolutional layers, which increase in filter count (64, 128, 256, and 512, respectively) and employ kernel sizes of 7 and 5. Each convolutional layer uses 'same' padding and Leaky ReLU activation with an alpha of 0.2, coupled with batch normalization to stabilize the training process. The final convolutional output is flattened and passed through two separate branches. The first branch, designated for real/fake classification, includes a fully connected layer followed by a Sigmoid activation function, yielding a scalar probability that indicates the likelihood of the input being real. The second branch, responsible for class label prediction, also comprises a fully connected layer, followed by a Softmax activation function that outputs a vector of probabilities across all potential classes. The field of inertial sensors lacks deep learning-based data augmentation methods, and image-based methods are challenging to directly apply to time-series signals, so we chose cGAN as our baseline, which shares the same generator and discriminator as our model. In fact, our CI-GAN only adds three designed modules: CGE, FOT, and SRA. These modules introduce innovative enhancements—CGE provides semantic guidance by encoding the glyph shape of Chinese characters, FOT ensures feature consistency and prevents mode collapse through forced optimal transport, and SRA aligns the semantic relevance between inputs and outputs. These designs result in significant performance improvements, demonstrating the effectiveness and novelty of our approach. **For Question 2:** Following your suggestion, we supplemented extensive comparative experiments, including 12 data augmentation methods covering five major categories. As shown in the Table below (Table 1 in the uploaded PDF), the results clearly demonstrate that CI-GAN significantly outperforms all other methods. Unlike in the field of image processing, it is challenging for humans to recognize semantic information in signal waveforms through observation or to judge whether the augmented signals are reasonable. Therefore, data augmentation methods specifically designed for images are not well-suited for sensor signals. In fact, there is a notable lack of deep learning-based data augmentation methods for inertial sensors. Our CI-GAN fills this gap and has been adopted and applied in the industry. | Data Augmentation Methods | 1DCNN | LSTM | Transformer | RF | XGBoost | SVM | |---------------------------|-------|------|-------------|----|---------|-----| | **Time Domain** | | | | | | | | Cropping | 15.7% | 9.1% | 7.7% | 12.8% | 16.3% | 9.6% | | Noise Injection | 17.3% | 11.9% | 12.2% | 8.5% | 13.8% | 10.1% | | Jittering | 20.1% | 13.0% | 14.4% | 9.7% | 17.4% | 7.5% | | **Frequency Domain** | | | | | | | | Amplitude and Phase Perturbations | 22.3% | 13.6% | 19.7% | 19.0% | 25.1% | 16.3% | | Amplitude Adjusted Fourier Transform | 32.1% | 20.7% | 25.4% | 27.5% | 35.9% | 19.2% | | **Decomposition** | | | | | | | | Wavelet | 19.9% | 12.1% | 10.6% | 13.8% | 22.6% | 9.5% | | EMD | 24.4% | 17.1% | 20.9% | 17.9% | 23.4% | 12.2% | | **Mixup** | | | | | | | | CutMix | 21.9% | 14.8% | 15.5% | 14.7% | 18.9% | 13.1% | | Cutout | 25.6% | 16.4% | 16.9% | 18.5% | 27.1% | 16.6% | | RegMixup | 41.5% | 27.8% | 36.8% | 38.4% | 45.9% | 30.3% | | **Learning based** | | | | | | | | cGAN | 18.5% | 14.8% | 15.7% | 12.4% | 20.5% | 8.4% | | **CI-GAN (ours)** | **95.7%** | **93.9%** | **98.4%** | **83.5%** | **93.1%** | **74.6%** | We sincerely hope our response addresses your concerns and we will incorporate all your suggested content into the accepted version. --- Rebuttal 2: Title: Hoping Our Response Meets Your Expectations Comment: Thank you once again for your insightful feedback on our paper. We have carefully addressed each of your concerns in our response, particularly regarding the detailed baseline configurations and the comparisons with various data augmentation techniques. We genuinely believe that our work offers valuable advancements in the field, especially given the lack of deep learning-based augmentation methods for inertial sensors. We would greatly appreciate it if you could take another look at our revisions and explanations. Your feedback is invaluable to us, and we sincerely hope that our paper can meet your expectations and contribute meaningfully to the field. Thank you for your time and consideration.
Summary: This paper proposes CI-GAN to acquire unlimited high-quality training samples, alleviating the data scarcity in the inertial signal recognition of Chinese characters. By utilizing these generated data, the performance of recognition models is highly improved. Strengths: - This paper is easy to follow. - The proposed methods may help disabled people. Weaknesses: - The pipeline lacks novelty. The employed technologies are widely used in CV and NLP, and the proposed pipeline merely reuses them for the inertial signal domain without any innovative design. Furthermore, the author fails to cite relevant studies such as [1][2] and does not discuss their differences. [1] Wasserstein GAN (WGAN) [2] Efficient Estimation of Word Representations in Vector Space - The proposed CGE is simply a learnable embedding to represent Chinese characters, lacking innovative design for glyph information. The author introduces GER to enhance the orthogonality of character embeddings but does not provide an ablation study to verify its effectiveness. - The author uses Wasserstein distance in GANs. What is the difference between this approach and WGAN [1]? Additionally, the author proposes using FFM to supervise the signal in feature spaces. These measures are also similar to some works, such as perceptual loss using VGG and identity loss using ArcFace, but the author does not cite these and discuss the difference. - The dataset used for training and testing is too small, which could not effectively verify the effectiveness of the proposed method. Technical Quality: 2 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For weakness 1:** The novelty of the CI-GAN consists of three proposed modules: Chinese Glyph Encoding (CGE), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA). These modules are interdependent, with many structures serving multiple functions. For example, CGE provides semantic guidance for the generator as a condition and imposes constraints on the generated results within the SRA. Similarly, the signal features in SRA also support the FOT module, making the entire architecture compact and sophisticated. Each module also has its innovation. For instance, SRA aligns the relevance between different generation outputs with the relevance between the prompts, significantly reducing hallucinations in the generative model. In June 2024, Nature published an article titled "Detecting Hallucination in Large Language Models Using Semantic Entropy," sharing a similar approach to our SRA. They assess the inconsistency in outputs when the same question is posed multiple times to a large model. Their approach essentially forces the model to give similar outputs for similar prompts. Our SRA goes a step further by ensuring that the relationships between the prompts are consistent with the relationships between the outputs, thereby reducing hallucinations and enhancing the model's practicality and stability. Compared to the CV and NLP fields, the sensor domain lacks deep learning-based data augmentation methods. Most data augmentation techniques for images are not applicable to time-series signals. Our method is pioneering in the inertial sensor domain and has been adopted by a wearable device manufacturer. We apologize for missing some literature, and we will reference the recommended papers and similar work published in Nature in future versions. **For weakness 2:** Unlike general embeddings that capture character meaning, CGE represents character glyph. Inertial sensors capture writing motions containing glyph information, allowing us to construct glyph features of each character during training. To achieve this, we introduce an encoding matrix following the one-hot input and design a Glyph Encoding Regularization (GER) based on Rényi entropy. As GER decreases during training, the Rényi entropy of the glyph encoding matrix increases, leading to two key effects: Each vector's information entropy increases, enabling it to carry more glyph information; The orthogonality between different encodings improves, capturing the differences between glyphs. This design may also benefit other categorical representation tasks by applying a similar regularization term to the category encoding dictionary. Additionally, in CI-GAN, CGE supports the SRA module, helping to alleviate hallucinations in the generative model. Moreover, CGE is also used in FFM to ensure consistency between real signal features, generated signal features, and glyph encoding, where CGE is also directly supervised. The effectiveness of CGE is primarily due to GER, which is why we provided an ablation study of CGE without showcasing the ablation results of GER separately. Following your suggestion, we removed GER while retaining the encoding matrix, reducing it to a learnable transformation from one-hot encoding without additional guidance. This led to a significant performance decline, as shown in Table 2 (in the uploaded PDF), underscoring the critical importance of the Rényi entropy-based regularization in categorical representation tasks. **For weakness 3:** WGAN primarily addresses the overall distribution differences between generated and real samples. In contrast, FOT incorporates a Forced Feature Matching (FFM) mechanism that enhances the realism of the generated signals by aligning their features with those of real samples. Unlike WGAN, FFM imposes an additional constraint, ensuring that the generated samples not only match the real data distribution closely but also maintain consistency in key features. This feature-specific matching is not explicitly achieved in WGAN, which is crucial for signal generation. Unlike images, the quality of signals cannot be readily assessed through visual inspection. Thus, stringent constraints are essential to ensure the reliability of the generated results. Perceptual loss only constrains the consistency between generated and real signals. Identity loss in ArcFace ensures that the generated face images retain the same identity as the real faces. Differently, FFM proposes triple consistency constraints for generative models: prompt, generated signal features, and true signal features, which not only improves the realism of the generated signals but also ensures their SEMANTIC accuracy. Meanwhile, FFM also supervises the glyph encoding, reflecting the interaction between the proposed three modules. We will supplement these analyses and citations in future versions. **For weakness 4:** We recruited two new volunteers, each using their smartphone to write 500 characters in the air, resulting in 1000 new samples. Writing Chinese characters, segmenting, and extracting corresponding signals for each character is extremely labor-intensive. Particularly, the segmentation phase requires optical equipment to precisely mark the start and end times of the signal segments corresponding to each character, which is a highly time-consuming task. Given the excellent performance of CI-GAN, we used all 1000 newly collected samples for testing without retraining. As shown in Table 3 (in the uploaded PDF), the six classifiers performed even better, whose improvement is likely because these smartphones were newer, and the built-in sensors had not aged much, resulting in higher-quality signals. This demonstrates that the classifiers trained with CI-GAN-generated signals can adapt to sensors of different usage times, providing significant convenience for device manufacturers and leading to CI-GAN adoption and application in the industry. Finally, we sincerely hope to receive your recognition. --- Rebuttal 2: Title: Hoping Our Response Meets Your Expectations Comment: Thank you once again for your thoughtful review and valuable feedback on our paper. We have taken your comments very seriously and have provided detailed explanations and clarifications in our response. We understand that assessing innovation can sometimes involve different perspectives, and we sincerely hope that you will consider our explanations, particularly regarding the aspects of novelty that we have highlighted. Our research not only introduces some unique design elements but has also shown promising results in practical applications. We genuinely believe that this work can make a meaningful contribution to both the academic community and industry. If there are any further questions or areas that need clarification, we would be more than happy to discuss them in greater detail. We truly appreciate your time and effort and hope that our work can meet your expectations. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed rebuttal and the additional explanations provided. After carefully considering your responses, I still have concerns preventing me from recommending acceptance. - Lack of Clear Innovation: The CI-GAN framework seems to be a variation of conditional GAN, and the CGE appears to be an upscaled one-hot embedding rather than a novel integration of glyph information. Without clear motivation and experimental validation, the proposed modules show differences but not true innovation. The concept of hallucination introduced in the rebuttal also seems unrelated to the core task, making it difficult to understand its relevance. - Overcomplication with Intuitive Modules: The paper introduces several modules based on intuitive motivations, which makes it hard to identify a central, innovative contribution. The work feels more like a collection of empirical studies than a focused research effort. - Insufficient Experimental Validation: I still find the experiments lacking. For example, there is no visualization of the generated handwritten characters or an assessment of their diversity, which would be critical in evaluating the effectiveness of CI-GAN. I hope these points help you refine your work in the future. --- Reply to Comment 2.1.1: Comment: Thank you for providing additional feedback. We would like to address the remaining concerns you have raised. 1. Lack of Clear Innovation: We understand that you perceive CI-GAN as a variation of c-GAN, but if we follow this logic, c-GAN itself could also be considered a variation of GAN. By this reasoning, all models based on the GAN architecture could be seen as mere variations, thereby discounting any innovations in the field. However, we strongly believe that the true measure of innovation lies in the specific adaptations and design made to the base architecture to address unique challenges—in our case, the challenges inherent to inertial signal generation. Regarding the Chinese Glyph Encoding (CGE), it is indeed more than just an “upscaled one-hot embedding.” CGE was designed as a novel way of category encoding that injects more information into the generative process. We devised a Renyi entropy-based regularization applied to a learnable category encoding matrix, significantly enhancing the matrix's ability to represent categorical information. This allows CGE to capture the nuances of Chinese character glyphs, which is a novel approach in the context of signal generation. To support this, we have visualized the glyph encodings of various Chinese characters, demonstrating that characters with similar glyphs are indeed closer in the embedding space. This visualization underscores the effectiveness of CGE in preserving the structural relationships between different characters. Additionally, we are puzzled by your comment that “The concept of hallucination introduced in the rebuttal also seems unrelated to the core task.” Hallucination is a well-known issue in generative models, particularly in scenarios where the generated outputs can deviate from realistic or expected patterns. Our SRA module is specifically designed to mitigate hallucination by ensuring that the semantic relationships between generated signals are consistent with those of the input glyphs, thereby enhancing the realism and reliability of the generated signals. Addressing hallucination is not only relevant but crucial to the success of our generative task. 2. Value of Intuitive Motivations: We respectfully disagree with the notion that intuitive design somehow detracts from the innovation or significance of our contributions. On the contrary, intuitive designs often lead to more effective and impactful solutions precisely because they resonate with the underlying principles of the problem being addressed. Each of the three modules in CI-GAN—CGE, FOT, and SRA—was designed with clear, intuitive motivations, and each one independently offers significant contributions to the field of deep learning. For example: CGE introduces a novel way of category encoding that injects more information into the generative process, enhancing the quality and diversity of the generated signals. FOT establishes a triplet constraint between the input, output, and label, addressing issues such as mode collapse, mode mixing, and the authenticity of generated results. This approach helps to ensure that the generated signals are not only diverse but also semantically accurate and realistic. SRA ensures that the semantic relationships between inputs are maintained in the generated outputs, reducing the likelihood of hallucinations. These modules are not merely empirical tweaks; they represent fundamental advancements that can inspire future research. When integrated into the CI-GAN framework, these modules work synergistically to achieve the first successful generation of inertial sensor signals. This is not just a collection of empirical studies; it is a cohesive and innovative solution to a complex problem that has not been addressed before. 3. Visualization and Diversity: Your comment that “there is no visualization of the generated handwritten characters” seems to overlook the visualizations we provided in the paper. Figure 3 illustrates the generated signals for different Chinese characters, showing how they closely follow the fluctuation trends of real signals. To further emphasize diversity, Figure 4 presents the results for multiple generated signals of the same character, “王,” compared with a real handwriting signal. These visualizations clearly demonstrate that CI-GAN not only generates diverse samples but also maintains the essential characteristics of real handwriting signals. The differences in individual features, while preserving overall trends, validate the model’s ability to produce high-quality, diverse, and realistic samples. We hope this response clarifies the innovations and contributions of our work. We believe CI-GAN represents a significant advancement in the field of inertial signal generation and has the potential to drive future research and applications. We sincerely appreciate your thoughtful consideration of our work and hope that you will recognize the value it brings. Thank you for your time and consideration.
Summary: The paper address an important probem in human computer interaction: making computers accessible to vision impaired people. The paper address this my collection paired data of text and imu signals. First, the paper address the issues of limited data by training a generative model, to resample/bootstrap more data and then train recognition model on both real and generated data to archive high performance. Strengths: the paper addresses an important social problem, and accessibility should be focused on all groups. The data collected for this paper, the paired data on text and imu is very useful, hope the authors will open-source it. paper is well written and the figures are clear and convey the ideas. Weaknesses: My main concern is, that it is very unlikely that we get more than we give to the system, the generated samples are a function of real samples. I would like to see, a competitive baseline with good data augmentation, and maybe on a low data regime gan generated samples are better than augmentation, but this has to be shown, otherwise, I don't see the value of extra effort to train a generative model to get data augmentation. Technical Quality: 2 Clarity: 3 Questions for Authors: please see my concerns about the weakness section. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: I wouldn't say this is a major limitation, but on the scale axis, this problem can be solved by collecting more data. Unlike annotations like explaining an image or video, handwriting signals are more easy to collect on the long term. would be nice if the authors can address this, also please explain the issues with data augmentation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We fully agree with your point: "It is very unlikely that we get more than we give to the system." In fact, what we give to the system is sufficient, as our training data provides multiple writing signals for each Chinese character. In comparison, humans can usually recognize new categories after just one exposure. This suggests that the model already receives enough information; the key is to utilize this information to thoroughly explore and memorize the intrinsic patterns of each Chinese character and then generate reasonable variations. To memorize the patterns of each character, we designed the Chinese Glyph Encoding (CGE) module, which effectively represents the shapes and stroke features of Chinese characters, providing a solid informational foundation for generating new writing signals. To generate reasonable variations, we introduced the Forced Optimal Transport (FOT) and Semantic Relevance Alignment (SRA) mechanisms. FOT addresses common issues in GAN, such as mode collapse and mixing, and establishes a triple constraint involving the prompt, generated signal features, and real signal features, ensuring the generated signals' authenticity and semantic accuracy. The SRA mechanism aligns the relationships of generated signals with the relationships of input Chinese glyph encodings, providing a group-level constraint, which mitigates hallucinations in the generative model, resulting in more realistic and reliable signal samples. Following your recommendation, we employed five major categories of data augmentation—Time Domain, Frequency Domain, Decomposition, Mixup, and Learning-based strategies—encompassing 12 methods for comparison. All methods generated the same amount of samples (15,000) for training six classifiers, as shown in Table below (Table 1 in the uploaded PDF). Notably, except for our proposed augmentation method, the accuracy of classifiers trained using all other data augmentation methods failed to surpass 50%, whereas our method achieved over 90%. Additionally, due to the lack of deep learning-based augmentation methods in the sensor field, we could only compare our approach with cGAN, which performed worse than many non-deep learning methods, underlining the difficulty of designing deep learning models capable of generating accurate and realistic inertial handwriting signals and highlights the value of our CI-GAN. | Data Augmentation Methods | 1DCNN | LSTM | Transformer | RF | XGBoost | SVM | |---------------------------|-------|------|-------------|----|---------|-----| | **Time Domain** | | | | | | | | Cropping | 15.7% | 9.1% | 7.7% | 12.8% | 16.3% | 9.6% | | Noise Injection | 17.3% | 11.9% | 12.2% | 8.5% | 13.8% | 10.1% | | Jittering | 20.1% | 13.0% | 14.4% | 9.7% | 17.4% | 7.5% | | **Frequency Domain** | | | | | | | | Amplitude and Phase Perturbations | 22.3% | 13.6% | 19.7% | 19.0% | 25.1% | 16.3% | | Amplitude Adjusted Fourier Transform | 32.1% | 20.7% | 25.4% | 27.5% | 35.9% | 19.2% | | **Decomposition** | | | | | | | | Wavelet | 19.9% | 12.1% | 10.6% | 13.8% | 22.6% | 9.5% | | EMD | 24.4% | 17.1% | 20.9% | 17.9% | 23.4% | 12.2% | | **Mixup** | | | | | | | | CutMix | 21.9% | 14.8% | 15.5% | 14.7% | 18.9% | 13.1% | | Cutout | 25.6% | 16.4% | 16.9% | 18.5% | 27.1% | 16.6% | | RegMixup | 41.5% | 27.8% | 36.8% | 38.4% | 45.9% | 30.3% | | **Learning based** | | | | | | | | cGAN | 18.5% | 14.8% | 15.7% | 12.4% | 20.5% | 8.4% | | **CI-GAN (ours)** | **95.7%** | **93.9%** | **98.4%** | **83.5%** | **93.1%** | **74.6%** | In practice, collecting, segmenting, and processing these handwriting signals is a challenging task. We need to isolate and extract the segment corresponding to each character from a continuous signal flow, a process that requires identifying each character's precise start and end points, as shown in Figure 1 in the uploaded PDF. These points are not easily identifiable and often require optical equipment for accurate frame-level segmentation and annotation. We invested significant time and effort to obtain the original 4,500 handwriting signals. Our CI-GAN eliminates these difficulties by providing a straightforward method for generating handwriting signals, thereby saving time and resources. Thank you for your insightful comment. We hope we have addressed your concerns. --- Rebuttal 2: Title: Hoping Our Response Meets Your Expectations Comment: I hope this message finds you well. I wanted to take a moment to sincerely thank you for your thoughtful and detailed review of our submission. Your insights have been invaluable in helping us refine our work, and we have made considerable efforts to address each of your concerns thoroughly. In our response, we provided a detailed comparison with various data augmentation methods to highlight the significant value of our CI-GAN approach, which introduces generative deep learning models into inertial sensor data augmentation for the first time. Our results, which include extensive testing and ablation studies, show that CI-GAN not only performs significantly better than other methods but also offers a flexible platform that addresses the specific challenges of the sensor signal domain. We genuinely believe that our approach brings innovation to this field, particularly through the designed Chinese Glyph Encoding, Forced Optimal Transport, and Semantic Relevance Alignment, which together form a cohesive and effective system. These elements work in tandem to ensure not just the generation of realistic signals but also their alignment with the semantic content, which is crucial for practical applications. Thank you once again for your time and effort. We are eagerly looking forward to your response.
null
null
Rebuttal 1: Rebuttal: We sincerely thank the reviewers and the conference chair for their valuable feedback and thoughtful consideration of our paper. First, we want to clarify that collecting handwriting samples of Chinese characters is not easy. During data collection, volunteers wrote different Chinese characters continuously. We had to accurately locate the signal segments corresponding to each character from long signal streams, as shown in Figure 1 in the uploaded PDF. However, accurately segmenting and extracting signal segments requires synchronizing optical motion capture equipment and then comparing the inertial signals frame by frame with the optical capture results to find all character signal segments' starting and ending frames. Consequently, we expended significant time and effort to obtain 4,500 signal samples in this paper, establishing the first Chinese handwriting recognition dataset based on inertial sensors, which we have made open-source partially. By contrast, our CI-GAN can directly generate handwriting motion signals according to the input Chinese character, eliminating the complex processes of signal segmentation, extraction, and cleaning, as well as the reliance on optical equipment. We believe it provides an efficient experimental data platform for the field. Unlike the fields of CV and NLP, many deep learning methods have not yet been applied to the sensor domain. More importantly, unlike image generation, where the performance can be visually judged, it is challenging to identify semantics in waveforms by observation and determine whether the generated signal fluctuations are reasonable, which imposes high requirements on generative model design. Therefore, we had to design multiple guidance and constraints for the generator, resulting in the design of Chinese Glyph Encoding (CGE), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA). * CGE introduces a regularization term based on Rényi entropy, which increases the information content of the encoding matrix and the distinctiveness of class encodings, providing a new category representation method that can also be applied to other tasks. As far as we know, this is the first embedding targeted at the shape of Chinese characters rather than their meanings, providing rich semantic guidance for generating handwriting signals. * FOT establishes a triple-consistency constraint between the input prompt, output signal features, and real signal features, ensuring the authenticity and semantic accuracy of the generated signals and preventing mode collapse and mixing. * SRA constrains the consistency between the semantic relationships among multiple outputs and the corresponding input prompts, ensuring that similar inputs correspond to similar outputs (and vice versa), significantly alleviating the hallucination problem of generative models. Notably, the June 2024 Nature paper "Detecting Hallucination in Large Language Models Using Semantic Entropy," published after our NeurIPS submission, shares a similar idea with our proposed SRA. They assess model hallucination by repeatedly inputting the same prompts into generative models and evaluating the consistency of the outputs. Their approach essentially forces the model to produce similar outputs for similar prompts. Our SRA not only achieves this but also ensures that the relationships between prompts are mirrored in the relationships between the outputs. This significantly reduces hallucinations and enhances the model's practicality and stability. CGE, FOT, and SRA not only guide and constrain the generator but also interact with each other. We added a diagram (Figure 2 in the uploaded PDF) to illustrate their roles and interactions. The Chinese glyph encoding not only provides semantic guidance to the generator but also supplies the necessary encoding for FOT and SRA, and it is also supervised in the process. FOT and SRA share the VAE and generated signal features, providing different constraints for the generator, with FOT focusing on improving signal authenticity and enhancing the model's cognition of different categories through the semantic information injected by CGE, thereby mitigating mode collapse and mode mixing. In contrast, SRA ensures consistency between the relationships of multiple outputs and prompts through group-level supervision, which helps alleviate the hallucination problem of generative models. In summary, the three modules proposed in CI-GAN, CGE, FOT, and SRA are innovative and interlinked, significantly enhancing the performance of GANs in generating inertial sensor signals, as evidenced by numerous comparative and ablation experiments. This method is a typical example of deep learning empowering the sensor domain and has been recognized by the industry and adopted by a medical wearable device manufacturer. It has the potential to become a benchmark for data augmentation in the sensor signal processing field. We sincerely hope we have addressed the concerns of the three reviewers, and once again, we thank everyone for their review and suggestions for this paper. Pdf: /pdf/b06c6399e57ec941dd199687f85c1cf54319643b.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective
Accept (poster)
Summary: This paper focuses on analyzing the expressive power of transformers through the lens of formal languages, inspired by Angluin's approach. Specifically, while traditional unique hard attention transformers (UHAT) on strings are associated with $AC^0$ and regular languages definable in first-order logic, the authors here explore the implications of using data sequences instead of words as input. The main findings are: - UHATs over data sequences with positional encodings fall within $TC^0$, not $AC^0$. - Non-regular languages can be recognized even by masked UHATs without positional encodings. - UHATs can recognize all languages definable in $(LT^2)L$, an extension of LTL with unary predicates and local arithmetic tests over finite windows over the data. Strengths: **S1: Important Contribution** Understanding the expressive power of significant classes of transformers is crucial, given their widespread popularity. Moreover, considering data sequences is a natural and worthwhile extension. The results obtained are both elegant and insightful. **S2: Clarity** For readers familiar with the topic, this is a well-written paper. The authors effectively set the stage and explain most concepts and proof ideas in sufficient detail within the main text. **S3: Elegant Proofs** The results linking UHAT with $TC^0$ involve a well-crafted stepwise reduction using polynomial equations and polyhedral analysis. Weaknesses: **W1: Coverage of $TC^0$** While the characterization of UHAT on data sequences in $TC^0$ is commendable, it does not fully encompass $TC^0$. A precise characterization of the power of UHATs would strengthen the paper. This observation also applies to the other main results: Which logic precisely captures UHAT? **W2: Accessibility** The authors make little effort to make the paper accessible to the broader ML community. As it stands, the paper could have been submitted to any computer science theory conference. Consequently, the results may not be surprising to those communities. **W3: Practical Implications** The paper lacks discussion on the practical implications of the theoretical analysis for transformer design. Coupling the theory with practical insights would enhance the paper's value. In summary, the paper provides significant insights into the expressive power of transformers over data sequences, presenting elegant and theoretically sound results. However, a broader characterization of UHATs' power, increased accessibility for a wider audience, and practical implications would strengthen the paper further. Technical Quality: 3 Clarity: 3 Questions for Authors: Please comment on **W1** and **W3**. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: This has been addressed in a satisfactory way by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q: How much of $TC^0$ is covered?** A: Firstly, Barcelo et al. [3] showed that there is an $AC^0$ language over the alphabet {0,1} that is not in UHAT. The same language is also a witness that UHAT over sequences of numbers (i.e. our setting) cannot even recognize some $AC^0$ language. While it is true that likely not every $TC^0$ language is recognized by a UHAT, we do show in the proof of Prop. 10 that UHAT recognize a $TC^0$-complete language. Thus, the $TC^0$ upper bound is best-possible in terms of complexity classes. We will clarify this in the paper. **Q: Would the paper also fit at a computer science theory conference?** A: We agree that the paper contains many theoretical results and thus would also be a candidate for a general theory venue. However, virtually all prior work on Formal Languages and Neural Networks (FLaNN) has appeared in ML and NLP venues (ICML, ICLR, ACL, EMNLP, TMLR). In addition, we believe that our results specifically clarify aspects of transformers that are of significant practical interests (see our comments on practical implications in the global rebuttal). **Q: Which logic precisely captures UHAT?** A: This is actually still an open question even for the case of finite alphabets. Barcelo et al. [3] showed that first-order logic (equivalently LTL) with monadic numerical predicates (called LTL(Mon)) is subsumed in UHAT with arbitrary position encodings, i.e., the transformer model that we are generalizing in this paper to sequences of numbers. This does not capture the full UHAT, e.g., palindrome. As remarked in [3], the logic can be extended with arbitrary linear orders on the positions (parameterized by lengths), which can capture palindrome. The same can be done for $(LT)^2L$. However, it is possible to show that the resulting logic is still not general enough to capture the full generality of UHAT. That said, although our logic does not capture the full UHAT, it can still be used to *see at a glance* what languages can be recognized by UHAT (see an example in response in global rebuttal). There could perhaps be a hope of obtaining a precise logical characterization if we *restrict* the model of UHAT. The recent paper [2] showed that LTL(Mon) captures precisely the languages of masked UHAT with position encodings with *finite image*. It is interesting to study similar restrictions for UHAT in the case of sequences of numbers. We will add the above remarks in the paper. **Q: What are the practical implications?** A: See answer in global rebuttal. --- Rebuttal Comment 1.1: Title: Rebuttal Read Comment: I have read the rebuttal. Thank you for your responses. There are indeed interesting open questions here related to the logic. The "application" related comment serves as a good additional motivation and should preferably be added to the paper.
Summary: The paper studies the computational expressiveness of unique hard attention transformers (UHAT) on formal languages formed over an infinite alphabet. The work is motivated by the application of transformers to time series forecasting where input values can be unbounded. Specifically, the authors assume the language to be formed over tuples of rational numbers. To this end, the paper contributes three novel results: (1) The languages recognized by UHAT belong to the circuit complexity class $\mathsf{TC}^0$ and there exists a language recognized by UHAT which is $\mathsf{TC}^0$-hard. (2) There exists a non-regular language over the alphabet $\mathbb{Q}^d$ that is recognized by a UHAT with past masking and no positional encoding. (3) UHAT with positional encoding recognize all languages expressible in an extension linear-time temporal logic with unary numerical predicates and linear rational arithmetic. Strengths: - To the best of my knowledge, the paper is the first to connect the formal language theory over infinite alphabets with the computation expressiveness of transformers. For unique hard attention transformers, a comprehensive set of results is established. - The paper's results are particularly interesting in the light of existing results for finite alphabets. For finite alphabets, the languages accepted by UHAT are contained in $\mathsf{AC}^0$ and are exactly the star-free regular languages when masking is applied. Therefore the results provide insights into the increase in computational expressiveness due to an infinite alphabet. - The paper’s technical claims are well presented. For each claim, either a full proof or a proof sketch with references to a full proof in the appendix is provided. - Although the paper is densely written the contributions are clear and the paper is easy to navigate. Weaknesses: - There seems to be an important restriction to the first result: because real precision is assumed for UHAT, language inclusion can only be shown for words up to a particular length n. I assume that without this restriction the two classes are actually incomparable. At the same time, the authors seem to suggest in the concluding remarks that some results still hold when assuming rational precision for UHAT. The importance and implications of this choice should be discussed more clearly in the paper. - The paper does not clearly convey the practical implications of the results. The infinite precision of rational numbers can not be represented in real-world systems. Instead, they would be approximated by floating point numbers such that the result of previous work applies. - Minor: It would be helpful if the abstract already states that arbitrary input refers to the rational numbers, to avoid confusion with a real-valued input. Technical Quality: 4 Clarity: 3 Questions for Authors: Do the same results hold for formal languages over the alphabet $\mathbb{Q}$? If yes, why are the results derived in terms of tuples? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have addressed limitations adequately, except for the real precision of UHAT. As detailed in the review it seems that some results only hold for inputs up to a particular length n. The theorems themselves do not state this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q: Is there a restriction to input lengths up to $n$?** A: We do not restrict the language to sequences of length up to $n$. Instead, the definition of the circuit complexity class $TC^0$ is that there is a family of circuits: namely one circuit for each input length $n$ (with Boolean and majority gates, arbitrary fan-in, bounded depth, and size polynomial in $n$) that recognizes exactly the input strings of that length $n$. Thus, providing a construction for each input length $n$ is exactly what is required to prove that the entire UHAT language belongs to the class $TC^0$. In particular, the results are shown exactly as stated. **Q: Which results continue to hold in the setting with rational parameters in the UHAT?** A: What we mention in the conclusion is that many of our results not only clarify the setting of real precision in the transformer parameters, but also the case of rational-precision parameters: The $TC^0$ upper bound continues to hold (because rational precision is a restriction). Moreover the $TC^0$ lower bound also still holds, because our lower bound proof (Proposition 10) yields a UHAT with rational parameters. Likewise, the example of non-regularity is already available for rational precision. **Q: Do the same results hold for formal languages over the alphabet $\mathbb{Q}$? If yes, why are the results derived in terms of tuples?** A: Yes, all our main results (Theorems 1,2,3) also hold for languages over the alphabet $\mathbb{Q}$ (in other words, when $d=1$). Let us elaborate on this. This is trivial for the $TC^0$ upper bound in Theorem 1 and for Theorem 3. Moreover, our lower bound results (the non-regular example in Theorem 2, and the non-containment in $AC^0$ in Theorem 1) hold in the case of $d=1$: For the $TC^0$ lower bound (and non-containment in $AC^0$) for UHAT with positional encoding, one can use our result in Section 5 to see that the language of all length-2 sequences $(r,s)$ over $\mathbb{Q}$ with $r>s$ is accepted by a UHAT with positional encoding. The reason we consider the more general case of tuples is two-fold: (1) this is standard in the literature on formal languages expressiveness of UHAT: Each token is encoded by a vector of real numbers, (2) time series in general is a sequence of tuples of numbers (e.g. for stock application, a position in the sequence could be associated with max/min and entry/closing prices for the day, as well as other information like trading volume on that day). **Q: What are the practical implications?** A: See answer in global rebuttal. **Q: The infinite precision of rational numbers can not be represented in real-world systems. Instead, they would be approximated by floating point numbers such that the result of previous work applies.** A: Firstly, our result implies also that UHAT with only floating point numbers in the input is also contained in $TC^0$ and therefore is efficiently parallelizable. Secondly, although we end up with a finite alphabet if we assume a finite set of floating point numbers, this finite alphabet is extremely large ($2^{64}$ or sometimes $2^{128}$ or more in modern computers). Treating them as finite alphabets and using constructions for UHAT over finite alphabets yield extremely large $TC^0$ (in fact $AC^0$) circuits, i.e., of size at least $2^{64}$ or $2^{128}$. This is because the finite-alphabet constructions assume the alphabet size to be *constant*, meaning the actual size does not impact that complexity analysis. This is analogous to representing boolean circuits/formulas by their lookup tables, or solving games like chess/go by precomputing lookup tables (because there are finitely many configurations), none of which are realistic settings. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal, in particular for clarifying on the input length $n$. This was indeed a misunderstanding on my part.
Summary: This paper studies the expressive power of transformer encoders with leftmost-hard attention and strict past-masking, where the input is not a sequence of symbols from a finite alphabet, but a sequence of rational vectors. The three main results are: 1. Transformers (under the assumptions above, with or without position encodings) are in TC0 but not in AC0. 2. There is a UHAT that recognizes a non-regular language (that is, cannot be accepted by a finite automaton whose transitions are labeled with arithmetic constraints instead of symbols). 3. Every language definable in locally testable linear temporal logic (that is, linear temporal logic where an atomic formula can check a linear constraint on the current “symbol” and k lookahead “symbols”) is recognizable by a UHAT with position encoding. Strengths: Theorem 1: This proof (because the transformer weights are real) looks ambitious and interesting, although I was not able to check the details. Theorem 2: This proof looks correct. Theorem 3: Although I didn’t check every line of this proof, I definitely agree that the result is correct. Weaknesses: Theorem 1: I didn’t exactly understand why the inputs are rational vectors but the parameters are allowed to be real. Appendix A shows why allowing reals makes a difference, but: (a) It doesn’t explain why the difference is important. (b) It doesn’t show that language (2) requires real parameters, only that a particular UHAT requires real parameters to recognize language (2). (c) One could make exactly the same argument about UHATs for strings over a finite alphabet. With rational weights, I take it Proposition 9 would be fairly easy. How do you define Boolean circuits that take sequences of rational vectors as input? This is hinted at in line 273 but should be spelled out more explicitly for Proposition 10 and its proof to be clear. Theorem 2: I think it would be helpful not to reuse variable names. Theorem 3: Just a minor comment on the phrase “logical language” in the title of Section 5. I understand a “logical language” to be the syntax of a logic, not the set of strings models of a logical sentence, so I find this phrase confusing. Technical Quality: 4 Clarity: 3 Questions for Authors: Could you explain why it's important for the parameters to be real, not just rational? How do you define Boolean circuits that take sequences of rational vectors as input? Could you expand on the significance, in this setting, of the classes AC0, regular, and TC0? Perhaps it would be helpful to give some examples of languages that do/don’t belong to these classes and expand on the practical implications. You allude to the language at lines 321-324 being important, but don’t explain why. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q: What is the role of past masking in the paper?** A: Our paper mostly does *not* use masking, but instead permits arbitrary position encodings. This follows the classic model of UHAT formalized by Hao et al. [18], and has been used in various papers (e.g. see [3]). We used masking to obtain a stronger bound in the paper (e.g. Theorem 2). More precisely, masking has been used in several papers because it is a very mild version of position encodings (e.g. in the recent paper [2]). In particular, masking (future/past) can be easily simulated by using arbitrary position encodings. Without masking and position encodings, transformers cannot tell the ordering of the elements in the input sequence [27]. We show in Theorem 2 that UHAT with "very mild position encodings" (namely, past masking) can already recognize non-regular languages. This is in stark contrast to the results over finite alphabets, where UHAT with masking recognize only regular languages. **Q: Why is it important to have rational inputs, but real parameters?** A: The use of rational numbers in the input sequence is standard when studying symbolic computation involving reals because (1) they can represent, among others, real numbers with finite precision, and (2) we cannot represent all real numbers by finite means. We use real parameters in the specification of transformers (e.g. in the position encodings, in the specification of affine transformations, etc.) for two reasons. Firstly, the purpose of this paper is *not* to create a new theory from scratch, but rather to extend existing works. In particular, the classic model of UHAT over finite alphabets [18] permits arbitrary real-valued parameters for affine transformations, position encodings, etc. See also the recent survey by Strobl et al. [33]. Such usage of real numbers in the formal model of transformers has been argued by the need of practical applications of transformers that also employ real-valued functions in position encodings like sin/cosine. Secondly, many of our results (e.g. that UHAT is in $TC^0$ and there is a UHAT for a $TC^0$-hard language) apply also to the more limited setting with only rational parameters. **Q: Why does Appendix A not show that any UHAT for language (2) requires real parameters?** A: The point of Appendix A is *not* to show that real parameters yield more expressiveness than rational parameters (for this one can argue in a different way, see the next question). The point of Appendix A is to illustrate the difficulty of obtaining Theorem 1: It demonstrates that proving a $TC^0$ upper bound cannot be done by showing that each real parameter in a UHAT can be replaced by a rational one while preserving recognized input sequences (even those of length 3). We overcome this difficulty by showing that one can translate the UHAT into a carefully chosen data structure (polynomial constraints with alternation bounds). Here, the key observation is that in polynomial constraints, one *can* sufficiently approximate all real parameters by rational numbers. **Q: Why are UHAT with real parameters more expressive than UHAT with rational parameters?** A: Here is a simple example: For every real number r, one can easily build a UHAT that recognizes all sequences of length 1 and with d=1 (i.e. every accepted sequence consists of a single rational number) such that a number x is accepted if and only if x>r. This yields a different language for each of the uncountably many choices of r. However, there are only countably many languages with sequence length 1 and d=1 recognized by rational-parameter UHAT (since the set of rational numbers is countable). Thus, there must be a number r for which our real-parameter UHAT has no rational-parameter equivalent. We are happy to add this to the paper. **Q: One could make exactly the same argument of Appendix A for strings over a finite alphabet.** A: Yes, exactly: The example in Appendix A shows that also for finite alphabets, one cannot sufficiently approximate the real parameters in the UHAT by rational ones. However, in the finite-alphabet case, this does not add to the difficulty, because there is no need to replace real numbers by rationals at all: Since the alphabet is finite, there is only a finite set of intermediate values. This means, all computations can be made symbolically using a fixed table, because one only needs to distinguish the finitely many possible values. In other words, yes, Appendix A shows in particular that a naive approach to achieving sufficient precision (which is needed in the infinite-alphabet case) would already fail in the finite-alphabet case (but it is not needed there). **Q: How do you define Boolean circuits with rational vectors as input?** A: The rational vector is encoded as a string, where each rational number is encoded as a pair of binary-encoded integers. We will make this more explicit. **Q: Significance of $AC^0$, regular and $TC^0$ in the context of UHAT over sequences. Practical implications?** A: See answer in global rebuttal. --- Rebuttal Comment 1.1: Comment: Thanks for your responses!
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their in-depth reviews and useful feedback. There are some common questions with regards to practical implications of the results, which we will address here. Other questions are addressed directly to the reviewers. **Q: What are the practical implications?** A: We believe that the results in the paper have several interesting practical implications and applications. We mention them below, and will add them in the paper. Firstly, that UHAT over sequences of numbers are still contained in $TC^0$ provides a justification that over sequences of numbers (be they represented as rationals or floats) are still efficiently parallelizable (more precisely, constant-time parallel complexity). This is in stark contrast to Recurrent Neural Networks, which are in the class $NC^1$ and so have logarithmic parallel complexity, and so are not as efficiently parallelizable. Note that $TC^0$ is contained in $NC^1$ (containment is not known to be strict) and that RNN can recognize $NC^1$-complete languages. Secondly, our $TC^0$ bound can be used to understand possible limitations of UHAT in expressing various concepts over sequences of numbers. In particular, some numerical analysis concepts (e.g. SQRTSUM – which we mentioned in the paper on line 54 – determinants, and permanents) might be difficult to capture. Thirdly, our logic $(LT)^2L$ provides a declarative language for a sufficiently large subset of UHAT. It can be used to show that some important concepts in time series can precisely be captured using UHAT. For example, take the concept of 7-day Simple Moving Average (7-SMA); this can be generalized to larger sliding windows of 50-days, or 100 days, which are often used in finance. Using $(LT)^2L$, it is easy to show that the following notion of "uptrend" can be captured using UHAT: sequences of numbers such that the value at each time t is greater than the 7-SMA value at time t. The $(LT)^2L$ formula for this is $G( X^7\top \to \varphi(x_1,\ldots,x_7))$ where $\varphi(\bar x)$ is the formula $7x_7 > \sum_{i=1}^7 x_i$. **Q: Significance of AC0, regular and TC0 in the context of UHAT over sequences.** A: That UHAT over sequences of numbers can recognize non-$AC^0$ and non-regular languages, and that UHAT are in $TC^0$ entail that the model is sufficiently powerful in performing arithmetics over supplied numbers in the input sequence, which is crucial for many applications involving sequences of numbers (e.g. time series). The latter also entails efficient parallelizability. In particular, $AC^0$ is known to have limited counting and arithmetic ability, e.g., multiplications and PARITY (the number of occurrences of a certain element in the sequence is even). *Regular languages* over sequences of numbers (i.e. well-known notion captured by symbolic automata discussed in the CACM paper [10]) have limited abilities in comparing numbers at *different positions* in the input sequence. That UHAT contains non-regular languages entails that UHAT can compare numbers at different positions in the input sequence. Finally, the circuit complexity class $TC^0$ equips $AC^0$ with the power of performing arithmetics. That UHAT over sequences of numbers capture $TC^0$-hard problems like multiplications shows the expressive power of UHAT in performing arithmetics over the numbers in the input sequence. In addition, $TC^0$ is associated with the class of languages that are efficiently parallelizable (i.e. constant time parallel complexity), as we already remarked above.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Scaling laws for learning with real and surrogate data
Accept (poster)
Summary: This work addresses the challenge of augmenting limited data with more accessible surrogate data to improve generalization. The authors proposed a weighted ERM approach for integrating surrogate data into training and analyzed its performance under various statistical models. It was shown that incorporating surrogate data provably reduces test error, even when the surrogate data is far from the original ones. The paper introduced a scaling law that predicts the optimal weighting scheme and the amount of surrogate data to use, supported by both theories and experiments across different domains. Strengths: - The effect of weighted ERM on generalization is theoretically analyzed on various statistical models. - The discussion on the connection with the Stein paradox through the toy Gaussian mean estimation example is insightful and provides a nice intuition for why incorporating surrogate data can be beneficial. - The theoretical results are well-supported by experiments in different domains. Weaknesses: Weighted ERM is a simple and arguably the most intuitive way to incorporate surrogate data. This is a strength rather than a weakness on its own. However, given the ubiquity of such methods, the discussion on related works could be more comprehensive. - For example, in addition to synthetic data from generative models, data augmentations also seem to fall into the category of surrogate data considered in the paper. The results on sample complexities of data augmentation methods could be relevant. - Another potentially related topic is distributionally robust optimization. Despite the different optimization problems, the idea of finding a near-optimal weight for different groups of samples shares a similar high-level goal. - Instead of "introducing weighted ERM", the contribution of this work seems to be more about providing a theoretical underpinning for the effectiveness of weighted ERM with an insight on the choice of weight. This could be made clearer in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: - In Eq (4), is there any reason for using $\approx$ instead of $\asymp$? - In Theorem 1 and 2, how to interpret $\beta$ intuitively, the difficulty of the learning problem? - In the Sobolev class example (line 161), what's the particular reason for assuming $n=Q^d$? Is this essential or just for simplicity? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: - Some limitations are listed in the Weaknesses section. - It would be helpful to provide more intuitions for Theorem 3 given the complexity of the notations and the distinct form of the result in the asymptotic regime (cf. the clear sample complexities in Theorems 1 and 2). Remark 3.3 on the connection with the scaling law provides some insights but doesn't seem to be clear enough. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Essentially all weaknesses pointed by the referee concern literature review. We will expand the comparison with literature. Regarding the specific suggestions provided: 1. Data augmentation. We feel this is a significantly different setting. In data augmentation, the new data is typically obtained by perturbations/transformations of the original one. As such, the dependence between original and surrogate data need to be carefully modeled. 2. Distributionally robust optimization. This is indeed an interesting connection that is worth exploring. 3. We agree that “introducing weighted ERM” is not accurate. We will replace it with “investigating weighted ERM” or similar. 4. We agree that the content of Theorem 3 is not easy to parse. We will complement it with some simplified/approximate expressions that are easier to parse and interpret. Questions: Q1: The symbol $\asymp$ indicates equivalence up to (possibly large) multiplicative constants. In several of our mathematical examples (and in simulations) constants seem to be close to 1 (possibly with some small additive error). We use $\approx$ to indicate that we interpret the scaling law in this stronger sense. Q2: $\beta$ is a scaling exponent that quantifies how the test error scales with the sample size. Scaling laws have been widely popular among AI practitioners and in those cases this exponent is fitted to empirical data. In these examples we compute the scaling exponent explicitly. Q3: Indeed, we assume $n=Q^d$ for mathematical simplicity. It is standard to assume such “regular designs” in nonparametric regression because this simplifies the analysis, without changing the final result. --- Rebuttal Comment 1.1: Comment: Many thanks for the clarifications. My concerns are addressed. Conditioned on a better presentation and a more comprehensive literature review, I will raise my score and vote for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you, we will do our best to improve the presentation and comparison with earlier work, as recommended.
Summary: This work investigates the effects of augmenting training datasets with lower quality data, under a weighted ERM scheme. It is shown that introducing this surrogate data to optimally weighted training can improve predictive performance on the original data distribution, as measured by test error, even when the surrogate data is unrelated to the dataset of interest. A corresponding scaling law is derived, giving insight into both the optimal weighting and the amount of surrogate data that can be used. Numerical examples are provided. Strengths: The paper is generally well written. A substantial numerical study is provided. Weaknesses: The theory of this paper is developed for a squared loss function. However, most of the numerics are done on classification tasks without mention of this discrepancy. This should be addressed, with justification provided if possible. The role of the regularizer is barely mentioned, however it appears to play a crucial role in the relatively clean form of the relevant quantities given in the result statements. Analysis of sensitivity of the results to the regularization parameters would seem important and be appreciated. Further, growth estimates on its spectrum do not match the usual ridge regression setup (i.e. setting $\Omega$ to be the identity) in a way that's independent of parameter dimension. This makes the assumption highly artificial, and somewhat redundant (compared to just using the largest eigenvalue), and impacts the statement of the theorems. Clear discussion on the rationale for a scaling law and its specific form is not given. Technical Quality: 2 Clarity: 3 Questions for Authors: Can you please address the above weaknesses? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: While the phenomena is framed as analogous to a regularization technique, I do not feel that adequate care is taken to ensure that non-expert readers are not encouraged to add junk data to their training regimes without adequate care being taken. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We agree that it would be important to generalize our theory to classification loss. At the same time, a substantial number of results suggest that the two settings are not as different in high dimension. Among others 1. Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, and Anant Sahai, Classification vs regression in overparameterized regimes: Does the loss function matter?, Journal of Machine Learning Research 22 (2021), 2. Montanari, A., Ruan, F., Sohn, Y. and Yan, J., 2019. The generalization error of max-margin linear classifiers: Benign overfitting and high dimensional asymptotics in the overparametrized regime. These papers show that the risk of max margin classification is either identical (paper 1) or very similar (paper 2) to that of least square regression in high dimension. We also would like to point out that some of our simulations are for square loss and some for classification loss. The results are qualitatively very similar. We also agree that investigating the sensitivity to the regularization value is an important research question. At the same time, we would like to point out that: 1. Theorems 1 and 2 keep holding (up to constants) if the regularization parameter is changed by a multiplicative constant. 2. Theorem 3 captures the entire dependence on the regularization parameter. We did not explore this dependence in the plots presented in the manuscript, but we can do it straightforwardly (it is just a matter of changing a parameter in a function that evaluate the formula). The setting in Section 3.1 is really motivated by nonparametric regression and other nonparametric settings (eg the Sobolev regression problem of Section 3.2). In these cases, the theta_k are Fourier or wavelet coefficients and $\Omega$ encodes the prior smoothness information about the signal. For instance, cubic smoothing splines correspond to the case $\Omega_k$ proportional to $k^4$. The reviewer mentions that the paper does not take “adequate care to ensure that non-expert readers” do not “add junk data to their training”. We would like to emphasize that, to the contrary, this is exactly the problem that we address in our paper. If the weight \alpha is optimized through a consistent procedure (eg cross-validation), then the resulting learning procedure is very robust to the use of bad surrogate data. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their clarification. I do realize that these losses behave similarly, however in a paper such as this I think it should be addressed when presenting theory and should be explicitly mentioned in the paper. The authors have briefly commented regarding the regularization parameter, setting of Theorem 3.1, and my concerns about partitioners blindly using dummy data. However, they have not provided any indication on whether they intend to address these concerns in their paper, and if so, detailed discussion on how the changes to the document, its framing, and experiments will be made. Without this, I cannot adjust my score. --- Reply to Comment 1.1.1: Comment: Thanks for the comment. Concretely, we plan the following changes. To address the generalization to other loss functions: 1. We will add a discussion of classification loss, referring to the papers mentioned in our rebuttal. We will draw explicit consequence of these results for the surrogate data setting. 2. We will plot an empirical comparison of classification and regression losses. For this plot, we will consider one example with binary labels, and fit either cross-entropy loss or least square loss, plotting the resulting test error versus alpha in the two cases. 3. We have mathematical results on classification loss for under-parametrized settings in which the sample size is much larger than the number of parameters. These are currently given in Appendix B. We will instantiate these general results for classification, and state these consequences in the main text. To address readers blindly adding bad surrogate data: 1. We will add a new subsection to Section 1, to discuss the practical use of the scaling law. (This will expand the paragraph beginning on line 93.) 2. In this subsection we will outline an explicit flowchart of how practitioners can use the proposed weighting scheme and scaling law. 3. We will add a figure demonstrating the evolution of test error with added surrogate data using our approach, and a naive approach (comparing for “good” and “bad” surrogate data). This figure should clarify that if ‘bad’ surrogate data are added to the training in a naive way, it can dramatically hurt the model test error. On the other hand, if the same data is added with weight that is optimized via cross-validation, then this effect is significantly mitigated and potentially reversed (“bad” data can help). 4. The same figure will be used to illustrate the robustness of the method with respect to changes in the regularization parameter. Finally, we will add motivations for the setting of Theorem 3.1 by connecting it to classical nonparametric regression models. --- Rebuttal 2: Title: discussion Comment: Dear Reviewer 3jgb, Thank you very much for submitting your review report. The author(s) have posted responses to your review. Could you kindly provide comments on whether your concerns have been adequately addressed? AC
Summary: This study addresses the challenge of computational scenarios where true data are scarce, and surrogate data are available to assist in building statistical models. The authors provide novel theoretical and empirical insights demonstrating that training models with both true and surrogate data, with appropriated selected coefficient $\alpha$, could improve empirical risk reduction on test sets, even when the surrogate data is unrelated. Strengths: - The scaling law claim is clearly presented and supported by theoretical results across various contexts, including Gaussian sequence model, non-parametric regression, low-dimensional asymptotic, and high-dimensional linear regression. This breadth of application suggests the claim's novelty and relevance. - The link established between training with surrogate data and the Stein paradox is both novel and intriguing, potentially influencing future work in data augmentation, multi-fidelity modeling, and out-of-distribution training. - Theoretical and empirical results are robust and well-articulated, lending significant credibility and depth to the findings. Weaknesses: - The discussion is limited to certain loss functions, specifically $\lVert \theta_\ast - \theta\rVert^2$ and $\lVert f_\ast - f\rVert^2_{L^2}$. Extending these results to include more common loss functions like L-1/L-inf norms or classification-driven losses could broaden the applicability of the findings. - Some notations are unclear or undefined, which could hinder understanding for readers unfamiliar with the symbols or concepts. For example, the symbol $\asymp$ used in Line 152 and $\Lambda(\alpha)$ in Line 207 require definitions. - Some minor errors: - Line 50 introduces the test error $R_\text{test}$ but this subscript is not used in the following context, e.g. Equation (4). Please make sure the test error notations are consistent; - Line 226: $R_\ast=R(\hat\theta_{0, \infty}(0))$ should be $R_\ast=R(\hat\theta_{\infty, 0}(0))$ Technical Quality: 4 Clarity: 4 Questions for Authors: - Equation (4) appears to lack a variable that quantifies the "closeness" between the surrogate and true data, such as the $\gamma$ parameter defined in Theorem 3. Could the authors clarify the absence of such a metric in this model formulation? Note: I realize that $R_\text{su}^\text{ex}(\infty)$ can represent the deviation from surrogate data to true data (indirectly). But let's keep this in case others share similar questions. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitation is included and no negative societal impact should be considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Thanks for the positive feedback! 2. We agree that studying classification and other losses would be an important next step. 3. Thanks for pointing out the typos. We will correct them in the final version. We will also add the definitions of the symbols/concepts pointed out by the reviewer. 4. The “closeness” enters implicitly in the first term on the right-hand side of Eq (4), namely the excess population error of training on (infinitely many) surrogate samples. In this formula we assume that the population error of training on (infinitely many) original samples is 0. Hence this term is non-negative and equal to 0 if and only if surrogate distribution coincides with the original one. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. Good luck!
Summary: This is a technical report on predicting training loss for a model trained on a weighted combination of real (in-distribution) and surrogate (out-of-distribution) data. The report proposes a parametric function for predicting how the training loss scales with the amount of real and surrogate data. The report first conducts theoretical analysis with a number of standard models on stylized cases, such as Gaussian sequence model, high-dimensional linear regression, etc., and derivated conditions for the proposed scaling relationship to hold. The report conducts empirical validation of the proposed scaling law on a range of classic tasks such as Sentiment analysis for the IMDB dataset, ResNet for CIFAR-10 classification, etc., and shows the training loss consistent with the proposed scaling function. Strengths: The paper adopts the format of a technical report rather than a typical research paper. The structure of the report is clear. The problem is clearly introduced and formulated at the beginning of the paper. Abundant theoretical analyses are provided and a range of models and cases are considered. Experiments are diverse. Weaknesses: I'm not sure whether this technical report format is suitable for a NeurIPS paper. The lack of context or background is not a critical problem, though the lack of context may make it less accessible to broader audiences. What I found most concerning is the complete lack of comparison with existing works and a serious literature review. The practice of data augmentation or/and using synthetic data from generative models/simulation, etc. is prevalent in the field and has a long history. This also seems not also the first paper to propose the idea of weighted training with real and surrogate data. It is essential for any paper contributing to this topic to first 1. organize and list previous efforts on this topic; 2. introduce the current progress and research landscape; identify and justify there exists a valid research gap; 3. how this paper moves forward compared to previous efforts. All of these are missing from this paper. Section 1.3 simply listed a number of potentially relevant references without commenting on each one or explaining how they relate to or are different from this paper. It is not actually informative. --- The problem definition does not appear rigorous. Though problem formulation and notations come early on in the paper, many are not clearly introduced or defined. Some vague arguments mismatch with the math-heavy writing style of the paper. For example, in the abstract, the authors emphasized > "Integrating surrogate data can significantly reduce the test error on the original distribution. Surprisingly, this can happen even when the surrogate data is unrelated to the original ones. " The word 'unrelated' makes it a very strong argument. Surprisingly, I did not find it clearly defined in the main paper. Actually, I don't think learning with unrelated surrogate data will always reduce model loss. Imagine training the model with mislabeled data. It will only degrade the validation performance. My guess is what the authors are trying to say might be "training the model with not directly related data may also contribute positively due to regularization effect or knowledge transfer". But this is well-known to the community. So I am confused. Technical Quality: 2 Clarity: 1 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The work does not have a standalone discussion for its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We agree: we will expand the related-work discussion. At the same time, part of this criticism is unfair. The distinction between “technical report” and “research paper” depends on the subcommunity. Mathematical papers tend to be more technical and less focused on positioning. Review: ‘The word 'unrelated' makes it a very strong argument. Surprisingly, I did not find it clearly defined in the main paper.’ Response: The abstract and introduction try to convey our results in an intuitive way. This sentence means that the surrogate data have a distribution very different from the original one. In Section 2 we give a very simple mathematical example in which this claim is formalized and formally proven. Subsequent sections provide more sophisticated examples. So the claim is both rigorously stated two pages after it is introduced, rigorously proven, and, judging from the referee report, very surprising. Review: Actually, I don't think learning with unrelated surrogate data will always reduce model loss. Response: Actually Section 2 already gives a toy example in which adding surrogate data (and proper data-driven weighting) will yield a non-zero improvement, regardless of the distance between original and surrogate data! In practice, this improvement can be negligibly small, when the distribution of surrogate data is very different from the original one. However, we find that the improvement is often non-negligible in practice. Further, a data-driven weighing strategy yields significant improvements even in cases where the common practice of naively mixing real and surrogate data with equal weight hurts rather than helps. Review: The work does not have a standalone discussion for its limitations. Response: Section 5 is a standalone discussion of the paper’s limitations. --- Rebuttal 2: Title: discussion Comment: Dear Reviewer XfMG, Thank you very much for submitting your review report. The author(s) have posted responses to your review. Could you kindly provide comments on whether your concerns have been adequately addressed? AC --- Rebuttal Comment 2.1: Title: Thanks for the responses. Comment: Thanks for the responses. I have seriously re-read the manuscript, attempted derivations myself, and read additional references. I admit that some of my previous reviews are inaccurate (on the evaluation loss and training with "unrelated data"). The quoted theories are indeed interesting. I like the idea of this paper. I feel the limitations are these stylized cases, generalized from Stein's phenomenon, still have gaps from practical use cases. For example, the paper will greatly improve its impact and attractiveness to readers beyond theory people if it can connect to commonly used practices such as regularization, data augmentation, etc. It is unclear whether the currently proposed scaling laws are practically useful. For example, a number of papers exist studying the problem of mixing data from different distributions to improve model performance, providing scaling functions with high accuracy and applying them to broad use cases including foundation models. E.g., [Model performance scaling with multiple data sources] [Performance scaling via optimal transport: Enabling data selection from partially revealed sources] [Data mixing laws: Optimizing data mixtures by predicting language modeling performance] Do the authors think the proposed methods are practically competitive compared with these works? My current opinion is [weak reject] for the presentation issue outweighs the merit of the theoretical results. --- Reply to Comment 2.1.1: Comment: As witnessed by the references provided by the reviewers, adding surrogate data to training is a practice of great current interest in itself. This is different from data augmentation, which uses transformations of the original data, and from regularization, which does not use surrogate data. We will emphasize the relation to and distinction from these lines of work in our revision. The referee also mentions three related works. All of these papers are very different and cannot be compared to ours. The most important difference is all of these papers use vanilla ERM instead of weighted ERM. As a consequence, if the surrogate data sample is sufficiently large, the effect of surrogate data will drown the original data, and the resulting model will perform poorly. In other words, if we compare our approach to theirs in a setting in which the surrogate dataset is significantly larger than the original one, our approach will outperform theirs by a multiplicative factor that can be arbitrarily large. (Of course, the naive solution to this problem is to add only a small fraction of surrogate data, but this is suboptimal.) Overcoming this fundamental problem is the very starting point of our work. Hence our work takes the next step beyond what is accomplished in these papers. We remark the following additional differences: 1. Hashimoto, T., 2021, July. Model performance scaling with multiple data sources. This paper derives a low-dimensional scaling result for data mixtures. However, the settings studied are such that the exponent is always equal to one. 2. Kang, F., Just, H.A., Sahu, A.K. and Jia, R., 2024. Performance scaling via optimal transport: Enabling data selection from partially revealed sources. This paper is entirely empirical. It assumes without mathematical justification that the error is an affine function of the Optimal Transport distance. Coefficients are fitted to this postulated relation without providing insights into how the scaling behavior depends on the mixture proportion. The only theorem assumes that the postulated scaling law holds exactly, which is obviously unrealistic. In conclusion, the real problem addressed in this paper is actually very different from ours. Given a certain postulated law, they optimize the proportion (which they do by gradient descent). Our work is about deriving a correct scaling law. 3. Ye, J., Liu, P., Sun, T., Zhou, Y., Zhan, J. and Qiu, X., 2024. ‘Data mixing laws: Optimizing data mixtures by predicting language modeling performance’. This paper was posted on arxiv on March 24. As such, it is concurrent or follow-up work. Also, our work implies that the “mixing law” they suggest does not hold in any of the models that we can analyze mathematically. (The law they propose in Eq (1) is exponential in a linear combination of the proportions, while we prove a different relationship.)
null
NeurIPS_2024_submissions_huggingface
2,024
Summary: In the context of linear regression, this paper proposes to complement training data with surrogate data (generated / synthesized, or from an unrelated task / domain). For an optimal combination of real and surrogate data, new scaling laws are derived which show that that surrogate data can can help reduce the text error. Strengths: The paper is very well-written. The context and motivations are clearly presented. The paper makes nontrivial contribution to understanding commonplace practice in training large models in modern AI era: namely, the use of synthetic or surrogate data. Weaknesses: NaN Technical Quality: 4 Clarity: 4 Questions for Authors: - In practice, how is the optimal value of the weighting paramater $\alpha_*$ to be set ? Is there a consistent estimator for it which only depends on the input training data ? Without such a prescription, the proposed scheme might fall short of practical usefulness. - My understanding is that the proof of the main result uses the Gordon comparison theorem. If a nontrivial covariance matrix for the features $x$ is introduced, major modifications might be required, or an entirely different approach (e.g via RMT) might be required. I'd be glad if the the authors can comment on this. - I wonder what this work could tell us about the "model collapse" phenomenon [1,2,3], where a model trained on synthetic data eventually breaks down (becomes biased towards a trivial / useless estimator). [1] Shumailov et al. "The Curse of Recursion: Training on Generated Data Makes Models Forget" (2023) [2] Alemohammad et al. "Self-Consuming Generative Models Go MAD" (2023) [3] Dohmatob et al. "A Tale of Tails: Model Collapse as a Change of Scaling Laws" (ICML 2024) Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The setup considered is extremely: linear regression with Gaussian data with no covariance structure (i.e isotropic covariates) Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: In our paper we provide two recommendations for selecting $\alpha_*$ 1. Compute the error on a validation split from the original data for various values of $\alpha$, and optimize among those. (This can be improved using cross validation) 2. Use the scaling law expression, with free parameter fitted to empirical error at $\alpha=0$ and $\alpha=1$. The first technique can be proved to be consistent under standard assumptions. The second one is computationally faster. Of course, we can think of intermediate strategies as well. We will emphasize these recommendations. Indeed, the result for high-dimensional ridge regression uses Gordon inequality. This approach can be generalized to cover cases with non-identity covariance at the price of becoming technically heavier. (See, e.g. Celentano, M., Montanari, A. and Wei, Y., 2023. The lasso with general gaussian designs with applications to hypothesis testing. The Annals of Statistics, 51(5), pp.2194-2220.) The connection to model collapse is indeed worth pursuing. One interesting remark is that there exist weighting schemes that will prevent model collapse. --- Rebuttal Comment 1.1: Title: Discussion Comment: My concerns have mostly been addressed. Thanks. I'll keep my current score.
null
null
null
null
null
null
TFG: Unified Training-Free Guidance for Diffusion Models
Accept (spotlight)
Summary: The paper introduces Training-Free Guidance (TFG), a novel framework designed to enhance the generation of samples with desired properties using diffusion models, without necessitating additional model training. TFG aims to resolve the shortcomings of existing training-free methods by offering a unified algorithmic framework that simplifies the comparison and application of such methods across a wide range of tasks. By theoretically and empirically analyzing a hyper-parameter design space within this framework, the authors develop an effective strategy for hyper-parameter selection applicable to various tasks. Their comprehensive benchmarks across multiple tasks and targets demonstrate TFG's superior performance, achieving an average improvement of 7.4% over existing methods. Strengths: 1. Given the burgeoning interest in the area of training-free guidance, the establishment of a unified benchmark as presented in this paper is a commendable contribution that holds the potential to significantly advance research in this field. The authors' efforts in this direction are highly appreciated and underscore the importance of standardized benchmarks for facilitating future developments. 2. The experiments conducted in this study are extensive, reflecting a high degree of diligence and thoroughness. Such a comprehensive experimental approach is commendable and warrants recognition. Consequently, I believe this aspect of the paper merits a positive evaluation for its contribution to validating the proposed framework and its applicability across a diverse range of tasks. Weaknesses: There are several areas where improvements could significantly enhance its contributions. Addressing these points satisfactorily would make a strong case for elevating the paper's status to "Accept" or "Strong Accept". 1. While the paper provides valuable insights into a specific line of training-free guidance, it is important to acknowledge the existence of a broader spectrum of works outside this category. This observation suggests that the title of the paper may slightly overreach, potentially implying a more comprehensive coverage than is actually presented. (Question 1) 2. The paper commendably supports the motivations behind conditional diffusion in Appendix A, offering a solid foundation for its relevance. However, the rationale for focusing specifically on training-free approaches appears less extensively articulated. (Question 2) 3. The authors claimed that they have theoretically grounded their unified framework. However, upon review, the theoretical underpinnings presented seem to require further elaboration to fully substantiate this claim. (Question 3, 4, 5) 4. The authors claimed that "the studies of training-free methods become the study within the hyperparameter space of our framework". Nonetheless, it appears that the search space defined within this framework might not fully encompass the range of hyper-parameters considered in existing literature. This limitation could potentially restrict the framework's applicability or comparative analysis capabilities. (Question 6, 7) Technical Quality: 3 Clarity: 4 Questions for Authors: 1. The field of training-free guidance encompasses a wide array of studies beyond those specifically addressed in this paper (e.g. [1]-[3]). Could you elaborate on how these works relate to the scope and contributions of your paper? 2. Your paper operates under the assumption that the forward model is known, leveraging training-free guidance for solving inverse problems. Given that knowing the forward model allows for the generation of a large number of samples to train a conditional diffusion model at a relatively low computational cost (e.g., 10 A100 GPU hours) and training-based approaches are much better than training-free ones, could you discuss the significant motivations of adopting a training-free approach in this context? 3. The theoretical foundation of your paper seems to rest significantly on Lemma 4.1, which revisits the variance of MMSE estimator of the signal corrupted by Gaussian noise (e.g., (2.8) in [4]). This formula is widely adopted in diffusion papers (e.g., [5]). Given its established nature and previous applications, could you elaborate on how this lemma specifically contributes to the novel aspects of your framework? 4. Concerning Lemma 4.1, it appears there is no assurance that the generated image accurately follows the conditional distribution, nor is there a guarantee that the loss decreases at each iteration as suggested by equation (7). Could you provide further clarification or additional theoretical support to address these concerns? 5. Several techniques introduced in the paper lack direct theoretical underpinning: - The concept of "time-travel" being an Ornstein-Uhlenbeck process is intriguing but lacks a detailed derivation. Could you expand on how the theory of the Ornstein-Uhlenbeck process quantifies the benefits of time-travel in your framework? - The selection of hyperparameters seems not to be grounded in theory. Could you discuss the rationale behind these choices and any potential theoretical support? 6. The paper posits that the study of training-free methods can be encapsulated within the hyperparameter space of your framework. However, specific hyperparameter settings critical for the performance in existing works, such as the step sizes used in Face generation (at.sqrt()) and Style transfer ((correction * correction).mean().sqrt().item() * unconditional_guidance_scale / (norm_grad * norm_grad).mean().sqrt().item() * 0.2) in FreeDoM, are not explicitly covered. Could you address the omission of these settings and their impact on the comprehensiveness of your study? 7. The hyperparameter search settings for the baselines in your comparative analysis are not disclosed, raising questions about the fairness and validity of the comparisons. Could you provide more details on these settings to ensure a transparent and equitable comparison? [1] Feng, Weixi, et al. "Training-free structured diffusion guidance for compositional text-to-image synthesis." ICLR 2023. [2] Chen, Minghao, et al. "Training-free layout control with cross-attention guidance." WACV 2024. [3] Mo, Sicheng, et al. "Freecontrol: Training-free spatial control of any text-to-image diffusion model with any condition." CVPR 2024. [4] Efron, Bradley. "Tweedie’s formula and selection bias." JASA 2011. [5] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representation." ICLR 2024. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful and constructive review, and we are so honored that you believe our work is novel, effective, comprehensive, and offers commendable contributions. We are delighted to address your concerns and questions below. 1. The existence of a broader spectrum of works and the overreach of the title. > The field of training-free guidance encompasses …a wide array of studies beyond those specifically addressed in this paper (e.g. [1]-[3]). Could you elaborate on how these works relate to the scope and contributions of your paper? > Thanks for this valuable suggestion. The three works you mentioned are training-free methods specifically for text-to-image tasks (compositional generation [1], layout control [2], spatial control [3]). [1] and [2] exploited the cross-attention mechanism in Stable Diffusion UNet to incorporate guidance targets (noun phrases [1], layout condition [2]), while [3] leverages the diffusion features of the guidance image (with DDIM inversion) to guide the diffusion process. Unlike these works, which design task-specific guiding strategies, our paper focuses on general training-free guidance methods that can be applied to any diffusion model for any objective function. Basically, our work doesn’t focus on the design of the objective function for each task but on the universal methodology of guidance. That said, the improvement in architecture, diffusion models, and guidance targets is “orthogonal” to our work. We would like to discuss these works in our revised manuscript to clarify our scope and contributions. 2. The rationale for focusing specifically on TFG. > Your paper operates under the assumption that the forward model is known, leveraging training-free guidance for solving inverse problems. Given that knowing the forward model allows for the generation of a large number of samples to train a conditional diffusion model at a relatively low computational cost (e.g., 10 A100 GPU hours) and training-based approaches are much better than training-free ones, could you discuss the significant motivations of adopting a training-free approach in this context? > We sincerely thank the question that allows us to discuss the motivation of the paper further. In lots of cases when the conditions are complex, e.g. requires an image of a special dog species in certain scenarios or a molecule that has certain polarizability and synthesizability, they are extremely rare (with probability less than, e.g., $10^{-6}$) across the unconditional distribution. As such, “sampling a large number of targets for training a time-dependent classifier” itself is already impossible. However, training-free methods can effectively increase the sample rate from $10^{-6}$ to $10^{-2}$, making it possible to leverage training-based methods. In fact, we believe that one motivation of TFG is exactly to bridge towards the scenarios you describe. We will explicitly add this discussion in the paper. 3. Further elaboration of the theoretical part of the work. > The theoretical foundation of your paper seems to rest significantly on Lemma 4.1, which revisits the variance of MMSE estimator of the signal corrupted by Gaussian noise (e.g., (2.8) in [4]). This formula is widely adopted in diffusion papers (e.g., [5]). Given its established nature and previous applications, could you elaborate on how this lemma specifically contributes to the novel aspects of your framework? > We want to highlight that lemma 4.1 is simply used to illustrate the effect of “variance guidance” and is not used to prove our major theorem 3.2. Despite its wide use in previous works, here we use the lemma to point out that $\Delta _t$ in Line 7 is to control the second-order information, pointing to the difference between $\Delta_t$ and $\Delta_0$ from a theoretical perspective. > Concerning Lemma 4.1, it appears there is no assurance that the generated image accurately follows the conditional distribution, nor is there a guarantee that the loss decreases at each iteration as suggested by equation (7). Could you provide further clarification or additional theoretical support to address these concerns? > We agree that a theoretical guarantee on loss is important, but to the best of our knowledge, none of the existing training-free methods can prove that the generated sample follows the correct distribution as desired. This is not an issue of Lemma 4.1, but we believe that one crucial future direction is to provide a global-level guarantee for training-free guidance. The intrinsic difficulty here is to analyze the difference between a training-based classifier $f(x,t)$ and standard classifier $f(x)$, which are complex to be quantitatively captured. > The concept of "time-travel" being an Ornstein-Uhlenbeck process is intriguing but lacks a detailed derivation. Could you expand on how the theory of the Ornstein-Uhlenbeck process quantifies the benefits of time-travel in your framework? > We are delighted to further explain this. For each recurrent step, we compute $x_{t-1} = u(x_t)$ where $u$ corresponds to Line 6-9 and then add a Gaussian noise back in Line 10 to obtain updated $x_t$. Together, the process becomes $$ d x_t = \sqrt{\alpha_t }u(x_t) - x_t + \sqrt{1 - \alpha_t} \epsilon, $$ which is a typical OU process. This implies that when $N_{recur}$ goes to infinity, $x_t$ will converge to a certain distribution that is hard to compute analytically. --- Rebuttal 2: Title: Additional responses Comment: > The selection of hyperparameters seems not to be grounded in theory. Could you discuss the rationale behind these choices and any potential theoretical support? > Our beam-search hyperparameter selection strategy depends on an intuition (assumption) that when fixing all other parameters, the loss function of the remaining parameter is a decrease-then-increase function, i.e., as the parameter increases from 0 to infinity, the loss first goes down (or never goes down) and then increases. This is observed and summarized from experiments, and under such an assumption, it can be proved that the (near-)optimal hyperparameter can be found with sufficient search step and small enough stepping size (multiplying factor). We would add this to our paper as suggested. 4. The coverage of the proposed space in existing literature. > The paper posits that the study of training-free methods can be encapsulated within the hyperparameter space of your framework. However, specific hyperparameter settings critical for the performance in existing works, such as the step sizes used in Face generation (at.sqrt()) and Style transfer ((correction * correction).mean().sqrt().item() * unconditional_guidance_scale / (norm_grad * norm_grad).mean().sqrt().item() * 0.2) in FreeDoM, are not explicitly covered. Could you address the omission of these settings and their impact on the comprehensiveness of your study? > We are delighted to explain this further. Theoretically speaking, TFG aims to provide a general framework that unifies different sub-hyperparameter spaces, and the practical implementation tricks are omitted (as this trick only appears in FreeDoM code, not its original paper). That said, our framework can naturally represent this nuanced setting. For example, if we want the gradient to be $\frac{\nabla g}{\| \nabla g\|}$ for a given target function $g$, we could simply set the $f$ in our algorithm as $\exp\{\int \frac{\mathrm dg}{\| \nabla g\|}\}$, in which case the gradient of $f$ becomes exactly what we want. In practice, however, we observe that with the aid of hyperparameter searching, the output quality of our FreeDoM algorithm is comparable to the original implementation. Because of this, and to be consistent with the paper, we simply focus on the setting without normalization. Overall, we believe that such normalization should be similar to normalizing $\mathbf \rho$. > The hyperparameter search settings for the baselines in your comparative analysis are not disclosed, raising questions about the fairness and validity of the comparisons. Could you provide more details on these settings to ensure a transparent and equitable comparison? > We apologize for not providing a more comprehensive discussion beyond what was presented in Appendix E. We'd like to offer additional details here. For all methods discussed in our paper, we employed an identical beam search strategy with three beam search trials and a maximum of 7 steps. For each search, we start with initial parameters (identical across all methods) and double parameters to see if performance improved. Then, we keep the top 3 performances in the beam list after each step until the list becomes unchanged or the maximum step is reached. This beam search program is method-agnostic and compatible with all methods, ensuring fair and objective comparisons. To promote transparency and facilitate future research, we will open-source our code, all beam search runs, the best parameters for each algorithm, and their corresponding performance results. Again, we sincerely appreciate your helpful review. Please let us know whether there is any additional concern that we could address to improve your evaluation of our work. --- Rebuttal 3: Comment: I appreciate the authors' effort in addressing my concerns, which has led me to moderately increase the evaluation score. Regarding Question 2, I have experimented with several methods to fine-tune a network trained on clean images for use with noisy images [A, B]. My findings indicate that the training can be completed in one A100 hour and the performance is quite satisfactory. Could you please provide further clarification on this point? For Question 3, you assert that Theorem 3.2 represents the main theoretical contribution. However, it appears to be straightforward as the proposed method is a combination of these methods. Could you explain why the proof is non-trivial? In relation to Question 5, I would be interested in a more detailed discussion on the practical implications of the connections between the Ornstein-Uhlenbeck process and time travel. Specifically, how do these connections yield new guarantees or insights that could potentially enhance the methodology of time travel? [A] More Control for Free! Image Synthesis with Semantic Diffusion Guidance [B] Towards practical plug-and-play diffusion models --- Rebuttal Comment 3.1: Title: Response to the followup questions Comment: We thank the reviewer for the moderately improved evaluation and we are delighted to address the follow-up concerns. > Regarding Question 2, I have experimented with several methods to fine-tune a network trained on clean images for use with noisy images [A, B]. My findings indicate that the training can be completed in one A100 hour and the performance is quite satisfactory. Could you please provide further clarification on this point? > We have gone through both papers you provided, and we believe that there are major difference between our methods and theirs. Overall, the training efficiency of their method (even if we don’t think about the advantage of “training-free” at all) is highly determined by the structure of the guidance function, and the situation of using CLIP for guidance might be completely different from that of using a general classifier. For example, notice that in [A] the guidance function $F(x_t, t,l)$ is expected to have a special structure, i.e., $F(x_t, t, l) = E(x_t,t) \cdot E(l)$, where $l$ is the language embeddings and $x_t$ is the noisy images. In such case, using undesired images generated from unconditional diffusion models to help learn $E(x_t, t)$ from CLIP model $E(x_0)$ is likely **generalizable** to desired images since the relationship between images and the requirements specified by $l$ is simply multiplicative. However, consider a more general case where the given classifier $f(x_0)$ corresponds to a very special property and only a tiny proportion of images gives $f(x_0) \approx 1$ (e.g., whether the generated image is Larosterna inca, a bird species native to the coastal regions of western South America). Most unconditional images will not even have activated embeddings of the last layer of $f$ since they have a close-to-zero classifier output. In such case, the property of being able to match the embeddings of $f(x_t,t)$ and $f(x_0)$ is unlikely generalizable to desired images with $f\approx 1$, since both noisy and clean undesired images could have close-to-zero embeddings but both noisy and clean desired images do not. Consequently, an extremely large amount of unconditional generation is required to help learn a time-dependent classifier of one particular property. We are willing to see if future works can give a more quantitative result about the training efficiency for general classifiers. > For Question 3, you assert that Theorem 3.2 represents the main theoretical contribution. However, it appears to be straightforward as the proposed method is a combination of these methods. Could you explain why the proof is non-trivial? > We would like to clarify that the major contribution of the unification algorithm does not lie in the novelty or difficulty of the proof of theorem 3.2, but rather in recognizing and quantifying the importance of unifying different methods in the same hyper-parameter space for training-free guidance. For instance, while different existing methods overlap in certain techniques, their intuition and explanation of each of the technique is different, some of which are even incorrect (for example, [C] unconsciously falls into a “fake” training-free guidance setting, as we have discussed in Appendix A.2). By figuring out a way to construct an algorithm with reasonable hyper-parameter space that can encompass existing methods, we unify different techniques and offer a clean way to study the training-free guidance problem. Theorem 3.2 mainly aims to justify the unification theoretically, and the proof is not technically special. --- Reply to Comment 3.1.1: Title: Response to the followup questions (cont') Comment: > In relation to Question 5, I would be interested in a more detailed discussion on the practical implications of the connections between the Ornstein-Uhlenbeck process and time travel. Specifically, how do these connections yield new guarantees or insights that could potentially enhance the methodology of time travel? > We are willing to give a fine-grained theoretical discussion about the insights behind recurrence. Specifically, let’s define a parameterized step $u_t (x)$ that tries to simulate the transformation from distribution $p_{t+1}$ to $p_t$. Then, assume that up to time step $t+1$, the error of distribution estimation is $err_{t+1}$, e.g., if we use the total variation between the resulting distribution $p^\theta_{t+1}$ and $p_{t+1}$, then $err_{t+1} = TV(p_{t+1}^\theta, p_{t+1})$. In such case, consider the following OU process: $$ x_t = u_t(x_{t+1}), $$ $$ x_{t+1} = x_t + \epsilon, $$ where the recurrent step is $K$, then informally, the Wasserstein-1 distance between resulting distribution $p^\theta_t$ and the ground truth distribution $p_t$ can be controlled by the following terms: $$ c_0 (1-\lambda)^K \text{err}_{t+1} + (K+1)c_1, $$ where $c_0, c_1$, and $\lambda \in (0,1)$ are some constants that depend on many variables, including score estimation errors, time step $t$, configurations of Gaussian noise $\epsilon$, and more. Intuitively, the first term implies that the contracted error will shrink as we traverse, thanks to the convergent property of the OU process; the second term is an accumulated error due to inaccurate estimation of the transformation from $t+1$ to $t$. The gradient of the upper bound is $$ c_0 \text{err}_{t+1} (1-\lambda)^K \ln (1-\lambda) + c_1, $$ which, under some assumptions, is negative when $K$ is small and possible after $K$ increases over some certain point. This could intuitively explain why the recurrence is helpful when we increase $K$ from 1 to some certain values, but will lead to under-qualified samples when $K$ becomes overly large: it’s because that recurrence helps find a balance between the contracted error inherited from previous steps and the accumulated error in this step. We are happy to add more discussion to the paper, but we want to emphasize a few issues why we cannot make it a formal theorem. First, notice that we can only control the Wasserstein-1 distance at step $t$ using the total variation distance at step $t+1$, which is not transmissible since W1 distance cannot bound TV distance. Second, even if we demonstrate that the upper bound has a decrease-then-increase pattern, it’s not guaranteed that the actual error follows a similar pattern in theory (although it is in practice). The concrete techniques and discussions about the limitations can be found in [D]. We hope that this can help provide insights into the issue you are concerned about. We thank you for your time reviewing our paper and for providing supportive comments and constructive feedback. Please let us know if any further clarifications are needed. [C] Song, Jiaming, et al. “Loss-Guided Diffusion Models for Plug-and-Play Controllable Generation”. *Proceedings of the 40th International Conference on Machine Learning*, PMLR 202:32483-32498, 2023. [D] Xu, Yilun, et al. "Restart sampling for improving generative processes." Advances in Neural Information Processing Systems 36 (2023): 76806-76838. --- Rebuttal 4: Comment: Thank you for your detailed response. I have adjusted my score to an 8. In particular, **I believe the reproducible benchmark used in this paper will influence the trajectory of future research in training-free guidance**. To ensure a comprehensive comparison, I strongly recommend that you include motion diffusion guidance from the LGD paper (a new application related to diffusion planning) and phase retrieval from the DPS paper (a nonlinear non-NN case) as additional benchmarks in the final manuscript. I have not listed this as a weakness, as I understand that these experiments may not be feasible within the rebuttal period. --- Rebuttal Comment 4.1: Title: Thanks for the review Comment: We sincerely appreciate your timeply reply and considerate response about the short experiments window. We promise to add them in the revised paper. Thanks!
Summary: The authors propose a framework (TFG) for training-free guidance of unconditional diffusion models, enabling their application to conditional generation tasks such as super-resolution, deblurring, etc. via the use of a predictor that evaluates the quality of a clean sample. TFG, as in related past methods, aim to sample from the conditional distribution of samples defined in Equation 4 without either training a conditional diffusion model or a predictor evaluating the quality of noisy samples as in classifier guidance. The gradient of the predictor on noisy samples combined with the unconditional score produce the correct conditional score function in Equation 5. The challenge of training-free methods is to approximate this gradient somehow and a variety of approaches have been previously proposed. TFG defined in Algorithm 1 is shown to include five such previous training-free methods as special cases for particular hyperparameters, demonstrating that these methods can be viewed and understood in a unified framework. The authors then propose a procedure to optimize TFG's hyperparameters jointly. By leveraging the larger design space with optimized hyperparameters, performance gains are observed across 14 tasks, 6 diffusion models, and 38 target predictors compared to the past methods subsumed by TFG. Strengths: - The TFG framework appears to be a non-trivial, novel incorporation of recent training-free methods and helps contextualize the relationships of these methods to one another. Training-free guidance of diffusion models is an unsolved problem with many recent papers and placing a portion of this literature into a unified framework is a valuable contribution. - The framework also led to substantive performance gains on generation validity in the comprehensive benchmarks. The benchmark evaluations included a good variety of tasks, models, and predictors, including particularly out-of-distribution fine-grained label guidance and molecule property guidance. - The paper is generally well-written and organized. Weaknesses: - Some statements and claims are overly broad. The authors claim all existing training-free approaches fit into their framework which is unlikely. Many training-free methods have been proposed and the authors could better place their framework in the context of recent literature. Section 3 Figure 1's demonstration that training-based conditional generation outperforms training-free is unsurprising, given that training-free methods are approximations. - While hyperparameter search was helpful, the unified design space discussion in Section 4.1 led to limited theoretical insight. The authors could consider expanding this section to improve their methodological contributions. - The hyperparameter setting strategy described in Section 4.2, key to getting gains versus past methods, could use more detailed explanation (see questions 2-4 below). - The benchmark Section 5 decides to not compare generation fidelity in favor of comparing the best algorithm in terms of generation validity. However, throughout the rest of the paper (e.g. figure 1, 2, and 3) tradeoffs between fidelity and validity are emphasized. Looking at Table 2, TFG is sometimes but not always the best algorithm in terms of both fidelity and validity simultaneously. The focus on only validity seems unjustified here. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why does increasing $N_{recur}$ and $N_{tier}$ eventually hurt performance? 2. In section 4.2, does the "structure analysis" that "increase" is best hold more generally on other tasks than label guidance? 3. The number of Monte Carlo samples in the Implicit Dynamic is set to 1 based on Table 1, but that choice needs more justification. Does more samples than 4 help? Why is implicit dynamic helpful with few samples? This seems contrary to the original motivation of estimating an average. 4. In section 4.2, the beam search in "Searching strategy" could use more detail. What sample sizes are used and how did that enable quick search? Also, what metric is used when deciding the top K candidates in this search? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are briefly discussed in Section 6, focusing on why training-free guidance remains relevant given language-based image generators. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful and constructive review, and we are honored that you think our work is non-trivial, novel, organized, comprehensive, substantive in experiments, and presents a valuable contribution. We are happy to address your concerns: > The authors claim all existing training-free approaches fit into their framework which is unlikely. Many training-free methods have been proposed and the authors could better place their framework in the context of recent literature. Section 3 Figure 1's demonstration that training-based conditional generation outperforms training-free is unsurprising, given that training-free methods are approximations. > Our algorithm is based on a comprehensive contemporary literature review of more than ten papers (see [1]-[10] below) and on a collection of all of their algorithms (that directly or indirectly use the algorithms we unify), trying to provide an elegant and unified way to study the problem. That said, we completely agree with your suggestion that we never fit all methods, and we are more than happy to explicitly clarify this and discuss our framework in the context of TFG literature. The purpose of Figure 1 is to emphasize our observation that, unlike what previous works have claimed, training-free guidance is far from being addressed, and all existing methods could even fail on a very easy task. In other words, it helps demonstrate our motivation, rather than to help compare training-free methods with training-based methods. We will clarify this. > While hyperparameter search was helpful, the unified design space discussion in Section 4.1 led to limited theoretical insight. > Thanks for the suggestion! Currently, we organize Section 4.1 in a way that is “less theoretical” and “more intuitive” to help readers better understand each part of the parameter space without being overwhelmed by theory concepts. That said, the study of space involves Ornstein–Uhlenbeck process, Tweedie’s formula (second order), and probability convergence. We will try to extend our theoretical insights and put more fine-grained discussions in the appendix to ensure both intuition and theory are clearly conveyed to readers. > The hyperparameter setting strategy… could use more detailed explanation (see questions 2-4 below). > We answer each question separately below. > The benchmark Section 5 decides to not compare generation fidelity in favor of comparing the best algorithm in terms of generation validity. However, throughout the rest of the paper (e.g. figure 1, 2, and 3) tradeoffs between fidelity and validity are emphasized. Looking at Table 2, TFG is sometimes but not always the best algorithm in terms of both fidelity and validity simultaneously. The focus on only validity seems unjustified here. > We thank you for allowing us to further explain this. We completely agree that there is a trade-off between fidelity and validity, and depending on the user’s requirements, different metrics should be emphasized. The reason why we majorly compare validity is that for each algorithm during beam search, the metric we use to select the best run is exactly the validity (in a held-out set, of course). It’s important that the selection metric and the evaluation metric are the same to avoid unfair and tricky comparisons. On the other hand, users could also arbitrarily combine fidelity and validity into a new “metric” and conduct beam searches via this metric if they prefer. Our framework does not have any restriction on this, and, unsurprisingly, TFG will still outperform existing methods. We thank you for your question, and we would like to clarify this in the paper. Additionally, regarding to the questions, > Why does increasing $N_{recur}$ and $N_{iter}$ eventually hurt performance? > In fact, this phenomenon has been pointed out in previous works (e.g., MPGD) as well, and its underlying theory remains unclear. One observation is that if $N$ is too large, the generated images tend to be “valid” but highly unrealistic, possibly due to the amount of injected noise being too large. > In section 4.2, does the "structure analysis" that "increase" is best hold more generally on other tasks than label guidance? > Yes. Generally speaking, we find that “increase” works best (or with a negligible gap) among all of the tasks we consider in the paper. > The number of Monte Carlo samples in the Implicit Dynamic is set to 1 based on Table 1, but that choice needs more justification. Does more samples than 4 help? Why is implicit dynamic helpful with few samples? This seems contrary to the original motivation of estimating an average. > We find that more than $4$ of sample size does not help with sample quality because it reduces the stochasticity of the dynamics (think about changing the Gaussian noise in an SDE to, e.g., its expectation). We are delighted to provide more justification by adding a mathematical explanation of the role that the Gaussian noise is playing in the appendix and explaining why more samples are not beneficial. > In section 4.2, the beam search in "Searching strategy" could use more detail. What sample sizes are used and how did that enable quick search? Also, what metric is used when deciding the top K candidates in this search? > As mentioned in line 286-287, all searches are run with 1/8 of the test sample size and a maximum search step of 6. The test sample size for each task can be found in Appendix D. To conduct exhaustive grid search, we need to run more than $125$ experiments, which are much slower than our beam search strategy. The metric used when deciding the top K candidates is validity. Again, we sincerely appreciate your helpful review. Please let us know whether there is any additional concern that we could address to improve your evaluation of our work. --- Rebuttal 2: Title: Additional references Comment: [1] Controllable Music Production with Diffusion Models and Guidance Gradients. [2] A Framework for Conditional Diffusion Modelling with Applications in Protein Design and Inverse Problems. [3] Solving Audio Inverse Problems with a Diffusion Model. [4] Diffusion Models for Audio Restoration. [5] Vrdmg: Vocal Restoration via Diffusion Posterior Sampling with Multiple Guidance. [6] Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators. [7] Training-free Multi-objective Diffusion Model for 3D Molecule Generation. [8] Control3diff: Learning Controllable 3D Diffusion Models from Single-view Images. [9] Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis. [10] Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning. --- Rebuttal Comment 2.1: Comment: I thank the authors for their rebuttal and believe the discussed changes may improve the paper upon revision. --- Reply to Comment 2.1.1: Title: Thanks for your feedback! Comment: We thank the reviewer for the timely response. We highly appreciate it the reviewer could increase the evaluation score accordingly if you believe that the revised paper is improved.
Summary: This paper scrutinizes existing works on training-free guidance in diffusion models and proposes a unified framework that includes all existing methods as special cases. With this unified framework, this work presents a detailed and informed investigation of the design choices and hyperparameters within this framework. Additionally, it proposes a comprehensive benchmark involving 14 task types and 38 targets to evaluate the performance of this unified framework by optimizing the hyperparameters in the design choices. Strengths: 1. The proposed unified framework is a very interesting and novel summarization of the methods in existing works. Existing training-free guidance often uses different notations and different ways to formulate the problem. It is very helpful that this framework elucidates the design choices in training-free guidance. 2. The proposed benchmark is very comprehensive, and I believe it will certainly be helpful for the community to continue conducting more research on this topic. Weaknesses: 1. While I appreciate the comprehensiveness of the experiments on the proposed benchmark, I think the author fails to provide an informative analysis of the results. Currently, the results simply show that by optimizing the hyperparameters in the unified framework, we obtain better performance, which is a natural outcome since the existing methods are included in the framework. From a research perspective, I can think of several questions worth investigating, such as whether the optimal hyperparameters vary between tasks and, when using fixed models, how the optimal hyperparameters are affected by the target objective function. With this investigation, it would be best to reach some general conclusions to guide users in tuning the hyperparameters in practice, rather than just relying on grid search. 2. It seems that for around half of the tasks, the improvement is only marginal, while for a few other settings, the improvement is significant. I think this phenomenon is worth further investigation to understand in what scenarios TFG will provide improvements. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why does the highest MO energy in Table 3 show a negative improvement? MPGD should be a special case of TFG. Is this because the grid-searched hyperparameters for TFG are not optimal? 2. Can the author comment on other lines of work [1,2,3,4], specifically direct optimization approaches, on training-free optimization of diffusion models with a target objective? The task setting is the same as training-free guidance. Although it is evident that the direct optimization approaches are slower than training-free guidance, it is not clear how their performance compares. [1] Bram Wallace, Akash Gokul, Stefano Ermon, and Nikhil Naik. End-to-end diffusion latent optimization improves classifier guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7280–7290, 2023b. [2] Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman. D-flow: Differentiating through flows for controlled generation. arXiv preprint arXiv:2402.14017, 2024. [3] Korrawe Karunratanakul, Konpat Preechakul, Emre Aksan, Thabo Beeler, Supasorn Suwajanakorn, and Siyu Tang. Optimizing diffusion noise can serve as universal motion priors. arXiv preprint arXiv:2312.11994, 2023. [4] Tang, Zhiwei, et al. "Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization." arXiv preprint arXiv:2405.18881 (2024). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See my comments on the weaknesses above. I think there are a few unclear points worth discussing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thanks for your insightful and constructive review, and we are more than honored that you think our work is interesting, novel, comprehensive, and will be helpful for the community. Regarding your concerns, > While I appreciate the comprehensiveness of the experiments on the proposed benchmark, I think the author fails to provide an informative analysis of the results. Currently, the results simply show that by optimizing the hyperparameters in the unified framework, we obtain better performance, which is a natural outcome since the existing methods are included in the framework. From a research perspective, I can think of several questions worth investigating, such as whether the optimal hyperparameters vary between tasks and, when using fixed models, how the optimal hyperparameters are affected by the target objective function. With this investigation, it would be best to reach some general conclusions to guide users in tuning the hyperparameters in practice, rather than just relying on grid search. > We agree the importance of more fine-grained analysis, and we are delighted to investigate more on our experimental results as you suggested. Specifically, we first explicitly present the optimal hyper-parameters of TFG and other existing methods for all tasks in the uploaded PDF (Table 2). We list several observations below. 1. Overall, optimal parameters vary widely between problems and datasets. For example, even with the same model and objective (e.g., label classifier on ImageNet or CIFAR10), the best hyperparameters vary widely from target to target. This highlights the importance of hyperparameter search. 2. The improvement of TFG over existing methods depends heavily on the difference between the optimal parameters and the subspaces of existing methods. See the next questions for a more detailed analysis. 3. Also note that in our paper, we didn’t do an exhaustive grid search, but instead perform an efficient beam search (line 254-264) to find the optimal hyper-parameters. All searches are run with 1/8 of the test sample size and a maximum search step of 6 (line 287). Our experimental results demonstrate that this is sufficient to find near optimal parameters (check out Table 1 in our uploaded PDF). > It seems that for around half of the tasks, the improvement is only marginal, while for a few other settings, the improvement is significant. I think this phenomenon is worth further investigation to understand in what scenarios TFG will provide improvements. > Thanks for the advice! We conducted a detailed investigation into the phenomenon and found that the improvements are highly related to the differences in the optimal hyperparameters between TFG and existing methods. For example, the $\bar\rho$ for UGD is the same as TFG for gender-age guidance task, where TFG only has 0.133% validity improvement over UGD. On the contrary, their values differ on the fine-grained classification task, and TFG has an 18.7% validity improvement over UGD. Overall, this depends on whether the optimal parameter lies in the subspace that existing methods can find. We will add this analysis to our revised manuscript. In addition, regarding to the questions, > Why does the highest MO energy in Table 3 show a negative improvement? MPGD should be a special case of TFG. Is this because the grid-searched hyperparameters for TFG are not optimal? > We appreciate the reviewer for carefully examining our paper, and we want to point out that the reason TFG could occasionally have slightly worse performance in practice is due to the beam search computation limit we currently pose. More specifically, we allow TFG to search at most six steps (for all hyper-parameters) and all other methods for seven steps (in their subspaces). For the MO energy task, the searched parameter for MPGD is that $\bar \mu = 0.016$ (this is the only parameter that we need to search for MPGD), where the best (and last step) of TFG is that $\bar \mu = 0.004$ (because it uses one step to double another parameter). If we allocate more computational budget for the beam search steps, TFG will outperform MPGD on this target (in fact, eight steps suffice). > Can the author comment on other lines of work [1,2,3,4], specifically direct optimization approaches, on training-free optimization of diffusion models with a target objective? The task setting is the same as training-free guidance. Although it is evident that the direct optimization approaches are slower than training-free guidance, it is not clear how their performance compares. > Thanks for the valuable suggestion. We review all papers to understand the direct noise optimization approach (DNO), and we believe that this DNO has a different motivation than training-free guidance. Specifically, DNO is not only slow but also GPU-memory intensive, as gradients have to be propagated through the entire ODE process for multiple times until convergence. This makes it hard to implement and study within a short amount of time. We would like to leave the comparison to future studies, and we sincerely hope you can understand the difficulty. Again, we sincerely thanks for the helpful review you provide. Please let us know whether there is any additional concern that we could address to help improve your evaluation to our work. --- Rebuttal 2: Title: A gentle reminder Comment: Dear Reviewer J9FB, The deadline of the discussion period is soon approaching. We wonder whether our answers to your questions have addressed your concerns. If there are any additional discussion points or questions, we are happy to discuss. We look forward to your comments. Thank you again for your time. Best, Authors of Paper8134
Summary: This paper focuses on the unification of training-free guidance methods for diffusion models. It defines each method within a unified framework and finds that restricting the hyperparameter space is consistent with existing methods. This framework can be broadly categorized into mean guidance, variance guidance and implicit dynamics. The paper demonstrates performance improvements in several experimental scenarios. Strengths: * Experiments were conducted on different data sets and scenarios. * Training-free guidance, which had been developed in different ways, was unified into a single framework. * If the code is open-sourced, it will greatly benefit the diffusion community. Weaknesses: * Algorithm 1 seems to be one of the most important parts of this paper, but it lacks a detailed explanation. An explanation of Algorithm 1 along with the corresponding hyperparameters from Definition 3.1 in Section 3.1 would greatly aid understanding. * In Algorithm 1, the iteration part in line 8 could be explicitly described using a for loop in the algorithm, rather than as a comment. * I believe that Theorem 3.2 and its proof are not well formulated mathematically. If the authors want to formalize it as a theorem, they need to show mathematically that the same generated distribution can be produced. In my opinion, an explanation at an analytical level would be acceptable. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses part. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: They provided in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and constructive review, and we are delighted that you think our work will greatly benefit the diffusion community. Regarding to the concerning questions: > Algorithm 1 seems to be one of the most important parts of this paper, but it lacks a detailed explanation. An explanation of Algorithm 1 along with the corresponding hyperparameters from Definition 3.1 in Section 3.1 would greatly aid understanding. > We agree that more explanation w.r.t. Alg 1 and the hyperparameter space will help with the understanding. We will incorporate the explanation of the relation between Alg 1 and Def 3.1: generally speaking, Alg 1 contains three operations (Mean Guidance, Variance Guidance, and Implicit Dynamics), and each operation in the algorithm separately control part of the hyper-parameter in Def 3.1. We will explicitly point how this relations, and how each operation in the algorithm affects the parameter space, i.e., what part of the space is controlled. > In Algorithm 1, the iteration part in line 8 could be explicitly described using a for loop in the algorithm, rather than as a comment. > We agree that explicitly writing down the for loop will make it more clear. Thanks for the suggestion. > I believe that Theorem 3.2 and its proof are not well formulated mathematically. If the authors want to formalize it as a theorem, they need to show mathematically that the same generated distribution can be produced. In my opinion, an explanation at an analytical level would be acceptable. > We really appreciate the suggestion, and we believe that keeping it as a theorem would still be helpful. That said, we will make the definition and proof more mathematically formal. Specifically, we will explicitly define the parameter space of each of the existing algorithms, and pointing out that each instantiation in their distribution corresponds to which parameter value in our parameter space. Again, we sincerely thanks for the helpful review you provide. Please let us know whether there is any additional concern that we could address to help improve your evaluation to our work. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response. I expect my concerns are well reflected in the revised version, and I think the current score is appropriate, so I keep it. --- Rebuttal 2: Title: A gentle reminder Comment: Dear Reviewer DCpp, The deadline of the discussion period is soon approaching. We wonder whether our answers to your questions have addressed your concerns. If there are any additional discussion points or questions, we are happy to discuss. We look forward to your comments. Thank you again for your time. Best, Authors of Paper8134
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their insightful and constructive reviews, and we are honored that all reviewers believe that our paper is novel, beneficial, comprehensive, and well-written. Reviewer aTDj thinks the work is “a valuable contribution”, reviewer Xm6V thinks it “reflects a high degree of diligence and thoroughness”, and both reviewer DCpp and J9FB think it will “certainly be helpful for the community”. All reviewers provide fine-grained and important feedback on polishing the paper, and we are more than willing to accept. As NeurIPS policy does not allow us to upload an improved version, we will explicitly explain how we improve the paper in each of your questions, and we promise that all suggestions will be addressed in the final paper. As all reviewers have emphasized the importance of the codebase and benchmarks, we will open-source our code, benchmarks, configurations, and existing runs, as promised in the paper. Reviewer aTDj points out that the claim on unifying *all* works is overly broad, and indeed, reviewers J9FB and Xm6V list a few training-free guidance papers that are related but with different focus. We want to highlight that our work is based on a comprehensive contemporary literature review over more than 15 papers (see response to reviewer aTDj) and on a collection of all of their algorithms (that directly or indirectly use the algorithms we unify), trying to provide an elegant and unified way to study the problem. That said, we completely agree that we never fit all methods, and we are more than happy to explicitly clarify this and discuss our framework in the context of TFG literature. For the uploaded PDF, we present the actual parameter selected by our beam search strategy for each algorithm and task, and we additionally compare the performance of the beam search parameter with the full grid search one. Overall, these results demonstrate that the comparison between different methods is transparent and objective, and that our beam search strategy is effective and efficient compared with full grid search method. Below are our detailed responses. Please do not hesitate to let us know if there is any additional concern that we can help address to help with an objective evaluation of our work. Pdf: /pdf/c3f3c40a190f9ee8084665e58e919669ef03e5e8.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Natural Counterfactuals With Necessary Backtracking
Accept (poster)
Summary: The authors propose a novel approach for generating causally valid and *natural* counterfactuals. These counterfactuals are *natural* in that they are close to the observed data manifold. This is achieved by allowing a certain amount of backtracking, which involves tracing back upstream causal effects on variables. The authors essentially propose a more flexible version of Judea Pearl’s $do$-calculus, by replacing the explicitly non-backtracking $do$-operator with a $change$-operators that represents the notion of a *least-backtracking feasible intervention* (LBF). Intuitively, LBF interventions take into account the effects of causal ancestors instead of considering the intervention itself as entirely exogenous. In order to compute LBF interventions, the authors propose a *feasible intervention optimization* that targets minimal changes to the causal ancestors of some variable $A$ while satisfying certain naturalness constraints. These constraints ensure that the generated counterfactual remains anchored in the observed data manifold. Through experiments involving both synthetic and real-world datasets, the authors demonstrate that their proposed method yields more “natural” counterfactuals than the non-backtracking alternative approach. The proposed method has some clear links to the literature on counterfactual explanations, which are mentioned but not thoroughly explored here. This is an interesting avenue for future research and this work constitutes a solid first step in that direction. Overall, the paper seems to make a strong contribution to the field of Causal Inference, although I am not a subject expert. Strengths: - Interesting and natural extension of non-backtracking counterfactual generation. - Empirical results for both synthetic and real-world datasets show improved performance when LFB is used. Weaknesses: - No standard errors reported in Tables, so difficult to assess how substantial the performance differences are after accounting for noise. - Link to CE is mentioned, but the authors could have gone into more detail. In particular, you propose that the literature on CE could benefit from incorporating your definition of *naturalness* (which I don’t disagree with) but you fall short of comparing it to existing definitions in CE. See also my related questions. - The figures and annotations are too small (and in general, both figures and tables currently make the paper look a little crammed). Technical Quality: 3 Clarity: 3 Questions for Authors: ### Link to CE - How does this approach compare to existing approaches to causal algorithmic recourse (e.g. [MINT](https://arxiv.org/abs/2002.06278))? - How do the proposed *naturalness* constraints compare to *plausibility* (e.g. [Artelt and Hammer](https://arxiv.org/abs/2002.04862), [REVISE](https://arxiv.org/abs/1907.09615)) and *feasibility* (e.g. [FACE](https://arxiv.org/abs/1909.09369)) constraints in CE/algorithmic recourse? Equation 2 in your paper looks quite similar to Equation 5 in [Artelt and Hammer](https://arxiv.org/abs/2002.04862). ### Other questions - The sentence spanning from line 81 to 86 is very long and it’s easy to get lost in it. Consider splitting this into 2/3 sentences. - Figures and annotations on page 8 are very small (annotations are barely legible). - Could you highlight the single point from Figure 1 (a) also in panel (b)? To illustrate that in (b) you essentially repeat the experiment over all test samples. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors mention that the assumption of invertibility is a small limitation, but not theoretically problematic. The work may have other limitations that I have failed to recognized, as I’m not a subject expert. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Weaknesses*** **1. No standard errors reported in Tables, so difficult to assess how substantial the performance differences are after accounting for noise.** Thank you for asking this. We have included the standard errors of all experiments in Tables 9-12 in Section A.4 of the Appendix. As you can see, the standard deviation for our natural counterfactuals is generally lower. **2. The figures and annotations are too small (and in general, both figures and tables currently make the paper look a little crammed).** Thank you. We have made modifications according to your helpful suggestions. ***Questions Linked to CE*** **1. How does this approach compare to existing approaches to causal algorithmic recourse (e.g. MINT)?** Thank you for this great question. We would like to mention that the proposed work has different motivations and purposes. Our natural counterfactuals aim to provide a principled inference method to ensure that counterfactual scenarios remain realistic relative to the regular causal mechanisms, with the actual (conditional) data distribution (using a limited extent of backtracking) as a surrogate of the mechanisms. Given an expected change or intervention, we offer a framework for inferring counterfactual outcomes. In contrast, causal algorithmic recourse focuses on identifying interventions to achieve a pre-specified desirable counterfactual outcome. In summary, similar to Pearl's counterfactuals, natural counterfactuals aim to perform counterfactual inference to obtain counterfactual outcomes, while causal algorithmic recourse seeks interventions to achieve a given desirable outcome. However, if we assume the aim is to find intervention variables to achieve $A = a^*$ in our framework, the task becomes more similar to finding interventions that cause a given outcome, as in causal algorithmic recourse. Still, our framework does not operate in exactly this way. Instead, we identify a group of intervention variables $C$ that includes $A$, which does not imply that intervening on variables other than $C$ causes $A$. This means that we perform interventions on all variables in $C$, including $A$, simultaneously. In other words, our aim is to find, so to speak, companion interventions in order to make the result ``natural'' in our sense. **2. How do the proposed naturalness constraints compare to plausibility (e.g. Artelt and Hammer, REVISE) and feasibility (e.g. FACE) constraints in CE/algorithmic recourse? Equation 2 in your paper looks quite similar to Equation 5 in Artelt and Hammer.** Thank you for recommending these useful works. We have discussed them in our updated manuscript, including the paper mentioned in Question 1, summarized below. 1. Our proposed naturalness constraints share a similar spirit with the feasibility constraints in counterfactual explanation (CE)/algorithmic recourse, as existing methods may overlook real-world constraints. However, there are two key differences: - **Different Motivations:** We propose naturalness constraints for counterfactual inference, which generates counterfactual outcomes that are natural with respect to the actual world. In contrast, the feasibility constraints of CE/algorithmic recourse serve to identify possible interventions that can lead to a given desired outcome. - **Scope of Application:** While feasibility constraints may consider more detailed constraints for a specific application, we propose a general framework without considering the meanings of specific variables. Of course, when applying our framework in a specific situation, users can and should consider special constraints relevant to that situation. 2. The reason why Equation 2 in our paper looks similar to Equation 5 in [Artelt and Hammer](https://arxiv.org/pdf/2002.04862) is that both involve optimization with constraints related to the observed distribution. However, Artelt and Hammer focus on CE, and the two papers are fundamentally different: - **Different Purposes:** CE is similar to algorithmic recourse in that it seeks to find the input for a target output. Our ultimate aim is to obtain a feasible output given an input. - **Framework Differences:** Unlike algorithmic recourse, CE does not build on the SCM framework, making it difficult to explain the similarity between actual and counterfactual values in CE. Additionally, CE often studies anti-causal problems, such as classifying an image $x$ into a category $y$, where $y$ is the cause of $x$ instead of being the causal outcome. Although CE tries to minimize the difference between actual and counterfactual values, it is hard to claim that the distance is minimized from a causal perspective. ***Other questions*** **The sentence spanning from line 81 to 86 is very long and it’s easy to get lost in it. Consider splitting this into 2/3 sentences.** and **Figures and annotations on page 8 are very small (annotations are barely legible).** Thank you for the feedback. We have implemented your helpful suggestions to modify lines 81-86 and updated the figures and annotations on page 8. **Could you highlight the single point from Figure 1 (a) also in panel (b)? To illustrate that in (b) you essentially repeat the experiment over all test samples.** Thank you for your excellent suggestion. In addition to your suggestions, we have sampled two more data points. Consequently, we have displayed both the non-backtracking counterfactual values and our natural counterfactual values for three data points. Specifically, we randomly select three data points that require backtracking in our natural counterfactuals. Moreover, when a hard intervention is feasible, our natural counterfactuals and non-backtracking counterfactuals yield the same results, so we do not sample these cases. The updated PDF file with the related figures has been uploaded under the global response. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the detailed rebuttal, I appreciate the effort and have no further questions at this point
Summary: The paper presents a framework for generating "natural counterfactuals" that are more feasible within the support of the training data distribution. This approach includes controlled backtracking through an optimization method that uses a "naturalness" criterion as a constraint. Strengths: Original combination of existing ideas, e.g., Pearl's non-backtracking ( interventional ) counterfactuals, backtracking (observational) counterfactuals, normalizing flows to learn causal mechanisms Weaknesses: My first concern is with the term "natural counterfactual." Both Pearl's non-backtracking counterfactuals and backtracking counterfactuals use an algorithm that begins with the prior probabilities of the exogenous variables. In the abduction step, the counterfactual posterior is computed based on the observed facts in the non-backtracking case, or on both the observed facts and observed counterfacts in the backtracking case. However, this paper samples from the observational distribution of the training data. Theorem 1 builds on prior work $[17]$ where the posterior distribution has been discussed, mentioning that due to the monotonicity assumption of $f_i$ with respect to $U$, the posterior distribution is a point mass on $𝑈=u$. However, the experiments in this paper sample from the prior distributions of the structural causal model (SCM), which means it is not a true counterfactual distribution. This leads me to the purpose of the paper. It uses optimization to find a point within a high-density area of the data manifold generated by the observational distribution, given a request to change a value. Which use case would require this result? For example, Pearl’s non-backtracking counterfactuals address questions on actual cause, causal necessity, and personalized policy. If performing $do(A)$ is not feasible, it does not imply that a feasible data point within the training data distribution can answer these questions. Further, the concept of "naturalness" is not based on the immutability of the variable but on the probability density of the variable given its parents. For example, the variable "Age" might meet the criterion based on its probability density, but it remains immutable. References: [17] Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, and Bernhard Schölkopf. Sample-efficient reinforcement learning via counterfactual-based data augmentation. arXiv preprint arXiv:2012.09092, 2020. Technical Quality: 2 Clarity: 3 Questions for Authors: (1) Line 46: “When interventions lead to unrealistic scenarios relative to the training data, predicting counterfactual outcomes in such scenarios can be highly uncertain and inaccurate [12].” Reference [12] addresses selection bias in potential outcome counterfactuals, in particular, when the treatment group does not match the control group. It is not related to SCM counterfactuals. (2) Line 48 “This issue becomes particularly pronounced when non-parametric models are employed, as they often struggle to generalize to unseen, out-of-distribution data [27]. “ Reference [27] discusses the issue of i.i.d. in machine learning, particularly when the training data distribution differs from the interventional distribution, leading to out-of-distribution scenarios. How is the current paper related to the discussion in [27]? (3) Appendix F: “Therefore, [30]’s backtracking counterfactual does not reduce to Pearl’s counterfactual even when $A^∗ =\emptyset$.” This is incorrect. Neither non-backtracking nor backtracking is a special case of the other. Appendix A of [30] presents a unified framework for counterfactual reasoning that integrates both backtracking and non-backtracking counterfactuals, as suggested by an area chair at that conference. Also, if $A^∗ =\emptyset$, then there is no counterfactual query. References: [12] Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In International Conference on Learning Representations, 2019. [27] Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021 [30] Julius von Kügelgen, Abdirisak Mohamed, and Sander Beckers. Backtracking counterfactuals. arXiv preprint arXiv:2211.00472, 2022. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Weaknesses*** **(1) My first concern is ... However, the experiments in this paper sample from the prior distributions of ... counterfactual distribution.** Thanks for raising the concern. To clarify, we sample from the posterior distribution of $U$. First, due to the monotonicity assumption, sampling from the observational distribution amounts to sampling from the distribution of the exogenous variables. More importantly, during inference, the only difference is that non-backtracking counterfactuals use interventions on $A$, while natural counterfactuals use interventions on $C$, with observed evidence considered in both. Hence, we are sampling from posterior distribution of $U$. We have made this clearer in updated manuscript. Your further feedback would be appreciated. **(2) This leads me to the purpose of the paper. ... answer these questions.** This is a good point that we had in mind before starting the project. We have revised the presentation to highlight our motivation and explain why we prefer feasible interventions for addressing counterfactual queries. We are motivated by observing that non-backtracking counterfactual generation, for all its merits, often results in scenarios that are not realistic (and so, as a consequence, unreliable when it is based on training data). On the other hand, we also think that fully backtracking counterfactuals do not have much implication for guiding actions, because they appeal to intervention on unobserved or even unobservable, unidentifiable variables. We thus develop a framework that generates realistic counterfactuals by finding feasible interventions on observed variables while minimizing backtracking. In particular, when the intervention set $C$ picked out by our algorithm is sufficiently simple (a special case is that it is identical to original target set $A$), the counterfactual will remain useful for guiding actions. As demonstrated by our experiments, generating a non-backtracking counterfactual could be very unreliable, when it is not "natural" in our sense. So one use of our framework is to check on the reliability or feasibility of non-backtracking counterfactuals, and when dubious, to generate a reliable surrogate by adjusting the antecedent of the counterfactual. We think that reliable information about the adjusted counterfactuals will be more useful than probably unreliable information about the original, non-backtracking counterfactuals. **(3) Further, the concept of "naturalness" ... but it remains immutable.** Thanks for raising the issue. As noted in the footnote on page 4, "naturalness" can have various interpretations. In this paper, we provide a general framework for natural counterfactuals without considering the meanings of specific variables. Hence, we do not discuss all possible constraints for various real-world scenarios that deserve to be labeled "naturalness". It is possible and in some situations important to incorporate more constraints, such as the relative ease of intervention, into this framework, though we did not attempt that in this paper. We will consider using another term if "naturalness" is perceived as inapt or misleading. We have included a remark on your good point in updated paper. ***Questions*** **(1) Line 46: Reference [12] addresses ... ... It is not related to SCM counterfactuals.** Yes, [12] is based on the potential outcome framework, but we believe that one of its points is aligned with our statement: "The fact that counterfactual outcomes are unobservable (i.e., not present in any training data) makes estimating treatment effects more difficult than the generalization problem in the supervised learning paradigm." Still, we are reconsidering whether this reference is more helpful than misleading in our context. Thanks for this helpful comment. **(2) Line 48: How is the current paper related to the discussion in [27]?** Thanks for the question. One of our concerns about non-backtracking counterfactuals, despite their novelty and elegance, is precisely their frequent requirement to do out-of-sample generalization. This is why we believe the discussion in [27] of the difficulty of the out-of-distribution problem is very relevant. If you do not think this is convincing to you, please let us know. **(3) Appendix F: “[30]’s backtracking counterfactual does not reduce to Pearl’s ... no counterfactual query.** This statement of ours is a bit confusing. Thanks for pointing it out! We did not mean to claim that backtracking counterfactuals in [30] are or should be a special case of Pearl’s counterfactuals. What we meant was that $A^* = \emptyset$ represents a special though extreme case of a counterfactual query, and intuitively, when $A^* = \emptyset$, the counterfactual instance should be exactly the same as the actual instance, for nothing is supposed in the antecedent of the counterfactual. Pearl's counterfactuals agree with this intuition. However, [30]'s backtracking counterfactual does not reduce to this result in this special case, as it allows changes in the exogenous variables even in the case of a null antecedent. Indeed, it would in the special case allow any value setting of the exogenous variables, for they are all compatible with the null antecedent. Note that another extreme case could illustrate that backtracking counterfactuals proposed in [30] seem to invoke gratuitous changes sometimes. In general, when the counterfactual value $a^*$ is supposed to be equal to the actual value $a$, Pearl's theory will return the exact same instance as the actual one, which is intuitively reasonable, but the theory in [30] will give positive probability to other instances as well. One may stipulate that $a^*$ must be different from $a$ in order to count as a counterfactual query, but to our knowledge, the literature usually understands counterfactuals as just subjunctive conditionals, which also includes subjunctive conditionals with true antecedents as special cases. --- Rebuttal Comment 1.1: Comment: Thank you for the responses; I appreciate your efforts. Here is my response: (1) Let's consider the experiment involving the toy 1 example:The prior distribution of the exogenous variables is assumed to be a standard Gaussian. The factual observation is given as $(n_1, n_2, n_3) = (−0.59, 0.71, −0.37)$. Due to the monotonicity of the functions with respect to U, I calculated the values ($u_1, u_2, u_3) = (-0.59, 0.36, -0.93)$. As a result, the posterior counterfactual distribution becomes a point mass (Dirac delta) centered at ($u_1, u_2, u_3) = (-0.59, 0.36, -0.93)$. This means that sampling from this posterior distribution will always gives us the same values, $(u_1, u_2, u_3) = (-0.59, 0.36, -0.93)$. However, if I understand correctly, you are sampling 10,000 times from the prior distribution of the exogenous variables, which is the standard Gaussian. Counterfactual reasoning, whether non-backtracking or backtracking, generally requires knowledge of the Structural Causal Model (SCM). In the absence of an SCM, certain estimates can be made under specific conditions if both observational and interventional data are available, as discussed in a paper by Jin Tian and Judea Pearl. I did not observe any application of non-backtracking or backtracking counterfactual reasoning in this paper. However, the term "counterfactual" has been used in other contexts, similar to this paper, where an optimization method finds the closest point to a given data point. This approach is also seen in works like those by Wachter et al. and subsequent related papers. (2) I still don't see a clear use case or problem statement that can be effectively addressed using the results of your optimization framework. Even in the Toy 1 example, if we make the request $do(n_2=0.19)$, our interest lies in the interventional distribution. This request is necessarily out-of-distribution. How does finding values of $n_1$ and $n_2$ that satisfy the optimization actually solve the problem at hand? It's also worth noting that this approach isn't performing $do(n_1)$ or $do(n_2) $ interventions but rather finding the closest points within the observational distribution. (3) I agree that there could be different different interpretation of "Naturalness". Thanks! Questions: (1) Thanks for the explanation. (2) You wrote: "One of our concerns about non-backtracking counterfactuals, despite their novelty and elegance, is precisely their frequent requirement to do out-of-sample generalization." Out-of-sample generalization is not a concern of non-backtracking (interventional) counterfactuals; instead, it is a necessary feature of it. (3) I read [30] again. They have Property 1 (preference for closeness). In other words, the closest (most similar) world to an actual world is itself. Please note that this is not important for the main issues of the paper. --- Rebuttal 2: Comment: We are truly grateful for your prompt feedback and for providing more details regarding your original concerns. We also appreciate your acknowledgment that "naturalness" can have various interpretations and that you liked the explanation for Reference [12]. We are pleased to address your remaining questions and look forward to your further feedback. **W(1)-1: Let's consider the experiment involving the toy 1 example: ... $(n_1, n_2, n_3)=(-0.59, 0.71, -0.37)$. Due to the monotonicity of the functions with respect to U, I calculated the values $(u_1, u_2, u_3)=(-0.59, 0.36, -0.93)$. As a result, the posterior counterfactual distribution becomes a point mass (Dirac delta) centered at $(u_1, u_2, u_3)=(-0.59, 0.36, -0.93)$. This means that sampling from this posterior distribution will always gives us the same values, $(u_1, u_2, u_3)=(-0.59, 0.36, -0.93)$. However, if I understand correctly, you are sampling 10,000 times from the prior distribution of the exogenous variables, which is the standard Gaussian.** Thank you for this question, and please see our response below. First, *our paper does not mention sampling 10,000 times from the prior distribution of the exogenous variables.* Instead, our test set comprises 10,000 data points, and we form 10,000 counterfactual queries to assess the effectiveness of counterfactual inference. Specifically, for each query, we use one data point as evidence, such as $(n_1, n_2, n_3) = (-0.59, 0.71, -0.37)$, and randomly sample a value for $n_2$ from its distribution as the counterfactual supposition, such as $n_2 = 0.19$. Second, for any query, our inference procedure can be seen as following Pearl's three-step process: (1) Update the noise distribution given the evidence to a posterior; (2) Modify the SCM using the identified "natural intervention"; (3) Conduct inference using the posterior noise distribution and the modified SCM. As stated in Line 146, the only difference from the non-backtracking inference lies in Step (2), where we perform a least-backtracking feasible intervention on the variable set $C$, which consists of some causal ancestors of the variable set $A$ in addition to $A$. Notably, when the intervention on $A$ alone already satisfies our criteria, we have $C = A$, yielding the non-backtracking counterfactual. Specifically, in our method as well as in Pearl's, *the first step* involves updating the prior exogenous distribution $p(U)$ to the posterior distribution given the evidence, and in the example you mentioned, would yield a point mass distribution on $U=(-0.59, 0.36, -0.93)$, as you noted. In our implementation, normalized flows ensure that by inputting $(n_1, n_2, n_3) = (-0.59, 0.71, -0.37)$ into our model, we obtain the unique posterior value of $U$ (as functions are invertible). *The second step* involves modifying the SCM. In non-backtracking counterfactuals, an intervention $do(n_2 = 0.19)$ is applied, without changing anything in $n_2$'s causal upstream. In contrast, natural counterfactuals may invoke some backtracking if needed, and in this case would apply a least-backtracking feasible intervention $do(n_1 = -0.02, n_2 = 0.19)$ obtained through our Feasible Intervention Optimization, which would yield an instance that satisfies our "naturalness" criterion. *In the final step*, both methods utilize the updated noise distribution (now a single point) and the modified SCM to perform inference and obtain the outcome value of $n_3$. We have refined the presentation of our inference procedure in the updated manuscript. Thanks again for giving us the opportunity to clear misunderstanding. **W(1)-2: In the absence of an SCM, certain estimates can be made under specific conditions if both observational and interventional data are available, as discussed in a paper by Jin Tian and Judea Pearl. I did not observe any application of non-backtracking or backtracking counterfactual reasoning in this paper.** Sorry, we are not sure what you meant by "application of non-backtracking or backtracking counterfactual reasoning"--your further feedback would be helpful. In this paper, we made some simplifying assumptions, compared to the classical work on (non-backtracking) counterfactual inference by Pearl and his collaborators (Tian, Shpister, Bareinboim, etc.), especially the assumption of no latent confounding (our causal diagram is a DAG with no bi-directed arcs), along with the assumptions of access only to observational distribution and monotonicity. As a result, we need not invoke the full machinery of those seminal works. Still, as explained above, we are effectively following Pearl's three-step procedure. Perhaps we misunderstood your concern here and, if so, would very much appreciate your elaboration. Title: Response 1-1 --- Rebuttal 3: Title: Response 1-2 Comment: **W(1)-3: However, the term "counterfactual" has been used in other contexts, similar to this paper, where an optimization method finds the closest point to a given data point. This approach is also seen in works like those by Wachter et al. and subsequent related papers.** Thank you for the opportunity to clarify the differences between our paper and Wachter et al's work and much of the literature on counterfactual explanation. We have cited the work of Wachter et al. and related papers, noting that our paper and their work on counterfactual explanations fundamentally differ in purpose, techniques, and cognitive frameworks. The term "counterfactual" in such counterfactual explanations is unrelated to "counterfactual" in causal inference, as they do not rely on causality or the SCM framework. For more details, please refer to Section III-C in [Wachter et al.](https://arxiv.org/pdf/1711.00399), which explains that counterfactual explanations do not employ causal assumptions. Typically, in counterfactual explanations, given an input (image) $x$ and an output (class) $y$ from a model, the goal is to find another input $x'$ that leads to a different output $y'$ while ensuring $x'$ is as similar as possible to the original input $x$ based on their value difference. Although counterfactual explanations are unrelated to causality, they borrow causal terminology, such as "counterfactual," to refer to $x'$. This helps explain the classification behavior of the given model. We can summarize the key differences between our paper and counterfactual explanations as follows: - **Different Purposes and Cognitive Frameworks**: Based on a general machine learning framework, counterfactual explanations aim to explain model behavior by finding another input with minimal changes that results in a different class. In contrast, similar to Pearl's counterfactuals, natural counterfactuals perform counterfactual inference to obtain counterfactual outcomes based on a causal framework, where we also consider the feasibility of interventions and prefer the least backtracking. - **Different Technologies**: Abstractly, our method infers an outcome given an input, while counterfactual explanations find an input given a target output. - **Differences from a Causal Perspective**: Although Wachter et al.'s counterfactual explanations are unrelated to causality, from a causal perspective, they often address anti-causal problems, such as classifying an image $x$ into a category $y$, where $y$ is treated as the cause of $x$, as commonly seen in causal inference literature. Since counterfactual explanations do not build on the SCM framework, even though they try to minimize the value difference between $x'$ and $x$, it is difficult to interpret the distance from a causal perspective, unlike in our approach, which is explicitly based on causal assumptions as given in a causal DAG. --- Rebuttal 4: Title: Response 1-3 Comment: **W(2)-1: I still don't see a clear use case or problem statement that can be effectively addressed using the results of your optimization framework. Even in the Toy 1 example, if we make the request $do(n_2=0.19)$, our interest lies in the interventional distribution. This request is necessarily out-of-distribution. How does finding values of $n_1$ and $n_2$ that satisfy the optimization actually solve the problem at hand?** Thank you for elaborating on your concern. A general counterfactual query takes the following form: Given evidence $E$, what would the value of $B$ have been if $A$ had taken the value $a^*$? Non-backtracking counterfactuals, fully backtracking counterfactuals, and natural counterfactuals offer three different interpretations of this general query. Specifically, in non-backtracking counterfactuals, the counterfactual supposition $A = a^*$ is interpreted to be realized by directly intervening on $A$ alone, i.e., using $do(A)$, as you mentioned. In fully backtracking counterfactuals, [30] interprets the counterfactual supposition to be realized by changing (which amounts to intervening on) exogenous variables. In our natural counterfactuals, we in general realize the counterfactual supposition by intervening on $C$, which includes $A$ and possibly some of $A$'s causal antecedents, to ensure that the intervention is feasible in our sense. As we can see, the three methods use different interventions about the supposition in a counterfactual query, and neither fully backtracking counterfactuals nor our natural counterfactuals interpret a counterfactual query as necessarily a query about the distribution resulting from intervening on $A$ alone. In our framework of natural counterfactuals, we aim to generate counterfactuals that are "natural" with respect to the actual distribution but otherwise stay as close to non-backtracking as possible. Therefore, we use Feasible Intervention Optimization to find the least-backtracking feasible intervention on $C$ within the support of the actual distribution by minimizing the extent of backtracking, as explained in Question W(1)-1. For example, in Question W(1)-1, the set $C = (n_1, n_2)$, obtained through our optimization, is used to answer the query. Additionally, similar to our natural counterfactuals, backtracking counterfactuals in [30] essentially use data points from the observed distribution to answer $A = a^*$ as well. The difference is that we intervene on endogenous variables, which is potentially still useful for guiding actions, while [30] intervenes on exogenous variables, which is not useful for guiding actions, as far as we can see. You wrote: *"Even in the Toy 1 example, if we make the request $do(n_2 = 0.19)$, our interest lies in the interventional distribution. This request is necessarily out-of-distribution."* In the sense of "out-of-distribution" used in our paper, we respectfully think this is not true. An intervention, though cutting out some mechanisms originally in place, does NOT necessarily result in an instance outside the support of the observed distribution. For example, although the request $do(n_2 = 0.19)$ in the Toy 1 example is indeed out-of-distribution in our sense, not every intervention is out-of-distribution. For example, given the evidence $(n_1, n_2, n_3) = (-0.59, 0.71, -0.37)$, $do(n_2 = 0.50)$ is feasible, resulting in the counterfactual $(n_1 = -0.59, n_2 = 0.50)$ being within the support of the observable distribution. In our natural counterfactuals, we actually distinguish between in-distribution $do(A = a^*)$ and out-of-distribution $do(A = a^*)$ and adjust out-of-distribution cases by performing a least-backtracking feasible intervention on $A$'s ancestors $C$. --- Rebuttal 5: Title: Response 1-4 Comment: **W(2)-2: It's also worth noting that this approach isn't performing or interventions but rather finding the closest points within the observational distribution.** As explained in Questions W(1)-1 and W(2)-1, our method also performs interventions, following the same three-step inference procedure as non-backtracking counterfactuals: updating the posterior distribution, modifying the SCM by intervention, and conducting inference on the modified SCM. The key difference is that we perform a least-backtracking feasible intervention on set $C$, which usually includes some of $A$'s causal ancestors in addition to $A$. If an intervention on $A$ alone is feasible, then $C = A$. In the example from W(1)-1, we perform interventions on both $n_1$ and $n_2$ simultaneously, whereas non-backtracking counterfactuals perform an intervention only on $n_2$. Specifically, in natural counterfactuals, both $n_1$ and $n_2$ have their links to their respective parents severed. Since $n_1$'s only parent node is $u_1$, the link between $n_1$ and $u_1$ is disconnected, and $n_1$ is directly assigned the value $-0.02$. Similarly, $n_2$'s links to $u_2$ and $n_1$ are severed, and $n_2$ is set to $0.19$. As noted in Question W(2)-1, when intervention $do(A)$ alone is feasible, i.e., $C = A$, our counterfactual outcome matches the non-backtracking outcome. For example, given the evidence $(n_1, n_2, n_3) = (-0.59, 0.71, -0.37)$, $do(n_2 = 0.50)$ is feasible, meaning that $(n_1 = -0.59, n_2 = 0.50)$ falls within the support of the observable distribution. In this instance, the inference is the same for both natural and non-backtracking counterfactuals. During the second step of inference, $n_2$'s links to $u_2$ and $n_1$ are severed, and $n_2$ is set to $0.50$. To reiterate, our approach is a *causal* approach, relying on a given causal structure represented by a DAG. It is not simply minimizing a non-causal distance within the observational distribution. **Questions:** **Q(2): You wrote: "One of our concerns about non-backtracking counterfactuals, despite their novelty and elegance, is precisely their frequent requirement to do out-of-sample generalization." Out-of-sample generalization is not a concern of non-backtracking (interventional) counterfactuals; instead, it is a necessary feature of it.** As explained in Question W(2)-1 above, being out-of-sample (in the sense used in our paper) is NOT a necessary feature of non-backtracking counterfactuals. For example, unlike $do(n_2 = 0.19)$ given the evidence $(n_1, n_2, n_3) = (-0.59, 0.71, -0.37)$, the intervention $do(n_2 = 0.50)$ is not out-of-sample, meaning that $(n_1 = -0.59, n_2 = 0.50)$ falls within the support of the observable distribution. Additionally, out-of-sample scenarios can pose challenges under certain conditions. For example, non-parametric models routinely struggle and often fail to generalize to out-of-sample data points, as demonstrated in our experiments, among others. **Q(3): I read [30] again. They have Property 1 (preference for closeness). In other words, the closest (most similar) world to an actual world is itself. Please note that this is not important for the main issues of the paper.** Yes, property 1 in [30] states that the highest probability density is assigned to the same value. However, during inference, they will sample not only the data point with the highest probability but also other data points with lower probability density. Technically, since [30] assumes continuous variables, the probability of sampling the exact closest world is zero (e.g., the value $a$ when $a^* = a$), whereas the probability is positive for sampling values around $a$. But we agree this point is not important for their main purposes. ``Thank you again for your useful feedback and precious time! We are eager to have further discussions with you and address your remaining concerns, if any.`` --- Rebuttal Comment 5.1: Comment: Thank you for your elaborate response! I do not have any further questions at this point. --- Rebuttal 6: Title: Thank you again and please consider updating the score Comment: Dear Reviewer JBox, Thank you once again for your prompt feedback. We hope your concerns have been addressed. Your comments have substantially helped improve our paper. If you think your questions have been properly addressed, we would be immensely grateful if you could *reconsider and update your recommendation*. If there are other questions, we hope to have opportunities to respond to them. Sincerely, Authors --- Rebuttal Comment 6.1: Comment: Based on your technical efforts, I’ve decided to increase my rating. However, in my humble opinion, the following two main concerns remain: (1) The terms "non-backtracking" and "backtracking" have specific meanings in fields like ML, philosophy, and cognitive science. To avoid confusion, it might be better to talk only about a new concept "Natural Counterfactuals." For instance, backtracking refers to observational counterfactuals and deals with use cases like, "Had I observed $n_2=0.19$, what would $n_3$ be?". Here, you track the difference between factual and counterfactual values back to the parents. In contrast, your paper begins by requesting a change to $n_2=0.19$, and then your optimization framework finds new values for both $n_1$ and $n_2$, resulting in a simultaneous change for two variables. This approach doesn't align with the established definitions of backtracking or non-backtracking. (2) What specific problem or use case does your approach address? For example, in lines 50-51, you mention autonomous driving. If the observational distribution is sunny weather with 40°C, then asking what would happen if the weather changed to heavy rain is a valid non-backtracking counterfactual, which would likely involve out-of-distribution samples. Replacing it with cloudy weather and 35°C, which is closer to the observational distribution, doesn't fully address the original question. The XAI recourse literature uses optimization methods to find counterfactual values based on causal structures. While these techniques are typically applied in supervised learning, similar use cases may exist in other areas, such as generative AI (GenAI). --- Reply to Comment 6.1.1: Title: Thank You for Your Kind Feedback Comment: Thank you very much for increasing your rating, and for sharing your remaining concerns. Regarding Question/Suggestion (1), we will follow your advice to focus more on the concept of "natural counterfactuals" and be clearer about the sense of backtracking used in this paper. With due respect, we do not think the term "backtracking" has become so fixed to a specific meaning in a specific approach (among many possible alternatives) that its general meaning of "not keeping the temporal or causal upstream fixed in making counterfactual supposition" should not be used anymore. But thank you for your reminder of the potential of causing confusion; we will make extra efforts to make the meanings of terms clear. For Question (2), we agree that our approach does not fully address the original question in that example, if the original question is a non-backtracking interpretation of the counterfactual query and the non-backtracking interpretation turns out to violate our standard for picking out an intervention to realize the counterfactual supposition. That is NOT our aim. Our motivation is that in such cases in which we have good reasons to doubt that we can answer the non-backtracking interpretation of a counterfactual query (say, what if it were raining hard *and, implicitly, other variables in its causal upstream were kept the same*?) reliably based on available data, instead of returning a bad answer or simply saying "do not know", we can also (or even should) tell the user that if they adopt an answerable interpretation of the question that involves some backtracking (say, what if it were raining hard *and the atmospheric pressure were low*? We use atmospheric pressure instead of temperature in our example because atmospheric pressure is more likely to be a causal ancestor of weather in the given causal diagram), there is an answer we can reliably offer and here is the answer. Moreover, the user may have started with such an interpretation to begin with, as counterfactual queries are often ambiguous or under-specified. In any case, we reiterate that we are dealing with a distinctive interpretation of a counterfactual query and do not aim to address the "original" non-backtracking interpretation of the query, when that "original question" is demonstrably hard to answer given the available data. Thank you once again for your time and excellent questions! --- Rebuttal 7: Title: Your Feedback Would Be Appreciated Comment: Dear Reviewer JBox, Thank you once again for your valuable comments. Your suggestions on clarifying our motivations and inference procedure were very helpful. We are eager to know if our responses have adequately addressed your concerns. Due to the limited time for discussion, we look forward to receiving your feedback and hope for the opportunity to respond to any further questions you may have. Yours Sincerely, Authors of Submission 4096
Summary: The paper takes the recently developed idea of backtracking counterfactuals and applies it to improve the realistic generation of counterfactuals from data, which is known to be a hard task as the standard, non-backtracking, counterfactuals lie out of the distribution and generative models perform badly on those. Strengths: The idea of using backtracking counterfactuals as being more "natural", and then invoking them to improve the generation of counterfactuals when the full SCM is not available, is a very good one. Weaknesses: There are some technical issues which need to be addressed before the paper can be accepted. Technical Quality: 2 Clarity: 3 Questions for Authors: Full disclosure: I reviewed a previous version of this paper for last year's conference, and the current version is much much better. Still, there are some issues which the authors should address. Some of them might be mistakes on my part due to my misunderstanding, others are questions for clarification, others are problems that need to be solved. First I present some larger issues, followed by some minor issues as they appear in the order of the paper. 1: I find the motivation rather odd, which is that the supposed benefit of natural counterfactuals is that they are easier to learn. But surely the main focus should be: which counterfactual (backtracking, non-backtracking, partial backtracking) is appropriate/meaningful in a given situation, rather than “which one can we learn”. Simply put, we shouldn't just focus on learning something because it's easy to learn, the thing learned should also be sensible. The paper actually remains silent on this issue, or rather, it seems to imply that non-backtracking counterfactuals are always the only sensible ones, and their natural counterfactuals are therefore just a heuristic. The reason I say this is implied is because in the experiments the non-backtracking one is taken as the ground truth. This is at odds with the related literature cited, which in fact argues that sometimes the backtracking counterfactual is the "ground truth", i.e., semantically it is what is meant with a counterfactual query. It would be good if the authors could be more explicit on where they stand on this issue. 2: There are several implicit assumptions that should be addressed. - It is assumed that there is a unique an(A)*, yet the definitions given do not guarantee unicity. - Initially it seems as if it is also assumed that an an(A)* always exists, which is not the case. Although this is made clear later, it would be good to flag this early on. 3: 159: This I don’t understand, because it violates the idea that C has to include A. Take the trivial case in which a*=a. Then you will get the empty LBF intervention, which of course does not include A. 4: Building on the previous point, what if we have A=a, and yet A=a was extremely unlikely (so in the far-end of the tail of the distribution). Now consider the counterfactual a*=a. Will we not get that there is no solution? And if so, isn't that strange? This could be mitigated by combining the distance criterion and the naturalness criterion in a more weighted fashion, rather than having the latter be a necessary criterion and only then apply the distance one. This would bring it more in line with [30], see their 3.17 in particular. 5: Continuing, note that Choice (2) seems to result in a variation of 3.17, except that the distance is here between endogenous variables. Note though that a distance on endogenous variables can easily induce a distance between exogenous variables, by considering the two endogenous states that would result from two exogenous states, and thus this is not an essential difference. 6: Th4.1 is described as being about the identifiability of counterfactuals, but it is not, because it already assumes knowledge of do(C=c*), and that is part of what needs to be identified. Furthermore, it is then said that the theorem confirms that counterfactuals fall within the support of the observational distribution, but this ignores the earlier point that an LBF need not exist, and thus we are only guaranteed to identify counterfactuals in the case that it does. 7: I did not have time to closely examine the experiments, except for the following: all the results are only about those counterfactuals which happen to be in the scope of the natural counterfactuals, and the others are excluded. Yet it seems crucial to know how many were excluded in this manner in order to evaluate how useful in practice this method is, so this should be reported as well. Minor issues: 81: "a most recent paper" I’d say a more accurate description is that [30] was the first paper to formally introduce backtracking counterfactuals within the causal models framework, and thus the current paper builds on that one. (Also, the usefulness for counterfactual explanations is also something that is explicitly discussed in [30].) 100: So the paper is limited to assuming independent and unique exogenous variables. Note that this is not the case in [30]. 121: "In this paper..." Why? The generalization seems trivial. 175: This assumes that all variables are real-valued. Yet later it is mentioned that not all variables need to be at the same scale. Isn't that inconsistent? 221: Why restrict to a single variable A all of a sudden? Th4.1: Doesn't the independence of U_i and Pa_i already follow from the independence of the exogenous variables? Th4.1: Why is there no mention of the distance criterion for do(C=c*)? Typos: 110: "distribution" -> "distributions" 123: "date" 234: "encourage" 303: "datasets, which" Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your previous feedback, which helped a lot to improve the paper. Thank you also for the new comments. We will address them and correct the typos you pointed out. **1: Surely the main focus should be: which counterfactual (backtracking, non-backtracking, partial backtracking) is appropriate/meaningful in a given situation, rather than “which one can we learn”. The reason I say this is implied is because in the experiments the non-backtracking one is taken as the ground truth.** Thank you for this insightful comment, which has helped make our motivation more clearly articulated. It is a consequence of our motivation that our counterfactuals are more learnable than non-backtracking counterfactuals, but being easier to learn is not the primary motivation. We are mainly motivated by observing that thoroughly non-backtracking counterfactual generation, for all its merits, often results in scenarios that are too unrealistic (and so, as a consequence, unreliable when it is based on training data). On the other hand, however, we also think that fully backtracking counterfactuals may not have much implication for guiding actions or decision-making, because they appeal to inventions in unobserved or even unobservable, unidentifable variables. We thus aim to develop a framework that generates sufficiently realistic counterfactuals by finding feasible interventions on observed variables while minimizing the extent of backtracking. In particular, when the intervention set $C$ picked out by our algorithm is sufficiently simple (a special case is that it is identical to the original target set $A$), the counterfactual will remain useful for guiding actions. *In addition, we hasten to clarify that we did not use non-backtracking counterfactuals as the ground truth.* We apply the same trained models for inferring non-backtracking and natural counterfactuals. In toy experiments, a ground-truth SCM serves as the gold standard to measure the error of inference. Specifically, we compare the ground-truth outcomes with the predicted outcomes after intervening on $A$ in non-backtracking counterfactuals or intervening on $C$ in our natural counterfactuals. In other words, non-backtracking counterfactuals and natural counterfactuals have different ground truths, and the experiments illustrate that the generative algorithm for the former deviates more from its ground truth than our generative algorithm does from the latter's ground truth. In real-world datasets, due to the lack of a ground-truth SCM, we measure how accurately predicted outcomes reflect desired image attributes, which are represented by the input values stated in the antecedents of the counterfactuals, for example, $t = 3$, meaning that thickness should be $3$. **2: It is assumed that there is a unique $an(A)^\*$, yet the definitions given do not guarantee unicity.** Thank you for pointing this out. We do not assume that $an(A)^*$ is unique. In fact, $an(A)^*$ can theoretically have multiple solutions (or no solution at all as mentioned in Section 5). We have made this clearer in our updated paper. Even though $an(A)^*$ can have multiple solutions, the inference method remains the same after we sample one value from the available solutions. **3: 159: This I don’t understand, because it violates the idea that C has to include A. Take the trivial case in which a\*=a. Then you will get the empty LBF intervention, which of course does not include A.** We follow the standard treatment by requiring that even when the counterfactual value $a^*$ is equal to the actual value $a$, an intervention would still be invoked to realize $a^*=a$ (among other things, the links between $A$ and its parents would still be severed). Hence, in the special case when $a^*=a$, $C$ still contains $A$. **4: Building on the previous point, what if we have A=a, and yet A=a was extremely unlikely (so in the far-end of the tail of the distribution). Now consider the counterfactual a\*=a. Will we not get that there is no solution?** Excellent point! In natural counterfactuals, we use a hyperparameter $\epsilon$ to determine whether a point is considered natural. Therefore, in this situation, there will be no solution if $A = a$ is so unlikely that it conflicts with $\epsilon$-natural generation. This is a consequence of introducing a parameter to control the degree of naturalness. However, if we only require the lowest level of naturalness, where $\epsilon$ is zero (i.e., we only require that counterfactuals are within the distribution support), this situation will be eliminated. Otherwise, yes, there could turn out to be no solutions for natural counterfactuals. As you noticed, this may happen in extreme situations, which are not "natural" themselves and already deserve extra inspection and attention. We have made it clearer in the paper. **5: Continuing, note that Choice (2) seems to result in a variation of 3.17, except that the distance is here between endogenous variables.** Thank you for this interesting comparison. Here we do not consider Choice (2) to be a variation of 3.17, because while 3.17 is based on value distance, Choice (2) relies on CDF distance. This distinction reflects a key shortcoming of 3.17: the distribution of exogenous variables is not theoretically identifiable, making it challenging to apply in practice. However, due to the properties of CDF distance, Choice (2) allows us to avoid calculating distances related to exogenous variables. Thus, Choice (2) can be computed without considering the distributions of exogenous variables. ***``Due to space limitations, we have included questions 6 and 7, along with minor issues, in the global response.``*** --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I thank the authors for their rebuttal. It has prompted some more questions. Re1: Thanks for the clarification. It still does not really address the question what the correct semantics for counterfactuals is though. 1.1: You speak of the non-backtracking counterfactual being "unrealistic", but here unrealistic seems to mean: if we try to estimate the standard, Pearl-style, interventional counterfactual without access to the ground truth SCM, we get bad answers. So the underlying semantics is still the standard interventional one, it's just that we don't have a good estimator for it. (If I understand correctly, the ground truths for both the non-backtracking and the natural counterfactuals are both expressed in terms of an interventional counterfactual, it's just that they are compared to different interventions.) 1.2: "because they appeal to interventions..." Fully backtracking counterfactuals do not appeal to interventions at all, on the contrary, they're directly at odds with the interventional semantics of Pearl. Instead, they appeal to a change in the initial conditions, which is not an intervention because the initial conditions are not determined by causal equations and thus no "laws of nature" are broken. So aside from the issue as to which styles of counterfactuals are easier to estimate without access to the ground truth SCM, there is the more fundamental issue of which type of counterfactual is the formal representation of the informal scientific or even natural language query that we are formalizing. 1.3: To be clear, the above is not a criticism of the method itself, I'm simply trying to get conceptual clarity on what the assumed true underlying semantics for counterfactuals is. I guess this is the question that would resolve it: say you _do_ have access to the ground truth SCM, and thus perfect inferences can be made. And now we want to reason about some counterfactual "If A were a*". How would you compute it? Using do(A=a*), or using change(A=a*)? Re: 3: You should change the wording, because it does not match your explanation. The difference between a and a is the emptyset. --- Rebuttal 2: Title: Response 1 Comment: We are grateful for your time and insights, and really appreciate the opportunity to further clarify our responses to your questions. **1.1: You speak of the non-backtracking counterfactual being "unrealistic", but here unrealistic seems to mean: if we try to estimate the standard, Pearl-style, interventional counterfactual without access to the ground truth SCM, we get bad answers. So the underlying semantics is still the standard interventional one, it's just that we don't have a good estimator for it. (If I understand correctly, the ground truths for both the non-backtracking and the natural counterfactuals are both expressed in terms of an interventional counterfactual, it's just that they are compared to different interventions.)** This is an excellent point. Our semantics is indeed still based on interventions, and you are exactly right that the difference between ours and the standard Pearlian semantics is that they (often) pick out different interventions to realize the counterfactual supposition in the antecedent of a counterfactual conditional. In other words, given a counterfactual query: what if $A = a^*$?, we have different interpretations on how this change is supposed to be realized (or how the supposition $A=a^*$ is to be further specified). In Pearl's semantics, the interpretation is simple and elegant: The change is to be realized by an intervention on A alone (or the supposition is to be specified as "$A=a^*$ *and every variable in the causal upstream of A remains the same*"); in our semantics, by contrast, we impose a standard for picking out an intervention to realize the supposition, which may require intervening on some of $A$'s causal antecedents as well (and in this sense involve backtracking). So, although they are both based on interventions, they still constitute different interpretations of the original counterfactual supposition. Regarding our claim that the non-backtracking counterfactuals are sometimes "unrealistic", you gave a very nice statement of one important reason why we think this. For the purpose of this paper, perhaps this epistemic motivation is the most relevant and easiest to see, and we will consider focusing on this motivation and elaborating on its significance. In your original review, you seemed to indicate that this motivation is not sufficiently compelling, on which we respectfully disagree. If reliable answers to a counterfactual query interpreted in a certain way are very difficult or even impossible to obtain, it seems to us an excellent reason to develop a different but related interpretation that can still serve the desired functions to a good extent and admit more reliable answers. Imagine someone comes with a question: If his weight were decreased to 70kg, what would be his risk of diabetes? Suppose, given the causal assumptions and data we have, we cannot reliably answer the non-backtracking interpretation of this question. It seems that instead of just saying that we do not know, we can or even should adopt an answerable interpretation of the question and tell him something we can reliably offer, say, if he decreased weight to 70kg *and exercised for more than 150 minutes a week*, his risk would be such and such. For what it is worth, we also think a perhaps more important reason for sometimes regarding non-backtracking counterfactuals as "unrealistic" is that their way of interpreting the counterfactual supposition, say, $A=a^*$, as intervening on A alone while keeping every causal ancestor in the model intact, may be extremely difficult or practically impossible to realize (e.g., having a person stand still on a bus while keeping the sudden braking in the toy example we mentioned in the paper), as indicated by the rarity of the scenario in the data, and for that reason is not very relevant or helpful for guiding actual actions or decision-making. Imposing a naturalness criterion is one way to ensure that the picked out interventions are at least feasible to carry out in light of the evidence we have. --- Rebuttal 3: Title: Response 2 Comment: **1.2: "because they appeal to interventions..." Fully backtracking counterfactuals do not appeal to interventions at all, on the contrary, they're directly at odds with the interventional semantics of Pearl. Instead, they appeal to a change in the initial conditions, which is not an intervention because the initial conditions are not determined by causal equations and thus no "laws of nature" are broken. So aside from the issue as to which styles of counterfactuals are easier to estimate without access to the ground truth SCM, there is the more fundamental issue of which type of counterfactual is the formal representation of the informal scientific or even natural language query that we are formalizing.** We agree. In our view, informal scientific and natural language queries in the form of counterfactual conditionals are very often ambiguous or under-specified (in their antecedents), the disambiguation or specification of which depends on contexts. We do not pretend to be able to resolve this fundamental issue. We also agree that fully backtracking counterfactuals do not appeal to interventions in the usual Pearlian sense, for those are restricted to endogenous variables and are associated with breaking endogenous mechanisms. However, formally speaking, changing the values of exogenous variables is analogous to intervening on exogenous variables, in the sense that they change those variables without affecting the mechanisms for other variables in the model (the invariance of the mechanisms for other variables is for many the crucial hallmark of an intervention). In fact, some authors explicitly advocate applying the notion of intervention to exogenous variables as well. It is in this spirit that we say fully backtracking counterfactuals appeal to "interventions" to exogenous variables. We now see that it is potentially misleading, and we will be more careful with the wording. **1.3: To be clear, the above is not a criticism of the method itself, I'm simply trying to get conceptual clarity on what the assumed true underlying semantics for counterfactuals is. I guess this is the question that would resolve it: say you do have access to the ground truth SCM, and thus perfect inferences can be made. And now we want to reason about some counterfactual "If A were a\*". How would you compute it? Using do(A=a\*), or using change(A=a\*)?** In such a case, yes, it will depend on how to interpret the counterfactual supposition, and we agree that in many contexts, the non-backtracking interpretation will probably stand out as the most salient (or as the philosopher David Lewis once wrote, "standard") resolution. In other words, in such a case, the epistemic motivation for adopting our interpretation is of course annulled. Still, the more ontic motivation described above, having to do with the feasibility of carrying out a non-backtracking intervention, may still be relevant. Thank you again for making these important and subtle questions so clear. We will improve our statements of the motivations accordingly. **Re: 3: You should change the wording, because it does not match your explanation. The difference between a and a is the emptyset.** That's right. It is our sloppiness and thank you for catching it. We will rephrase carefully. ``We look forward to receiving your feedback and would appreciate the opportunity to answer any additional questions you may have.`` --- Rebuttal Comment 3.1: Comment: Thanks for these further clarifications. I still have some reservations about your description of fully backtracking counterfactuals, because their philosophical motivation was precisely to avoid anything even resembling an intervention, (strictly speaking, there is backtracking all the way to the initial conditions of the universe, it just so happens that in causal models the buck stops at the exogenous variables), but I think this is an issue that is mostly orthogonal to the paper and thus not that relevant. I agree that the epistemic perspective does in and of itself justify the value of your contribution, and I would even suggest that in the final version you try to remain as neutral as possible regarding what the correct semantics for counterfactuals is (or even what the correct semantics is in some specific context), because the contribution of the paper is not really about that and therefore the less you assume the better. For what it's worth, personally I do expect that in some contexts, your partially backtracking counterfactuals are also the correct counterfactuals, in the sense that people would indeed be inclined to interpret counterfactuals in that manner if the circumstances warrant. But perhaps that's something to evaluate in future work, together with some psychologists. --- Reply to Comment 3.1.1: Title: Your Feedback Is Much Appreciated Comment: Thank you so much for all of your feedback and for appreciating the value of this work. We greatly enjoyed our discussions and learned a lot. We will avoid the misleading characterization of fully backtracking counterfactuals as appealing to interventions on exogenous variables, and will make it clear that our aim is not to propose a "correct" semantics for counterfactuals, but instead one that is useful in some situations and for some purposes. Once again, we are very grateful for your time and insights.
Summary: This paper addresses a key limitation of non-backtracking counterfactual reasoning in causal inference. The authors argue that Pearl's framework often generates unrealistic scenarios. To solve this, they propose "natural counterfactuals," which allow controlled backtracking to ensure scenarios remain realistic.They introduce Feasible Intervention Optimization (FIO), a novel framework that generates natural counterfactuals by incorporating a naturalness constraint. This ensures counterfactual instances are plausible given the observed data distribution. The authors also propose distance measures to minimize backtracking while achieving the desired outcome. Strengths: - The paper is well-written. - The paper identifies a crucial issue with non-backtracking counterfactuals, highlighting the importance of generating realistic scenarios. - FIO provides a principled and practical method for generating natural counterfactuals with clear mathematical structure. - The paper presents convincing experiments on both simulated and real-world datasets, demonstrating the effectiveness of the proposed approach. Weaknesses: - FIO involves several parameters and choices (naturalness criteria, distance measures), which may require careful tuning and consideration for different applications. - The paper mainly focuses on contrasting natural counterfactuals with non-backtracking ones. A more thorough comparison with existing backtracking approaches is missing in the experiment section. - The paper mainly focuses on contrasting natural counterfactuals with non-backtracking ones. A more thorough comparison with existing backtracking approaches is missing in the experiment section. Technical Quality: 3 Clarity: 3 Questions for Authors: - How sensitive is the proposed method to violations of the assumptions about the underlying SCM? - What is the computational complexity of FIO and how does it scale with the size of the causal graph and the dataset? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Weaknesses*** **1. FIO involves several parameters and choices (naturalness criteria, distance measures), which may require careful tuning and consideration for different applications.** Thanks for raising this point. We agree that different applications may warrant different choices, as stated in our Conclusion section. Our main purpose is to provide a general and flexible framework for generating natural or realistic counterfactuals with minimal backtracking. The exact choices of parameters and measures can and should be tailored to specific applications, as you suggested. **2. The paper mainly focuses on contrasting natural counterfactuals with non-backtracking ones. A more thorough comparison with existing backtracking approaches is missing in the experiment section.** Thank you for the suggestion. In light of them, we have added such comparisons to our updated paper. Backtracking Counterfactuals in [30] assume a known joint distribution over factual and counterfactual exogenous variables, which is often impractical due to the non-identifiability of the exogenous distribution. In contrast, we only require access to the observed distribution. In our simulation experiments, for the purpose of implementing Backtracking Counterfactuals, we assume the joint distribution over factual and counterfactual exogenous variables is known. the results show that backtracking counterfactuals can generate infinite counterfactuals for a single query, with higher probability given to those more similar to the actual instance. Conversely, our approach typically produces a single counterfactual closest to the actual instance. Note that an extreme case illustrates that [30] sometimes invokes gratuitous changes. When the counterfactual value $a^*$ equals the actual value $a$, the counterfactual instance should logically match the actual one. However, [30] also assigns positive probability to other instances. Reference: [30] Julius von Kügelgen, Abdirisak Mohamed, and Sander Beckers. Backtracking counterfactuals. arXiv preprint arXiv:2211.00472, 2022. ***Questions*** **1. How sensitive is the proposed method to violations of the assumptions about the underlying SCM?** As widely seen in the literature of causal inference and counterfactual inference, certain assumptions are essential. For Theorem 4, if the assumptions do not hold, identifiability may not be guaranteed. For example, suppose $Y=XU_1+U_2$ where $Y$ and $X$ are endogenous variables and $U_1$ and $U_2$ are exogenous noises, then the counterfactual outcome will not be identifiable. **2. What is the computational complexity of FIO and how does it scale with the size of the causal graph and the dataset?** The computational complexity of FIO scales linearly with both the size of the causal graph (specifically, the number of ancestors of $A$) and the dataset size. Formally, the overall complexity is $O(KPTM)$, where $K$ is the number of ancestors of $A$, $P$ is the number of parameters in the neural networks, $T$ is the number of optimization steps, and $M$ is the number of data points in the dataset. This linear scaling indicates that FIO is reasonably scalable, making it suitable for large causal graphs and datasets.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time dedicated to reviewing this paper and their valuable comments. We are encouraged by and grateful for the comments that say our paper was described as ``"well-written"`` (DQof) and ``"interesting"`` (fYWn), and was considered to``"make a strong contribution"`` (fYWn). Additionally, we appreciate the recognition of our approach as ``"a principled and practical method"`` with a ``"clear mathematical structure"`` (DQof), and as ``"a very good one"`` (v2As) and ``"interesting"`` (fYWn). The acknowledgment of our empirical results demonstrating ``"convincing experiments"`` (DQof) and ``"improved performance"`` (fYWn) is particularly exciting. We have carefully responded to all important questions and hope to properly address any remaining concerns. --------------- ***Below are additional responses to questions 6 and 7, as well as minor issues, for Reviewer v2As.*** **6: Th4.1 is described as being about the identifiability of counterfactuals, but it is not, because it already assumes knowledge of do(C=c\*), and that is part of what needs to be identified. Furthermore, this ignores the earlier point that an LBF need not exist, and thus we are only guaranteed to identify counterfactuals in the case that it does.** Thank you for this valuable comment. When discussing the identifiability of natural counterfactuals, we assume that a natural counterfactual exists, meaning that $C$ has at least one solution. Therefore, we do not address the distance criterion for $do(C=c^*)$ and the solutions of $C$ directly. Theoretically, $C$ may have one or more identifiable solutions, or no solution at all, and this can always be identified. For multiple solutions, we sample one value from these solutions, and its counterfactual is identifiable according to Theorem 4.1. We have clarified this in our updated manuscript. **7: I did not have time to closely examine the experiments, except for the following: all the results are only about those counterfactuals which happen to be in the scope of the natural counterfactuals, and the others are excluded. Yet it seems crucial to know how many were excluded in this manner in order to evaluate how useful in practice this method is, so this should be reported as well.** We present this result in Table 6 on Page 14, where we report the frequency of unfeasible solutions per 10,000 instances in the MorphoMNIST dataset. The data reveals a consistent trend: as the value of $\epsilon$ increases, the frequency of unfeasible solutions also rises. This occurs because a higher $\epsilon$ corresponds to a stricter standard of naturalness. In theory, when $\epsilon = 0$ (i.e., we only require that counterfactuals are within the distribution support), there are no unfeasible solutions. | $\epsilon$ | Unfeasible Solutions | |:------------:|:---------------------:| | 1e-4 | 794 | | 1e-3 | 975 | | 1e-2 | 1166 | ***Minor issues*** **81: "a most recent paper" I’d say a more accurate description is that [30] was the first paper to formally introduce backtracking counterfactuals within the causal models framework, and thus the current paper builds on that one. (Also, the usefulness for counterfactual explanations is also something that is explicitly discussed in [30].)** Thanks for this suggestion. We will add an accurate description of [30], as the first paper to formally introduce (fully) backtracking counterfactuals within the SCM framework. At the same time, we wish to add that our work differs significantly in both the motivation and the approach. We aim to find feasible interventions to make counterfactuals more realistic by intervening on endogenous variables. In contrast, [30] assumes a known exogenous distribution and always intervenes on exogenous variables. We explain the key differences between our work and [30] in Section E2 of the Appendix. **100: So the paper is limited to assuming independent and unique exogenous variables. Note that this is not the case in [30].** We completely agree. One purpose of using the independence assumption is to ensure the identifiability of natural counterfactuals when we only have access to endogenous variables. If we also assume a known distribution of exogenous variables, our work can be generalized. **121: "In this paper..." Why? The generalization seems trivial.** and **221: Why restrict to a single variable A all of a sudden?** We agree. $A$ could be a set of variables. In fact, in our experiment on the 3DIdentBox dataset, we perform interventions on a set of three variables. In the paper, for the sake of simplicity, we initially assume $A$ is a single variable. We have modified our presentation to explicitly state that $A$ can be a single variable or a set of variables. **175: This assumes that all variables are real-valued. Yet later it is mentioned that not all variables need to be at the same scale. Isn't that inconsistent?** We meant to say that all variables are continuous, and scale refers to the range of support. For example, $V_1$ follows a normal distribution with a range of $(-\infty, \infty)$. $V_2$ follows a uniform distribution over the interval $[0, 1]$. We have clarified this point in our updated manuscript. **Th4.1: Doesn't the independence of U_i and Pa_i already follow from the independence of the exogenous variables?** That is true. We have updated the presentation accordingly. Thank you. **Th4.1: Why is there no mention of the distance criterion for do(C=c\*)?** Please see our response to Q6 above. We have clarified it in the updated manuscript. Pdf: /pdf/905466e80dc9aff4e105d5c30dc83f3d715306ef.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
On Divergence Measures for Training GFlowNets
Accept (poster)
Summary: This paper investigates alternative training methods for Generative Flow Networks (GFlowNets) by evaluating various divergence measures, including Renyi-α, Tsallis-α, reverse, and forward Kullback-Leibler (KL) divergences. Traditional methods focusing on minimizing log-squared differences are shown to lead to biased and high-variance estimators. The authors propose efficient estimators for the stochastic gradients of these divergences and introduce control variates to reduce gradient variance. Empirical results across diverse tasks demonstrate that minimizing these divergence measures significantly accelerates training convergence and enhances stability compared to traditional methods. The paper establishes theoretical connections between GFlowNets and variational inference, extending these insights to arbitrary topological spaces, and highlights the practical effectiveness of control variates in improving training efficiency. Strengths: - **Comprehensive Evaluation**: The paper thoroughly evaluates multiple divergence measures and their impact on GFlowNet training. - **Theoretical Insights**: Establishing theoretical connections between GFlowNets and VI broadens the understanding of these models. - **Practical Contributions**: The design of control variates to reduce gradient variance is a significant practical contribution that can be applied in various optimization scenarios. - **Variance Reduction Techniques**: The introduction of variance reduction techniques, including control variates and leave-one-out estimators, effectively addresses the high variance issue in gradient estimates, enhancing the learning stability and efficiency. Weaknesses: - **Theory** The theoretical contribution builds upon previous works [4,5]. However, I do not think the theory itself has enough contribution. Because the Measurable pointed DAG is from previous work [5] and the main theoretical claim, Proposition 1, has limited novelty compared with Proposition 1 in [4]. Therefore, unlike previous works [4,5], they have their novel theoretical contributions and synthetic experiments would suffice. Thus, for this paper, stronger experiments are required, as will be discussed in the next. - **Experiments - Part 1** This paper does not contain enough real-world tasks, such as fragment-based molecule generation [1], graph combinatorial optimization [2], and RNA sequence generation [3] to illustrate its main contribution. These are standard tasks in evaluating GFN performances and are necessary to include. For the only real-world task in the paper, i.e., the BPI task, this paper admits that "not statistically significant" in line 334. Therefore, it is unclear whether the proposed methods to use other divergence measures will be meaningful in real-world scenarios. - **Experiments - Part 2** For the synthetic tasks, the plots in Figure 3 are also missing some lines. For example, why are there only two lines in the **Sets** plots? Also, there is no single best loss function that is uniformly better than the KL baseline. Therefore, additional divergence might seem unnecessary. There should be stronger evidence to support the use of other measures. Or, the authors can provide a guideline on how to choose the best measures given prior information about the tasks. - I would raise my scores if additional experiments are included and the proposed methods are indeed beneficial in more complex tasks. [1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022. [2] Zhang, Dinghuai, et al. "Let the flows tell: Solving graph combinatorial problems with GFlowNets." Advances in Neural Information Processing Systems 36 (2024). [3] Kim, Minsu, et al. "Local search gflownets." arXiv preprint arXiv:2310.02710 (2023). [4] Malkin, Nikolay, et al. "GFlowNets and variational inference." arXiv preprint arXiv:2210.00580 (2022). [5] Lahlou, Salem, et al. "A theory of continuous generative flow networks." International Conference on Machine Learning. PMLR, 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: See **Weaknesses**. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See **Weaknesses**. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and suggestions. Below, we address the specific weaknesses and questions. We hope our clarifications and additional experiments address your concerns and elevate your appraisal of our work. . > Because the Measurable pointed DAG is from previous work [5] and the main theoretical claim, Proposition 1, has limited novelty compared with Proposition 1 in [4]. We would like to emphasize Proposition 1 is not the main contribution of our work. In fact, note this proposition is part of our background section. While perhaps a simple extension of previous results, Proposition 1 is a necessary formalism to our developments in the following sections, as we consider discrete as well as continuous state spaces in our work. > This paper does not contain enough real-world tasks, such as fragment-based molecule generation [1], graph combinatorial optimization [2], and RNA sequence generation [3] to illustrate its main contribution. These are standard tasks in evaluating GFN performances and are necessary to include. For the only real-world task in the paper, i.e., the BPI task, this paper admits that "not statistically significant" in line 334. Therefore, it is unclear whether the proposed methods to use other divergence measures will be meaningful in real-world scenarios. We would like to emphasize that, before our work, divergence measures were perceived as inferior choices to traditional GFlowNet criteria (e.g., TB) as suggested by Malkin et al. [4]. The main reason for our success is the design of appropriate control variates (CVs) for variance reduction, which preserve the unbiasedness of gradient estimates. It is important to note that Malkin et al. [4]'s gradient estimators differ from ours — and are not bias-free, as we discuss in lines 272-278 of our manuscript. To strengthen our empirical claims, we have included additional baselines (DB, VarInf and SubTB) [8, 9, 10, 11] and environments (hypergrid and causal structure learning) [4, 6, 7] — see Figure 2 in the rebuttal PDF. It is worth mentioning that causal structure learning is one of the prime applications of GFlowNets [6, 7] and can be seen as an instance of combinatorial optimization. Our results reinforce that (given appropriate control variables) divergence-based objectives perform similarly to or better than balance-based objectives. Please let us know if this addresses your issue. If you believe considering additional environments is necessary to verify our claim, we would gladly run more experiments in the discussion period. > For the synthetic tasks, the plots in Figure 3 are also missing some lines. For example, why are there only two lines in the Sets plots? Also, there is no single best loss function that is uniformly better than the KL baseline. Therefore, additional divergence might seem unnecessary. There should be stronger evidence to support the use of other measures. Or, the authors can provide a guideline on how to choose the best measures given prior information about the tasks. We apologize for the oversight in Figure 3. The missing lines in the plot for the set generation task were due to the overlap of learning curves for the divergence-based objectives. We will include dashed lines to improve the visualization. We highlight that we are considering four divergence measures: Forward-KL, Reverse-KL, $\alpha$-Renyi, and $\alpha$-Tsallis, obtaining for each a variance-reduced gradient estimators for GFlowNet training. These estimators are distinct from the proposal in Malkin et al. [4]. We considered the TB loss as a baseline. We have also run comparisons against the DB, SubTB, and VarGrad losses for the rebuttal (Figs 1 and 2, rebuttal PDF). Compared to the baseline losses, we can always find a divergence loss with better performance. On the other hand, no single divergence has uniform dominance over the others (Reverse-KL and Forward-KL are distinct measures). Overall, the experimental evidence favors the class of divergence measures. One practical advantage of Renyi and Tsallis is to represent intermediate measures between the extremes of mode-seeking and mass-covering of Forward-KL and Reverse-KL. We have added experiments exploring the effect of $\alpha$ for the distribution of Hypergrid (Fig 3, rebuttal PDF). Regarding how to choose the most appropriate divergence, we agree that clear guidelines would be useful. However, developing these guidelines an open problem in the VI literature [12, 13]. [1] Biological sequence design with gflownets. ICML 2022 [2] Let the flows tell: Solving graph combinatorial problems with GFlowNets. NeurIPS 2024 [3] Local search gflownets. arXiv 2023 [4] GFlowNets and variational inference. ICLR 2023 [5] A theory of continuous generative flow networks. ICML 2023 [6] Bayesian structure learning with generative flow networks. In UAI, 2022 [7] Joint Bayesian Inference of Graphical Structure and Parameters with a GFlowNet. NeurIPS 2023 [8] Learning GFlowNets from partial episodes for improved convergence and stability. ICML 2023 [9] GFlowNet Foundations. JMLR 2023 [10] VarGrad: A Low-Variance Gradient Estimator for VI. NeurIPS 2020 [11] Robust Scheduling with GFlowNets. ICLR 2023 [12] Divergence measures and message passing. 2005 [13] Rényi Divergence Variational Inference. NeurIPS 2016 --- Rebuttal Comment 1.1: Comment: Thanks for your clarifications. I think the design of variance reduction in divergence-based objectives is a quite novel contribution. Admittedly, the design of appropriate divergence is an open question in VI literature. However, since the current experiments still do not include real-world environment like molecules, I can only raise my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for engaging in the discussion and acknowledging our rebuttal and the novelty of our work.
Summary: GFlowNets are a probabilistic framework for training amortized samplers for high-dimensional compositional spaces. The samplers are typically trained using local consistency objectives which are squared log losses. This paper examines alternatives to these obejctives in the form of general statistical f-divergence measures. The authors consider forward and reverse KL, Renyi-$\alpha$ and Tsallis-$\alpha$ divergences. The authors derive gradients for the divergences in the case of a fixed $p_B$ and on-policy samples. These gradient computations rely on REINFORCE-style estimators and consequently can suffer from high-variance gradient estimates which affect the optimization procedure. For variance reduction, the authors derive control variates for their gradient estimators. Through a series of experiments on a variety of tasks, the authors demonstrate the effectiveness of these divergences for training GFlowNets. Strengths: * The paper is quite well written and clear. The authors are thorough and clear in introducing the central ideas and provide sufficient details making it easy to follow. * The paper studies the important problem of finding the "right" learning objective in the context of training GFlowNets. Following a long line of work in VI, the authors leverage statistical divergences and propose optimizing them directly to learn the GFN sampler. * Additionally, the authors improve upon prior work drawing a connection between GFNs and VI by proposing principled control variates to reduce the variance in the gradient estimates for the divergences from the REINFORCE estimators. The CVs seem to have a significant effect on the performance. * The empirical analysis covers a variety of problems including continuous and discrete spaces as wells as general DAGs and tree spaces in the case of discrete spaces. * The authors also include code with the submission aiding reproducibility. Weaknesses: * The paper considers the on-policy setting for training GFlowNets and the proposed learning objectives based on the divergences assume on-policy smaples. However, existing flow-based objectives are all off-policy, and the advantage of GFlowNets on a lot of tasks (specifically challenging tasks with multi-modal target distributions) comes from the ability to train on off-policy trajectories (e.g. replay buffer[1] to local search [2]). So while the proposed learning objectives empirically perform better than TB on tasks where on-policy sampling is enough, it lacks the flexibility of accomodating off-policy training. * The authors also assume that the backward policy $P_B$ is fixed (L153). While this is true in some cases, learning $P_B$ results in significant improvements to the learned sampler[3,4]. As far as I can tell, modifying the proposed objectives to accomodate learning the P_B is non-trivial. * I appreciate the diversity of problems studied by the authors in their experiments, but there are some gaps in the empirical analysis. Specifically, the authors only include a TB baseline and not other objectives such as SubTB, DB which can perform better than TB (and have better training stability) in some cases. Additionally, the paper also does not include the VarGrad-style objective [5] which does away the need for estimating $Z$ in TB. * Moreover, the tasks consider relatively small tasks so it is not clear how scalable the proposed objectives are. [1] Towards Understanding and Improving GFlowNet Training. Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani. ICML 2023. [2] Local Search GFlowNets. Minsu Kim, Taeyoung Yun, Emmanuel Bengio, Dinghuai Zhang, Yoshua Bengio, Sungsoo Ahn, Jinkyoo Park. ICLR 2024. [3] Trajectory balance: Improved credit assignment in GFlowNets. Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, Yoshua Bengio. NeurIPS 2022. [4] A theory of continuous generative flow networks. Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra Volokhova, Alex Hernández-García, Léna Néhale Ezzine, Yoshua Bengio, Nikolay Malkin. ICML 2023. [5] Robust Scheduling with GFlowNets. David W. Zhang, Corrado Rainone, Markus Peschl, Roberto Bondesan. ICLR 2023. Technical Quality: 3 Clarity: 4 Questions for Authors: In addition to the points in Weaknesses: * L135 says TB requires estimating $Z$ but KL doesn't but I am not sure that is correct? Since even in the KL you need the normalizing constant in the $P_B$ term. * L146: [1] would be a more appropriate reference here I think? * L157: Missing reference to [2] * What is the computational performance (in terms of runtime) of the proposed objectives relative to TB? [1] Rubinstein, R. Y. (1981). Simulation and the Monte Carlo Method. In Wiley Series in Probability and Statistics. Wiley. https://doi.org/10.1002/9780470316511 [2] f-Divergence Variational Inference. Neng Wan, Dapeng Li, Naira Hovakimyan. https://arxiv.org/abs/2009.13093. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors do not explicitly address the limitations of their approach (discussed in the weaknesses) section, though the assumptions are mentioned briefly in Section 2 and 3. The authors only mention the choice of $\alpha$ as a limitation. There is also no discussion of broader impacts (the reference in the checklist is broken too). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the suggestions and for appreciating our work. We did our best to address each of your concerns below. Please let us know if you have other questions or require further clarification. > The paper considers the on-policy setting for training GFlowNets and the proposed learning objectives based on the divergences assume on-policy smaples. However, existing flow-based objectives are all off-policy, and the advantage of GFlowNets on a lot of tasks [...] comes from the ability to train on off-policy trajectories (e.g. replay buffer [1] to local search [2]). So while the proposed learning objectives empirically perform better than TB on tasks where on-policy sampling is enough, it lacks the flexibility of accomodating off-policy training. Indeed, many heuristics for off-policy learning of GFlowNets cannot be adapted to the context of divergence-based training. Nonetheless, please note that KL-, Renyi-, and Tsallis- divergences are also amenable to a degree of off-policy learning by implementing an importance sampling estimator — as long as the sampling policy can be directly evaluated at each individual trajectory. This is the case for, e.g., the widely-used $\epsilon$-greedy policy. Fundamentally, we see this as an instantiation of Wolpert’s No Free Lunch theorem: while, as we show, the minimization of statistical divergences is more sample-efficient and frequently leads to faster training convergence for GFlowNets, implementing these objectives constrains the user to the adoption of tractable off-policy sampling schemes. We will include this discussion in the revised manuscript. > The authors also assume that the backward policy is fixed (L153). While this is true in some cases, learning PB results in significant improvements to the learned sampler [3,4]. As far as I can tell, modifying the proposed objectives to accomodate learning the P_B is non-trivial. Thank you for the thought-provoking remark. In principle, learning $\log p_{B}$ is quite straightforward; the objective $$ \min \mathcal{D}(p_{B}, p_{F}) $$ can be jointly minimized in both $p_{F}$ and $p_{B}$ for any divergence measure $\mathcal{D}$ in a VAE-style learning. However, implementing a learnable $p_{B}$ incurs a non-trivial computational overhead due to the additional number of backward passes for reverse-mode autodifferentiation. To illustrate this, we will run additional experiments for divergence-based training of GFlowNets with a learnable $p_{B}$ and report the results during the discussion period. > I appreciate the diversity of problems studied by the authors in their experiments, but there are some gaps in the empirical analysis. Specifically, the authors only include a TB baseline and not other objectives such as SubTB, DB which can perform better than TB (and have better training stability) in some cases. Additionally, the paper also does not include the VarGrad-style objective [5] which does away the need for estimating Z in TB. Thank you for your compliment. We have included DB, SubTB, and VarGrad as additional baselines for our experiments in Figures 1 and 2 of the attached PDF, in addition to two novel generative tasks. Remarkably, with the exception of the extremely sparse hypergrid environment, divergence-minimization algorithms lead to the fastest convergence rate among the tested learning objectives. We also note that, for the very sparse and hard-to-explore hypergrid, off-policy training is necessary and (as we discussed earlier) purely on-policy-based methods should be avoided. Nonetheless, for these problems, our empirical analysis shows that the mode-covering behavior of $\alpha$-divergences with large and negative $\alpha$ is very beneficial for speeding up training convergence (please refer to Figure 3 of the attached PDF). We will include these experiments in the revised manuscript. > Moreover, the tasks consider relatively small tasks so it is not clear how scalable the proposed objectives are. Thanks for bringing up the discussion. We emphasize that the computation overhead incurred by the proposed gradient estimation techniques is negligible. In this sense, the divergence-based learning objectives are as scalable as their balance-based counterparts; please refer to Table 1 below. Results represent averages of 24 runs per criterion. Table 1: Runtime in minutes, avg over runs and environments. | criterion | avg | |:-----------------|---------:| | KL |9.5| | Renyi-$\alpha$ | 9.4| | Rev. KL | 10.4| | Tsallis-$\alpha$ | 10.5| | TB | 9.2| > L135 says TB requires estimating Z Indeed, the mathematical definition of TB and KL depends on the constant Z. Nevertheless, it is correct that Z is not needed in the context of gradient-based optimization of the KL. To see this, please note that we may write the reverse KL-divergence as $$ \mathcal{D}\_{KL}[p\_{F} || p\_{B}] = \mathbb{E}\_{\tau \sim p\_{F}}[\log p\_{F}(\tau) - \log p\_{B}(\tau | x) R(x) / Z] = \mathbb{E}_{\tau \sim p\_{F}}[\log p\_{F}(\tau) - \log p\_{B}(\tau | x) R(x)] + \log Z. $$ Consequently, when taking gradients with respect to the parameters of $p_{F}$, the terms corresponding to $\log Z$ in the right-hand side of the equation above vanish, as it does not depend on $p_{F}$. In particular, $\log Z$ does not interfere with the problem of minimizing $\mathcal{D}\_{KL}[p_{F} || p\_{B}]$ and can be ignored. A similar argument holds for the forward KL-, Renyi-, and Tsallis divergences. We will update the revised manuscript to clarify this point. > L146: [1] would be a more appropriate reference here I think?; L157: Missing reference to [2] We agree! These important references will be incorporated into the revised manuscript. > What is the computational performance (in terms of runtime) of the proposed objectives relative to TB? Please refer to Table 1 above. As we remarked earlier, our approach adds a negligible computational overhead to the training of GFlowNets. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the comments. I have a few follow-ups > amenable to a degree of off-policy learning by implementing an importance sampling estimator Indeed one can always use importance sampling to do off-policy training with an on-policy objective (e.g. as noted in [1]) but IS also introduces other challenges (e.g. high variance). So as you note there is certainly a trade-off. However recent work has also illustrated the importance of off-policy samples for training GFlowNets in challenging domains [2,3] which might make the objectives like TB better suited. > we will run additional experiments for divergence-based training of GFlowNets with a learnable $p_B$ Looking forward to the results. > computation overhead incurred by the proposed gradient estimation techniques is negligible The compute overhead is indeed not high, but my comment was more about the scale of the problems used in the paper. Prior work on learning objectives has considered much larger and challenging problems. I see the authors have added the hypergrid and causal DAG tasks but do not provide details about the size of the problems? What size was the hypergrid and the number of variables considered for the causal DAG task? > mathematical definition of TB and KL depends on the constant Z I agree that the Z does not play a role in the optimization but I would note that in a similar fashion TB can also be optimized without the Z (i.e. VarGrad) In the common response the authors claim "broadest experimental evaluation of GFlowNet objectives in the literature" but I think this loses a lot of nuance. While the paper does consider 6 tasks, they are all much smaller (and potentially simpler) than tasks considered in prior work (e.g. much larger molecular optimization and sequence design). This nuance is critical and I hope the authors avoid making sweeping claims. [1] GFlowNets and variational inference, ICLR 2023. [2] Amortizing intractable inference in large language models, ICLR 2024. [3] Improved off-policy training of diffusion samplers, arXiv:2402.05098 --- Rebuttal 2: Comment: Thank you for engaging in the discussion! > However recent work has also illustrated the importance of off-policy samples for training GFlowNets in challenging domains [2,3] which might make the objectives like TB better suited. We agree! We will include our discussion regarding the trade-off between off- and on-policy learning in the revised manuscript. > Looking forward to the results. Below, we report the results comparing the accuracy of the GFlowNet when $\log p_{B}$ is either learned or fixed, respectively, for the tasks of set generation and structure learning (Tables 2 and 3). We observed related results for the task of BPI - with similar performances for learnable and uniform $p_{B}$’s - and we will include these experiments in the revised manuscript. Although they show that jointly minimizing the learning objective wrt both $\log p_{B}$ and $\log p_{F}$ is a sound strategy for divergence-based measures, they do not reveal clear benefits favoring its implementation. Table 2: Set generation. Results are averaged across 3 runs. | | learn | unif | |---|---|---| | TB | 0.29 ± 0.02 | 0.13 ± 0.00 | | Reverse KL | 0.09 ± 0.01 | 0.03 ± 0.00 | | Renyi-0.5 | 0.09 ± 0.01 | 0.03 ± 0.00 | Table 3: DAGs. Results are averaged across 3 runs. | | learn | unif | |---|---|---| | TB | 0.43 ± 0.11 | 0.47 ± 0.13 | | Reverse KL | 0.31 ± 0.14 | 0.32 ± 0.14 | | Renyi-0.5 | 0.14 ± 0.02 | 0.14 ± 0.02 | To further investigate the shape of the learned backward policy, we computed the expected entropy of $p_{B}(\tau | x)$ under the learned marginal $p_{T}$ over terminal states, i.e., $\mathbb{E}\_{x \sim p\_{T}}[\mathbb{E}\_{\tau \sim p\_{B}(\cdot | x)}[ - \log p\_{B}(\tau | x)]]$. Intuitively, a highly entropic policy is closer to a uniform policy, following the analysis of Shen et al. [1]. In this sense, the results in Tables 4 and 5 below suggest that the learned backward policy closely resembles an uniform distribution. Nonetheless, we believe that a deeper investigation of the potential advantages of learning $\log p_{B}$ is a relevant and interesting research direction. Table 4: Set generation | | learn | unif | |---|---|---| | TB | 30.61 ± 0.01 | 30.67 ± 0.00 | | Reverse KL | 30.61 ± 0.01 | 30.67 ± 0.00 | | Renyi | 30.61 ± 0.01 | 30.67 ± 0.00 | Table 5: DAGs | | learn | unif | |---|---|---| | TB | 18.53 ± 3.61 | 18.55 ± 3.61 | | Reverse KL | 19.26 ± 3.52 | 19.27 ± 3.52 | | Renyi | 19.29 ± 3.42 | 19.31 ± 3.43 | [1] Towards Understanding and Improving GFlowNet Training. Shen et al. ICML 2023 > The compute overhead is indeed not high, but my comment was more about the scale of the problems used in the paper. Apologies for the oversight. For the DAG task, we considered graphs with 6 nodes; the target distribution’s support contains approximately 3.5M graphs. For the hypergrid task, we considered a 12 x 12 grid in Figure 1 and a 9 x 9 grid in Figure 3 of the rebuttal PDF; similarly to [2, 3], we set $R_{o} = 10^{-3}$. We will include these details in the revised manuscript. [2] GFlowNets and Variational Inference. Malkin et al. ICLR 2023. [3] Trajectory balance: Improved Credit Assignment in GFlowNets. Malkin et al. NeurIPS 2022. > This nuance is critical and I hope the authors avoid making sweeping claims. Thanks for the advice. We only meant to flesh out the diversity of tasks and baselines of our experimental campaign in the rebuttal. We are very grateful for your detailed feedback and contribution towards strengthening our paper! --- Rebuttal 3: Title: Response Comment: Sorry for the delay in my response, but I appreciate the detailed reply. > learned $p_B$ Thanks for sharing these results. I am a bit confused that on the set generation experiments for TB uniform does better than learned $p_B$. But indeed these results are interesting and could be an interesting avenue for future work. > For the DAG task, we considered graphs with 6 nodes. I appreciate the authors including these two tasks but I should emphasize that as with all the other experiments in the paper, these tasks are significantly smaller (and thus potentially easier) than prior work. For instance, in the TB paper the hypergrids considered were $8\times 8\times 8\times 8$ and $64\times 64$, not to mention much larger sequence design and molecular design tasks. I stand by my original review that the major weakness of the paper is the scale and difficulty of tasks considered in the empirical evaluation. --- Rebuttal 4: Comment: Thank you for your continued engagement. While we acknowledge many studies consider problems on a larger scale, we note that our empirical analysis primarily focuses on assessing GFlowNet’s distributional accuracy during training. For problems of much larger scale, such as molecule generation (with approximately $10^{16}$ terminal states), it is not feasible to accurately measure the goodness-of-fit of a trained GFlowNet. This is because it would require exhaustively enumerating the target’s support, which is necessary to assess the total variation distance between the learned and target distributions. Therefore, we have constrained our analysis to the environments discussed in the rebuttal PDF, which we believe are sufficiently diverse. We will discuss this matter at the end of the revised manuscript. Thank you very much for your detailed and constructive feedback.
Summary: This paper investigates the potential of using a variety of divergence measures directly as training losses for GFlowNets, relying on many of the connections made between GFlowNets and variational inference. Training GFlowNets essentially consists in enforcing balance/flow-matching conditions between a proposal and a target distribution given some data. Given the links between GFlowNets and variational inference, training the GFlowNet to directly minimize the KL divergence was observed to result in biased and high-variance estimators in general. This paper aims to verify the latter claim, for different divergence measures (KL, reverse KL, Tsallis-\$\alpha\$, Renyi-\$\alpha\$). The paper alleviates the afore-mentioned limits by proposing low-variance gradient estimates to that of the latter quantities through control variates. Finally, the paper verifies the proposed methods experimentally through an extensive set of experiments. Strengths: - The paper widens the scope of the theoretical links between GFlowNets and variational inference beyond the assumption of finitely supported measures. - The authors alleviate the high variance of divergence measures' gradients in practice, relying on control variates. - The paper is very well written and presented and is easy to follow. - The experimental setup is exhaustive, and shows the effect of each of the proposed design choices. Weaknesses: See questions. Technical Quality: 3 Clarity: 4 Questions for Authors: - How is the diversity of samples impacted? - There should be experiments that show that the different behaviors of each method with respected to the number of discovered modes throughout training (that should be compared to TB too!). For instance, similarly to Figure 4 in (Malkin et al., https://arxiv.org/pdf/2201.13259). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations adequately addressed throughout the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for valuable suggestion and review. We did our best to address each of your questions, and extended our experiments following your suggestion. Please let us know if you have other questions or require further clarification. > How is the diversity of samples impacted? Considering that the *diversity of the samples* could be approached from different perspectives, we made our best effort to address your question in a broad sense. **Support coverage: divergence measures versus TB.** Overall, we observe that our divergence-based learning procedures yield better support coverage than the TB, consequently capturing shapes more accurately. Figure 4 in the manuscript illustrates this phenomenon for a banana-shaped target distribution. **Number of visited modes.** Compared to balance-based objectives, we observe that our divergence-based training leads to faster increase in NoM and average reward of the top-K samples during the training process in the majority of environments — Figure 2 (rebuttal PDF). This metrics can be interpreted as indicative of diversity of the high-reward samples. **Impact of $\alpha$ on sample diversity.** In general, $\alpha$ allows us to modulate between mode-seeking and mass-covering behaviors, directly impacting the sample diversity. Low $\alpha$ leads to capturing more modes, while exceedingly high $\alpha$ may cause mode collapse — see, e.g., Figure 1 (manuscript) and Figure 3 (rebuttal PDF). > There should be experiments that show that the different behaviors of each method with respected to the number of discovered modes throughout training (that should be compared to TB too!). For instance, similarly to Figure 4 in (Malkin et al., https://arxiv.org/pdf/2201.13259). Thank you for the question and suggestion. We have included an analysis of the number of modes (NoM) discovered as a function of time in Figure 2 (rebuttal PDF). Overall, except for the Hypergrid environment, the divergence measures (Rev. KL, KL, Renyi-α and Tsallis-α, w/ α=0.5) have a higher NoM visited earlier in the training process than the balance-based losses (TB, DB, SubTB, VarGrad). Furthermore, their NoM learning curves display a faster rate of increase in the Sets and Sequences environment. For completeness, we include the average reward for the K highest scoring samples (top-K), showing similar results as the NoM. The distinct behavior for Renyi-α and Tsallis-α in the hypergrid environment could be explained by the results in Figure 3 (rebuttal PDF), since α=0.5 leads to mode collapse.
Summary: This paper investigates divergence measures as learning objectives for Generative Flow Networks, which are amortized inference models designed for sampling from unnormalized distributions over composable objects. The authors review four divergence measures - Renyi-\alpha, Tsallis-\alpha, reverse and forward Kullback-Leibler divergences, and design estimators for their stochastic gradients in the context of training enerative Flow Networks. The authors verify that minimizing these divergences yields correct and empirically effective training schemes on several toy environment, and show that it often lead to faster convergence than previously proposed optimization methods. The authors also design control variates based on REINFORCE and score-matching estimators to reduce gradient variance. Strengths: - The paper provides evaluation of different divergence measures as learning objectives for GFlowNets, showcasing their effectiveness in improving training convergence in several toy environments. - The authors develop control variates for reducing the variance of gradient estimates. Weaknesses: 1. While the paper provides empirical evidence for the effectiveness of divergence-based objectives, a more extensive comparison with traditional GFlowNet training methods across a wider range of datasets and applications would strengthen the claims. 2. The choice of the \alpha parameter in Renyi-\alpha and Tsallis-\alpha divergences is not extensively explored. More insights into the impact of \alpha on the learning dynamics and guidance on selecting an appropriate value would be beneficial. 3. The computational overhead introduced by the control variates and their impact on training time is not explicitly discussed. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How do the proposed divergence-based objectives perform on more complex and high-dimensional datasets commonly used in fields such as drug discovery and natural language processing? 2. Can the control variate techniques be extended to other GFlowNet training objectives beyond the divergence measures considered in this paper? 3. How does the choice of the \alpha parameter affect the learned GFlowNet's ability to capture multi-modal target distributions or discover diverse high-quality samples in combinatorial optimization tasks? 4. Are there any theoretical guarantees or bounds on the sample complexity or convergence rates when using the proposed divergence-based objectives for training GFlowNets? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We hope our answers address your concerns and elevate your appraisal of our work. Otherwise, we would be happy to engage further. > … a more extensive comparison across a wider range of datasets and applications would strengthen the claims. We have included SubTB [1], DB [2], and VarGrad [3,4] losses as baselines and included the hypergrid [5] and causal DAG environments [6]. With this, our experimental campaign is broader than most works in the GFlowNet literature, including six different environments with continuous and discrete target distributions. In contrast, most works in the GFlowNet literature use three to four environments [1, 2, 4, 5, 6, 7, 8], typically using discrete targets and comparing against a small set of learning objectives. Given our comprehensive experimental suite, we can confidently conclude that divergence-based objectives are beneficial for training GFlowNets as long as they are equipped with appropriate control variates, contrary to the belief established by the results in [18]. > on choosing $\alpha$ and its effect on GFlowNet's ability to capture modes Thank you for the opportunity to improve our discussion on the role of $\alpha$. To the best of our knowledge, delineating specific guidelines for choosing optimal $\alpha$ for VI is still an open problem [11, 12]. Our submission (Lines 164-186 and Figure 1) discusses how $\alpha$ modulates mode-seeking vs. mass-covering behavior. This is a necessary starting point practitioners considering the choice of $\alpha$. To further illustrate the effect of varying $\alpha$, we executed a series of experiments for the hypergrid task – please refer to Fig 3 of the rebuttal PDF. Overall, our analyses indicate that, for sparse distributions, large negative values of $\alpha$ perform better – on par with the mass-covering effect of the corresponding divergence measure. In addition, we also ran experiments for the set generation, sequence design, and BPI tasks with varying values of $\alpha$, observing similar results. We will include all additional results in the revised manuscript. > computational cost of control variates In practice, the computational overhead of our variance reduction techniques is negligible. We report in Table 1 the runtime (in minutes) of the training process averaged over all environments considered in the experiments, a total of 120 runs, for each learning objective. The avg runtime confirms the small overhead of our variance reduction method. Furthermore, the control variate significantly speeds up the convergence, as shown in Fig. 5 of our manuscript (Section 5.3). Table 1: Runtime in minutes, avg over runs and environments. | criterion | avg | |:-----------------|---------:| | KL | 9.5| | Renyi-$\alpha$ | 9.4| | Rev. KL | 10.4| | Tsallis-$\alpha$ | 10.5| | TB | 9.2| > Performance in complex and high-dimensional tasks Thanks for the question. We would like to clarify that our adopted environments represent both realistic and prototypical for GFlowNets. For example, the phylogenetic tree inference [7] and sequence generation [8], presented in Figure 3, are real-world tasks with papers assessing the effectiveness of GFlowNets in solving them [7, 8]. Also, we estimate ~$10^{7}$ possible final objects in the set generation and sequence design tasks, highlighting their realistic scales. We have now included hypergrid [1] and Causal DAG [2] as novel tasks; and DB, SubTB, and VarGrad as baselines. Strikingly, Renyi, Tsallis, and KL perform on par with or better than balance-based losses in most scenarios (Figs 1 and 2, rebuttal PDF). > Can the control variate techniques be extended to [balance-based] GFlowNet training objectives [...]? In principle, yes. However, it is unclear how to devise efficient CVs for balance-based objectives. Firstly, conventional objectives rely on off-policy sampling, and it is mostly unclear how to define a control variate that (i) has zero expectation under a chosen policy and (ii) is correlated to the objective's gradient. Secondly, as shown in Fig 6 (supplement), the smoothness of the learning curve for TB suggests that gradient variance is not an issue for balance-based objectives. Notably, it is well-known that the high variance for divergence-based objective stems from the score-function estimator, $\mathbb{E}\_{\tau \sim p_{F}}[\nabla \log p_{F}(\tau)]$, which is not present in the gradient of balance-based objectives such as TB. > Theoretical guarantees on the training convergence Developing convergence rate analysis for GFlowNets remains an open problem. Many works focus on designing sample-efficient learning objectives for GFlowNets, but none provide convergence rate analyses. Also, establishing convergence rates for VI for generic distribution classes is an open problem. Initial works [17] assumed strong convexity in a lower-bound problem relevant to GANs, obtaining geometric convergence rates. More recently, [16] obtained guarantees for BBVI and Gaussian approximations. Adapting these results to GFlowNet is a fruitful line of work, meriting an investigation of its own. [1] Learning GFlowNets from partial episodes for improved convergence and stability. ICML 2023 [2] GFlowNet Foundations. JMLR 2023 [3] VarGrad: A Low-Variance Gradient Estimator for VI. NeurIPS 2020 [4] Robust Scheduling with GFlowNets. ICLR 2023 [5] TB: Improved credit assignment in GFlowNets. NeurIPS 2022 [6] Bayesian structure learning with GFlowNets. UAI 2022 [7] PhyloGFN: Phylogenetic inference with GFlowNets. ICLR 2024 [5] Biological sequence design with GFlowNets. ICML 2022 [12] Rényi Divergence VI. In NeurIPS 2016 [13] Meta-learning divergences for VI. In AISTATS 2021 [16] Provable convergence guarantees for BBVI. NeurIPS 2024 [17] g-GAN: Training generative neural samplers using variational divergence minimization. NeurIPS 2016 [18] GFlowNets and VI. ICLR 2023
Rebuttal 1: Rebuttal: Dear reviewers and AC, We appreciate that reviewers evaluation of our work as both theoretically principled [Gn1e, HRNm] and empirically well-grounded [Gn1e, 3sBX, NozV], expanding the link between VI and GFlowNets [Gn1e, 3sBX, HRNm], with practical contribution of effective control variates for variance reduction [Gn1e, 3sBX, HRNm, NozV], presented with clarity [3sBX, HRNm], and with reproducible results [3sBX] supported by a comprehensive range of experiments [Gn1e, 3sBX, HRNm]. We present a summary of the central points in the discussion and how they were addressed. 1. Reviewers NozV, 3sBx, Gn1e, and HRNm suggested including additional benchmark tasks, baseline objectives, and evaluation metrics. a. Firstly, we included the hypergrid and DAG environments in our experimental suite. b. Secondly, we added SubTB, VarGrad, and DB among our baselines. c. Thirdly, we measured the number of modes and the average reward of the highest $K$ scoring samples during training (top-$K$). d. Our results confirm that, for the majority of problems, divergence-based objectives lead to faster training convergence than balance-based ones. To the best of our knowledge, this is the **broadest experimental evaluation of GFlowNet objectives in the literature**. 2. Reviewers NozV and 3sBx requested a runtime analysis to quantify the computational overhead introduced by the control variates. We provide a table in their respective answers showing that their overhead is negligible for both the KL- and $\alpha$-divergences. 3. Reviewers NozV and Gn1e asked for an assessment of the effect of $\alpha$ on training convergence and guidelines for choosing it. Our novel experiments ilustrate how varying $\alpha$ modulates the balance between mode-seeking and mass-covering behaviors and that large negative $\alpha$ values are more suited to sparse target distributions (Fig. 1, main text; Fig. 3, rebuttal PDF). 4. Reviewers NozV and Gn1e questioned the comparative benefit in real-world and high-dimensional tasks; reviewer 3sBX raised concerns about scalability. Beyond the extensive experiments and realistic tasks such as BPI, we included the hypergrid and Causal DAG as additional environments. Compared to baselines (TB, DB, SubTB, and VarGrad), divergence-minimization algorithms lead to faster training convergence in most problems (Figs 1 and 2, rebuttal PDF). We would also like to thank the reviewers again for their invaluable feedback and their help in strengthening our work. Pdf: /pdf/de60e3d32b6b05592d669202c6aaf07811b105c3.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning
Accept (poster)
Summary: This paper introduces a new framework, AVATAR, that allows LLM agents to optimize the performance of Knowledge Retrieval. In the framework, a Comparator agent is adopted to extract insight from the positive and negative samples. The experiments on four retrieval datasets show the effectiveness of the method. Strengths: 1. The schematics in the paper are well drawn and demonstrate the proposed method well. 2. The experimental settings and prompts are detailed. 3. The experiments are performed on STARK and Flickr30K Entities benchmarks to show the effectiveness of the methods. Weaknesses: The main weakness is the lack of innovation and comparison compared with cutting-edge works. 1, This proposed AVATAR is very similar to ExpeL [1] in terms of method and has no obvious innovation in comparison. ExpeL adopts a Reflexion agent (actor) to gather success and failure experiences of multi-step tasks, which is the same as the actor's role in this paper. ExpeL adopts another agent to compare a failed trajectory with a successful trajectory for the same task and extract insights from the comparison, which is the same as the instruction generation process of the comparator in this paper. ExpeL adopts the experience pool to retrieve successful trajectories, which is the same as the Memory Bank in this paper. Similarly, Autoguide [2] also extracts insights from experiences. Therefore, the method of auto-optimizing the instruction by comparing the successful and failed trajectories has no innovation compared with Expel [1] and Autoguide [2]. The author should conduct more surveys and read relevant cutting-edge works. 2, The paper claims their method achieves SOTA on STARK and FLICKR30K-ENTITIES benchmarks, but they only compared with relatively basic methods. The author employs several outdated embedding-based retrievers as the comparison. However, text-to-image retrieval on FLICKR30K-ENTITIES has long been dominated by the vision-language model [3] [4]. Internvl [3] and Beit [4] achieve high recall@1 (from 80 to 90) on this benchmark. Therefore, it is unreasonable to claim that the proposed method achieves the retrieval SOTA on these two benchmarks. 3, The results of the comparison on FLICKR30K-ENTITIES are missing. Quantitative results on FLICKR30K-ENTITIES are only shown in Figure 5 (right), but the results of all methods on this benchmark are not found in the paper. Considering that STaRK is a relatively new benchmark, not many methods have experimented on it, so results and analysis on recognized benchmarks are important. 4, Many sentences in the paper are too obscure to understand and lack clear explanations. For example, "these per-sample instructions tend to be narrow and fail to identify flaws across all components of a complex solution. Additionally, while certain tool combinations may be effective for one type of input, their effectiveness can vary with others, leading to decreased performance when applied across different scenarios." in lines 152-154, what do "per-sample instructions" mean, and why will these tool combinations "lead to decreased performance"? Providing some simple examples can make it easier to read. [1] Zhao, Andrew, et al. "Expel: Llm agents are experiential learners." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024. [2] Fu, Yao, et al. "Autoguide: Automated generation and selection of state-aware guidelines for large language model agents." arXiv preprint arXiv:2403.08978 (2024). [3] Chen, Zhe, et al. "Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [4] Wang, Wenhui, et al. "Image as a foreign language: Beit pretraining for vision and vision-language tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: In line 160, "the comparator samples a group of data samples (question-answer pairs), executes the current actions for each question," but how can the comparator execute the current actions? What are the current actions for the comparator? The text in Figure 3 is too small to read and the meaning of the marked text is difficult to understand. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: There is no separate Limitation or Broader Impacts section in the paper. In addition, the authors said, "We did an extensive survey on related work in the area of LLM agents, agent optimization, LLM agent for retrieval, and further discuss their limitations," but the discussion of the limitations in this paper is required. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments! We carefully justify our novelty and clarify the misunderstandings. We will be grateful for your patience of reading our response: --- ## **Comment 1: Comparison with Expel [1] and Autoguide [2]** We apologize for missing these important and relevant papers. Thank you for helping us do right by the existing work. While agree that they are relevant, we summarize the key differences: ||ExpeL|AutoGuide|AvaTaR| |:--|:--|:--|:--| |**Contrast Over**| Successful and failed trajectories (action sequences)|Trajectories|Positive & negative data samples| |**Training Phase**|Two phases: sample experiences, then extract insights|Two phases, similar to ExpeL|One phase training with iterative instruction generation and action refinement| |**Inference Method**|In-context generation with extracted insights|Iterative generation with state guidelines|**Direct generalization with optimized action sequence| --- Specifically, ExpeL is an innovative method that leverages experiences across multiple training tasks, and AutoGuide introduces state-aware guidelines for agents. All three methods learn from past experience. However, the high-level ideas are different. Specifically, AvaTaR **contrasts positive and negative queries for the action sequence generated by the actor at current step**, while both ExpeL and AutoGuide **contrast failed and success action sequences given an instance**. As an analogy with RL, the instructions from AvaTaR comparator is **on-policy** (target at current action sequence), while ExpeL and AutoGuide requires **off-policy** approximation with data pool. Moreover, AvaTaR conducts end-to-end training and direct inference on up to 4k testing queries in total, which achieves higher scalability compared to in-context inference methods. We added discussions in our revision. Once again thank you very much for helping us position our work better. --- ## **Comment 2.1: “The paper claims their method achieves SOTA on the benchmarks”** Our statement in the abstract - “We find AvaTaR consistently outperforms SOTA approaches” - refers to SOTA in the scope of LLM agents. We apologize for the confusion and have clarified the statement. --- ## **Comment 2.2: “they only compared with relatively basic methods”** - ### **a) Recap: Baselines used** - For STaRK, we compared **all available methods** reported by the benchmark and two agent methods: ReAct and Reflexion. - While image retrieval tasks are less explored by previous agent works including Expel and Autoguide, we made **non-trivial adaptations** to apply ReAct and Reflexion. - ### **b) Justification: Are the baselines sufficient?** We believe AvaTaR is compared against essential baselines. For reference, Expel compared with ReAct and Act. While ReAct generally outperforms Act, we use Reflexion as an alternative. Similarly, Retroformer [3] used ReAct and Reflexion as baselines. --- ## **Comment 2.3: “outdated embedding-based retrievers ... dominated by the VLMs (Internvl and Beit)”** - ### **a) Recap: What are the embedding-based retrievers we used?** - For STaRK, we used `text-embedding-ada-002`, a competitive model on the Mteb leaderboard during the time of this work. - For Flickr30k-entities, we used `clip-vit-large-patch14`, a commonly used VLM. - ### **b) Clarification: The embedding models are not only baselines, but also tools for agents** Table 5 and 6 listed a set of tools for **all agent methods**. E.g., `GetClipImageEmbedding` uses the same CLIP model as the baselines for fair evaluation.With the **given tools**, our goal is to improve agents' tool-use ability, which leads to improved performance. - ### **c) Justification: AvaTaR v.s. VLMs or AvaTaR+VLMs?** In the above setting, AvaTaR achieve proof-of-concept results which outperforms the baselines **with the same tools or embedding model**. Differently, `Internvl (6b)` and `Beit (1.6b)` pretrain large VLMs, which are not fairly comparable with AvaTaR since it used a much smaller clip model with 427m parameters. More importantly, *the goal of AvaTaR (improving agents) and the pretraining methods (training powerful VLMs) are different*. In fact, **these two method classes are not conflicting. Both VLMs can be tools leveraged by AvaTaR!** This inspires a practice future direction to construct powerful tools from open world. --- ## **Comment 3: Missing comparison on FLICKR30K-ENTITIES** The results include all available methods. Compared with Table 2, Dense Retriever is a finetuned text encoder from STaRK; QAGNN is for knowledge graphs; Multi-VSS chunks documents and is not applicable to images. We added explanations for clarity. --- ## **Comment 4: Statements in L152-154** - **“Per-sample instructions”**: Instructions generated for a failure/negative query instance. - **Example**: When answering query `What are some recommended traditional setup fishing rigs from the Thill brand?` Reflexion tend to rely on specific details like "Thill". E.g., It (1) computes a token match score, and if there is a match (2) returns all fishing rigs items, which fails to generalize when user requests other Thill items" --- ## **Comment 5: Limitations** We added a limitation section. Due to character limit, please refer to our response to `reviewer CkbF`. Thanks! --- **Question 1: Clarification of comparator** - “Current actions” refers to the action sequence generated by the actor in the current training step. - “Execute the current actions” means evaluating the current action sequence on sampled queries. Actions are executed by actor, we clarify it in the revision. **Question 2: Figure 3.** The light orange and blue highlight features of positive and negative queries, respectively. We will adjust the caption size for readability. --- # **Summary** We sincerely hope we address your concerns. We would very much appreciate it if you could reconsider your evaluation if some concerns are addressed. Thank you very much! --- Rebuttal 2: Comment: Thanks for the detailed responses, but they have not fully addressed my concerns. 1- Even though there are differences in details, the existence of papers like ExpeL and AutoGuide clearly reduces this paper's innovation in the direction of self-optimization of LLM agents. Given ExpeL and AVATAR both utilize the comparator to optimize the prompt, the main innovation of this paper would be replacing 'off-policy' with 'on-policy' learning. However, as 'off-policy' and 'on-policy' are both useful for RL, the authors should perform more experiments and studies to show that their 'on-policy' methods are better than the 'off-policy' version. 2- Previous works (ExpeL and Retroformer were published in Aug 2023) used ReAct and Reflexion as baselines, which cannot be the reason that this paper still uses the same baseline. The field of LLM Agents is a rapidly developing field. In the year after reflexion (published in Mar 2023), a large number of high-quality methods have been proposed, e.g., [1][2][3]. However, in the experiment, the author ignored the comparison with these cutting-edge methods. In addition, in the introduction and related work of the paper, the author repeatedly pointed out that the previous LLM Agent methods via self-optimizing cannot solve complex problems. However, there is no comparison with these methods in the experiment to verify these claims. Therefore, the authors should keep up with recent methods, and consider these approaches in their experiments, rather than just the basic methods. 3- The method proposed in this paper is not only applicable to retrieval tasks but also to general agent tasks. Therefore, it should also be experimented on some common and difficult benchmarks. As I said before, one of my concerns is that STARK is a very new benchmark (published in April 2024), and Flickr30k-entities is a very rarely used benchmark for LLM Agent. Achieving good results on LLM agent benchmarks that have been studied more, such as HotpotQA [4], AgentBench [5], and WebArena [6], the method can be more convincing. 4- For the question about the clarification of the comparator, your answer should be added to the revised version because the presentation of the original text is unclear and misleading. 5- One more new question: I found the authors implemented AVATAR with a batch size of 20: "the metric Recall@20 for constructing positive and negative queries, and hyperparameters (l = h = 0.5, b = 20)". Given that 20 trajectories of Actors can be quite long, can the authors provide the context length required by the comparator and the token overhead required for AVATAR optimization? [1] Zhu, Zhaocheng et al. “Large Language Models can Learn Rules.” ArXiv abs/2310.07064 (2023): n. pag. [2] Majumder, Bodhisattwa Prasad et al. “CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization.” ArXiv abs/2310.10134 (2023): n. pag. [3] Qian, Cheng et al. “Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution.” ArXiv abs/2401.13996 (2024): n. pag. [4] Yang, Zhilin et al. “HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.” Conference on Empirical Methods in Natural Language Processing (2018). [5] Liu, Xiao et al. “AgentBench: Evaluating LLMs as Agents.” ArXiv abs/2308.03688 (2023): n. pag. [6] Zhou, Shuyan et al. “WebArena: A Realistic Web Environment for Building Autonomous Agents.” ArXiv abs/2307.13854 (2023): n. pag. --- Rebuttal 3: Title: 2nd Batch response Comment: We appreciate the reviewer for further communication. We hope our response this time could mostly address your concerns. ### **TL;DR;** - We clarify Avatar's novelty, especially on AvaTaR $\neq$ ExpeL + on-policy training. - We show that AvaTaR outperforms ExpeL and Retroformer on HotpotQA. We also show the performance and scalability advantage of AvaTaR over ExpeL on STaRK-MAG; - We show that AvaTaR works well on new QA datasets (HotpotQA, ToolQA, ArxivQA) --- Before started, we make the following terms consistent to avoid confusion: - Queries = instances (used in AvaTaR) = tasks (used in ExpeL) - Actions to answer queries = action sequences (AvaTaR) = trajectories (ExpeL) --- **Comment 1: About novelty** While we pointed out "on-policy" and "off-policy" as one difference between AvaTaR and ExpeL, the novelty of AvaTaR goes beyond that. Here are our reasons: - **What to contrast**: This is relevant to the reviewer's question about batch size - *"Given that 20 trajectories of Actors can be quite long."* We believe this is a misunderstanding (please let us know if otherwise). Please note that AvaTaR contrasts positive and negative queries, unlike ExpeL, which contrasts trajectories. Therefore, the context length is less of an issue since the queries are mostly short, we will provide detailed statistics. - **Benefits of contrast queries**: Following the last point, the reviewer actually pointed out one benefit of contrasting queries - it allows us to contrast among a batch of pos/neg queries `(batch_size=b)` rather than a well-performing and an under-performing trajectory (`batch_size=2`). This design offers better scalability and generalization ability. We provided intuitions in L173-184, and we repeat them here for your convenience: ``` Moreover, as contrastive reasoning directly targets disentangling the performance gap related to input patterns and how they are handled differently by the tools, it is particularly effective in helping comparators differentiate and select tools for use. Finally, by identifying systemic flaws across a wide array of negative queries, comparator generates modifications that are not only tailored to individual samples but also to diverse data samples, offering benefits for better generalization to novel samples. ``` This design enables AvaTaR to generate more holistic instructions from the insights on multiple queries and/or generalize the final action sequence to hundreds of testing queries without test inference, which are validated by the strong generalization in our existing experiments. - **AvaTaR $\neq$ ExpeL + on-policy training**: Due to the differences in design, AvaTaR is able to conduct on-policy training since it directly optimizes the action sequence for multiple queries. However, one can imagine it would be hard for ExpeL to conduct on-policy training with trajectory comparison. --- **Comment 2: Comparison with more baselines** We added ExpeL and Retroformer as our baselines in our paper. We firstly compare them with AvaTaR on HotpotQA on the dev subset (100 queries) in their repositories. Here are the results, where all the baseline results are reported by ExpeL and Retroformer): ||HotpotQA (EM)| |:---|--:| |Act|29%| |ExpeL|39%| |ReAct |40% | |Reflexion|46%| |Retroformer (#retry=1)|51%| |AvaTaR|55%| For AvaTaR, we take 100 training queries to optimize prompts for 10 steps and set the number of retries as 0, which achieves the best performance. Then, we compare AvaTaR with ExpeL on STaRK-MAG. STaRK datasets involve over 20 tools, leading to long action sequences and high token overhead for ExpeL to process and evaluate. Due to this, we randomly sampled 100 training queries and evaluated ExpeL on 50 testing queries, comparing with the other methods on the same set. |||MAG|(#Test=50)|| |:---|:---|:---|:---|:---| ||Hit@1|Hit@5|Recall@20|MRR| |Dense Retriever|16.00|40.00|51.84|27.39| |QAGNN|20.00|52.00|49.71|36.39| |VSS|40.00 |58.00|55.93|47.76| |Multi-VSS|32.00|58.00|58.81|43.58| |ReAct|46.00|60.00|54.67|50.92| |ExpeL|40.00|58.00|55.94|47.43| |Reflexion|48.00| 64.00| 57.43|52.31| |AvaTaR-C|44.00|60.00|52.49|50.16| |AvaTaR|52.00| 64.00|53.86 |56.74| For ExpeL, we used `text-embedding-3-large` embedding model to retrieve insights. We found ExpeL’s performance similar to ReAct, which might be because the STaRK queries are diverse, therefore require more training data to gather enough experience. We will have ExpeL results on the other two STaRK datasets and include a table similar to Table 2 in our paper. Hopefully, these will address your concern about the comparison with previous LLM Agent methods and justify our claims. --- Rebuttal 4: Title: 2nd Batch response (Continue) Comment: **Comment 3: More benchmarks.** Our experiments on HotpotQA improve this aspect. Please also see our response to `Reviewer CkbF` for AvaTaR results on ArxivQA and ToolQA. **Comment 4: Clarity on comparator.** Thanks! Yes, we made sure the statement is clear now. **Comment 5: Content requirements and token overhead.** On STaRK-MAG, the context length for the comparator is approximately 4k tokens per step, including initial instructions and tool usage (with pos/neg queries taking around 0.8k). The actor's token cost, including memories, is around 8k. Running AvaTaR for 50 steps accumulates about 600k tokens, costing under $10 with gpt-4-turbo. There is no inference cost on STaRK dataset as we directly apply the action sequence. As a reference, we also compute the token cost for ExpeL on STaRK-MAG: Training token cost: 357k in total (3.57k per task); Testing token cost: 389k in total (7.8k per task). In this comparison, we believe AvaTaR has an advantage in scaling especially when the number of testing queries are large. For HotpotQA, the comparator's context length is around 2k tokens per step (with pos/neg queries at 0.5k). The actor's token cost ranges from 2k to 8k, depending on the number of actions. Running 10 steps accumulates around 80k tokens. *Additional related works.** Thanks! We have also added them to our related work.
Summary: This paper introduces AVATAR, a novel framework for optimizing large language model (LLM) agents to effectively use provided tools and improve performance on complex multi-step tasks, with a focus on retrieval tasks. The key innovation is a comparator module that generates holistic instructions to improve the actor (main agent) through contrastive reasoning on batches of well-performing and poorly-performing queries. The authors demonstrate AVATAR's effectiveness on four challenging retrieval datasets from the STARK and FLICKR30K-ENTITIES benchmarks. Results show significant improvements over state-of-the-art baselines like ReAct and Reflexion. Strengths: Novel approach: The comparator module using contrastive reasoning on query batches is an innovative way to generate holistic instructions for improving agent performance. This addresses limitations of per-sample instruction approaches. Clear motivation and explanation: The paper draws insightful analogies to concepts like batched training and gradient computation in neural networks to explain the intuition behind the approach. This makes the core ideas easy to understand. Strong empirical results: AVATAR consistently outperforms strong baselines across multiple datasets, with significant improvements on key metrics. The ablation study clearly demonstrates the value of the comparator module. Weaknesses: Limited scope of evaluation: While the retrieval tasks are complex, it would be valuable to see AVATAR applied to a broader range of tool-use benchmarks to demonstrate generality. Lack of comparison to finetuning: The paper does not discuss or compare to alternative approaches like directly finetuning the actor model via rejection sampling. It's unclear how AVATAR compares to such methods in terms of performance and efficiency. Scalability considerations: The paper does not thoroughly address how the method scales with increasing numbers of tools or more complex task structures. Technical Quality: 3 Clarity: 3 Questions for Authors: Regarding the use of the memory bank as in-context learning examples, How does AVATAR compare to directly finetuning the actor model in terms of performance, efficiency, and data requirements? What are the key advantages of the instruction-based approach? How does the method handle scenarios where the action model improves significantly, making it difficult to find negative examples for the contrastive learning process? Is there a strategy for addressing this? Have you explored applying AVATAR to other types of tool-use tasks beyond retrieval? What challenges do you anticipate in extending to more diverse task types? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have not adequately addressed the limitations of their work. I recommend adding a dedicated limitations and broader impacts section to address these points more thoroughly. This should include: Discussion of computational requirements and scalability limitations, potential failure modes or scenarios where the method may struggle Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive feedback! We provide point-to-point responses: --- ## **Comment 1: Applying AvaTaR to more tool-use benchmarks** Thanks! We've conducted initial experiments on ArxivQA and are currently running tests on ToolQA. Please stay tuned! For ArxivQA, we randomly sampled 500/200 training/testing queries, providing three web search tools and five LLM tools such as summarization. | | CoT| ReAct | AvaTaR | |:---|:---|:--|:--| |Correctness by LLM Judge|$58.0$\%|$73.5$\%|$85.5$\%| We are happy to provide more details. Notably, AvaTaR achieved a substantial improvement (12%) over ReAct. For our other evaluation datasets, we highlight that they are more comprehensive than some existing tool-use datasets. For example, ToolEye [1] offers a fine-grained system but only has 54 retrieval queries, while MetaTool [2] involves only one or two tools per query. In contrast, our datasets involve action sequences with over ten tools (Figure 2) and span multiple modalities, totaling over 4k testing queries. --- ## **Comment 2 & Question 1,2: Comparison between finetuning methods and AvaTaR (with memory module)** Thanks for the insights! We agree that finetuning is a potential way to improve the actor LLM. We give an overview first and then conduct the comparison. - **Overview**: Several benchmarks and methods focus on fine-tuning tool-augmented LLMs. ToolBench (ICLR’24) [3] and GPT4Tools [4] create instruction-tuning datasets to enhance tool-use capabilities. Similarly, API-Bank [5] provides a dataset to improve planning, retrieval, and tool-calling skills. ToolAlpaca [6] generates tool-use corpus to develop generalized tool-use abilities in smaller LMs through fine-tuning. However, we believe **finetuning approaches may be challenging to adapt to our tasks** and AvaTaR has better advantages in the following aspects: - **Data Requirements**: Previous works [4,5,6,7] require large-scale datasets involving thousands of tools for LLM finetuning, while some tasks use fewer than a hundred tools, making data generation challenging. Moreover, creating instructions with ground truth tool use from ChatGPT [3,4] or multi-agent frameworks [5,6] can be difficult, especially for retrieval tasks with external knowledge bases. In contrast, AvaTaR needs only a subset of training QA and tool descriptions, making its data requirements less demanding. - **Efficiency:** - **Inference:** AvaTaR applies a fixed solution to testing queries without additional memory modules or comparators, making its inference efficiency comparable to or better than finetuning methods. - **Training:** AvaTaR reduces human effort by eliminating the need to design GPT prompts or multi-agent frameworks for annotating tool usage. For time cost, AvaTaR is observed to be efficient (c.f. Figure 4), typically converges within 50 iterations. At each iteration, we track only the top-5 action sequences with the best performance, minimizing token costs from the memory bank. - **Performance:** The reasoning ability of the actor LLM impacts task performance, motivating the use of state-of-the-art LLMs like Claude3, which are closed-source and mostly unavailable for finetuning. Besides more flexibility to use these models, AvaTaR can use any finetuned model with enhanced tool usage abilities as the actor LLM, which can be further trained by the comparator. We added this discussion in our revision. Thank you for highlighting finetuning methods to help us better position our work. --- ## **Question 3: Strategy for selecting negative samples** Great question! This inspires us to implement an adaptive threshold strategy. Specifically, we track evaluation results on a subset of training queries from the past several epochs and use the median performance as the threshold for the current epoch. This helps ensure a sufficient number of negative samples for selection. --- ## **Question 4 and Comment 3.2: Challenges of extending AvaTaR** We have extended AvaTaR to general QA tasks with newly added datasets. With recent support for AvaTaR in the DSPy [7] library, we expect its application to more tasks, such as coding problems, where positive/negative sampling can be determined by unit test results. A key challenge for complex tasks, like visual reasoning [8,9], is that the input space can make extracting insights difficult. These tasks often require stronger reasoning to identify patterns compared to simpler formats like natural language queries. We added this to the future work. --- ## **Comment 3.1: Scalability and limitations** Our paper used around 25 tools per dataset, offering a decent level of complexity. Further, we discuss the limitations: - **Scalability**: With LLM context lengths increasing (up to 128k), AvaTaR can scale to handle hundreds of tools and complex tasks, but practical limitations like increased latency may impact performance. Future efforts could integrate finetuned tool-augmented LLMs as actors or comparators to facilitate smooth scaling. - **Computation Requirements**: Scaling AvaTaR can increase computational costs due to the need to manage longer contexts and multiple tool interactions, leading to higher expenses. - **Potential Failure Modes**: While AvaTaR generalizes well to testing queries, performance may degrade if queries require novel tool combinations that AvaTaR hasn't trained for. Robust monitoring and adaptive learning strategies may help mitigate these risks. We added a limitation section with extended discussions. --- # **Summary** We appreciate your approval on our novelty, motivation, and effectiveness. We hope we address your concerns with 1) more experiments on two QA datasets (ongoing), 2) extensive comparison with finetuning methods, 3) a limitation section. We are more than happy to follow up. We also kindly ask if you could reevaluate our work if some concerns are addressed. Thank you for your insights and support again! *Reference is in the uploaded PDF* --- Rebuttal 2: Title: (Continue) Experiments on more tool-use benchmarks Comment: Hi Reviwer CkbF, Thanks for waiting for the follow-up results on ToolQA! We introduce the setup and present the full results here: - **Datasets**: SciREX (a dataset for document-level information extraction based on full-length ML scientific papers) and Agenda (personal agenda questions based on a private knowledge base) are the two (and only two) datasets based on external text corpora from ToolQA. Each dataset has `easy` and `hard` splits. - **Configuration for AvaTaR:** We randomly split the questions by 40%:60% for training and testing, as official dataset splits are not provided. We set the maximum epoch to 5. - **Tools:** We use the knowledge base tools provided by ToolQA and web search tools for all agent methods. - **Agent Backbone:** We use `gpt-4o` for the LLMs of AvaTaR and the baselines, which are evaluated on our testing split for fair comparison. | | SciREX | SciREX | Agenda | Agenda | |------------|--------|--------|--------|--------| | | Easy | Hard | Easy | Hard | | CoT | $0.0$% | $0.0$% | $0.0$% | $0.0$% | | ReAct | $8.3$% | $18.3$% | $31.6$% | $11.7$% | | Reflexion | $10.0$% | $13.3$% | $30.0$% | $13.3$% | | AvaTaR | $11.6$% | $21.7$% | $36.7$% | $23.3$% | We observed consistent improvements across these two datasets, with a significant improvement on the hard queries of Agenda. We hope the experiments on ToolQA and ArxivQA can further validate the effectiveness and applicability of AvaTaR on more tool-use datasets. We are happy to provide any details (e.g., insights or example instructions) that are not covered here for conciseness.
Summary: This paper proposes a new framework AVATAR for LLM agent that operates in two stages: -The first stage is optimization during the training process, which integrates the LLM comparator component into the AVATAR. The comparator summarizes holistic prompts from positive and negative queries and iteratively optimizes LLM actor using the prompts. -The second stage is deployment during the inference process, where participants respond without the involvement of the comparator. Strengths: Developing prompts for LLM agents is heuristic and laborious, but this paper proposes a new method called AVATAR, which automatically iteratively generates holistic prompts from positive and negative queries. Experimental results show that AVATAR outperforms SOTA methods. and some points summary as following: 1. It is a novel framework, comparing with the current SOTA approach such as ReAct, the Comparator component design is reasonable that it generates prompts from positive and negative queries. 2. It is new SOTA of LLM Agent. Experiments are conducted against current SOTA approaches as baseline, the framework outperform the current SOTA 3. It is a common framework, and it is effective with both Claude and GPT4. Weaknesses: 1. The optimization is data-driven, so comparator works effectively when the positive and negative queries are well sampled and balanced during iteration; when to stop iteration and how to sample the queries; it is not clearly clarified. 2. Regarding the construction of positive and negative queries, it is necessary to conduct experiments on the lower and upper bounds(l&h) to select appropriate values for optimal performance. 3. Regarding the memory bank, it is a important component to swap in and out according to data quality or balanced distribution, it is not well designed and discussed. Technical Quality: 3 Clarity: 3 Questions for Authors: Is "AIR (gpt4)" in Table 3 a typo? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and insightful comments! Here are our point-to-point responses: --- ## **Comment 1: Clarification about when to stop iteration and how to sample the queries** Thanks! We apologize for the brief information there. We added the following details in our revision: - **Early stopping for iteration control**: In our implementation, we train AvaTaR for a fixed number of epochs and select the action sequence with the highest validation performance. Similar to training deep neural networks, early stopping is efficient when performance on a hold-out subset doesn’t improve for a set number of epochs. This approach can further save time and cost for our framework. => ***We included more details in the experiment section and made early stopping available in our implementation.*** - **Positive & Negative Query Sampling**: We employed random sampling, a simple yet effective method. Meanwhile, to ensure a balanced set of positive and negative queries (totaling $b$ queries) for contrastive reasoning, our training framework uses a larger sampling batch size, such as $1.5b$. The sampled queries are classified as positive or negative based on performance and then balanced. For example, if there are $0.5b$ positive and $b$ negative queries in the **sampling batch**, we further randomly sample $0.5b$ from the negative queries, resulting in a **training batch** of $0.5b$ positive and $0.5b$ negative queries. => ***We added the detail to Section 4.2, Step 1: Constructing positive and negative queries.*** We hope the above discussion can address your concern about the clarity of our work! --- ## **Comment 2: Hyperparameter search on lower and upper bounds ($\ell / h$)** Good question! Yes it is important to search these two hyperparameters. In our previous experiments, we found these two hyperparameters are pretty robust *w.r.t.* small adjustments. Therefore, we set $\ell=h=0.5$ for all datasets. But now we draw more clear conclusions with additional experiments. To recap, queries where the Recall@20 metric is higher than $\ell$ are grouped into positive and queries where the Recall@20 metric is lower than $h$ are grouped into negative groups. Thus we have $\ell \geq h$. Specifically, we tested multiple combinations of $\ell$ and $h$. We conducted experiments on STaRK-Amazon only due to the time and computational cost involved. And here are the Hit@1 results: | | $h=0.3$ | $h=0.4$ | $h=0.5$ | |:---:|:---:|:---:|:---:| | $\ell=0.5$ | $48.32$ | $50.01$ | $49.87$ | | $\ell=0.6$ | $47.89$ | $49.56$ | $50.45$ | | $\ell=0.7$ | $47.75$ | $48.56$ | $49.34$ | Interestingly, we obtained **a better Hit@1 result** than the one reported in the paper when $\ell = 0.6$ and $h = 0.5$. Results for other metrics can be found in the uploaded PDF. **Key Observations:** - The performance is **robust** to the large adjustments, with variations of 2.7% in Hit@1. - We observe some **decline when the gap between $\ell$ and $h$ is too large**. This is likely due to omitting part of the training queries (the performance on which are within $(h, \ell)$. - We see **slight improvement when there is a moderate gap** between $\ell$ and $h$, which could help establish a clearer pattern difference between positive and negative queries without “sacrificing” too many training queries. We add the above results and discussion to Appendix B. We hope our study can address your concern about hyperparameter selection. In general, our framework relies on a very small number of hyperparameters, *i.e.*, $\ell$, $h$, batch size $b$, and training epochs. We believe the study on $\ell$ and $h$ demonstrate the robustness of our framework. --- ## **Comment 3: Design of memory bank and its influence on the optimization process** Thanks a lot for this insightful comment! For memory bank construction, we follow Reflexion [1], which also exhibits long-term memory. Alternatively, our framework allows for the direct plug-in of other kinds of memory banks. For example, a dynamic and multi-layered memory structure proposed by [2] can be used for storing knowledge and experience dynamically from past training. We believe a more advanced memory component is a great addition to our optimization framework, which has been less explored in previous studies. We also want to point out that our main contribution is orthogonal to the design of memory banks. Specifically, our novelty and success largely attribute to the comparator in the optimization module, which is validated in the ablation study between AvaTaR and AvaTaR-C. Following your suggestions, in our revision, 1) we added more discussion about the memory bank, and 2) we pointed out that enhancing the memory bank could be a key innovation in the future. Note that while revision is not allowed during rebuttal, we are happy to copy the updated paragraphs here if you would like to see them! We hope the memory module is well-discussed now. We thank you for these insightful suggestions again, which inspired us for future extensions. **Typo**: Good catch! Yes it should be "AvaTaR," and we've fixed it in the revision. Thanks! --- # **Summary** We thank you for your time and insights! We hope our 1) clarification on the training process, 2) experiments on hyperparameters, and 3) discussions on memory banks address your concerns well. At last, we would greatly appreciate your support and reconsideration given our response. We would like to emphasize that our main contribution is the development of an optimization framework featuring a novel comparator module. This module enhances contrastive reasoning and improves generalization, allowing our approach to significantly outperform baseline methods. Thank you in advance! --- **Reference** [1] Shinn et al. 2023. Reflexion: language agents with verbal reinforcement learning. In NeurIPS. [2] Zhong et al. MemoryBank: Enhancing Large Language Models with Long-Term Memory. --- Rebuttal Comment 1.1: Title: Thanks for your response, no more comments Comment: See it above.
null
null
Rebuttal 1: Rebuttal: # **General response** We truly appreciate the reviewers' efforts and valuable suggestions in reviewing our paper. We are glad that all/most reviewers reached a positive consensus on our work's presentation, motivation, novelty, and experimental effectiveness. Here is a summary of the reviewers’ major feedback and our corresponding actions: ----- | | Reviewer JqLP | Reviewer CkbF | Reviewer vMWD | Action/Summary | |:---|:---|:---|:---|:---| | **Presentation** | “Presentation: good” | “Clear motivation and explanation” | “demonstrate the proposed method well” | `NA`| | **Novelty** | “novel framework, …, the Comparator component design is reasonable” | “The comparator module … is an innovative way” | “the method…has no innovation compared with Expel [1] and Autoguide [2]” | RE reviewer vMWD: `We carefully read the related works shared by reviewer vMWD. While highly relevant, we found essential differences between the mentioned works and AvaTaR in terms of novelty and insights. Please see our response to reviewer vMWD for a detailed explanation.` | | **Empirical Improvements** | “effective with both Claude and GPT4” | “AvaTaR consistently outperforms strong baselines across multiple datasets, with significant improvements on key metrics” | “The experiments are performed … to show the effectiveness of the methods.” | `NA`| | **Evaluation Setting** | NA| Suggestions on a broader application of AvaTaR on tool-use benchmarks | “The paper.. (1) only compared with relatively basic methods, … (2) employs several outdated embedding-based retrievers…(underperform) Internvl and Beit VLMs. ” | RE reviewer CkbF: `We add additional experiments on two QA and tool-use benchmarks - ArxivQA and ToolQA (ongoing).` RE reviewer vMWD: (1) `We justified the baselines included, where ReAct and Reflexion are two prevailing agent methods.` (2) `We emphasize that our goal is to optimize agents and improve their tool-use abilities. We use clip-vit-large-patch14 (427 million parameters) embedding model as a tool for all of the agent methods. In contrast, Internvl (6b parameters) and Beit (1.6b parameters) study pretraining methods for large-scale VLMs, which have different goals and settings than AvaTaR.` | | **More Study and Discussion on AvaTaR** | Pos & neg query selection and the contribution of memory bank | (1) Scaling up with more tools and complex task structures. (2) Comparison between AvaTaR and finetuning methods | NA| RE reviewer JqLP: `We added experimental studies on pos & neg query selection.` RE reviewer CkbF: (1) `We highlight the level of complexity that our tasks reached and discuss future directions on scaling up.` (2) `We added extensive discussion between AvaTaR and finetuning methods in terms of performance, efficiency, and data requirements` | Moreover, following the suggestions from Reviewers CkbF and vMWD, we added a dedicated `limitation section` (see response to the reviewers) in our final draft. ------ # **Summary** We thank the reviewers for their suggestions which make our work more solid. We have improved our manuscript accordingly. We hope our responses can clarify any confusion and alleviate the remaining concerns above. We would be thrilled if you could let us know whether your concerns have been addressed or if you have any follow-up questions! — Best, Authors of Paper 5514 Pdf: /pdf/3196bedad49c8ca4cc00c5573d4f7b48a75f2eae.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Understanding Scaling Laws with Statistical and Approximation Theory for Transformer Neural Networks on Intrinsically Low-dimensional Data
Accept (poster)
Summary: This work derived a generalization error bound of using transformer architecture to estimate beta-Holder continuous functions. The bounds depend on the intrinsic dimension of the data. The generalization error can be decomposed into an approximation error and a variance error. They further showed that TFs with finite number of blocks can approximate arbitrary beta-Holder continuous functions up to any precision. Both their generalization and error bound and approximation error bound exhibit power-law decay, with exponents depending on the smoothness and intrinsic dimension. Empirical results on real datasets also validated their theoretical findings. Strengths: The paper present novel approximation theory for transformers in terms of the intrinsic dimension. Empirical observations are well-aligned with the theory. Weaknesses: 1. The statement that TFs can approximate $f$ to any precision $\epsilon$ with finite number of blocks is a bit inaccurate and misleading. Namely, Theorem 2 assumes $L_{FFN}$ is of order $O(\log(1/\epsilon))$. This means in the standard TF architecture, we would have $O(\log(1/\epsilon))$ number of FFN-attention blocks instead of only a finite number. It would be great if the author can clarify this point. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the number of hidden neurons in FFNs? 2. Does the result in (3) matches the minimax lower bound for estimating beta-holder class? What is the gain of using TFs instead of simple nonparametric methods to estimate $f$? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. As the author discussed, this work doesn't consider the training dynamics of the learning problem. 2. The exponents in (3), (4) highly depend on the smoothness of $f$ (i.e., $\beta$) and the intrinsic dimension of the data. However, the exact value of $\beta$ is hard to predict in practice. Is there any practical way to estimate $\beta$? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough repsonse! We address some of your comments below. **Strengths:** 1. The paper present novel approximation theory for transformers in terms of the intrinsic dimension. Empirical observations are well-aligned with the theory. We are glad the reviewer found our approximation theory novel and our empirical findings well-aligned with our theory! **Weaknesses:** 1. The statement that TFs can approximate to any precision with finite number of blocks is a bit inaccurate and misleading. Namely, Theorem 2 assumes is FFN depth is of order $O(\log(\epsilon^{-1}))$. This means in the standard TF architecture, we would have $O(\log(\epsilon^{-1}))$ number of FFN-attention blocks instead of only a finite number. It would be great if the author can clarify this point. This is a good point, the number FFN layers per transformer block is $\log(\epsilon^{-1})$. However, we recall that the weight-matrices in each FFN are of constant size (5x5 since $d_{embd} = 5$) and thus the number of parameters in each FFN layers is negligible. The vast majority of parameters are used in highly-parallel attention layers of which we only need $\log(d)$ layers. Note: a transformer *block* consists of an attention layer and then a FFN network. **Questions:** 1. What is the number of hidden neurons in FFNs? The number of parameters in each FFN is $d_{embd}^2\cdot \log(\epsilon^{-1}) = O(\log(\epsilon^{-1}))$. Note $d_{embd} = 5$. 2. Does the result in (3) matches the minimax lower bound for estimating beta-holder class? What is the gain of using TFs instead of simple nonparametric methods to estimate $f$? Yes, our bound matches the min-max rate up to logarithmic factors [1]. For Holder functions, both transformer and many nonparametric methods (kernel regression and piecewise polynomial regression) can achieve the min-max rate up to logarithmic factors. However, transformer has achived remarkable success in large language models, which makes a theoretical understanding of transformer significant and meaningful. It is well known that simple nonparametric methods can not achive the same performance as transformers in real-world applications with large scale complex data. We believe (but have not proved) that transformer networks are more *adaptive* to the regularity of the target function at different areas of the domain. For simple nonparametric methods, such as kernel methods, often one must choose a kernel width, making adaptivity more difficult. **Limitations:** 1. As the author discussed, this work doesn't consider the training dynamics of the learning problem. To our best knowledge, the training dynamics of multi-layer transformers is an open-question. We agree studying transformer dynamics is of great interest, but this is not the goal of the current paper. 2.. The exponents in (3), (4) highly depend on the smoothness of $f$ (i.e., $\beta$) and the intrinsic dimension of the data. However, the exact value of $\beta$ is hard to predict in practice. Is there any practical way to estimate $\beta$. In general we expect the target function to have some regularity as otherwise it would be impossible to estimate [1]. Assuming the output varies according to the variation of the input, Holder continuity is the proper and mostly widely considered assumption in nonparametric estimation. A Lipschitz assumption with $\beta = 1$ implies the variation of the output for the target function is proportional to the variation of the input which is widely used in literature. In general we are unaware of any methods for measuring the Holder index of the target function from samples. **References:** [1] A Distribution-Free Theory of Nonparametric Regression, https://link.springer.com/book/10.1007/b97848 --- Rebuttal Comment 1.1: Comment: Thanks the authors for their response. My questions are partially addressed and I will maintain my score.
Summary: This document appears to be a research paper on predicting scaling laws for transformer neural networks, particularly when applied to data with low intrinsic dimensionality. Here are some key points: 1. The paper aims to establish mathematical theories to explain and predict scaling laws observed in transformer models like large language models (LLMs). 2. It presents both statistical estimation and approximation theories for transformers when the input data lies on a low-dimensional manifold. 3. The main results predict power law relationships between generalization error and both training data size and network size, where the power depends on the intrinsic dimension d of the training data. 4. A key finding is that the transformer architecture constructed in their theory only requires logarithmic depth in d, which is an advantage over feedforward networks. 5. The authors test their theoretical predictions empirically by training LLMs on natural language datasets and find close agreement with observed scaling laws. 6. The paper argues that the intrinsic dimension of data is crucial in determining transformer scaling laws both theoretically and empirically. It provides formal definitions and theorems around transformer networks, generalization bounds, and approximation capabilities. The work aims to bridge gaps between theory and practice in understanding neural network scaling, leveraging the low-dimensional structure often present in real-world data Strengths: 1. The paper provides a rigorous mathematical framework for understanding transformer scaling laws, which has been a significant open question in the field. 2. The work attempts to reconcile theoretical predictions with empirical observations, particularly in the context of large language models. 3. By incorporating the intrinsic dimension of data, the theory provides a more nuanced understanding of scaling laws that aligns better with real-world observations. 4. The authors test their theoretical predictions on actual language models, providing evidence for the practical relevance of their work. Weaknesses: 1. Simplified assumptions: The theory relies on assumptions about data lying on a low-dimensional manifold, which may not always hold in practice, especially for complex, high-dimensional data like natural language. 2. Limited scope: The paper focuses primarily on regression tasks, while many practical applications of transformers (like language modeling) involve more complex objectives. 3. Generalizability: It's unclear how well the theory generalizes to other types of transformer architectures or variations that differ from the specific formulation used in the paper. 4. Computational complexity: The paper doesn't deeply address the computational aspects of their proposed methods, which could be a practical limitation for very large models. 5. Data dimension estimation: The reliability and stability of estimating the intrinsic dimension of complex datasets (like text) remains a challenge and could impact the practical application of the theory. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can you state the process of estimating the intrinsic dimension of textual datasets which is important for your work? If the dimension depends on the pre-trained model, it seems the posterior estimate is not supering to be good and its practicability seems weak. 2. In Theorem 1 and 2, the parameters of the network should satisfy a specific magnitude assumption, is this assumption significant and practical in applications? 3. In figure2, only 5 points are presented in each subfigure and the x-axis’s scope is different, how does the empirical result appear in other regions? 4. Why did you assume the Lipschitz regularity of language modeling objective equals 1? Is there any explanation for it? 5. The results shown in figure3 seem not good so what’s the significant reason? 6. The order of subscripts seems incorrect in Table 1. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: 1. The assumptions in the theorem seem not practical. 2. The construction in the proof of estimation theory is hard and not practical in empirical inference. 3. The estimation of the intrinsic dimension is not clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Strengths:** 1. The paper provides a rigorous mathematical framework for understanding transformer scaling laws, which has been a significant open question in the field. [...] The authors test their theoretical predictions on actual language models, providing evidence for the practical relevance of their work. We are glad the reviewer recognizes our work as making progress on a significant open question in the field and that the reviewer recognizes the importance of our theory being validated with experiments on actual language models. **Weaknesses:** 1. Simplified assumptions: The theory relies on assumptions about data lying on a low-dimensional manifold, which may not always hold in practice, especially for complex, high-dimensional data like natural language. The widely supposed *manifold hypothesis* suggests that most natural data, such that images and text, lie on low-dimensional manifolds embedded in a high-dimensional ambient space. Many papers [2,3] find evidence suggesting this is true for image data and show this helps explain the success of deep-learning in learning high-dimensional data with low-dimensional structures. In particular, we and others [4,5] find evidence that low-dimensional structures exist in textual data once it is embedded by a model. (15 ID vs. 768 ambient dimension). 2. Limited scope: The paper focuses primarily on regression tasks, while many practical applications of transformers (like language modeling) involve more complex objectives. Despite being proved for regression, our result also sheds light on real world applications using next-token prediction (classification). Theoretically it is known that results on regression problems can be used to estalish results for classification problems [1, Theorem 1.1] since classification essentially depends on estimation of the probability function for classification. This allows our results on regression for transformers to applied directly to next-token prediction which translates to practical applictaions. Additionally, we remark the regression objective itself is commonly used in many real-world scenarios e.g. training a transformer diffusion backbone. 3. Generalizability: It's unclear how well the theory generalizes to other types of transformer architectures or variations that differ from the specific formulation used in the paper. Our theory crucially relies upon a number of key lemmas (e.g. the Interaction lemma facilitating pairwise interaction between tokens in the attention mechanism) to develop our novel approximation theory and a novel covering number result to develop the statistical theory. Both of these are general and can be extended to different transformer architectures. Further, our experimental results suggest that, despite the architectures being used in practice differing from our theory, we are still able to make good predictions about the scaling behavior of these models both in terms of the model size and the number of training samples. 4. Computational complexity: The paper doesn't deeply address the computational aspects of their proposed methods, which could be a practical limitation for very large models. Our paper aims to predict scaling laws of transformer neural network which crucuially relies on the intrinsic dimension of training data. Typically, to estimate a scaling exponent empirically, one must pre-train a family of models across different data/model sizes. In contrast, with our theory, one only needs to pre-train a single model which can then be used to estimate intrinsic dimension from which all scaling can be theoretically predicted. In comparison with pre-training, which can take multiple weeks for the largest models, estimating the intrinsic dimension takes only a couple minutes. 5. Data dimension estimation: The reliability and stability of estimating the intrinsic dimension of complex datasets (like text) remains a challenge and could impact the practical application of the theory. Estimating the intrinsic dimension via model embeddings is a relatively established practice [1,4,5]. Further, in the case of text we have no other choice as existing ID estimation algorithms are designed for continuous data. However this should not be taken to mean that textual data does not have low-dimensional structure. While using embeddins to measure this low-dimensional structure may introduce some noise, we expect a model (such as an LLM) to have learned good enough representations to preserve the majority of the low-dimensional structure. We do ablations at the end of the paper to examine how sensitive this estimation is to various hyperparameters and find it is relatively stable. While embeddings from any layer could be chosen, we choose the final layer to stay consistent with prior work. **Please refer to our comment for the remainder of our response** --- Rebuttal 2: Title: Rebuttal cont. Comment: **Questions:** 1. Can you state the process of estimating the intrinsic dimension of textual datasets which is important for your work? If the dimension depends on the pre-trained model, it seems the posterior estimate is not supering to be good and its practicability seems weak. We refer the reviewer our response to the above bullet point. Maximum Likelihood Estimatio (MLE) gives rise to a non-linear measure of intrinsic dimension by applying the principle of maximum likelihood to the distances between close neighbors [6]. It has been commonly used in prior works [2,3] to estimate the intrinsic dimension of natural data and assess impact on downstream model performance. Our theoretical predictions of scaling laws for transformers using these estimates of ID are closer to empirical predictions in comparison with existing work [2]. 2. In Theorem 1 and 2, the parameters of the network should satisfy a specific magnitude assumption, is this assumption significant and practical in applications? In practice neural networks are often trained with implicit regularization such as weight-normalization to prevent the weight parameters from blowing up. This makes it easy for such networks to satisfy our assumption on the magnitude of their weights. Note: Our magnitude upper bound is fairly large ($O(dn^{\frac{2}{2\beta +d}}M)$), allowing for plenty of flexibility during model training. 3. In figure2, only 5 points are presented in each subfigure and the x-axis’s scope is different, how does the empirical result appear in other regions? We find that transformer scaling laws are stable in certain ranges of data and model size as shown in Figures 2 and 3. It is well known that, empirically, scaling laws can break down when the data and model size further increases [7], probably due to optimization error or computational limit, which is not the focus of our paper. 4. Why did you assume the Lipschitz regularity of language modeling objective equals 1? Is there any explanation for it? In general we expect the target function to have some regularity as otherwise it would be impossible to estimate [1]. Assuming the output varies according to the variation of the input, Holder continuity is the proper and mostly widely considered assumption in nonparametric estimation. A Lipschitz assumption with $\beta = 1$ implies the variation of the output for the target function is proportional to the variation of the input which is widely used in literature. In general we are unaware of any methods for measuring the Holder exponent of the target function from samples. 5. The results shown in figure3 seem not good so what’s the significant reason? The results in Figure 3 have $\pm 0.03$ i.e. nearly as accurate as Figure 2 with $\pm 0.02$ error. We regard both as fairly accurate predictions of the scaling exponents. 6. The order of subscripts seems incorrect in Table 1. Thank you for flagging this! The values for $\alpha_N$ and $\alpha_D$ should be flipped. **Limitations:** 1. The construction in the proof of estimation theory is hard and not practical in empirical inference. Our construction in the approximation theory shows the universal approximation ability of transformers for Holder functions. These results are significant because **they allow us to quantitatively and precisely control the approximation error $\epsilon$ as a function of the transformer architecture size**. Importantly, this construction to control the approximation does not say anything about the learned parameters of the empirical risk minimizer or the parameters learned during the optimizaion process. We view this as an advantage of our theory, as we do not need to know anything about the empirical risk minimizer's learned parameters to control its generalization error. 2. The estimation of the intrinsic dimension is not clear. Re-iterating, MLE gives rise to a non-linear measure of intrinsic dimension by applying the principle of maximum likelihood to the distances between close neighbors [6]. It has been commonly used in prior work [2,3] to estimate the intrinsic dimension of natural data and assess impact on downstream model performance. **References:** [1] A Distribution-Free Theory of Nonparametric Regression, https://link.springer.com/book/10.1007/b97848 [2] Scaling Laws from the Data Manifold Dimension, https://jmlr.org/papers/v23/20-1111.html [3] The Intrinsic Dimension of Images and Its Impact on Learning, https://arxiv.org/abs/2104.08894 [4] The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models, https://arxiv.org/abs/2311.05928 [5] An Intrinsic Dimension Perspective of Transformers for Sequential Modeling, https://openreview.net/forum?id=0UzYWLzPBjA [6] Maximum Likelihood Estimation of Intrinsic Dimension, https://www.stat.berkeley.edu/~bickel/mldim.pdf [7] Scaling Laws for Neural Language Models, https://arxiv.org/abs/2001.08361
Summary: This paper makes a series of contributions: - Transformer Generalization Error: Loosely speaking, assuming a transformer is trained to approximate a Holder function in a regression setting, and assuming the data lives on the low dimensional manifold, then the generalization error of the transformer is upper bounded in a particular manner (Theorem 1) - New Covering Number for Transformers (although I believe this is not presented in the main text) - Predicting Transformer Empirical Scaling Laws - Some investigations into how transformer hyperparameters affect the estimated intrinsic dimensionality of text data I am not familiar with these more theoretical methods, and I have never taken a course on differential geometry. Consequently, my review will be quite limited. I focus more on the empirical methods since I am more familiar with these. Strengths: Note: I am not familiar with these more theoretical methods, and I have never taken a course on differential geometry. Consequently, my review will be quite limited. I focus more on the empirical methods since I am more familiar with these. - Having minor familiarity with Sharma and Kaplan, I think this work is a great extension towards language modeling. - The paper is clearly very well written and thorough - Figure 1 is visually nice (although I don’t have the background required to understand it) Weaknesses: Note: I am not familiar with these more theoretical methods, and I have never taken a course on differential geometry. Consequently, my review will be quite limited. I focus more on the empirical methods since I am more familiar with these. - “Overall, we find the estimated ID is fairly stable across each factor.” -> Looking at Figure 4, this feels very wrong to me. In 3 out of 4 subplots, the intrinsic dimensionality clearly seems to be changing with no asymptotic value in sight. - [nit] Figures 2 and 3: Please use a log scale. Don’t log transform the number of samples and then plot the log-transformed variable linearly. I have many questions (below) that may become weaknesses, but I don't wish to penalize the authors if I've misunderstood their work. Technical Quality: 3 Clarity: 4 Questions for Authors: - Looking at equation (3) and (4), the scaling exponents $\alpha_D, \alpha_N$ are partially determined by $\beta$. Later, $\beta$ is set to 1. How is this justified? Why would this be true empirically? - The paper critically concerns estimating the intrinsic dimensionality of the pretraining dataset D, but then claims this cannot be done directly for textual datasets. Why? No justification is given. - Following the above bullet, the paper then estimates the intrinsic dimension of the data by estimating the intrinsic dimension of the output token embedding matrix. Why is using this alternative quantity reasonable? - I’m not familiar with the “Maximum Likelihood Estimation ID algorithm”. What is this algorithm? Why was it chosen? How do the results depend on this choice? The ID estimation methods I'm more familiar with are quantities like participation ratio - how do other method - Line 71: Is the word “layers” missing? i.e. “requiring only O(log(d)) layers independent of the desired accuracy…” - Figure 3: Why are there 5 GPT2 model sizes? HuggingFace offers only 4. - Figure 3: Why are only 4 Pythia models used? I believe that there are 8. - What in this paper is a prediction? To me, the hallmark of a scientific prediction is a statement regarding what one should expect beyond already-existing data, followed by new experiments to confirm the new behavior. With the exception of Figure 2, I don't see much in the way of new predictions or new experiments to confirm predictions, and even Figure 2 is relatively weak in the sense of external verifiability. Confidence: 1 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Strengths:** - Having minor familiarity with Sharma and Kaplan, I think this work is a great extension towards language modeling. We thank the reviewer for recognizing this work as a great extension towards lanugage modeling! However, we want to emphasize the main contribution of our work is **theoretical** (as oppsed to [1] which is empirical). Via our novel approximation and statistical theory, we are able to establish theoretical bounds on Transformer scaling laws. We then do experimentation, extending similar experiments in [1] to transformers, demonstrating this theory holds well in practice. **Weaknesses:** 1. I am not familiar with these more theoretical methods, and I have never taken a course on differential geometry. Consequently, my review will be quite limited. I focus more on the empirical methods since I am more familiar with these. Again, we would like to emphasize that the main contributions of this paper are **theoretical**. This includes our statistical estimation theory in Theorem 1 which crucially relies upon a novel and non-trivial covering number result for transformers (Lemma 2) and a novel approximation theory in Theorem 2 which is constructed from a number of key sub-lemmas (e.g. see Lemma 3 - Interaction Lemma) allowing for sophisticated interaction of multiple tokens which is also of independent interest. We believe these results constitute a signficant step forward in the theoretical understanding of transformers. 2. “Overall, we find the estimated ID is fairly stable across each factor.” -> Looking at Figure 4, this feels very wrong to me. In 3 out of 4 subplots, the intrinsic dimensionality clearly seems to be changing with no asymptotic value in sight. Since the measurement of intrinsic dimension (**ID**) is essential to our theoretical predictions in practice, our ablations in Figure 4 are meant to demonstrate the stability of our measurements of intrinsic dimension under various **reaslistic** model hyperparameters - **not to suggest any asymptotic behavior**. In general the estimation of intrinsic dimension can be fairly noisy, depending variably even on the hyperparameters chosen in the ID estimation algorithm [1,2]. For example, [2] finds the ID of imagenet varies anywhere between 20-40 depending on the algorithmic hyperparameters used. Our estimates for the ID vary only by $\pm 5$ units across all model parameters we considered, motivating our remark that the estimation is fairly stable. 3. [nit] Figures 2 and 3: Please use a log scale. Don’t log transform the number of samples and then plot the log-transformed variable linearly. Our results show that the (squared) generalization error (**SGE**) can be bounded by power-laws in the number of samples $n$ as $SGE \lesssim n^{-\alpha_D}$. Then taking log of both sides yields $\log(SGE) \lesssim -\alpha_D \log(n)$ which exactly bounds the (log) SGE by the (log)-linear line in the number of samples with slope given by the scaling exponent $-\alpha_D$. This should hopefully clarify why we plot in log-log scale. **Questions:** 1. Looking at equation (3) and (4), the scaling exponents are partially determined by $\beta$. Later, $\beta$ is set to 1. How is this justified? Why would this be true empirically? Recall $\beta$ represents the Holder regularity index of the target function $f$. In general we expect the target function to have some regularity as otherwise it would be impossible to estimate [3]. Assuming the output varies according to the variation of the input, Holder regularity is the proper and mostly widely considered assumption in nonparametric estimation. A Lipschitz assumption with $\beta = 1$ implies the variation of the output for the target function is proportional to the variation of the input which is also widely used in literature. In general we are unaware of numerical methods for measuring the Holder index of the target function from samples. 2. The paper critically concerns estimating the intrinsic dimensionality of the pretraining dataset D, but then claims this cannot be done directly for textual datasets. Why? No justification is given. 3. The paper then estimates the intrinsic dimension of the data by estimating the intrinsic dimension of the output token embedding matrix. Why is using this alternative quantity reasonable? Estimating the intrinsic dimension via model embeddings is a relatively established practice [1]. Further, in the case of text we have no other choice as existing ID estimation algorithms are designed for continuous data. However this should not be taken to mean that textual data does not have low-dimensional structure. While using embeddins to measure this low-dimensional structure may introduce some noise, we expect a model (such as an LLM) to have learned good enough representations to preserve the majority of the low-dimensional structure. We do ablations at the end of the paper to examine how sensitive this estimation is to various hyperparameters and find it is relatively stable. While embeddings from any layer could be chosen, we choose the final layer to stay consistent with prior work. **Please refer to our comment for the remainder of our response** --- Rebuttal 2: Title: Rebuttal cont. Comment: 4. I’m not familiar with the “Maximum Likelihood Estimation ID algorithm”. What is this algorithm? Why was it chosen? How do the results depend on this choice? The ID estimation methods I'm more familiar with are quantities like participation ratio - how do other method Participation ratio is a linear measure of intrinsic dimension. Maximum Likelihood Estimation (MLE) [4] measures the intrisic dimension for low-dimensional nonlinear models by applying the principle of maximum likelihood to the distances between close neighbors. MLE has been commonly used in prior works [1,2] to estimate the intrinsic dimension of natural data and assess impact on downstream model performance. 4. Figure 3: Why are there 5 GPT2 model sizes? HuggingFace offers only 4. For GPT2 we reported the results published in [5]. 5. Figure 3: Why are only 4 Pythia models used? I believe that there are 8. We used only four models because we were already able to extract an empirical scaling law over two orders of model size. However, if the reviewer feels including the full suite would strengthen our experimental results we would be happy to plot more intermediate model sizes. 6. What in this paper is a prediction? To me, the hallmark of a scientific prediction is a statement regarding what one should expect beyond already-existing data, followed by new experiments to confirm the new behavior. With the exception of Figure 2, I don't see much in the way of new predictions or new experiments to confirm predictions, and even Figure 2 is relatively weak in the sense of external verifiability. We believe that, in science, the role of theory is not just to predict unobserved events but to provide **rigorous explanation** for observed phenomena. We would describe the latter situation as the theory **correctly predicting** the observed phenomena. Concretely, in our case, we refer to the exponents $\alpha_D, \alpha_N$ as being *predicted* by our theory as a function of input quantities $(\beta, d)$. As you point out, we make three previously unreported predictions in Figure 2. These are **new** in the sense that scaling laws on these datasets have not been previously reported in the literature. Additionally, to our knowledge, scaling behavior for the Pythia suite has also not been previously analyzed. Practically speaking, it would be very difficult to produce a new dataset/model substantially different from what is currently used in industry as, from the scaling perspective, this would require the collection of a tens of trillions token dataset or the training of a hundred-billion parameter model. Our goal, as a theory paper, is instead to rigorously explain existing phenomena and validate with novel, modestly-sized experiments on existing data. Further, these experiments are fully reproducible, as all code and data is open-source. **References:** [1] Scaling Laws from the Data Manifold Dimension, https://jmlr.org/papers/v23/20-1111.html [2] The Intrinsic Dimension of Images and Its Impact on Learning, https://arxiv.org/abs/2104.08894 [3] A Distribution-Free Theory of Nonparametric Regression, https://link.springer.com/book/10.1007/b97848 [4] Maximum Likelihood Estimation of Intrinsic Dimension, https://www.stat.berkeley.edu/~bickel/mldim.pdf [5] Scaling Laws for Neural Language Models, https://arxiv.org/abs/2001.08361 --- Rebuttal 3: Title: Response to Authors' Rebuttal (Part 1) Comment: I thank the authors for their rebuttal. Responding sequentially: > Again, we would like to emphasize that the main contributions of this paper are **theoretical**. I understand. Sadly, I'm not well equipped to evaluate the maths, which is why my confidence is so low. I did not bid on this paper, but I will try my best regardless. > A Lipschitz assumption with $\beta = 1$ implies the variation of the output for the target function is proportional to the variation of the input which is also widely used in literature. In general we are unaware of numerical methods for measuring the Holder index of the target function from samples. Without knowledge of any problem with this choice or any alternatives, this seems reasonable. > Estimating the intrinsic dimension via model embeddings is a relatively established practice [1]. Further, in the case of text we have no other choice as existing ID estimation algorithms are designed for continuous data. After reading your response, I wonder if there might be some miscommunication here. Sharma and Kaplan state that to estimate ID: "we use the activations from the last token in each sequence to measure the ID, though the ID does not vary significantly across token positions (see figure 10)", whereas your lines 219-220 state " we will estimate the intrinsic dimension of the input data by estimating the intrinsic dimension of token embeddings." I interpreted your sentence to mean that you took either the embedding matrix or the unembedding matrix (which may be the same, if you are tying/sharing the matrices). After rereading the paragraph on lines 219 to 230, I think you are using the word "embedding" when Sharma and Kaplan (as well as I) use "activations". Could you please clarify what exactly you are doing? Are you using activations or the embedding vectors that comprise the (un)embedding matrix? If you are using activations, this seems far more sensible to me. > [nit] Figures 2 and 3: Please use a log scale. Don’t log transform the number of samples and then plot the log-transformed variable linearly. I fear we might have miscommunicated here. I wasn't questioning plotting in log-log scale. Rather, as I understand, you appear to have log transformed your data and then plotted the log-transformed data linearly, rather than plotting your data and transforming the axes logarithmically. In matplotlib pseudo-code, you appear to have plotted: ``` log_n = np.log10(n) log_sge = np.log10(sge) plt.plot(log_n, log_sge) ``` rather than plotting ``` plt.plot(n, sgd) plt.xscale("log") plt.yscale("log") ``` I feel like the latter is far more common, hence why I suggested it. I did label this point as a nit, so if you feel strongly that the former is preferable and tell me so, that's fine. --- Rebuttal 4: Title: Response to Authors' Rebuttal (Part 2) Comment: > Participation ratio is a linear measure of intrinsic dimension. Maximum Likelihood Estimation (MLE) [4] measures the intrisic dimension for low-dimensional nonlinear models by applying the principle of maximum likelihood to the distances between close neighbors. This doesn't actually tell me what MLE ID is or why PR is inappropriate here (certainly it's linear, but there's missing next step). However... > MLE has been commonly used in prior works [1,2] to estimate the intrinsic dimension of natural data and assess impact on downstream model performance. ... based on this information, MLE ID seems like a more reasonable choice (since I lack knowledge that would favor or disfavor this choice). > We used only four models because we were already able to extract an empirical scaling law over two orders of model size. However, if the reviewer feels including the full suite would strengthen our experimental results we would be happy to plot more intermediate model sizes. I strongly suspect that others will ask you this question, so I recommend doing it, but I now better understand that it probably won't matter as much. > our ablations in Figure 4 are meant to demonstrate the stability of our measurements of intrinsic dimension under various reaslistic model hyperparameters - not to suggest any asymptotic behavior Thank you for clarifying this point. I suggest perhaps lightly rephrasing "Overall, we find the estimated ID is fairly stable across each factor." to integrate the clarification you shared with me here. Based on the authors' rebuttal, I will increase my score to a 6 and decrease my confidence to a 1. To justify why: - I don't feel competent to assess the novelty or significance of the theoretical contributions (results or proof techniques), which is really the heart of this paper - the authors addressed my more empirical concerns **Ask**: If I can make one last request of the authors, I think a "Future Directions" section would be a nice addition to help readers decide which next research problems are worth pursuing. --- Rebuttal 5: Title: Rebuttal cont. Comment: Thank you for reading our rebuttal and taking the time to respond. We address some of your points below. - After reading your response, I wonder if there might be some miscommunication here. Sharma and Kaplan state that to estimate ID: "we use the activations from the last token in each sequence to measure the ID, though the ID does not vary significantly across token positions (see figure 10)", whereas your lines 219-220 state " we will estimate the intrinsic dimension of the input data by estimating the intrinsic dimension of token embeddings." I interpreted your sentence to mean that you took either the embedding matrix or the unembedding matrix (which may be the same, if you are tying/sharing the matrices). After rereading the paragraph on lines 219 to 230, I think you are using the word "embedding" when Sharma and Kaplan (as well as I) use "activations". Could you please clarify what exactly you are doing? Are you using activations or the embedding vectors that comprise the (un)embedding matrix? If you are using activations, this seems far more sensible to me. Ah sorry about the confusion. We are using activations from the final transformer layer (pre logit transformation). Figure 9 measures the ID resulting from instead using activations from earlier transformer layers. - I fear we might have miscommunicated here. I wasn't questioning plotting in log-log scale. Rather, as I understand, you appear to have log transformed your data and then plotted the log-transformed data linearly, rather than plotting your data and transforming the axes logarithmically. In matplotlib pseudo-code, you appear to have plotted: log_n = np.log10(n) log_sge = np.log10(sge) plt.plot(log_n, log_sge) rather than plotting plt.plot(n, sgd) plt.xscale("log") plt.yscale("log") I feel like the latter is far more common, hence why I suggested it. I did label this point as a nit, so if you feel strongly that the former is preferable and tell me so, that's fine. Thanks for clarifying! We agree adjusting the plot scale (as you suggest) is cleaner and will update our paper accordingly. - This doesn't actually tell me what MLE ID is or why PR is inappropriate here (certainly it's linear, but there's missing next step). However [...] based on this information, MLE ID seems like a more reasonable choice (since I lack knowledge that would favor or disfavor this choice). A non-linear measure of ID is generally preferred since it can capture non-linear structure that a linear measure of ID will miss. See [1,2] for a demonstration why real-world data has a low non-linear ID but high linear ID. As a simple example, consider a three-dimensional (unit) helix. Inspecting the singular values of the helix would suggest ID = 3 when in reality the helix has ID = 1. MLE is simply a standard choice of non-linear ID measure. - Ask: If I can make one last request of the authors, I think a "Future Directions" section would be a nice addition to help readers decide which next research problems are worth pursuing. Thank you for the suggestion. We would be happy to include this in a longer version of the paper. References: [1] Nonlinear Dimensionality Reduction by Locally Linear Embedding, https://www.science.org/doi/full/10.1126/science.290.5500.2323?casa_token=yfncfe5drp0AAAAA%3AeCd9RAaQNuvRbWlvXsOnGOlpX0BtboQ4U3k-eQYu0oztePn9TZOXPGoPktxvph4GfvA-bmIKMBSO [2] A Global Geometric Framework for Nonlinear Dimensionality Reduction, https://www.science.org/doi/full/10.1126/science.290.5500.2319?casa_token=Lo9xDQgvfxAAAAAA%3Ap2bdmZCL2lej6ljrkru8QxldbkbpEjDfSWW3w3ix0D8-kjSw-rI68bZWBb_PczxfiifAZT8vdqc2
Summary: This paper investigates the representational capabilities of transformers in regression tasks and their correlation with scaling laws. The authors present a novel analysis of the transformer's sample complexity on datasets with low intrinsic dimension $d$, or those residing on a $d$-dimensional manifold $\mathcal{M}$. Their findings yield a bound reminiscent of standard non-parametric regression results, with the key distinction that it depends on the intrinsic dimension rather than the actual data dimension. The analysis hinges on a specific transformer construction that performs approximation on subregions of $\mathcal{M}$. Notably, when the model is sufficiently wide and deep, this construction can approximate the target function to $\epsilon$ precision with a depth independent of $\epsilon$. This characteristic contrasts favorably with feedforward models using ReLU activation, which typically require $O(\epsilon^{-1})$ layers. Finally, the paper shows that if we estimate the intrinsic dimension of the data, the resulting estimate of the exponent can be informative of that predicted by the sample complexity from the theoretical result. Strengths: As far as I know, the theoretical result of the paper is novel and highly non-trivial. To a certain degree, it sheds light on the fundamental difference between a transformer and simpler models like the feed-forward model. I foresee these results to be useful for future research into the statistical properties of transformers. Weaknesses: The theoretical result is novel but I am not sure how insightful the bound of the construction is about what the model actually is doing and how the result is related to scaling law. There are several weaknesses in the paper which I will discuss below: Despite its novel theoretical contributions, this paper has several significant weaknesses that warrant discussion: 1. **Limited applicability to real-world scenarios**: The theoretical model presented in the paper assumes a regression or supervised learning task, which diverges significantly from how transformer models are typically used in practice. The input representation ($x$ as a $D$-dimensional vector with the sequence being a linear transformation of $x$) also appears overly simplistic and may not adequately capture the complexity of real-world data and tasks. This disconnect raises questions about how well the theoretical results translate to practical applications. 2. **Relevance of the proposed construction**: While the construction presented in the paper is mathematically interesting at a technical level, it's unclear whether it accurately represents the actual learning process of transformer models. The authors do not provide empirical evidence to support that their theoretical construction aligns with the internal representations or mechanisms developed by transformers during training. This lack of validation leaves a significant gap between theory and practice. I would be more than happy to be proven wrong with empirical evidence. 3. **Tenuous connection to scaling laws**: The paper's attempt to link its theoretical results to empirical scaling laws appears weak. The predicted line for $\alpha_D$ in Figure 2 shows a substantial divergence from actual scaling laws for most datasets, with OpenWebText being a notable exception. The claimed error margin of $\pm 0.2$ is exceptionally large on a log-log scale, potentially overstating the accuracy of the predictions. Moreover, the fit for $\alpha_N$ is even less convincing, further undermining the paper's claims about its relevance to scaling laws. These weaknesses suggest that while the theoretical work presented is novel, its practical implications and connections to real-world transformer behavior may be limited. The paper would benefit from stronger empirical validation and a more thorough exploration of how its theoretical insights relate to the actual functioning of transformer models in practice. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Why is the intrinsic dimension dependent on the model since they are supposed to be intrinsic to the data? How do I know the approximation of it is reasonable? For example, why is "sub-sample final-layer tokens from the embedded subsequence and shuffle together all the embedding" a reasonable thing to do? 2. How does the computational complexity of the proposed transformer construction compare to that of the feedforward model? If the transformer requires significantly more parameters to achieve the $\epsilon$-independent depth, does this truly represent an advantage, given that other parameters in the bound still depend on $\epsilon$? Can you provide a more comprehensive comparison that takes into account total computational resources, including the number of parameters and operations required? Without such a comparison, it's challenging to conclude whether this result definitively demonstrates the benefit of transformers over MLPs. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Some limitations I discussed above are already in the paper but many are not. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough response! We address some of your concerns below. **Strengths**: - As far as I know, the theoretical result of the paper is novel and highly non-trivial. To a certain degree, it sheds light on the fundamental difference between a transformer and simpler models like the feed-forward model We appreciate the reviewer recognizes our theoretical result as novel and higly non-trivial and that it shed light on the fundamental difference between a transformer and simpler models like the feed-forward network. We would like to emphasize that the **main contribution of this paper is theoretical**. This includes our statistical estimation theory in Theorem 1 which relies upon a novel and non-trivial covering number result for transformers (Lemma 2) and a novel approximation theory in Theorem 2 which is constructed from a number of key sub-lemmas (e.g. see Lemma 3 - Interaction Lemma) allowing for complex interaction of multiple tokens which is also of independent interest. We believe these results constitute a signficant step forward in the theoretical understanding of transformers **Weaknesses:** 1a. Limited applicability to real-world scenarios: The theoretical model presented in the paper assumes a regression or supervised learning task, which diverges significantly from how transformer models are typically used in practice... This disconnect raises questions about how well the theoretical results translate to practical applications. Despite being proved for regression, our result also sheds light on applications using next-token prediction (classification). Theoretically it is known that results on regression problems can be used to estalish results for classification problems [2, Theorem 1.1] since classification essentially depends on estimation of the probability function for classification. This allows our results on regression for transformers to be applied to next-token prediction which translates to practical applictaions. Additionally, we remark the regression objective itself is commonly used in many real-world scenarios e.g. training a transformer diffusion backbone [1] 1b. The input representation ($x$ as a - $D$ dimensional vector with the sequence being a linear transformation of $x$) also appears overly simplistic and may not adequately capture the complexity of real-world data and tasks We note that most common real-world tasks including language modeling, which embeds an input $x \in R^{vocab\_size}$ via a linear token embedding matrix, and ViT, which takes an image $x \in R^D$ and embeds it as token *patches* using a (linear) convolution, utilize this embedding strategy. 2. Relevance of the proposed construction: While the construction presented in the paper is mathematically interesting at a technical level, it's unclear whether it accurately represents the actual learning process of transformer models. The authors do not provide empirical evidence to support that their theoretical construction aligns with the internal representations or mechanisms developed by transformers during training. This lack of validation leaves a significant gap between theory and practice. I would be more than happy to be proven wrong with empirical evidence... This disconnect raises questions about how well the theoretical results translate to practical applications. In machine learning theory, to obtain an empirical minimizer $\hat{T}_n$, we start with an empirical risk function (in our case mean squared error (mse) on training data) and a hypothesis function class $\mathcal{T}$. The empirical risk minimizer (ERM) $\hat{T}_n$ is then the candidate in $\mathcal{T}$ which minimizes the empirical risk. Our goal is to understand how the generalization error of $\hat{T}_n$ depends on the number of samples and the intrinsic dimesion of the data. Note that our statistical learning theory does not make any claim about how $\hat{T}_n$ is achieved (i.e. the optimization process). The generalization error can be decomposed into two parts: bias (approximation error) and variance (stochastic error). The approximation error represents the expressivity of the transformer architecture space $\mathcal{T}$ e.g. how well can a function in $\mathcal{T}$ approximate the target $T$. Our approximation theory results are significant because **they allow us to quantitatively and precisely control the approximation error $\epsilon$ as a function of the transformer architecture size**. Importantly, this construction to control the approximation does not say anything about the learned parameters of the empirical risk minimizer or the parameters learned during the optimizaion process. Roughly speaking the discrepancy between the ERM and our construction is controlled by the stochastic error which we bound using the covering number of the transformer hypothesis space. We view this flexiblity as an advantage of our theory, as **we do not require the learned parametrs of the ERM to align with our construction in the approximation theory to control the ERM's generalization error**. We would further remark that this theory **does** translate to practical applications for understanding transformer scaling laws as evidenced by our experimental results on datasets similar to those used in [3]. 3. The predicted line in Figure 2 shows a substantial divergence from actual scaling laws for most datasets, with OpenWebText being a notable exception. The claimed error margin of $\pm 0.2$ is exceptionally large on a log-log scale, potentially overstating the accuracy of the predictions. Moreover, the fit for $\alpha_D$ is even less convincing. Our margin of error in Figure 2 when predicting $\alpha_D$ is $\pm 0.02$ (an order of magnitude lower than $\pm 0.2$). We believe this is fairly accurate. The prediction of $\alpha_N$ is similarly accurate (up to $\pm 0.03$). Further this is an improvement over other empirically driven works [4] which estimate scaling laws using intrinsic dimension. --- Rebuttal 2: Title: Rebuttal cont. Comment: **Questions:** 1. Why is the intrinsic dimension dependent on the model since they are supposed to be intrinsic to the data? How do I know the approximation of it is reasonable? For example, why is "sub-sample final-layer tokens from the embedded subsequence and shuffle together all the embedding" a reasonable thing to do? Approximation of the intrinsic dimension of data via model embeddings is a relatively established practice [4,5,6]. In the case of text we have no other choice as existing ID estimation algorithms are designed for continuous data. However, this inapplicability should not be taken to mean that textual data does not have low-dimensional structure. While using embeddings to measure this low-dimensional structure may introduce some noise, we expect a good model (such as an LLM) to have learned good enough representations to preserve the majority of this structure. We do ablations at the end of the paper to examine how sensitive this estimation is to various hyperparameters and find it is relatively stable. While embeddings from any layer could be chosen, we choose the final layer to stay consistent with prior work [4,5,6]. 2. How does the computational complexity of the proposed transformer construction compare to that of the feedforward model? If the transformer requires significantly more parameters to achieve the $\epsilon$-independent depth, does this truly represent an advantage, given that other parameters in the bound still depend on $\epsilon$? Can you provide a more comprehensive comparison that takes into account total computational resources, including the number of parameters and operations required? Without such a comparison, it's challenging to conclude whether this result definitively demonstrates the benefit of transformers over MLPs. Via our approximation theory, the number of parameters in a transformer is \begin{align*} O(L_T\cdot(m\cdot d_{embd}+L_{FFN}\cdot d_{embd})) &= O(\log(d)(\log(\epsilon^{-1})+d\epsilon^{-\frac{d}{\beta}})) \\ &= O(\log(d)d\epsilon^{-\frac{d}{\beta}}) \end{align*} The number of parameters in a FFN with the same target $\epsilon$ is \begin{align*} O(Depth \cdot Width) = O(\log(\epsilon^{-1}) \log(d)d\epsilon^{-\frac{d}{\beta}}) \end{align*} where the depth depends logarithmically on $\epsilon^{-1}$ [7]. So the transformer does indeed only require a factor of $\log(\epsilon^{-1})$ fewer parameters than the FFN [7]. Additionally, most of the parameters in our proposed transformer architecture class come from the attention heads. Computationally speaking, this is desirable as the attention heads can be more easily parallelized than sequential feed-forward layers. **References:** [1] Scalable Diffusion Models with Transformers, https://arxiv.org/abs/2212.09748 [2] A Distribution-Free Theory of Nonparametric Regression, https://link.springer.com/book/10.1007/b97848 [3] Scaling Laws for Neural Language Models, https://arxiv.org/abs/2001.08361 [4] Scaling Laws from the Data Manifold Dimension, https://jmlr.org/papers/v23/20-1111.html [5] The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models, https://arxiv.org/abs/2311.05928 [6] An Intrinsic Dimension Perspective of Transformers for Sequential Modeling, https://openreview.net/forum?id=0UzYWLzPBjA [7] Error bounds for approximations with deep ReLU networks, https://arxiv.org/abs/1610.01145 --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response. Some of my questions have been addressed, but many are still unresolved. > We note that most common real-world tasks including language modeling [...] Indeed we use embedding in practice too but in practice, we have a sequence of tokens/inputs and here a *single* input is being embedded into a sequence of latent tokens. In other words, the modeling assumption is that the whole sequence is a linear projection of a low-dimensional input. I don't see an immediate connection between this setting and real data. Could you explain this more? > Note that our statistical learning theory does not make any claim about how T is achieved [...] Respectfully, I disagree that a good explanation of generalization in deep learning can be independent of the optimization procedure, see [1] for a more detailed discussion. Nonetheless, I think the jury is still out on this question so I will not include this in my final decision. Regarding scaling law, I will elaborate on this in the next point. > Our margin of error in Figure 2 when predicting [...] I apologize for the typo in my review where I meant to type 0.02 instead of 0.2, but the point I believe still stands. Let's take the Bigcode-SQL plot in Figure 2. By eyeballing, I'd say the final prediction is roughly -0.1 (empirical fit) and -0.05 (predicted). This translates to roughly $\exp(-0.05) - \exp(-0.1) = 0.046$ difference in validation loss. This is a huge difference. For a comparison, let's look at llama2 [2]. In figure 5, going from 34B to 70B, the *perplexity* decreased by about less than 0.1, roughly, that's a 0.028 difference in validation loss but represents a huge change in the model performance. As such, I still think the prediction made by the proposed framework is pretty far away from reality. If I misinterpreted the plot in any way, please let me know. > Approximation of the intrinsic dimension of data via model embeddings is a relatively established practice [...] I understand that it is a standard practice but it doesn't make it a reasonable one. You are making a claim about an intrinsic property of the data-generating process yet the actual computation relies on an arbitrarily chosen model. Real data is most likely not generated by a transformer so your measure of intrinsic dimension cannot be correct, so it is not clear to me what I can take away from empirical validation that is based on an erroneous foundation. Note that this is independent of the theoretical framework. **Reference** [1] Fantastic Generalization Measures are Nowhere to be Found. Gastpar et al. [2] llama 2: Open Foundation and Fine-Tuned Chat Model. --- Reply to Comment 2.1.1: Title: Rebuttal cont. Comment: Thank you for taking the time to read our rebuttal and respond. We address some of your points below: - Indeed we use embedding in practice too but in practice, we have a sequence of tokens/inputs and here a single input is being embedded into a sequence of latent tokens. In other words, the modeling assumption is that the whole sequence is a linear projection of a low-dimensional input. I don't see an immediate connection between this setting and real data. Could you explain this more? The pre-embeded input $x \in \mathbb{R}^D$ can be viewed as a sequence components. This is similar to what vision transformers do: take a vector $x \in \mathbb{R}^D$ as input and embed it as a sequence of pixel tokens. This is the same as our formulation. We would like to clarify that we do **not** assume that the whole sequence is a linear projection of a low-dimensional input. Rather, we apply a linear transformation to **each component** of the input i.e. tokenwise (as is usually done with transformers). - Respectfully, I disagree that a good explanation of generalization in deep learning can be independent of the optimization procedure, see [1] for a more detailed discussion. Nonetheless, I think the jury is still out on this question so I will not include this in my final decision. Regarding scaling law, I will elaborate on this in the next point. We agree that an understanding of optimization is essential for a complete understanding of generalization. However, other components (e.g. approximation and statistics) are equally necessary and useful for explaining certain **components** of the generalization error: namely those coming from the bias and variance. This is exactly what our work (and more generally the entre field of non-parametric statistics) does. These error components **directly result** in the model and data scaling laws observed empirically when other sources of error are small. Further, our bound is mini-max optimal (up to logarithmic factors). Using these results, which importantly depend on the intrinsic dimension of the data, we are able to theoretically predict empirical scaling laws more accurately than any other (theoretical works) we are aware of. We would again like to note that this theory is the **main contribution** of our work and therefore should be considered in its evaluation. In regards to [1], we agree that it is important to consider the algorithm used in generalization error analysis. As far we as we know, the convergence landscape of a multi-layer transformer network is a widely open-question in of itself. However, it does not negate the importance of the theory we contribute. --- Rebuttal 3: Title: Rebuttal cont. Comment: - How do you incorporate sequential structure? A vector $x \in \mathbb{R}^D$ is very general. Each component (or sub-sequence of components) could be taken to be time-series measurements or a sequence of pixels (as is done in ViT). We then use a position encoding to embed this sequential structure. - It is ok that a theoretical framework is not accurate but the claim you are making in the paper is "predicting" and "explaining". These are tall orders and I don't think the empirical results sufficiently support your claims. By systematically off I just mean that they are already diverging somewhat significantly for SQL and Tiny story whereas the deviation in Openwebtext is much more reasonable. Scaling law is only interesting precisely because one can extrapolate from it and I don't think we can extrapolate from the theoretical prediction. If you strongly object to our characterization of theory as "predicting" scaling laws we would be willing to use a softer characterization (perhaps "estimating"?) in an updated version of the paper. --- Rebuttal Comment 3.1: Comment: > Each component (or sub-sequence of components) could be taken to be time-series measurements or a sequence of pixels [...] I see. That is an interesting interpretation. I did not get this picture from the paper. Apologies if I missed it but if I didn't miss it, I think it would be good to highlight it. > our characterization of theory as "predicting" scaling laws [...] I think "estimating" would be better to use or at least greatly highlight that the prediction made is better but still far from reality so intrinsic dimension is only a part of a puzzle. I guess the bottom line is it's clear that deep learning can only work if the data are "low-dimension" in nature so it's natural that the same would apply to transformer. It is not surprising that things are a straight line on a log-log curve since even PAC bounds would look like a straight line on the log-log curve so how well the slope matches is the only thing that matters for the prediction. In any case, I think a large portion of my questions and concerns have been addressed and I think the theoretical construction is quite interesting and worthy of follow-up so I am raising my score to 6. Thank you for the discussion. --- Reply to Comment 3.1.1: Title: Rebuttal cont. Comment: We thank the reviewer improving their score and for the helpful discussion!
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Geometry of naturalistic object representations in recurrent neural network models of working memory
Accept (poster)
Summary: The paper presents a study of Working Memory (WM) in Recurrent Neural Network (RNN) models. The main contribution is the study of the latent space dynamics of RNNs during WM-related tasks with respect to naturalistic stimuli, instead of abstract categorical stimuli that are commonly used. The paper analyses gated and non-gated RNNs, and characterises how the geometric properties of their latent spaces change during the trials, as new stimuli are integrated in the WM. Strengths: * The paper focuses on a more realistic setup than what usually done. It incorporates a “perceptual backbone” given by a CNN, and a “WM backbone” given by a RNN. * The paper clearly states the tasks (particularly via the plots) and the architectural choices * The results, being obtained in a more realistic setup that previous works, serve as a basis to formulate testable predictions on biological entities Weaknesses: * Very small dataset (4 categories, 2 identities per category, 4 possible locations). The N back window also seems limited (how does WM scale for N>3?) * Stimuli are more realistic, but there is space for improvement (i.e., realistic backgrounds instead of pitch black?) * The discussion of the three hypotheses in subsection 4.3 is unclear to this reviewer. If these weaknesses are addressed, the rating may be increased. Technical Quality: 2 Clarity: 2 Questions for Authors: * Does the training from ImageNet transfer to this dataset? * The normal vectors to the decision boundaries can be taken to be unit norm. In a high dimensional space, however, unit-normed vectors are, with high probability, orthogonal to each other. How does this impact the validity of the orthogonality index? * It is unclear to this reviewer the reasoning behind the usage of Procrustes analysis for studying the evolution of the decision boundaries. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The dataset and the task extent seem to be undersized with respect to what seems reasonably possible. Including larger N-back windows and more objects could be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive assessment and their suggestions. We address their specific questions and weaknesses below: * __Small dataset and limited N-back window__ We appreciate the reviewer highlighting the limited size of our stimuli and task sets. We would like to clarify that our stimuli dataset includes 4 categories, each with 2 identities. For each identity, there are approximately 20 view angles in the training set and 4 view angles in the validation set. Additionally, we include 2 additional identities for each category in the validation novel object dataset, resulting in a total of 192 stimuli. Despite this dataset size, the model is capable of generalizing to identities with novel angles and novel identities, suggesting that the model is indeed learning (and generalizing) the task. We argue that with larger datasets, the model should retain the same representation geometry as discovered with the given dataset. To directly address the reviewer's concerns, we ran several additional experiments. In particular, we trained models on a relatively larger dataset containing 8 categories and 4 identities per category. Due to rendering quality restrictions, we obtained models with comparable training and validation accuracy (~80%), suggesting that the models still captured the task dynamics. We performed a cross-condition decoding analysis (similar to Fig. 2a) and found consistent results, indicating shared representation for vanilla RNNs but not gated RNNs. The relatively low decoding accuracy is a direct consequence of the lower saturation accuracy of the model that will likely be ameliorated with additional training time. Additionally, we further trained models to perform 1-4 back tasks across all 3 features (adding the additional 4-back task setting). Models achieved comparable performance to our original 9-task model, and we were able to replicate the findings from Fig. 4f and g (see attached 1-page PDF Fig 2a). Together, these new results suggested that our findings generalize to broader settings including more diverse object categories and broader task settings. * __Suggestion to add realistic background__ We thank the reviewer for their suggestion and acknowledge that more natural experimental settings are desired and may be important for proper analysis of the neural computations underlying working memory. To address the reviewer’s concern, we ran an additional experiment using stimuli overlaid on synthetically generated natural textures. For this, we synthesized textures as stimulus background using the method from [Texture Synthesis](https://github.com/Devashi-Choudhary/Texture-Synthesis) and overlaid the 3D objects on them to create each frame (see attached 1-page PDF Fig 1c). We then trained models to perform n-back tasks with these new stimuli. We found that models achieved equally well performance levels with comparable speed (see attached 1-page PDF Fig 1d). In other words, we found that the choice of background does not affect our main results. We plan to include a more detailed analysis of this model in the revised submission (with the full set of analyses). * __Interpretation of subsection 4.3__ Please refer to the general reply for an updated interpretation. * __Does the training transfer from Imagenet to Shapenet?__ As mentioned on line 122-123, our vision backbone consisted of an Imagenet-trained ResNet50 model with fixed parameters. In all our experiments, we used the unit activations of this model as the input to the RNN models. All object features including category, identity, and location were highly decodable from these activations (category: 100.00%, identity: 99.57%, location: 100.00%; 2-fold cross-validation), confirming that the Imagenet trained ResNet50 model generalized to our stimulus set from ShapeNet. * __Orthogonalization index interpretation__ We agree with the reviewer that the high dimensionality of the latent space, specially the perceptual space, could make the comparison potentially invalid. To address the reviewer’s concern, we repeated this analysis by first applying PCA on the activations within perceptual and RNN spaces and equalized the number of dimensions in both. We then calculated the orthogonality measure on the dimensionality matched spaces (perceptual and RNN encoding). This analysis replicated the same findings from the original analysis done without dimensionality matching (see Fig. 1e in the attached 1-page PDF). Lastly, we want to emphasize that the point made in our orthogonality analysis experiments is the difference in orthogonality between the two spaces and we do not directly examine the absolute value of orthogonality, which as the reviewer suggested may be affected by the dimensionality of the latent space. * __Rationale behind using the Procrustes analysis__ We were interested in investigating whether the geometry of object representations is unchanged across different time steps (e.g. between encoding representation and 1st, 2nd, or 3rd memory representations). For this we needed to analyze the likeness of the representational geometry across time steps and for encoding and memory representations. The orthogonal Procrustes analysis is a statistical shape analysis which enables discovering simple rotation transformations that superimpose a set of vectors/points onto another set. We used this analysis to inquire whether each set of object feature decoders can be rotated to align with the set of same decoders at a different time or for a different stimulus. --- Rebuttal Comment 1.1: Comment: I thank the authors for thoroughly answering to my questions. In particular, the discussion of the datasets (their size of the stimuli, the N-back extent and the independence from the background) addresses my doubts entirely, and is convincing. I also appreciate the revised explanations for section 4.3, which were a major point of confusion. Finally, the discussion of the orthogonalisation index was very useful to help my understanding. As for the apparent contradiction in the results obtained when considering the revised definition, the authors’ hypothesis about PCA being unable to capture the high dimensional nature of the perceptual space seems reasonable. As a result of the thorough answers that were provided, I will increase my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback! We appreciate the time you took to review our manuscript and thank you once again for your support in our work!
Summary: The manuscript uses various RNN architectures as "model animals" to study the representation and processing of naturalistic stimulus during several working memory (K-back) tasks. Unlike prior work, the current study considers various contexts for the cues, making the representation by the RNNs inherently higher dimensional. They find that when RNNs are required to perform a task with multiple contexts, all RNNs kept track of irrelevant stimuli; yet only vanilla (not gated) RNNs used shared representations across tasks. Moreover, the authors find distinct features are stored in orthogonal subspaces, a finding consistent with the prior experimental literature. They conclude with a study of temporal stability in the face of incoming distractors. Strengths: - The memory task is complex and requires the RNNs to learn the context. - The use of naturalistic inputs is in direct contrast and a welcome addition to traditional studies where categorical inputs are used. Though please see [1]. - The observation "These observations challenge the generality of prior studies using vanilla RNNs and categorical inputs in which shared representations and dynamical motifs across related tasks were found [35, 9]." is rather intriguing and very timely, given that [9] just got published as a high-impact publication. Weaknesses: - Some relevant citations are missing, please see below. - The presentation, for me, was a bit dense. I was not fully able to follow Section 4.3, though I believe I understood the general takeaways. The authors should work significantly on the presentation. Technical Quality: 3 Clarity: 2 Questions for Authors: I believe the manuscript is novel, convincing, and impactful. Some methodological details are missing, but I am sure the authors can add them during the revisions. Please find my comments/questions below: - Could you please define what is meant by a latent space, preferably in mathematical terms? I think the authors refer to subspaces defined by the decoder weights, but this should be stated more clearly since latent subspace has different meanings in neuroscience. - I believe there are at least two seminal works that need to be cited. Please see [1] and [2]; and several citations therein. Specifically, the claim "This is in contrast to prior studies, which primarily studied WM tasks such as delayed-match-to-sample tasks, which don’t evaluate WM maintenance in the face of incoming (and distracting) information." is not correct as both of these works considered attractors. There are more works that should be cited. I believe the authors can find them through a short search, anchoring on these two papers. - "For this, we trained decoders to predict the value of each object property using the RNN unit activity during the Encoding Space and evaluated its generalization performance in consecutive Memory Spaces." I am wondering if this is a fair test, since the representation may settle into a steady-state after a few time steps. I am not exactly sure how to test this in a way that can account for transient response to die out, so I will leave it to the authors if you wish to address this or not. Overall, I ended with more exciting questions after reading the manuscript than I started with, which is a mark of a paper that I believe deserves a publication in NeurIPS. I would kindly ask the authors to perform *substantial* edits to improve the presentation so that it is easier to follow the details of Section 4.3. Citations: [1] Masse, N. Y., Yang, G. R., Song, H. F., Wang, X. J., & Freedman, D. J. (2019). Circuit mechanisms for the maintenance and manipulation of information in working memory. Nature neuroscience, 22(7), 1159-1167. [2] Finkelstein, A., Fontolan, L., Economo, M. N., Li, N., Romani, S., & Svoboda, K. (2021). Attractor dynamics gate cortical information flow during decision-making. Nature neuroscience, 24(6), 843-850. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful response. We also appreciate the acknowledgment of the potentially conflicting results our study highlights in relation to prior works. Below, we respond to individual questions the reviewer raised: * __Presentation styles__ We significantly revised the text in section 4.3 to improve its clarity as the reviewer suggested. Since updating the submission is not allowed during this period, we provide a brief summary of changes: 1) expanding the definition of hypotheses 1-3, adding references to prior related work to each hypothesis; 2) functional interpretation of each hypothesis (Hypothesis 1: sustained WM; Hypothesis 2: dynamic updating; Hypothesis 3: dynamic updating of stimulus-specific memory spaces); 2) adding more references to relevant figures containing definitions; 3) further methodological clarification (time steps and labels used for fitting and testing each decoder); 4) clarified the goal of several analyses (e.g., why did we use the Procrustes analysis, and how was it implemented); 5) added geometric interpretation for each result. We believe that these changes have substantially improved the readability and clarity of the results presented in section 4.3 and hope that will address the reviewer’s concern. * __Mathematical definition of latent space__ The latent space of the RNN refers to a D-dimensional space $\mathbb{R}^D$ where $D$ is the number of units in the RNN model. Consider a set of $N$ vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ in D-dimensional space $\mathbb{R}^D$, that represent $N$ decoders in that space. These vectors span a subspace of $\mathbb{R}^D$ that is the set of all possible linear combinations of these vectors. Mathematically, the subspace $S$ spanned by these vectors ( $\text{span} \{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N \}$) is defined as: $$S = \left\{ \sum_{i=1}^N \alpha_i \mathbf{v}_i \mid \alpha_i \in \mathbb{R} \right\}$$. We will revise the text to make this information and the relation between the decoder weights and the RNN subspace more explicit. * __Relevant citations__ We thank the reviewer for pointing us to these papers. We agree that these two references are highly relevant and we will add them to the updated manuscript and adjust the text accordingly. We also identify the following references relevant to our studies. * __Questions regarding cross-time generalization analysis__ We like to first clarify that in our experiments, each model executes exactly one step of computation per input observation and in that sense is different from many prior RNN models used in the literature that involve hundreds of steps within each trial to investigate attractor dynamics in high temporal resolution. For that reason, our experiments do not face the issue of transient responses between two consecutive inputs. In our experiments, we tested the generalization of the decoders up to five steps (the duration of the trial) after observing a stimulus. Thus, if any transient response fades during the trial, our analysis would be able to capture that phenomenon. ## References 1. Kozachkov, Leo, John Tauber, Mikael Lundqvist, Scott L. Brincat, Jean-Jacques Slotine, and Earl K. Miller. “Robust and Brain-like Working Memory through Short-Term Synaptic Plasticity.” PLOS Computational Biology 18, no. 12 (December 27, 2022): e1010776. 2. Curtis, Clayton E., and Thomas C. Sprague. “Persistent Activity During Working Memory From Front to Back.” Frontiers in Neural Circuits 15 (2021). 3. Mejías, Jorge F, and Xiao-Jing Wang. “Mechanisms of Distributed Working Memory in a Large-Scale Network of Macaque Neocortex.” Edited by Tatiana Pasternak and Tirin Moore. eLife 11 (February 24, 2022): e72136. 4. Murray, John D., Alberto Bernacchia, Nicholas A. Roy, Christos Constantinidis, Ranulfo Romo, and Xiao-Jing Wang. “Stable Population Coding for Working Memory Coexists with Heterogeneous Neural Dynamics in Prefrontal Cortex.” Proceedings of the National Academy of Sciences 114, no. 2 (January 10, 2017): 394–99. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. As noted in my response, I already believe this work to be a suitable contribution to NeurIPS. Both of my concerns were related to the writing, which the authors have committed to addressing. Yet, I have no way of checking the final version, so my current score is the highest I am willing to give due to the poor presentation of the initial submission, which is what I am asked to judge by the guidelines. I wish the authors all the best! --- Rebuttal 2: Title: updated sec 4.3 Comment: We understand the reviewer's continuing concern in terms of the clarity of the revised text given that updates to the manuscript are not allowed at this stage. To help the reviewer with their judgement, below we include all of the text from the revised section 4.3, hoping that the reviewer can appreciate the improvements made to the clarity of the new revised section. We highlighted the parts of the text with substantial changes. # Section 4.3 Having examined the encoding of objects in RNN latent space, we next investigated how RNN dynamics enable simultaneous encoding, maintenance, and retrieval of information. **Performing our N-back task suite required the RNN to keep track of prior objects’ properties as well as the incoming stimuli with minimal interference.** We reasoned that the RNN may implement one of three possible mechanisms to perform the n-back working memory task suite (Figure 4e). - H1: Slot-based memory subspaces **(Luck and Vogel 1997, Whittington et al. 2023)**. Where the RNN latent space is divided into separate subspaces that are indexed by time within the sequence. Each object is encoded into its corresponding subspace (i.e. slot) and is maintained there until retrieved. **By definition, the subspace assigned to each memory slot is distinct and “sustained” in time.** - H2: Relative chronological memory subspaces. Where the RNN latent space is divided into separate subspaces that each maintains object information according to their age (i.e. how long ago they were observed, **for example memory of previous observation or prior to last observation). Such a mechanism will require a dynamic process for updating the content of each memory space at each time step during the task.** - H3: Stimulus-specific relative chronological memory subspaces. Which is similar to the relative chronological memory hypothesis but with independent subspaces assigned to each object. **Each observation in the sequence is thus encoded into a distinct subspace and encoding of each stimulus is in turn distinctly transformed into associated memory representations.** To identify the hypothesis that best matches the computations performed by the RNN, we analyzed how the RNN **latent subspaces that encodes each observed object property (E(S,T ) ) is transformed across time into memory representations (M(S,T ) )(Fig. 1d).** We first tested whether object information is maintained in a temporally stable subspace **(i.e. sustained working memory representation)** which aligns with H1 prediction (i.e. E(S=i,T =1) =? M(S=i,T =k), k ∈ {2, 3, 4...}; Fig. 4a). For this, we trained decoders to predict the value of each object property using the RNN unit activity during the encoding **phase (i.e. Encoding Space)** and evaluated its generalization performance in consecutive **steps (i.e. Memory Spaces)**. We reasoned that if the object information is encoded in a subspace that is stable across time **(as in a memory slot),** the decoders’ generalization performance should be high and comparable to its performance during the encoding phase. Contrary to H1’s prediction, we found that the decoders do not generalize well (Figure 4b), suggesting that the object information is not **stably** encoded in **a temporally-fixed** RNN latent space. However, we observed that unlike STSF models, in STMF and MTMF models, the decoding accuracy at the step where the object information needs to be recalled for comparison with the most recent object (i.e. the executive step) is consistently higher than other time steps (Figure 4b and c). This suggests that the object representation is partially **realigned** with its original encoding representation at the step where a comparison is to be made **(Fig. 4b; slightly higher decoding accuracy at the executive step).** Next, we examined whether the object Encoding Space is shared between all incoming stimuli **within the sequence** (H2) or not (H3) (i.e. E(S=i,T =i) =? E(S=j,T =j)) **– the primary difference between H2 and H3 hypotheses.** We thus fitted classifiers to decode each object property using the hidden activity from **encoding phase (i.e. Encoding Space) of each stimulus within the sequence (i.e. decoding S=i from E(S=i,T =i)), testing it on the stimuli appearing at other time steps (i.e. decoding S=j from E(S=j,T =j)).** We performed this analysis for all object properties and for all model operating modes (i.e. each individual task). The validation and generalization accuracies were almost identical (Figure 4d), suggesting a stable encoding representation **(E(S=i,T =i)=E(S=j,T =j))** consistent with H2. **In other words, each object in the sequence was encoded within the same RNN latent subspace regardless of its order in the sequence.** *Continued in the next comment* --- Rebuttal Comment 2.1: Title: updated section 4.3 cont'd Comment: *Continued from previous comment* Having examined how the RNN latent space allows concurrent encoding, retention, and retrieval of information, we next investigated what transformations underlied the conversion of information from one subspace to another. Specifically, we inquired whether the transformation of feature subspaces across timesteps is stable w.r.t the same encoded stimulus (i.e. Ti = Ti+1; **see Fig. 4e)**. As detailed in Appendix B A.2, we adopted the **orthogonal** Procrustes analysis to obtain rotation matrix RS,T to characterize the transformation. **The orthogonal Procrustes analysis is a statistical shape analysis which enables discovering simple rotation transformations that superposition a set of vectors/points onto another. We used this analysis to inquire whether each set of object feature decoders can be rotated to align with the set of decoders for the same object properties but at a different time step.** Thus, the above test can be reformulated as *Equation (2)* Additionally, we also tested if the transformation is consistent across stimuli **within the sequence,** that is to say: *Equation (3)* Before delving into testing 2 and 3, we first checked whether the feature representation subspace transformation across timesteps are structured or not (Figure 4f, left). In other words, we evaluated whether Procrustes analysis can capture the rotation transformation of feature representation sub- spaces **in the first place. The reconstructed** decision hyperplanes **resulting from rotating the original decoders by the Procrustes transformation matrix R were highly accurate (Figure 4f-right) which indicated that a rotation operation was able to well capture the transformation performed by the RNN across time steps** (also see Appendix A.1. Figure A4). Further, we tested 2 and 3 by swapping the rotation matrix at RS=i,T=j by RS=i,T=j+1 or RS=i+k,T =j+k respectively and plotted the accuracy of reconstructed decision hyperplane for MTMF models in Figure 4g. **We reasoned that if these rotation operations were shared across time steps and stimuli, swapping them should not significantly affect the decoding accuracy.** Across all model architecture and model operating modes, we found that replacing RS=i+k,T =j+k **consistently yields** good accuracy, **whereas replacing** RS=i,T =j+1 **does not. These results** suggest that while the transformation remains consistent across **different** stimuli, it is not stable over time. --- Rebuttal 3: Comment: Dear Authors, I understand your desire and ambition to continue the discussion to increase my current score. However, likely unknowingly, you are utilizing a rebuttal strategy known as "wearing down the reviewers." Such a strategy might work for less experienced reviewers, but I simply used it to update my priors with more evidence for a bad behavior I suspected (more on this later). In my latest response, I made it crystal clear that 6 was the highest score I am comfortable giving, the best response to this would be to thank me for my time and wish me well as well. Since you want to continue, let me explain why I cannot go beyond 6. The initial submission feels incomplete, as if it is the version that you could get done until the deadline (also the assigned number of 20855 suggests it was one of the last submitted papers). To some extent, I assigned a solid chance for it being a placeholder for the extensive revisions you planned to perform after the deadline has passed. As several reviewers noted, methodology is incomplete, results are rushed, and several definitions missing. It is *inappropriate* to submit a half-finished work to a conference, in which reviewers are already overburdened. It is even more inappropriate to wear those reviewers down with extensive revisions. Thus, though this work was initially appropriate for a poster thanks to the novel science, the possibility of it being a placeholder prevented me from giving a score higher than 6 for a spotlight recommendation. Now, with your latest response that seems to add major revisions to your work, my priors are updated and I now more strongly suspect that this is indeed what happened. I do not believe this type of behavior should be rewarded, but **I will leave it to the AC to decide if such major revisions would be appropriate for the NeurIPS conference.** Due to the reasons described above and my increased suspicion, I will **decrease** my score down to 5. To be perfectly clear and candid, my score can only go down from here, not up. I hope you will take this as a learning experience for future and once again wish you all the best. --- Rebuttal Comment 3.1: Comment: Thank you for your candid feedback and for taking the time to review and discuss our work. We respect your decision and will take your feedback into account for our future work.
Summary: This paper examines the mechanisms of working memory in recurrent neural networks (RNNs) trained with naturalistic objects and N-back tasks, a classic task in cognitive and neuroscience. The aim is to study more complex, ecologically valid stimuli than the abstract categorical input that previous studies of working memory in RNNs have used. Additionally, the choice of N-back tasks provides a setting in which encoding of new memories, maintenance of prior ones in the face of distracting stimuli, and retrieval must all be balanced. For example, in a 2-back task, I must maintain a memory of the stimulus I saw 1 trial ago, and protect it from interference, as I encode the current stimulus and retrieve the stimulus from 2 trials ago to compare them. Here, the authors investigate what strategies RNNs use to dynamically maintain this object information. Moreover, they investigate RNNs' ability to flexibly switch between multiple tasks (1-, 2-, or 3-back tasks) as well as multiple object properties (identity, category and location). The findings can be summarized as follows: 1. Information about both task-relevant and task-irrelevant object properties (as measured through a linear classifier's decoding accuracy from the latent vector) is present in multi-task RNNs, but not in single-task RNNs, where only the single property that is relevant for the task is decodable. 2. Information about object properties is orthogonalized in the RNNs' latent space, meaning that the normal vectors of the hyperplanes separating different values of the features used in the task (identity, category and location) are close to orthogonal. 3. In multi-task settings, a vanilla RNN encodes object properties in a subspace that is shared across tasks, while gated RNNs (LSTM and GRU) keep separate subspaces for different tasks. 4. In N-back tasks that require to simultaneously encode, maintain, and retrieve information in working memory, RNNs implement a shared encoding space for all stimuli that appear for the first time, and they do not maintain consistent codes for different objects across time. 5. The transformations of feature subspaces across time can be well-approximated by a rotation matrix, and they are consistent across stimuli, but not stable over time. In other words, the rotation matrix that predicts the transformation of one stimulus from one timepoint to the next can predict the transformation of another stimulus at equivalent processing stages (e.g. between the initial encoding of the stimulus and the next timestep) but it cannot generalize to other processing stages. Overall, this paper is a thorough investigation of the mechanisms underlying working memory in RNNs in a complex, ecologically valid task setting. Strengths: - The paper studies a class of tasks (N-back tasks) that elegantly combines different requirements of working memory. It thus allows to study several simultaneous operations, providing a considerably more complete picture relative to previous work. It also uses realistic stimuli, providing a closer approximation to complex tasks used to study working memory in humans and animals. - The "related works" section provides a concise but informative overview of previous work, including the study of working memory in both biological and artificial systems. - The experiments are creative and sound, and provide a detailed picture of the computational solutions found by RNNs to solve N-back tasks. Weaknesses: - The paper contains several typos: for example, at line 106 "dynamical dynamics", lines 238-240 "We thus fit classifiers to decode each object property using the hidden activity from Encoding Space of the stimuli first appears at one timestep testing it on stimuli first appears other time steps.", line 244 "we next investigated how what transformations underlied the conversion of information", line 258: "architecutre". I would recommend the authors to go through the manuscript and correct these errors, as they make reading the paper unnecessarily harder. - Several details of the model and training are left out: for example, figure 1c shows a hidden layer between the RNN's latent space and the output. What is the size of this layer? What is the training loss? I imagine it is cross-entropy, but this information should be made explicit. Also, how was the identity of the task encoded? Was a dummy (constant, or random) task identity vector also provided to the single-task single-feature model? How many iterations was the model trained for? How long were the sequences in the training and validation sets? - The generalization to novel object views and novel object instances is briefly mentioned, but not developed any further. The authors should add an explanation of what can be inferred from these results, and how they related to other findings in the paper. Otherwise, I see little reason for including them. - I am not sure what conclusions can be drawn from the orthogonalization found in the RNNs' latent space. The authors seem to attribute it to the dynamical encoding of stimulus information implemented by the RNNs, but the comparison between the perceptual space (penultimate layer of ResNet) and the latent space of the RNN does not seem appropriate, especially since the RNN is trained, while the ResNet (if I understand correctly) is frozen. It seems natural then that any network (even an additional feedforward layer) explicitly trained for a task would represent stimuli along orthogonal dimensions that corresponds to the task-relevant features. It becomes hard then to interpret the finding of orthogonalized representations. - While the paper convincingly characterize the subspaces used by the RNNs to perform the N-back tasks, it does not directly test their causal relevance. This seems like it would be relatively straightforward: by shifting the network's representation along the direction of the normal vector to a given hyperplane, it should be possible to "push it" towards giving a specific answer. For example, once the hyperplane separating "car" from other categories has been computed, the RNN's hidden state can be shifted along the normal vector's direction to (falsely) make it output "match" if the object seen N timesteps ago was a car (in a category N-back task). A similar analysis would validate the notion that the subspaces found by the decoding analysis are actually used by the networks in solving the task. - While the paper contains several analyses looking at _cross_-decoding, in which a classifier trained on a timestep in the sequence is evaluated on another timestep, there is no plot showing the _within-timestep_ decoding accuracy. Such an analysis would be helpful for characterizing the dynamics of the information contained in the representation: for example, is the representation of an object simply maintained in memory, iteratively enhanced, or does it degrade in time? Looking at decoding accuracy over time, or also at the orthogonalization index over time, would help to adjudicate between these possibilities. The fact that representation transformations across timesteps are well approximated by rotation matrices suggests that stimulus discriminability is roughly constant (at least until recall, as shown in fig. 4f) but measuring this discriminability directly as decoding accuracy would help to interpret the finding of rotation-like dynamics in functional terms. As decoding accuracy seems to be in general close to 100%, perhaps a continuous measure such as distance from the classification hyperplane would be more sensitive in revealing these dynamics. - In general, I feel that the paper does not provide a cohesive story to bind the different results together and interpret them. I think the findings are really interesting, but it is not clear what the take-home message is in terms of what they teach us about working memory in humans, animals or artificial systems. For example, what does the relative chronological memory subspace proposed in H2 (and confirmed by the results) mean in functional terms? What is the functional usefulness of having an encoding-specific subspace shared across stimuli, for example? Or what does the finding of rotation-like dynamics tell us in functional terms? One possible interpretation that I take away from this is that _protection against interference_ is a major principle determining RNNs' dynamical codes. For example, having an encoding-specific subspace can help to maintain the current stimulus separate from the ones being held in memory. Is this also your interpretation of the results? The results, as I said, are very interesting by themselves, but providing your own (even if speculative, that is completely fine) interpretation of what principles underlie these results would substantially strengthen the paper. Relatedly, what is the link between the present results and the broader literature on working memory in neuro- and cognitive science? Do these results allow us to distinguish between theories that have been previously proposed, or do they require a new explanatory framework? I appreciated the "Related works" section, which gave a nice overview of research in this field, but I felt the Discussion was not as thorough in placing the current findings in context. Technical Quality: 3 Clarity: 2 Questions for Authors: No questions, aside from those mentioned in the "weaknesses" section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper does not explicitly address the limitations of the current approach. In particular, to what extent the current findings generalize to other working memory tasks, and to what extent they reflect working memory operations in biological systems, is not clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the importance of our study. We have carefully considered the reviewer's comments and have provided the following response: * __Typos.__ We sincerely thank the reviewer for carefully listing all of the typos in our original draft. We will fix them and more thoroughly review the revised manuscript to ensure clarity. * __Missing details about model training.__ Thank you for highlighting this. We have included additional methodological (e.g., training details) in our general response to all reviewers. In the revised version of our manuscript, we will include those details, in addition to other missing method details. * __Discussion on the relevance of generalization to novel object/view angle.__ The primary reason for our evaluation of novel view/angle generalization was to highlight that the mechanisms of working memory learned by the model were not strictly limited to the specific set of visual features used during training. Hence, we used the same stimuli but with novel views/angles. Nevertheless, we agree with the reviewer that it will be potentially interesting to further examine the representational dynamics of RNNs when considering alternative stimuli (e.g. novel objects). (Note, however, that novel objects would prevent us from evaluating on the n-back identity task, since this requires the same stimulus.) Given the limited time at our disposal for providing a response to the reviews we were unable to perform further analyses in this direction. However, we have added this suggestion as an important future direction in our discussion section. * __Interpretation of the orthogonalization results.__ Regarding the update of the orthogonalization analysis and interpretation, we kindly ask the reviewer to refer to our general reply for a detailed explanation due to the limited character count for individual rebuttals. * __Causal relevance test.__ We thank the reviewer for their suggestion. We performed a new analysis to address this point. Specifically, we considered trials from the 1-back location task where two objects had matching location property. We then manually adjusted the hidden state of the network in the direction normal to the decoder’s decision boundary. We then calculated the network’s generated output probabilities as a result of this hidden state adjustment. This analysis revealed that when adjusting the hidden state such that the decoder judges the object location to be different from its original value, the probability of match output is reduced while the probability of non-match and no-action outputs increase (Fig 1c in the attached 1-page PDF). This result thus supports the causal role of the particular encoding of object properties on the network’s generated behavioral output. * __Within time-step decoding analysis.__ Following the reviewer’s suggestion, we performed the decoding analyses for each time step separately. The results qualitatively mirrored those we had reported in our Procrustes analysis that showed near perfect decodability up the time step where the object information was necessary for performing the task. Please see Fig. 1b in the attached 1-page PDF. In addition, as suggested by the reviewer, we acknowledge that given the near-perfect decoding accuracies, using a continuous measure like distance from the decoding hyperplane could provide more nuanced insights into the representation dynamics. We will include the results of this analysis in the final revised version of the manuscript. * __Interpretation and relationship to previous literatures.__ Our goal from the present work was to characterize the way multidimensional object properties are represented in the RNN models of WM and to reveal the mechanisms employed by these models for concurrent encoding, retention, and recall of information according to task demands. We will revise the abstract and the main paper to more clearly reflect these aims and to highlight its importance and distinctness from prior work. Furthermore, our findings in section 4.3 provide evidence against one of the prominent models of working memory, the slot-based model of working memory (Luck and Vogel 1997, Whittington et al. 2023). Relatedly, our results further show that within the family of n-back tasks, the memory representation is not “sustained” as is commonly suggested by prior studies. We expect that a sustained or dynamic memory representation will largely depend on the task structure (e.g., n-back task vs. a working memory delay with a static fixation). Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279-281. Whittington, J. C., Dorrell, W., Behrens, T. E., Ganguli, S., & El-Gaby, M. (2023). On prefrontal working memory and hippocampal episodic memory: Unifying memories stored in weights and activity slots. bioRxiv, 2023-11. * __Limited discussion.__ We agree that the discussion on the limitations of our work was largely incomplete. To address the reviewer’s concern, we substantially revised the Discussion and Conclusion sections, and included a list of limitations and future directions. This includes additional experiments, including the reviewer’s suggestion on studying other working memory tasks. In particular, we are planning to perform an additional analysis using a simple delayed match-to-sample task to compare how working memory representations are maintained in the absence of incoming stimuli (i.e., the N-back task has incoming stimuli at every ‘delay’ period). If indeed the rotation-like dynamics are employed to protect against interference in the N-back task, we hypothesize that there may be the lack of rotation-like dynamics across time in a delayed match-to-sample (in which the delay is just a fixation). Though we could not perform this analysis during this rebuttal period, we are aiming to include this analysis in the final version of this paper. --- Rebuttal 2: Title: Response to authors Comment: I thank the authors for their rebuttal, and apologize for the delay in responding. Generally, their responses to my points were helpful in clarifying my doubts. I only have further comments on the following points: - **Orthogonalization results.** As I had written in my initial review, I wasn't sure how to interpret the orthogonalization finding in the first place, so I also don't know how to interpret the updated finding of *lower* orthogonalization in the RNN. It is indeed counterintuitive, as I would think that training with a task would lead to more orthogonalization along task-relevant dimensions. Either way, I don't think this finding is central to the paper's message, so I believe you could simply remove it. - **Causal relevance test.** I do appreciate that the authors have run this analysis. However, given the high level of variability in the network's response across trials, I do not think any conclusion can be made from this analysis about the causal relevance of the network's subspaces. The significance of the trends shown in Fig. 1c of the attachment could be simply checked using a measure of correlation between percentages of shift and probability of each given response. I doubt that any of the correlations shown in Fig. 1c will be significant. One possible reason for the high variability might be that the authors subsampled "match" trials and tried to push them towards giving a "mismatch" response. This means that the network's state was pushed towards the "all" direction of the "one vs. all" classifier, which probably does not correspond to any particular category, hence the high variability. It is possible that trying to push a "mismatch" response towards a "match" response (i.e. pushing the network towards a specific category) would lead to more reliable results. However, my intuition might be wrong and I understand that the authors don't have the time to run this analysis. If the authors wish to include this additional analysis as it is, then, they should describe it as inconclusive rather than try to infer any causal effect from it. - **Within-timestep decoding.** The finding of constant decoding accuracy for task-relevant features (and slightly degrading accuracy for task-irrelevant ones) is quite straightforward, and I think it would make a nice addition to the paper. In particular, it provides a nice contrast to findings in RNNs that had to categorize objects without any memory requirements, such as [1, 2], in which decoding accuracy was found to be increasing across time. This suggests a potential distinction between dynamic encoding strategies in perception and working memory. One thing that is not fully clear to me: does Fig. 1b in the attachment indicate that the accuracy was still close to 100% also several timesteps after the time of the response? If so, this would be interesting to highlight. To make this figure clearer, I would suggest adding vertical lines corresponding to the response times in the different conditions, so that it is clear when the accuracy is being measured before or after the response. - **Relation to other models.** I thank the authors for their clarification. While I'm familiar with the general idea of slot-based models, I haven't had the time to go through the more explicit model proposed by Whittington et al. (2023). However, from a quick skim of that paper it seems like they do address temporally-structured tasks as well (beyond just delayed response)? Particularly, in their Figure 5, they describe a model that shows some similarity to what the authors propose here, whereby stimuli are encoded in a given slot and then rotated, such that the initial slot can be occupied by the new stimulus to be encoded. I would appreciate it if the authors could clarify the distinction between their model and the one in Whittington et al. (2023). In general, I agree about the importance of across-tasks comparisons, and I am glad that the authors plan to add a comparison with a delayed match-to-sample task. I think that will be a nice addition to the paper. To conclude, as the authors have clarified several points but not made major changes to the paper, I plan to keep my score. Again, apologies for not leaving enough time for more discussion with the authors, but I believe any further revisions would have been minor anyway. [1] Spoerer, C. J., Kietzmann, T. C., Mehrer, J., Charest, I., & Kriegeskorte, N. (2020). Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision. PLoS computational biology, 16(10), e1008215. [2] Thorat, S., Aldegheri, G., & Kietzmann, T. C. (2021). Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization. arXiv preprint arXiv:2111.07898. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for the detailed feedback. Below are our responses to some of the questions: 1. **Causal relevance test**. Thank you for pointing out possible roots of the observed high variability in the results of this additional experiment. We will perform the updated version of the experiment using 1 vs. 1 decoders and pending results will include them into the revised manuscript. 2. > does Fig. 1b in the attachment indicate that the accuracy was still close to 100% also several timesteps after the time of the response? We would like to clarify that the analysis presented in Fig. 1b of the attached PDF pertains to decoding features of the present stimuli, rather than features of stimuli from memory. As a result, it is expected that the information is decodable irrespective of the timestep. We realized that the reviewer may have asked about memory decoding where the object properties of past stimuli are decoded from each time step. For results specifically related to memory decoding, please refer to Fig. 4f in the manuscript, where the solid line represents the decoding of task-relevant features from memory. The dotted lines that represent the decoding accuracy after the rotation transformation almost exactly overlaps with the solid lines. 3. **Relation to Whittington et al. 2023.** We thank the reviewer for further inquiring about the differences between our findings and those of the model in Whittington et al.. The model proposed in Whittington et al. 2023 assumes that the RNN hidden space consists of a fixed number of “activity slots” with fixed, slot dimensionality and read out weight (reading out of a single slot). Furthermore, the dynamics of the hidden state are limited to identity or zero matrices that can be used to create an exact copy of the contents from one slot to another. While there are conceptual similarities between this theory and our findings from task-optimized RNNs, our findings highlight that 1) strict assumptions on the dimensionality of the subspaces, readout weights, and the dynamics need not be fixed and suitable values could emerge through learning on tasks. 2) Moreover, the proposed mechanism in Whittington et al. is insufficient for performing tasks that require comparison of multiple stimuli (e.g. n-back task) as in such situations the network needs to produce an output by comparing its memory content with perceptual inputs but the proposed model consistently reads out of a single memory slot. We will extend the discussion about the differences and similarities between that work and ours in the revised version of the manuscript.
Summary: This work involves training RNNs on the n-back task with naturalistic images fed into the RNN through a CNN front-end. Through a large number of permutations comprising different task requirements, different combinations of tasks and different architectures, the authors study how task relevant and task irrelevant information is processed in the RNN back-end. Strengths: The analysis is principled and rigorous. A large number of model-task permutations have been trained, which all support the same conclusions. Weaknesses: This work is critically lacking the control experiment of simply training an RNN without natural images or CNNs. The aim of this work is to understand how RNNs process natural images in working memory, but I feel that these results (such as representational orthogonalization) can be reproduced if you simply use randomly generated vectors as inputs. I foresee that this would be a logical doubt from many readers, including myself. One reason to think this way is that the output of the final layer of a pretrained CNN is essentially uninterpretable (and hence no different from being a random vector of numbers) beyond how it can be clustered according to its softmax boundaries. This is especially so when CNNs have location invariance due to their shared convolution across space, and produce very similar outputs at the final layer (from which the RNN is connected to) for images from the same category. Technical Quality: 2 Clarity: 1 Questions for Authors: There seems to be a lack of details in the methods. For example, are the models only trained/tested on 6 time steps? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: In the checklist, the authors claim that "We point out the limitations of our model and analysis methods in the discussion." But I really cannot find any sentence in Section 5 that describes any limitation of this work at all. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer’s recognition of the strengths of our manuscript, particularly noting the principled and rigorous nature of the approach. We are glad that the extensive training across various model-task permutations robustly supported our conclusions. In response to the weaknesses and limitations the reviewer highlighted, we carefully provide detailed responses addressing each point: * __Lack of control experiments and whether naturalistic stimuli can be replaced by random vectors.__ We used a pretrained CNN (on ImageNet), which was trained to categorize objects. This suggests that the output layer of the CNN is, overall, lower-dimensional than its inputs (given that the number of objects classified is fewer than the number of total samples in ImageNet). Nevertheless, to verify this intuition on images sampled from our dataset, we measured the dimensionality of the inputs relative to the dimensionality of the CNN output layer, and the dimensionality of the RNN space. As the Reviewer suggested, we also used randomly sampled vectors (the same number as the number of images in our dataset), and measured the dimensionality of the stimulus vectors in the input space, CNN output layer, and RNN layer. We then compared the dimensionality of our stimulus set vs. randomly sampled vectors for each of these embeddings. We found that the dimensionality of randomly sampled vectors is significantly higher than the dimensionality of the stimuli used in our experiments, suggesting that our results are unique to our specific chosen stimuli and task, and cannot be replaced by using randomly sampled vectors (see fig 1a in the attached 1-page PDF). **Importantly, to directly address the Reviewer’s comment, what this control analysis shows is that the output of the final layer of the pretrained CNN is different from being a random vector of numbers, since the dimensionality of random vectors is higher-dimensional than input vectors from our stimulus set.** Since dimensionality measures the orthogonality between pairs of vectors (stimuli), our analysis shows that the degree of orthogonalization differs (and cannot be directly reproduced) with random vectors. * __Lack of details in the Methods.__ We thank the reviewer for bringing this point to our attention. All models were trained and tested on trials of the same length (6 time steps). Moreover, we have included additional methodological (e.g., training details) in our general response to all reviewers. In the revised version of our manuscript, we include those details, in addition to other missing method details. * __Limitations in the discussion.__ Thank you for bringing this issue to our attention. We have substantially revised the Discussion and Conclusion sections of the paper, expanding on the current study’s limitations and likely future directions. Since updating the submission is not allowed during this period, we provide a brief summary as follows: 1) our findings are limited to the N-back task structure and it’s unclear whether similar computational strategies will emerge in other tasks requiring working memory. However, we provide specific predictions and further experiments that may lead to the discovery of more general principles used by RNNs for different WM tasks (e.g., in a simple delayed-match-to-sample task with only a fixation for the delay, we hypothesize that there will not be a rotation of representations through time); 2) we haven’t considered novel objects in our analyses. We reported reduced performance for novel objects but did not examine the representational geometry with those stimuli or the possible reasons for the diminished performance. We welcome any further suggestions from the reviewer; 3) Our findings are limited to the commonly-used RNN architectures, and we cannot make definitive claims about the representational geometry resulting from other neural network architectural choices. Additionally, we have not examined how scaling the network affects the results; 4) Our results suggest a possible hypothesis about working memory, but further investigation with real neural data is needed to validate this. It would also be valuable to explore the relationship between architectural choices and experimental outcomes. --- Rebuttal Comment 1.1: Title: Early response Comment: I thank the authors for the response, and I will reply again with my final decision after carefully reviewing the results. I just have a simple question for the authors: The authors stated in another reply to another reviewer that the total stimuli count is 192. Can I ask how the dimensionality of random vectors, in Figure 1A of the rebuttal PDF, is more than 4000? --- Reply to Comment 1.1.1: Comment: We initialize the random vector input to match the dimensions of our naturalistic input, 224×224×3, drawn from a standard Gaussian distribution. For the input frames, we place one of the 192 possible stimuli in one of the four location quadrants, introducing slight variations from frame to frame to add randomness. Specifically, we add a Gaussian random variable to the sampled central location coordinates within the selected quadrant when placing the stimulus on the frame. As a result, each frame with the same stimulus in the same quadrant is not identical in pixel space. To address input dimensionality, we perform PCA on both the random vector inputs and naturalistic inputs, selecting the first k principal components that account for at least 90% of the variance. This analysis is conducted using 5,000 task trials, which explains why the dimensionality of the random vector inputs exceeds 4,000. --- Rebuttal 2: Title: Controls Comment: It sounds like there are 192 $\times$ 4 different stimuli instead of 192? I will just arbitrarily assume its 192 in my argument below but replace it with the correct number if I'm wrong. I would normally consider what the authors did as a strawman argument. Noise aside, 192 stimuli identities would have a maximum of 192 dimensions. Comparing them to 5000 randomly generated vectors makes absolutely no sense at all. But I would like to give the authors the benefit of the doubt and clarify myself: Let $x$ be the final layer of the CNN. $W_{xh} x$ is the input to the RNN, where $W_{xh}$ is the set of weights that transforms CNN outputs into RNN inputs. I am saying if you replace $x$ with 192 random vectors, each representing one identity of your stimuli, and $W_{xh}$ is trainable, it is possible to get the same results. Nothing before $W_{xh} x$ matters. The title of the paper contains the phrase "naturalistic object representations in recurrent neural network models", and I am challenging the very fact that the input to the RNN is not naturalistic. In fact, prior literature has shown that WM orthogonalization can occur with random inputs [1]. It is extremely important that the authors distinguish their model from a model that does not have a CNN, yet capable of producing the same phenomenon. [1] https://doi.org/10.1371/journal.pcbi.1011555 --- Rebuttal Comment 2.1: Comment: **Regarding the total number of stimuli**: We rendered 192 stimuli from the 3D dataset ShapeNet. These stimuli were then positioned in one of the four quadrants on the frame to generate N-back task frames, as location is a key feature of interest. **192 stimuli identities and dimensionality**: We believe the reviewer may be referring to categorical encoding (binary encoding/one-hot encoding etc) when suggesting that 192 stimuli identities would have 192 dimensions. However, our definition of dimensionality pertains to the neural representation space. As illustrated in Figure 1a of the attached PDF, the dimensionality of the CNN embeddings is 92, not 192, as the total number of stimuli identities might suggest. This reduction in dimensionality reflects the CNN's invariance to viewing angles, mixed selectivity, and the hierarchical nature of features like category and identity. Manually constructing an abstract code that encompasses all these aspects is challenging. Additionally, we use CNN embeddings rather than hard-coded stimulus identities as input to the RNN because CNNs pretrained on ImageNet exhibit significant representational similarity to the human visual system (Yamins et al., 2014 etc). Furthermore, we would like to emphasize that the models we trained can generalize to novel view angles within the same identity and to novel identities within the same categories. This suggests that the model is truly learning the task rather than overfitting the data. We find it unlikely that this level of generalization would emerge from a model trained with an abstract input space. Moreover, we are interested in the relative orthogonalization level within the perceptual and encoding spaces. As shown in the updated Figure 2e of the attached PDF, RNN representations are orthogonalized, with the orthogonalization index for the encoding space nearing 1. However, a critical point we wish to address is whether RNNs orthogonalize task-relevant feature spaces more due to task requirements compared to CNN space. If we were to use hardcoded stimulus identity encoding as RNN input, such a comparison analysis would not be feasible. Unlike Piwek et al. (2023), who utilized feature-sensitive neurons with circular normal tuning functions to encode categories or identities, our feature spaces are not one-dimensional continuous variables. Our claim differs from Piwek et al. (2023), which demonstrates an orthogonal-to-parallel transformation of the cued versus un-cued color planes from the pre-cue to retro-cue period. In contrast, our feature spaces remain relatively orthogonal in both the perceptual and encoding spaces. The input construction in Piwek et al. (2023) assumes an orthogonal relationship between the concatenated encodings of two locations and the colors (using 17 feature-sensitive units for each possible location, following a circular normal tuning function). Therefore, the orthogonal representation at the beginning of each trial is expected. In our case, we did not assume that CNN embeddings are orthogonal, and any representational geometry we observe is an emergent property. Finally, our use of naturalistic encoding is not intended to suggest that results obtained using abstract encoding are incorrect. Rather, it aims to remove input hypotheses that are manually introduced into model design. While Piwek et al. (2023) could potentially achieve similar representational dynamics using a model that takes RGB encoding of color conjugated with location as input, our naturalistic approach allows us to explore questions beyond the constraints of input encoding design. We agree with the reviewer that some of the same findings may be replicated if non-naturalistic inputs (such as random vectors) were to be used as input to the RNN but we respectfully disagree with the reviewer that the input to the RNN in our experiments is not naturalistic. In our experiments, the RNN receives a vector representation of a naturalistic stimulus on every step which is processed by a biologically-plausible vision neural network. We remain open to further comments by the reviewer. We hope this response addresses the reviewer’s concerns, and we look forward to further feedback! Thanks! [1] Yamins, Daniel LK, et al. "Performance-optimized hierarchical models predict neural responses in higher visual cortex." Proceedings of the national academy of sciences 111.23 (2014): 8619-8624.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful and positive feedback. We are encouraged by the reviewers’ acknowledgement of the rigorousness and thoroughness of our work (*reviewer VPbf and YsNL*), creativity and soundness of our experiments (*reviewer YsNL*), and novelty and impactfulness of our results (*reviewer YsNL and 8PnP*), while identifying its questions and limitations. Here we point out several common questions raised in the reviews. (We respond to all inquiries in individual responses.) * **Updated results in Fig. 3b.** We would like to first acknowledge an error in producing the plots in Fig 3b. The measure we used in the original manuscript was $O = \| \text{abs}(\cos(w_i, w_j)) - I\|_F$ with $I$ as the identity matrix. This measure however produced an inverted measure of orthogonality as we should have used $O = \|\text{abs}(\cos(w_i, w_j)) - \mathbf{1}\|_F$ and only considered the upper (or lower) triangle (thus the higher the orthogonalization index, the more orthogonalized the representation). When correcting the measure, our results and interpretation of them were reversed (see fig 2e in the attached 1-page PDF). Both space are highly orthogonalized, however, even when equalizing the dimensionality of the perceptual and RNN space with PCA, the results demonstrated significant reduction in orthogonalization of the latent space in the RNN. We revised our description of the method and results related to this section accordingly. Our current hypothesis for this counter-intuitive result is that the high-dimensional perceptual space may not have been fully captured by PCA. Specifically, PCA might not have adequately represented the relative dimensional differences between the two spaces under comparison. We kindly ask for the reviewers' opinions and insights regarding this updated result. * **Missing details in Methods.** We appreciate the reviewers’ suggestions and for pointing out the lack of details in some places. There is only one feedforward layer with a softmax activation that projects the hidden state to one of the three possible output actions. We understand that the visualization may have caused confusion, and we will update it accordingly in the revised manuscript. We used cross-entropy loss for training, and the identity of the task was encoded in a 6-digit binary format: the first 3 digits represented the one-hot encoding of the feature (e.g., stimulus location, category, or identity), and the second 3 digits represented the one-hot encoding of the n-back choice of n. For the single-task single-feature model, we used the same task identity vector as in the multi-task models. The multi-task multi-feature model typically takes around 4-8k iterations with a batch size of 256, and we cut off training at 14k iterations. The sequence length is fixed at 6 for both the training and validation sets. In the revised version of the paper, we will include all necessary training details in the supplementary materials. In addition, we will make the code and trained models publicly available. * **Interpretation of subsection 4.3**. To address the reviewer’s comment, we revised the portion of the text that details the three hypotheses. Since updating the manuscript is not allowed during this period, we present the revised text below: **H1**: Slot-based memory subspaces (Luck and Vogel 1997, Whittington et al. 2023). Where the RNN latent space is divided into separate subspaces that are indexed by time within the sequence. Each object is encoded into its corresponding subspace (i.e. slot) and is maintained there until retrieved. By definition, the subspace assigned to each memory slot is distinct and “sustained” in time. **H2**: Relative chronological memory subspaces. Where the RNN latent space is divided into separate subspaces that each maintains object information according to their age (i.e. how long ago they were observed, for example memory of observation from one, two or three time steps ago). Such a mechanism will require a dynamic process for updating the content of each memory space at each time step during the task. **H3**: Stimulus-specific relative chronological memory subspaces. Which is similar to the relative chronological memory hypothesis but with independent subspaces assigned to each object. Each observation in the sequence is encoded into a distinct subspace and encoding of each stimulus is in turn distinctly transformed into associated memory representations. Pdf: /pdf/c0e99f7e3977d8f70a1e7addaf5ef2b5b78c7495.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Towards Global Optimal Visual In-Context Learning Prompt Selection
Accept (poster)
Summary: The paper proposes a framework called Partial2Global for ranking in-context examples in visual language models. This method uses a transformer-based list-wise ranker and a consistency aware ranking aggregator to approximate the global optimal prompt selection. The framework is validated through experiments on tasks such as foreground segmentation, single object detection, and image colorization, demonstrating its effectiveness over existing methods. Strengths: 1:In the analysis, a relatively comprehensive examination was conducted on the sensitivity of different components and various backbones to the order of ICL selection, demonstrating the importance of reasonable prompt ranking for ICL in segmentation tasks. 2:The writing and expression are relatively good. 3:The reasons why sample ranking in ICL may affect the final outcome are analyzed in this paper. Weaknesses: 1.The workload is relatively low. I think this work should also explore the impact of models with a larger number of shots on ICL. Additionally, it should investigate the impact of data quality in the alternative dataset, and include the importance of different parts of the loss in the ranking model during ablation experiments in formula (1). 2.The conclusions presented in some charts are not intuitive. For example, in Figure 4, can the similarity of our method and the visual similarity of VPR be displayed in the same chart? 3.Compared to previous methods, although it achieves a much better level, the improvement is relatively small. Technical Quality: 2 Clarity: 2 Questions for Authors: See weakness part Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The workload is relatively low. I think this work should also explore the impact of models with a larger number of shots on ICL. Additionally, it should investigate the impact of data quality in the alternative dataset, and include the importance of different parts of the loss in the ranking model during ablation experiments in formula (1).** A1: (1) Larger number shots:Thanks, yes, we had tried many number of shots in our experiments, which are solid enough to support our contributions in generally. And we further add more number of shots in rebuttal. Particularly, our choice of number of shot can be referred to previous methods such as MAE-VQGAN and VPR. As for multi-shot experiment, we in this paper present two trials of multi-shot experiments. First, in Tab.1, we adopt a voting strategy to merge predictions of different permutations of each alternative-query pair. Second, in Tab.4, we present multi-alternative fusion. Both results indicate the efficacy of our method. To further verify the role of multi-alternative fusion, we extend experiments in Tab.4 to top-7 and top-10 fusion. The results are shown in the following table. We can find that the results are consistent with the original conclusion, where the performance is nearly saturated with 5 alternatives. This again supports that the experiments in our paper well-supports our claim. | \#shot | fold0 | fold1 | fold2 | fold3 | Avg | | :---- | :---- | :---- | :---- | :---- | :---- | | top2 fusion | 39.08 | 42.61 | 38.17 | 36.67 | 39.13 | | top3 fusion | 40.07 | 42.48 | 38.77 | 37.61 | 39.73 | | top5 fusion | 40.12 | 42.59 | 39.09 | 37.28 | 39.77 | | top7 fusion | 40.14 | 42.36 | 39.01 | 37.32 | 39.71 | | top10 fusion | 40.14 | 42.43 | 39.09 | 36.66 | 39.58 | (2) Impact of data quality. Our choice of alternative set is the same as VPR, i.e. selecting 50 most visually similar samples based on CLIP features. Typically, this choice is good enough as shown in previous works like VPR. Here, to further investigate the impact of data quality, we thus tried the suggestion to test our method on all folds of segmentation task with 25 or 100 alternatives for each query. The results are shown in the following table. One can find that our method is also very robust against different sizes of alternative set. alternative set size | set size | fold0 | fold1 | fold2 | fold3 | | :---- | :---- | :---- | :---- | :---- | | 25 | 38.48 | 41.82 | 37.14 | 35.60 | | 50(main) | 38.81 | 41.54 | 37.25 | 36.01 | | 100 | 38.81 | 41.80 | 37.90 | 36.00 | (3) Different parts of the loss. Thanks. We have conducted an additional ablation study to compare models trained for colorization without each loss term. The results are shown in the following table. In general all three loss terms contribute the final performance, with L\_sort plays the most important role. Loss terms | colorization | MSE | | :---- | :---- | | w/o L\_sort | 0.601 | | w/o L\_margin | 0.595 | | w/o L\_mse | 0.585 | | full loss | 0.583 | **Q2: The conclusions presented in some charts are not intuitive. For example, in Figure 4, can the similarity of our method and the visual similarity of VPR be displayed in the same chart?** A2: Thank you, but as analyzed in L310-L314, visualization in Fig.4(a,b) is used to show that visual similarity can only act as basic heuristic for in-context example selection. Since we do not intend to compare results in these two figures, presenting results for two methods respectively is enough to support our claim. We will clarify this in our paper. **Q3: Compared to previous methods, although it achieves a much better level, the improvement is relatively small.** A3: For better understanding we summarize the improvement of both our method and SupPR against UnsupPR in the following table. Concretely, for segmentation and detection improvement of our method against SupPR is comparable to that of SupPR against UnsupPR. For colorization SupPR cannot improve based on UnsupPR while our method improves it by 0.05 MSE. These results are significant enough to validate the effectiveness of our proposed method. | Model | Seg-fold0 | Seg-fold1 | Seg-fold2 | Seg-fold3 | Det. | Color. | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | SupPR | \+2.33 | \+2.51 | \+1.99 | \+1.16 | \+1.38 | \-0.00 | | Ours | \+4.06 | \+5.62 | \+4.84 | \+4.85 | \+3.82 | \-0.05 | --- Rebuttal Comment 1.1: Comment: Most of my concerns are solved, Thanks for the response.
Summary: This paper addresses the fundamental problem in Visual In-Context Learning (VICL), which is how to select the best prompt. Specifically, it focuses on the ranking problem. The paper proposes an algorithm called Partial2Global, which conducts an in-context example selection framework to find the global optimal prompt. The effectiveness of the proposed algorithm is demonstrated through several tasks, such as segmentation, object detection, and colorization. Strengths: (1) The algorithm is simple yet intuitive for finding the optimal global ranking using a list-wise ranker. It adopts a consistency-aware ranking aggregator. (2) The performance is evaluated across various tasks, showing improved results in those tasks. Weaknesses: The method seems to incur additional costs due to the ranking process. Is there any further analysis on this? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness part. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The method seems to incur additional costs due to the ranking process. Is there any further analysis on this?** A1: Thank you. Please refer to the general response. Generally, we can summarize that the usage of list-wise ranker and the ranking aggregation will inevitably introduce additional computational cost, while the increased complexity during inference is affordable in our experiments.
Summary: This paper addresses Visual In-Context Learning (VICL), which uses examples to help models learn new tasks. The main challenge is choosing the best prompt to improve learning and prediction. The authors introduce Partial2Global, a new method to find the best examples for each query. This method uses a transformer-based ranker to compare alternatives and a ranking aggregator for consistency. Experiments show that Partial2Global outperforms other methods in tasks like segmentation, object detection, and colorization, setting new performance standards. Strengths: 1.The proposed method, Partial2Global, introduces a novel framework for in-context example selection that addresses the fundamental challenge in Visual In-Context Learning (VICL) with a unique transformer-based list-wise ranker and a consistency-aware ranking aggregator. 2.The authors have conducted extensive experiments across multiple tasks, including foreground segmentation, single object detection, and image colorization. This robust evaluation provides strong evidence of the method's effectiveness and generalizability. 3. The paper is well-written and easy to follow, with complex ideas clearly explained. In particular, Figure 2 offers a clear and detailed comparison of existing methods, helping to contextualize the contributions of the proposed approach. Weaknesses: 1.The paper does not include ablation studies on the critical parameters, such as δ and τ. These studies are essential for understanding how variations in these parameters affect the model's performance. Without this information, it is difficult to assess the robustness and sensitivity of the model to changes in these parameters. 2.The paper introduces heuristic methods for ranking in-context examples using a list-wise ranker and a consistency-aware ranking aggregator. However, there is no analysis provided on the computational complexity and time cost associated with these methods. Given the potential for high computational demands, especially with large datasets, it is important to understand the scalability and practicality of these approaches. 3.Figure 4 of the paper presents results related to visual similarity, but it lacks a clear explanation of how this similarity is computed. The method or formula used to calculate visual similarity is not provided, which is crucial for the reproducibility of the results and for understanding the underlying methodology. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address the limitations and potential negative societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The paper does not include ablation studies on the critical parameters, such as δ and τ (NDCG). These studies are essential for understanding how variations in these parameters affect the model's performance. Without this information, it is difficult to assess the robustness and sensitivity of the model to changes in these parameters.** A1: In general, our method is robust against changes of hyper-parameters. To show this, we try the suggestions to conduct ablation studies on these two hyper-parameters, whose results are presented in the following table. For delta which denotes the margin, using 0 margin leads to worse results while larger margins would be better for learning ranking models. For tau which is the temperature coefficient in NeuralNDCG, we simply use the best setting tau=1 from the original paper. As can be seen in the following table, using smaller temperature would not lead to better results. Nonetheless, all hyperparameter settings enjoy better performance than SupPR, validating the effectiveness of our method. | | MSE | | :---- | :---- | | $\delta$=0 | 0.594 | | $\delta$=2 | 0.588 | | $\tau$=0.01 | 0.608 | | $\tau$=0.1 | 0.592 | **Q2: The paper introduces heuristic methods for ranking in-context examples using a list-wise ranker and a consistency-aware ranking aggregator. However, there is no analysis provided on the computational complexity and time cost associated with these methods. Given the potential for high computational demands, especially with large datasets, it is important to understand the scalability and practicality of these approaches.** A2: Thank you. Please refer to the general response. We give a thorough analysis. Generally, we can summarize that the usage of list-wise ranker and the ranking aggregation will inevitably introduce additional computational cost, while the increased complexity during inference is affordable in our experiments. **Q3: Figure 4 of the paper presents results related to visual similarity, but it lacks a clear explanation of how this similarity is computed. The method or formula used to calculate visual similarity is not provided, which is crucial for the reproducibility of the results and for understanding the underlying methodology.** A3: Thank you. As illustrated in L310, we use commonly used CLIP to extract features for each image, and then we calculate the cosine similarity between the alternatives and query, which are then averaged. We will add the details to our paper. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response. I have read the author's responses, and most of my concerns have been addressed. Therefore, I have decided to maintain a 'Borderline Accept' stance on this paper.
Summary: This paper studies the demonstration retrieval mechanism for visual in-context learning. The authors propose Partial2Global which uses a transformer-based ranker and a consistency-aware aggregator to find the optimal demonstration. Experiments show that Partial2Global outperforms existing methods in tasks like segmentation and object detection, setting new state-of-the-art results. Strengths: 1. The paper is well-written and easy to follow. 2. The motivation is clear and reasonable. 3. The proposed method is interesting and novel. Exploring efficient and effective global ranking in place of the previous local information and using least-squares fusion to mitigate iterative loss. 4. The ablation studies are adequate, revealing the effectiveness of global information, the efficacy of the fusion method, and that similar images do not necessarily yield better contextual effects. It also identifies defects in samples retrieved by methods like VPR (e.g., frequently retrieving smaller objects in detection tasks, which results in poorer VICL performance). Weaknesses: 1. Since training exclusively on a single dataset is limiting, I would appreciate a demonstration of transfer learning performance to prove its adaptability and universality across a broad range of data. 2. It would be beneficial to analyze the efficiency of your model to the size of the retrieval set, as complexity is a crucial aspect. 3. It would be insightful to examine the "upper bound" of performance, such as by concatenating each demonstration with a query to see what the maximum Intersection over Union (IoU) is, and to assess how much room there is for improvement from this upper bound. Technical Quality: 3 Clarity: 2 Questions for Authors: See the Weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Since training exclusively on a single dataset is limiting, I would appreciate a demonstration of transfer learning performance to prove its adaptability and universality across a broad range of data.** A1: Thank you for the suggestion. We would like to highlight that the current experiment results can sufficiently validate the efficacy of our method. Nevertheless, to test the transferability of the prompt selection method and further reveal the potential of our method, we can add the demonstration suggested by conducting the following experiment for both SupPR and our method: we use models trained on each fold of segmentation and apply it to other folds. The results are shown in the following two tables. We find that while both two methods degrade in the transfer learning setting, our method still outperforms SupPR in general. This interesting results indicate that, the training data size of each fold is insuifficient for training a robust and generalizable ranking model. On the other hand, this again indicates that prompt selection for VICL cannot be simply based on visual similarity, as claimed in our paper. Ours | source/target fold | 0 | 1 | 2 | 3 | | :---- | :---- | :---- | :---- | :---- | | 0 | — | 36.38 | 32.63 | 30.90 | | 1 | 35.74 | — | 32.94 | 31.32 | | 2 | 34.16 | 36.16 | — | 30.44 | | 3 | 34.28 | 35.93 | 32.98 | — | SupPR | source/target fold | 0 | 1 | 2 | 3 | | :---- | :---- | :---- | :---- | :---- | | 0 | — | 35.46 | 32.44 | 30.95 | | 1 | 34.92 | — | 32.69 | 31.03 | | 2 | 34.71 | 36.48 | — | 30.08 | | 3 | 34.01 | 35.83 | 32.15 | — | **Q2: It would be beneficial to analyze the efficiency of your model to the size of the retrieval set, as complexity is a crucial aspect.** A2: Thank you. Please refer to the general response. Generally, we can summarize that the usage of list-wise ranker and the ranking aggregation will inevitably introduce additional computational cost, while the increased complexity during inference is affordable in our experiments. **Q3: It would be insightful to examine the "upper bound" of performance, such as by concatenating each demonstration with a query to see what the maximum Intersection over Union (IoU) is, and to assess how much room there is for improvement from this upper bound.** A3: Thank you for this suggestion. We want to first highlight that our experiment results can already validate the efficacy of our method against the competitor as a powerful in-context prompt selection method. To further provide insight to this task, we try to examine an upper bound: directly testing all alternatives for each query in the segmentation task and presenting the best IoU in the following table. While our proposed method is much better than SupPR, it still leaves great potential for better performance, which we will take as future works. | | fold0 | fold1 | fold2 | fold3 | | :---- | :---- | :---- | :---- | :---- | | SupPR | 37.08 | 38.43 | 34.40 | 32.32 | | Ours | 38.81 | 41.54 | 37.25 | 36.01 | | best iou among 50 alternative (oracle) | 48.75 | 52.62 | 49.75 | 49.03 | --- Rebuttal Comment 1.1: Title: Thanks for the response. I maintain my positive score. Comment: I appreciate the authors' response. I have read the response and some of my concerns (Q1 & Q3) have been addressed. I decide to keep a Weak Accept for the paper.
Rebuttal 1: Rebuttal: We thank the reviewers for all of your time to write valuable and constructive comments. Your feedback will definitely assist us in enhancing the quality of our paper, and thus we are committed to incorporating these suggestions in our revision process. Meanwhile, we feel encouraged that the reviewers find our paper well-written (Reviewer crFX, dHhp and dbja), our method novel and effective (Reviewer crFX, dHhp, PwDC), and our experiments solid and comprehensive (Reviewer crFX, dHhp and PwDC). Your support means a lot to us\! At this juncture, we would like to illustrate the general concern of training and testing efficiency of our method. In general, the usage of list-wise ranker and the ranking aggregation will inevitably introduce additional computational cost, while the increased complexity during inference is affordable under common circumstances. Specifically, we provide the training and inference time cost as follows. 1) The training of list-wise ranker on the colorization task, which contains about 500000 ranking sequences, takes about 10 hours on 8 V100s. Once the model is trained it can be directly used for any other queries with the same ranking criteria as the training task without any further finetuning. 2) During inference on one V100 gpu, our proposed pipeline requires about 1.17s to rank 50 alternatives for each query sample in the complete process, including feature extracting (0.3s), sub-sequence ranking with list-wise ranker (0.8s), and ranking aggregation (0.07s). Note that some techniques could be utilized to accelerate this process. For example, when we prepare the extracted features in advance (which is reasonable given the candidate set can be prepared in advance), we can skip the feature extraction stage and reduce the time cost by 0.3s. With engineering works, the inference time cost can be further reduced. The detailed inference speed (feature extraction included) given different alternative set size is presented as follows | alternative set size | inference time for each query (s) | | :---- | :---- | | 25 | 1.03 | | 50 | 1.17 | | 100 | 1.40 | In response to the reviewers' comments, we have thoroughly reviewed our paper, performed additional experiments, and prepared a comprehensive response. We will improve the manuscript according to your comments. We hope that our paper adequately addresses your concerns. We kindly look forward to your recognition. Best regards.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mirror and Preconditioned Gradient Descent in Wasserstein Space
Accept (spotlight)
Summary: The authors study the mirror descent method in Wasserstein-2 spaces. This is based on constructing the Bregman divergence functionals in Wasserstein-2 spaces. One can use it to define a mirror descent direction in the sense of pushforward mapping functions, which generalizes the classical Wasserstein gradient descent direction. The authors then studied the convergence analysis of the Wasserstein mirror descent algorithm. Several numerical examples are used to demonstrate the effectiveness of the proposed method. Strengths: The authors define and apply the Wasserstein mirror descent directions to conduct sampling algorithms. The mirror descent method in Wasserstein space is a very natural and novel approach. Congratulations to the authors for using the Wasserstein-Bregman divergences in sampling algorithms. This paper's proof is very clear in the mirror descent method from optimization and optimal transport. Weaknesses: On page 8, the authors don't study the mirror descent of KL divergences for non-Gaussian target distributions. This could be a very intriguing example. The authors may think about how to implement this case. This could be compared with the Wasserstein gradient flow of KL divergence. Technical Quality: 4 Clarity: 4 Questions for Authors: The authors also propose the Wassrsetin-Bregman proximal gradient method. Can authors illustrate the main difficulty of implementing it in practice or some potential computational schemes? How does it compare with the classical sampling schemes if the objective functional is chosen as the KL divergence? I am willing to improve my score after addressing these questions. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: There is no limitation on this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appraisal and positive comments on our paper. **On page 8, the authors don't study the mirror descent of KL divergences for non-Gaussian target distributions. This could be a very intriguing example. The authors may think about how to implement this case. This could be compared with the Wasserstein gradient flow of KL divergence.** The difficulty of optimizing the KL divergence with a non-Gaussian target is that we cannot use closed-forms for the mean and covariance anymore (as done in the experiment of Figure 2 for Gaussian, where closed-forms were obtained by projecting the distributions at each step on the space of Gaussian). A possible solution is to use a particle approximation, which we considered in Appendix I.2, with the goal to sample from a distribution on the simplex. Thus, the Bregman potential was chosen here as a potential energy, with potential a barrier allowing to keep the samples onto the simplex. The density of the distribution at each step was approximated with a Kernel Density Estimator to approximate the Wasserstein gradient of the KL. **The authors also propose the Wasserstein-Bregman proximal gradient method. Can authors illustrate the main difficulty of implementing it in practice or some potential computational schemes? How does it compare with the classical sampling schemes if the objective functional is chosen as the KL divergence?** The main difficulty of implementing the Wasserstein-Bregman proximal gradient method is that it is required to solve at each step a backward step (see equation (128)), which does not have a closed-form in general [1]. Thus, at each step, it is required to solve another optimization problem. For the classical JKO scheme, it has been proposed to solve it using approximations with e.g. entropic approximation [2] or neural networks [3,4], but these methods are computationally costly. Nonetheless, note that there is a closed-form when the target and initial distribution are Gaussian [5,6]. In Figure 2, we considered the Wasserstein-Bregman proximal gradient method with the KL divergence as objective functional, and a Gaussian $\mathcal{N}(0,\Sigma)$ as target distribution. For a Bregman potential of the form $\phi(\mu) = \int V\mathrm{d}\mu$, with $V(x)=x^T\Lambda^{-1} x$, we derived the closed-form in Appendix H.4 leveraging the fact that the distributions are always Gaussian. Using $\Lambda=\Sigma$, we showed on Figure 2 that it converges much faster than the regular Wasserstein proximal method studied in [1,5] (which corresponds to $\Lambda=I_d$). [1] Salim, A., Korba, A., & Luise, G. (2020). The Wasserstein proximal gradient algorithm. Advances in Neural Information Processing Systems, 33, 12356-12366. [2] Peyré, G. (2015). Entropic approximation of Wasserstein gradient flows. SIAM Journal on Imaging Sciences, 8(4), 2323-2351. [3] Mokrov, P., Korotin, A., Li, L., Genevay, A., Solomon, J. M., & Burnaev, E. (2021). Large-scale wasserstein gradient flows. Advances in Neural Information Processing Systems, 34, 15243-15256. [4] Alvarez-Melis, D., Schiff, Y., & Mroueh, Y. (2021). Optimizing functionals on the space of probabilities with input convex neural networks. arXiv preprint arXiv:2106.00774. [5] Diao, M. Z., Balasubramanian, K., Chewi, S., & Salim, A. (2023). Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein space. In International Conference on Machine Learning (pp. 7960-7991). PMLR. [6] Wibisono, A. (2018). Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Conference on Learning Theory (pp. 2093-3027). PMLR. --- Rebuttal Comment 1.1: Title: Reply to authors Comment: The authors have addressed my questions. Congratulations to the authors for working in a very important direction. More serious computations and analysis can be conducted along this line. --- Reply to Comment 1.1.1: Comment: Thank you for your response and again for your positive comments!
Summary: In this paper, the authors endeavor to integrate the concepts of mirror descent (MD) and preconditioned gradient descent (PGD) within the framework of Wasserstein distances, from a convergence theory perspective. Initially, they provide a comprehensive background including, but not limited to, Wasserstein distances, Bregman divergence, convexity, and smoothness. With assumptions regarding $\beta$-smoothness and $\alpha$-convexity, they successfully demonstrate the iterative procedures for MD and PGD in the Wasserstein space. Subsequently, they conduct experiments focusing on sampling methodologies and single-cell dynamics. Strengths: 1. **Solid Mathematical Foundations:** The authors offer a rigorous mathematical derivation concerning the convergence behaviors of MD and PGD, supplemented by a thorough analysis of the theoretical background. 2. **Relevance to Conference Themes:** The paper presents fundamental theoretical analysis that is highly pertinent to the themes of the conference. 3. **Comprehensive Background Introduction:** The supplementary materials provide an extensive overview necessary to understand the nuances of the paper. The foundational work done by the authors is commendable. Weaknesses: 1. **Results Presentation:** The presentation of results could be enhanced by providing more detailed visualizations. For instance, it would be beneficial to include a snapshot illustrating particle evolution for Figure 1, right $K_2$. Additionally, if the authors have access to MATLAB, it is recommended to utilize the `surf` function for the contour of Dirichlet distribution's pdf in Figure 4 to better demonstrate the sampling tasks. 2. **Clarification of Definitions:** It is recommended that the authors include explicit definitions of convergence and global convergence to preclude any potential misunderstandings in the appendix. 3. **Visualization of Convergence Data:** I suggest employing seaborn's `sns.lineplot` with a shaded area to depict the error bars, which would support the assertions made in checklist item 7 more robustly. 4. **Overlooked References:** It appears that significant references relevant to discussions on the convergence of mirror descent may have been overlooked. I recommend incorporating additional reference [1] that provides pertinent insights into these discussions. ---- References: [1]. Tzen, Belinda, et al. "Variational principles for mirror descent and mirror langevin dynamics." IEEE Control Systems Letters 7 (2023): 1542-1547. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. **About the Proof of Convergence:** From the perspective of gradient flow, assuming we have a velocity field $\dot{X} = v_t$, we can derive that $v_t = -\Vert \nabla\frac{\delta \mathcal{F}}{\mu} \Vert_2^2$. In this context, does assuming $\beta$-smoothness offer additional advantages beyond the conventional benefits? 2. **Alternative Proof Approaches:** Could the convergence location and rate be established from the perspective of a Lyapunov functional? This approach may provide a robust framework for proving stability and convergence. 3. **Riemannian-based Approaches:** Is it feasible to extend the proposed method to Riemannian-based approaches, such as Riemannian SVGD [1]? 4. **About Higher Order Systems and Extensions:** Is it possible to extend this approach to systems characterized by the Fisher-Rao metric [2]? Furthermore, considering that the continuity equation in Wasserstein space represents a first-order system, could this approach be adapted to Hamiltonian Flow [3], which is the second order system to my knowledge? --- ### References [1]. Zhang, Ruqi, Qiang Liu, and Xin Tong. "Sampling in constrained domains with orthogonal-space variational gradient descent." Advances in Neural Information Processing Systems 35 (2022): 37108-37120. [2]. Neklyudov, Kirill, et al. "Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation." Advances in Neural Information Processing Systems 36 (2024). [3]. Ambrosio, Luigi, and Wilfrid Gangbo. "Hamiltonian ODEs in the Wasserstein Space of Probability Measures." Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 61.1 (2008): 18-53. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: I appreciate the efforts of the authors. Nevertheless, I recommend that the authors include a dedicated section in the appendix detailing the assumptions such as convexity and smoothness within functionals. This addition would significantly aid in understanding and following the methodologies employed in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading the paper and for your feedback. We answer your comments below. Please do not hesitate if you have other questions. **The presentation of results could be enhanced by providing more detailed visualizations.** Thank you for these suggestions, we will take it into account and add snapshots of particles as well as contours. **It appears that significant references relevant to discussions on the convergence of mirror descent may have been overlooked. I recommend incorporating additional reference [1] that provides pertinent insights into these discussions.** Thank you for this reference that we did not know. We will add it to the paper. **From the perspective of gradient flow, assuming we have a velocity field $\dot{X} = v_t$, we can derive that $v_t = -\Vert \nabla\frac{\delta \mathcal{F}}{\mu} \Vert_2^2$. In this context, does assuming $\beta$-smoothness offer additional advantages beyond the conventional benefits?** For Wasserstein gradient flows, the velocity field is $v_t=-\nabla_{W_2}\mathcal{F}(\mu_t)$, and we have $\frac{\mathrm{d}\mathcal{F}(\mu_t)}{\mathrm{d}t} = \langle \nabla_{W_2}\mathcal{F}(\mu_t), v_t\rangle_{L^2(\mu_t)} = - \|\nabla_{W_2}\mathcal{F}(\mu_t)\|_{L^2(\mu_t)}^2$. Thus, the objective of the continuous formulation is necessarily non-increasing without assuming any smoothness of the objective. In discrete time, the Wasserstein gradient descent (obtained as the explicit Euler discretization) $\mu_{k+1} = (\mathrm{Id}-\tau\nabla_{W_2}\mathcal{F}(\mu_k))_\\#\mu_k$ requires smoothness of the objective to obtain descent at each iteration (and eventually global convergence guarantees if we assume furthermore convexity). See for instance analysis of gradient descent [2]. If the step size is too big, it is not guaranteed that the scheme decreases at each iteration, and could actually diverge. Hence in practice, knowing/estimating the smoothness constant will guide the choice of the step size. **Could the convergence location and rate be established from the perspective of a Lyapunov functional?** Our rates can indeed be obtained by decreasing the following Lyapunov functional: $L_k = k\tau(\mathcal{F}(\mu_k) - \mathcal{F}(\nu)) + \tau(\frac{1}{\tau}-\alpha) W_\phi(\nu,\mu_k)$. Indeed, $L_{k+1}-L_k \le 0$ by using the inequalities from equations (85) and (7). Thus $L_k$ is a Lyapunov function [3]. By using a telescopic sum, we recover the inequality $\mathcal{F}(\mu_k) - \mathcal{F}(\nu) \le \frac{1-\alpha\tau}{k\tau} W_\phi(\nu,\mu_0)$ obtained in Proposition 3. We will make this clearer. **Is it feasible to extend the proposed method to Riemannian-based approaches, such as Riemannian SVGD?** Extending the proposed methods to Riemannian manifolds is an interesting future direction of works that we did not consider yet. For now, Wasserstein gradient flows on manifolds have been sparsely studied. For instance, [4] studied theoretically Wasserstein gradient flows of the negative entropy on manifolds. In a more applied setting, [5] computed Wasserstein gradient flows on Lie group using the JKO scheme and [6] performed Wasserstein gradient descent of the Sliced-Wasserstein distance on Hadamard manifolds. There, the schemes and analyis have been derived on a case-by-case basis. To the best of our knowledge, the Mirror Descent algorithm has not been yet extended to Riemannian manifolds. One difficulty might come from the current theoretical analysis which requires to split the inner product, which is not directly possible when using log maps. Moreover, the scheme might also come with computational difficulties, as there might not be closed-forms easy to compute in general. **Is it possible to extend this approach to systems characterized by the Fisher-Rao metric?** The gradient in the space of probability distributions endowed with the Fisher-Rao metric is the 1st variation [7]. Thus, we might define Bregman divergences and the associated Mirror-Descent scheme using this gradient. This was studied e.g. in [8] for Mirror Descent with applications on the minimization of the KL divergence. **Could this approach be adapted to Hamiltonian Flow, which is the second order system to my knowledge?** This is an interesting direction for future works which we did not yet consider and which goes beyond our knowledge. **I recommend that the authors include a dedicated section in the appendix detailing the assumptions such as convexity and smoothness within functionals.** Thank you for this suggestion. We will expand our existing sections B.2 and C.3 in the appendix in this direction. [1] Tzen, B., Raj, A., Raginsky, M., & Bach, F. (2023). Variational principles for mirror descent and mirror langevin dynamics. IEEE Control Systems Letters, 7, 1542-1547. [2] Garrigos, G., & Gower, R. M. (2023). Handbook of convergence theorems for (stochastic) gradient methods. arXiv preprint arXiv:2301.11235. [3] Wilson, A. (2018). Lyapunov arguments in optimization. University of California, Berkeley. [4] Erbar, M. (2010). The heat equation on manifolds as a gradient flow in the Wasserstein space. In Annales de l'IHP Probabilités et statistiques (Vol. 46, No. 1, pp. 1-23). [5] Bon, D., Pai, G., Bellaard, G., Mula, O., & Duits, R. (2024). Optimal Transport on the Lie Group of Roto-translations. arXiv preprint arXiv:2402.15322. [6] Bonet, C., Drumetz, L., & Courty, N. (2024). Sliced-Wasserstein Distances and Flows on Cartan-Hadamard Manifolds. arXiv preprint arXiv:2403.06560. [7] Gallouët, T. O., & Monsaingeon, L. (2017). A JKO splitting scheme for Kantorovich--Fisher--Rao gradient flows. SIAM Journal on Mathematical Analysis, 49(2), 1100-1130. [8] Aubin-Frankowski, P. C., Korba, A., & Léger, F. (2022). Mirror descent with relative smoothness in measure spaces, with application to sinkhorn and em. Advances in Neural Information Processing Systems, 35, 17263-17275. --- Rebuttal 2: Title: Comments on Authors from Reviewer [PBU2] Comment: Thank you for your detailed response, and I appreciate to adjust the score. Additionally, I have a follow-up question regarding lines 1447 and 1448. Why is KDE necessary when conducting mirror descent on the simplex? To my understanding, since the density of the Dirichlet distribution is bounded, ensuring that the velocity field meets the boundary condition $\lim_{\Vert x \Vert\rightarrow\infty} v(x)=0$ should suffice. We could potentially use the equation $ \int{\nabla_{x} [p(x)v(x)]} = 0 = \int{\nabla_x p(x) v(x)} + \int{p(x)\nabla v(x)} $ to circumvent the need for explicit density estimation. Could you please clarify this approach? --- Rebuttal Comment 2.1: Comment: Thank you for your response and again for your feedback. We address your question below. **Why is KDE necessary when conducting mirror descent on the simplex?** In the experiment of Mirror Descent on the simplex, we minimize the Kullback-Leibler divergence $\mathcal{F}(\mu) = \mathrm{KL}(\mu||\nu)$ for $\nu \propto e^{-V}$ restricted to the simplex. To enforce the distributions to stay on the simplex, we used as Bregman potential $\phi(\mu) = \int \psi \mathrm{d}\mu$ with $\psi(x) = \sum_{i=1}^d x_i \log(x_i) + (1-\sum_{i=1}^d)\log(1-\sum_{i=1}^d x_i)$, which acts as a barrier. In this situation, the Mirror Descent scheme translates as $\mu_{k+1} = \big(\nabla \psi^* \circ (\nabla \psi - \tau \nabla_{W_2}\mathcal{F(\mu_k)})\big)_\\#\mu_k$, with $\nabla\_{W_2}\mathcal{F}(\mu_k) = \nabla V + \nabla \log p_k$ and $p_k$ the density of $\mu_k$. To approximate the distributions, we use an approximation with particles $\hat{\mu}_k^n = \frac{1}{n}\sum\_{i=1}^n \delta\_{x_i^k}$. However, we need to approximate the density $p_k$ to be able to compute $\nabla\_{W_2}\mathcal{F}(\mu_k)$. Thus, we used a kernel density estimator to do this. The equation you mention might provide another way to enforce the particles to stay on the simplex, hence avoiding the use of a barrier for the Bregman potential. This is an interesting direction of work that we did not consider yet.
Summary: This paper presented a unified view on the functional optimization on the Wasserstein space. Authors proposed a mirror descent algorithm and a preconditioned gradient descent algorithm, both are applied to minimize the objective functional over the Wasserstein space. Many existing algorithms in the Wasserstein space-related literature can be recovered or seen as a special case of the algorithms proposed by the authors. Convergence guarantees were provided and backed with numerical results. Strengths: Mirror descent and preconditioned GD are closely related and have been seen as generalizations of GD. Though Wasserstein GD have been proposed, the extension to MD and preconditioned GD is a natural step. The formulation in the paper is general and can be applicable to a wide range of studies in both machine learning application and theory. The literature review provided by the authors seem to be thorough, and the position of the current manuscript in the literature is well explained. The relationships between the proposed algorithm and existing algorithm were also clear throughout the paper. The reviewer did not read the full analysis in the appendix, but the technical aspects of this paper seems solid and intuitive. Weaknesses: The main concern I have is as presented in the MD optimization literature: what new tool does the MD formulation bring to the table? Previously, the convergence of MD is similar to or worse than GD. The literature on relative strong smoothness and convexity enabled MD to converge even when the objective function is not smooth/convex in the Euclidean sense (which was really mostly KL). The function properties in the Wasserstein space are more complicated and therefore difficult to align, this was briefly discussed by the authors at the start of section 5. But the application offered later is largely limited to existing applications KL or Sinkhorn, whereas these specific cases seem to have been already studied by the literature anyway. Some sections are not well presented. For instance, the subsection starting at l.136, the statements on the potential energies are confusing and it is very difficult to associate on sentence with either the previous or the latter sentence (such as "... specific example. In particular, we have xxx. We will also consider...", is the "xxx" with the specific example of what the authors consider?) Similar confusions also exist in this paper, but to a lesser extent. While the contribution is clear from reading the entire paper, the "Contributions" paragraph does not help reader and clarify what this paper offers. Though discussed briefly, the computational complexity and real-life run-time of these type of methods should be better addressed both in theory and experiments, especially on the comparison between the proposed and benchmark methods. Technical Quality: 4 Clarity: 3 Questions for Authors: some minor grammatical errors such as l.158 ($T_\eta^\nu$ OT maps from $\eta$...) Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors did address the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and time. We have addressed your comments point-by-point below. Please don't hesitate to let us know if you have any further questions. **What new tool does the MD formulation bring to the table?** First, the Mirror Descent algorithm on the Wasserstein space allows to perform optimization over probability distributions on constrained spaces (and in particular sampling on constrained spaces for specific objectives). This motivated the development of e.g. Mirror Langevin [1,2,3]. This lens on Mirror Descent on the Wasserstein space was also studied e.g. in [4] to optimize the KL, but without a theoretical analysis of the discrete algorithm. Here, we chose to study the convergence of these algorithms, and one of the main contributions of this work is to define appropriate notion of relative smoothness and convexity. This differs a lot from previous works, and is new on the Wasserstein space to the best of our knowledge. Moreover, we also proposed to use general Bregman potentials, instead of just considering potential energies as done in most previous works focusing on Bregman divergences and Mirror Descent on the Wasserstein space. We believe that this additional flexibility could allow to find new pairs of functionals/Bregman potentials, with theoretical guarantees for the convergence of the Mirror Descent scheme. For instance, a convex functional is 1-smooth and 1-convex relative to itself, and thus could be used as Bregman potential for its optimization, provided we can compute the scheme. We note that this point of view was also recently taken to optimize the MMD in [5], while the MMD is known to be non convex w.r.t the Wasserstein distance [6], which leads MMD flows to converge to spurious local minima in practice [5,6,7]. **The function properties in the Wasserstein space are more complicated and therefore difficult to align, this was briefly discussed by the authors at the start of section 5.** Showing the relative smoothness and convexity of functionals is indeed not straightforward in general. In the first paragraph of Section 5, we described some cases where we can show it leveraging the relative smoothness and convexity on $\mathbb{R}^d$, and suggested in the general case to compare the Hessians. **But the application offered later is largely limited to existing applications KL or Sinkhorn.** We did not focus only on minimizing the KL or Sinkhorn. On Figure 1, we applied the Mirror Descent scheme on an interaction energy with an interaction energy as Bregman potential, in a case where it is well relatively smooth. We focused on the KL for the Gaussian example of Figure 2, but this specific example has not been yet studied in the literature as we use Mirror Descent with negative entropy as Bregman potential. Finally, on Figure 3, we applied the preconditioned scheme on different divergences such as the Sliced-Wasserstein distance, the Sinkhorn divergence and the Energy distance. While Wasserstein gradient flows of these divergences have already been proposed and studied, applying the preconditioned gradient descent to them is new to the best of our knowledge. **Some sections are not well presented.** We apologize for the confusions and will revise the paper to improve its clarity. **The computational complexity and real-life run-time of these type of methods should be better addressed both in theory and experiments, especially on the comparison between the proposed and benchmark methods.** Compared to the usual Wasserstein gradient scheme, the preconditioned Wasserstein gradient scheme only requires to evaluate additionaly the gradient of $h^*$, and thus has the same computational complexity as the Wasserstein gradient descent. Nonetheless, we noted in Figure 3 that it required less iterations to converge for a well-chosen preconditioner. For Mirror Descent, the complexity depends on the Bregman potential. If we use $\phi_\mu(T)=\int V\circ T\ \mathrm{d}\mu$ with $V$ a potential which we know in closed form, and from which we can compute $\nabla V$ and $\nabla V^*$, then the computational cost is the same as the Wasserstein gradient descent. In the more general case, where we do not know how to invert $\nabla\phi_\mu$, we need to use the Newton method, which is more costly as it requires to invert a Jacobian of size $nd\times nd$ as stated Section I.1. The runtimes are reported in Appendix I. For instance, as stated line 1490, for the experiment of Figure 1, the Mirror Descent used in dimension $d=2$ for $n=100$ particles and $120$ epochs with $K_2$ and $K_2^\Sigma$ took about 5 minutes while with $K_4$ and $K_4^\Sigma$, it took about 1h. We leave for future works the development of cheaper optimization methods. **some minor grammatical errors** Thank you, we will correct the grammatical errors. [1] Ahn, K., & Chewi, S. (2021). Efficient constrained sampling via the mirror-Langevin algorithm. Advances in Neural Information Processing Systems, 34, 28405-28418. [2] Chewi, S., Le Gouic, T., Lu, C., Maunu, T., Rigollet, P., & Stromme, A. (2020). Exponential ergodicity of mirror-Langevin diffusions. Advances in Neural Information Processing Systems, 33, 19573-19585. [3] Hsieh, Y. P., Kavis, A., Rolland, P., & Cevher, V. (2018). Mirrored langevin dynamics. Advances in Neural Information Processing Systems, 31. [4] Sharrock, L., Mackey, L., & Nemeth, C. (2023, January). Learning Rate Free Bayesian Inference in Constrained Domains. In NeurIPS. [5] Gladin, E., Dvurechensky, P., Mielke, A., & Zhu, J. J. (2024). Interaction-Force Transport Gradient Flows. arXiv preprint arXiv:2405.17075. [6] Arbel, M., Korba, A., Salim, A., & Gretton, A. (2019). Maximum mean discrepancy gradient flow. Advances in Neural Information Processing Systems, 32. [7] Galashov, A., de Bortoli, V., & Gretton, A. (2024). Deep MMD Gradient Flow without adversarial training. arXiv preprint arXiv:2405.06780.
Summary: The paper generalises mirror descent and preconditioning approaches from optimisation over R^d to optimisation over Wasserstein space, which is applicable in some actual problems, and confirms by the corresponding numerical experiments the efficiency of these approaches. Strengths: The paper completely corresponds to its stated goal, uses the most actual theoretical frameworks of long known approaches from optimisation over R^d, and focuses indeed on the most fundamental methods, which will allow a follower to develop any desired extension based on idea from optimisation over R^d and the detailed examination of technical issues related to the transition to Wasserstein space made in this work. Practical contribution of this paper is also notable, as it demonstrates in numerical experiments, but I guess that thanks to the proposed theoretical basement, many improvements can be made on top of that, which is to be done in future works. Weaknesses: The preconditioning is considered in some particular yet practically important case, but the complete generalisation of preconditioning is still to be finished. Technical Quality: 4 Clarity: 3 Questions for Authors: There is an aspect which was not mentioned in this paper, but is interesting from the theoretical point of view: stochastic oracle. The generalisation of stochastic mirror descent to Wasserstein space seems to be the demanding yet possible task, when that of preconditioning is challenging even in R^d case, but it would be interesting to find the proper notions of stochasticity and flexible enough analysis to deal with it in case of Wassestein space. Is your current analysis generalisable to stochastic optimisation and to inexact model of function, more generally? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations are clear from reading. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appraisal and positive comments on our paper. **The preconditioning is considered in some particular yet practically important case, but the complete generalisation of preconditioning is still to be finished.** We acknowledge that the complete generalization and study of the preconditioned gradient descent on the Wasserstein space is left for future works. We note that the convergence results would still be valid for $\phi_\mu$ strictly convex, and differentiable as described in the first paragraph of Section 4. **Is your current analysis generalisable to stochastic optimisation and to inexact model of function, more generally?** We expect our analysis to be generalisable to the stochastic setting. While we leave it for future works, we note that [1, 2] studied stochastic Wasserstein gradient schemes, while [3] studied Mirror Descent on $\mathbb{R}^d$ in the stochastic setting. Thus, it could be a nice avenue of future works to extend these results to Mirror Descent on the Wasserstein space. [1] Diao, M. Z., Balasubramanian, K., Chewi, S., & Salim, A. (2023, July). Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein space. In International Conference on Machine Learning (pp. 7960-7991). PMLR. [2] Backhoff-Veraguas, J., Fontbona, J., Rios, G., & Tobar, F. (2022). Stochastic Gradient Descent for Barycenters in Wasserstein Space. arXiv preprint arXiv:2201.04232. [3] Hanzely, F., & Richtárik, P. (2021). Fastest rates for stochastic mirror descent methods. Computational Optimization and Applications, 79, 717-766.
Rebuttal 1: Rebuttal: We thank all the reviewers for their positive comments and common appraisal of the soundness of our approach to lift Euclidean optimization schemes to the Wasserstein space. Following the reviewers' comments, we will improve the clarity of the exposition for the revised version, taking advantage of the allowed extra page.
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper provides convergence analysis for Mirror descent (with and without additional preconditioning) in the Wasserstein space. The analysis is performed for relatively smooth and convex functionals. The use of the studied algorithms on computational biology problem is illustrated. Strengths: The theory in the paper seems sound and substantially extends the prior work overcoming some technical difficulties related to the discrete nature of the algorithms (explicit discretization) and non-Euclidean geometry in the Wasserstein space. Convergence rates are similar to those in finite dimensional space for MD. Weaknesses: The paper is very dense and seems like not well adapted to 9 page format. A lot of explanations and implementation details are referred to the appendix. It is very difficult to read the paper when it is written is such style. For example, although I have spent several hours only trying to understand how the algorithm is implemented, I did not manage to do it even with the help of the appendix. I would advise the authors to revise the paper to make it more accessible. 1. Notations section should be improved. Many things are not properly defined throughout the paper, e.g., the coupling between distributions, what "Id" is? 2. On line 138, what does it mean that the sets $V$ and $W$ are "differentiable and $L$-smooth with $W$ even? 3. On line 142, the calligraphic $\mathcal W$ is not defined. 4. OT map mentioned on line 151 is not defined. Technical Quality: 4 Clarity: 2 Questions for Authors: n/a Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and time. We have addressed your comments point-by-point below. Please don't hesitate to let us know if you have any further questions. **The paper is very dense. I would advise the authors to revise the paper to make it more accessible.** We understand your concerns about the paper's density and the difficulty in following the implementation details. Taking advantage of the extra page, we will work on revising the paper to make it more accessible and ensure that key explanations are clearer and more integrated into the main text. **How the algorithm is implemented?** We will describe more clearly how the schemes are implemented in practice in a dedicated section of the appendix. In practice, we use a particle approximation of the probability measures, i.e. we sample $x_1^{(0)},\dots,x_n^{(0)}\sim \mu_0$ and then we apply the schemes to $\hat{\mu}_k^n = \frac{1}{n} \sum\_{i=1}^n \delta\_{x\_i^{(k)}}$ for all $k\ge 0$. For Mirror Descent with a Bregman potential $\phi$ chosen as a potential energy $\phi(\mu) = \int V\ \mathrm{d}\mu$ with $\nabla V$ and $\nabla V^*$ known in closed-forms, we use equation (9) to implement it: for all $k\ge 0$ and $i\in \{1,\dots,n\}$, $x\_i^{(k+1)} = \nabla V^*\big(\nabla V(x\_i^{(k)}) - \tau \nabla\_{W\_2}\mathcal{F}(\hat{\mu}\_k^n)(x\_i^{(k)})\big)$. The Wasserstein gradient of $\mathcal{F}$ in most of the cases considered can be computed in closed-form, e.g. for $\mathcal{F}(\mu)=\int U\mathrm{d}\mu$, $\nabla_{W_2}\mathcal{F}(\mu) = \nabla U$, or for $\mathcal{F}(\mu)=\iint W(x-y)\ \mathrm{d}\mu(x)\mathrm{d}\mu(y)$, $\nabla_{W_2}\mathcal{F}(\mu) = \nabla W\star \mu$. In the more general case, where the gradient of the Bregman potential cannot be inverted in closed-form, we used Newton's algorithm at each step, which is detailed in Appendix I.1, and we use equation (191) to implement it. For the Gaussian experiment, we update directly the means and covariances. The closed-form of NEM is reported in equation (197), and the closed-form of the Forward-Backward scheme are in Appendix H.4 in equation (160). Finally, for the preconditioned gradient descent scheme, we implemented it using equation (11): for all $k\ge 0$ and $i\in\{1,\dots,n\}$, $x_i^{(k+1)} = x_i^{(k)} - \nabla h^*\big(\nabla_{W_2}\mathcal{F}(\hat{\mu}_k^n)(x_i^{(k)})\big)$. In the experiments of Figure 3, we considered $h^*(x)=(\|x\|_2^a + 1)^{\frac{1}{a}} - 1$ with $a>0$, and its gradient can be computed in closed-form or by backpropagation. **Notations section should be improved. Many things are not properly defined throughout the paper, e.g., the coupling between distributions, what "Id" is?** We will complete the notation section. A coupling between $\mu\in \mathcal{P}\_2(\mathbb{R}^d)$ and $\nu\in\mathcal{P}\_2(\mathbb{R}^d)$ is a probability distribution $\gamma\in\mathcal{P}(\mathbb{R}^d\times \mathbb{R}^d)$ such that $\pi^1_{\\#}\gamma=\mu$ and $\pi^2\_\\#\gamma=\nu$ with $\pi^1:(x,y)\mapsto x$ and $\pi^2:(x,y)\mapsto y$. The $\mathrm{Id}$ is the identity mapping $x\mapsto x$. **On line 138, what does it mean that the sets $V$ and $W$ are "differentiable and $L$-smooth with $W$ even?** $V$ and $W$ are functions from $\mathbb{R}^d$ to $\mathbb{R}$. $V$ (or $W$) $L$-smooth means that its gradient is $L$-Lipschitz. We will clarify this in the revision of the paper. To ask for $W$ even means that $W(-x)=W(x)$ for all $x\in \mathbb{R}^d$. **On line 142, the calligraphic $\mathcal W$ is not defined.** $\mathcal{W}$ is defined as $\mathcal{W}(\mu) = \iint W(x-y)\ \mathrm{d}\mu(x)\mathrm{d}\mu(y)$, see line 127. **OT map mentioned on line 151 is not defined.** An OT map between $\mu,\nu\in\mathcal{P}\_2(\mathbb{R}^d)$ is a map $T:\mathbb{R}^d\to\mathbb{R}^d$ satisfying $T\_\\#\mu=\nu$ and $W_2^2(\mu,\nu) = \|T-\mathrm{Id}\|_{L^2(\mu)}^2$. It exists e.g. provided $\mu\ll\mathrm{Leb}$ by Brenier's theorem. We will clarify this in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgement Comment: I thank the authors for answering my questions and I trust the authors can improve the presentation in the next revision. I have no further concerns.
null
null
null
null
null
null
DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut
Accept (poster)
Summary: This paper proposes a model for zero-shot semantic image segmentation based on clustering feeatures from an off-the-shelf text-to-image diffusion model. Features are extracted from the U-Net used in the diffusion model. The features are then clustered using a recursive normalized cut algorithm. The clustering is scaled up to the original image resolution by semantic guidance from the extracted features. The proposed model consistently outperforms previous methods on standard benchmarks for image segmentation. Strengths: - The paper makes a very targeted contribution to the field. An off-the-shelf feature extractor is combined with an established clustering algorithm and a novel upscaling method. The method is evaluated on standard benchmarks which allows for a clear comparison to prior work. - The model is compared to a range of state-of-the-art methods and outperforms them consistently, in most cases by a substantial margin. - Several ablation studies are conducted to show the impact of the contributions in comparison to other diffusion model based segmentation methods. Weaknesses: - The main contribution explitly states the improvements over the TokenCut and MaskCut models, but does not compare to these models in the experiments. - In my view, the comparison of the respective strengths and weaknesses of different base models is not detailed enough. The experiment on semantic coherence targets this direction. However, beyond this analysis it would have been interesting to directly compare the segmentation results after clustering that are obtained from the different base models, both in terms of quantiative metrics and example cases and further analyses that show the differences. Technical Quality: 3 Clarity: 2 Questions for Authors: - The paper should state more clearly that it target *semantic segmention* to avoid confusion with other types of image segmentation. In my view this would have a major impact on the clarity of the paper. - How large is the computational improvement over previous methods? The differences are mentioned in terms of parameter count, but a comparison in terms of runtime would be interesting as well. - In comparison to standard semantic segmentation, the method proposed here predicts unlabelled classes. Are the features consistent enough to allow for an easy (e.g. linear) mapping to the labelled classes across images? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I don't fully agree with the statement that the method does not have potential societal impacts. For example, as briefly mentioned by the authors elsewhere, resusing pre-trained models in a zero-shot fashion can save computational and labeling resources. In terms of energy consumption and human labour, this line of works might have a societal impact. One limitation in my view stems from the fact that the diffusion model used as a basis probably does not work well on all types of images. While unsupervised, the approach followed in the paper is dependent on the capabilities of the foundation model. This is not mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and appreciate the positive feedback. #### **Weaknesses** #### **1. The main contribution explicitly states the improvements over the TokenCut and MaskCut models, but does not compare to these models in the experiments.** Unlike TokenCut or MaskCut, our method can provide dense segmentation maps and adapt the number of detected segments based on the visual content of an image. MaskCut and TokenCut cannot do this because they can only detect a fixed number of segments. (l.101-104). Therefore, these methods are not well-suited for an image segmentation task. Technically, MaskCut uses an iterative graph partitioning method and masks the graph nodes associated with detected objects. As a result, each segment is treated as a single object that cannot be refined once detected, which in practice severely limits its ability to identify a large number of objects. To support this claim and following the reviewer’s suggestion, we provide below a comparison between the performance of MaskCut and DiffCut. Overall, the improvement of DiffCut highlights our two main contributions: the quality of our visual features for semantic segmentation, and the capacity of the recursive Ncut algorithm to adjust the number of segments to the visual content of each image. We would be glad to include these results in the final paper. | Model | VOC20 | Context | COCO-Object | COCO-Stuff-27 | Cityscapes | ADE20K | |----------------|-------|---------|-------------|------------|------------|--------| | DiffCut (ours) | **62.0** | **54.1** | **32.0** | **46.1** | **28.4** | **42.4** | | MaskCut $(k=3)$ | 53.7 | 42.3 | 30.9 | 41.8 | 18.0 | 33.7 | | MaskCut $(k=5)$ | 53.8 | 43.4 | 30.1 | 41.7 | 18.7 | 35.4 | | MaskCut $(k=20)$ | 53.8 | 43.5 | 30.0 | 41.5 | 18.0 | 35.6 | #### **2. In my view, the comparison of the respective strengths and weaknesses of different base models is not detailed enough. [...]** In addition to the experiments conducted in Section 4.2 (Fig. 3 and 4), we evaluated in Supplementary D the potential of different base models for zero-shot segmentation using a simple KMeans clustering on their features for various datasets. This comparison reveals that among the different vision encoders assessed, the diffusion model shows the greatest potential for zero-shot segmentation, as the quality of clustering on its features consistently outperforms other backbones. Given this, we believe that comparing segmentation results after clustering obtained from different base models would yield similar observations. #### **Questions** #### **1. The paper should state more clearly that it targets semantic segmentation to avoid confusion with other types of image segmentation. In my view this would have a major impact on the clarity of the paper.** We thank the reviewer for this helpful comment. We will make sure to clearly state and emphasize that this paper targets semantic segmentation to avoid confusion with other types of segmentation. #### **2. How large is the computational improvement over previous methods? The differences are mentioned in terms of parameter count, but a comparison in terms of runtime would be interesting as well.** The primary bottleneck in graph-cut methods is the eigensystem solving which requires $O(n^3)$ operations. In MaskCut, the graph size remains constant, whereas in DiffCut, it has its maximum size only during the first iteration of the recursive NCut. For each subsequent partition, only the relevant nodes from the original graph are selected, reducing the eigensystem's cost. The concept assignment consists in multiplying the matrix of concepts with the feature map and retrieving the indices of the concepts with the highest similarity. Following your remark, we provide in the general response a table comparing the runtime execution of MaskCut, DiffCut and DiffSeg. We will be glad to include these results in the final manuscript. #### **3. In comparison to standard semantic segmentation, the method proposed here predicts unlabelled classes. Are the features consistent enough to allow for an easy (e.g. linear) mapping to the labelled classes across images?** We have not explored this path yet. However, based on the results in Fig. 3 and 4, we believe that such a mapping would work well, as features from different images belonging to the same semantic class show high similarity. Consequently, we estimate that mapping these features to a class label should be consistent. #### **Limitations** #### **1. I don't fully agree with the statement that the method does not have potential societal impacts. [...]** Thank you for pointing this out. You are correct that our method which reuses pre-trained models in a zero-shot fashion can indeed save computational ressources as well as reduce energy consumption and human labor, contributing to more sustainable AI practices. We appreciate your remark and will ensure to highlight this in our paper. ##### **2. One limitation in my view stems from the fact that the diffusion model used as a basis probably does not work well on all types of images. [...]** Since the diffusion backbone is not specifically trained to generate images in specialized domains like biomedical imaging for example, the method may struggle to transfer to out-of-distribution images. However, this issue could potentially be mitigated by fine-tuning the pre-trained diffusion model on the target data domain. As this limitation was not addressed in the main paper, we will discuss it in a Limitations section added to the Supplementary. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. The additional comparisons reported here and in the responses to the other reviewers as well as the clarifications are very helpful in my view. I agree with with reviewer jdn9 that providing steps towards instance or panoptic segmentation would have increased the impact of the paper. Nevertheless, I think the authors demonstrated the potential of diffusion model features in this setting and overall addressed my concerns, so I am happy to increase my score.
Summary: This work proposes a new strategy for the task of unsupervised zero-shot segmentation using diffusion model features. The semantic maps are extracted from the U-net features of a diffusion model by applying a recursive algorithm that allows various levels of granularities of the segmentation maps. The proposed method beats competing methods on unsupervised and open-vocabular semantic segmentation on six datasets. Strengths: In general, the work addresses a problem of unsupervised zero-shot segmentation that has not been addressed a lot by the community, proposing new ways to extract knowledge from the rich U-Net features of a diffusion model. While the method is not overly complex and mainly builds on previously published works, it nicely connects more conventional non-DL research from the past with the opportunities arising in foundational models, and it adds a few more technical contributions that are demonstrated to improve performance. In general, the method seems to clearly outperform previous methods on various benchmarks, including the open-vocabulary setting. The paper is clearly written, and the structure allows to easily follow the work’s main components and contributions. Weaknesses: My main concern is that the experimental comparison seems rather unfair. While the proposed method uses powerful distilled SD-XL model the baselines only use SD1.4, which might explain most of the performance improvements. In my opinion, the ablation with respect to SD1.4 is not comprehensive enough: 1) The proposed method is not applied with SD1.4, which I think would be mandatory. 2) The DiffSeg results in Tab. 2 seem to be worse than in Tab. 1. I would appreciate a clarification of the effect of using SD-XL features. The proposed method should also be evaluated using the less powerful SD1.4 features to allow a direct comparison with the original DiffSeg method. Otherwise, the comparisons cannot be really fair, in my opinion. Tab. 4: I assume, DiffSeg would again underperform DiffCut. However, having DiffSeg in this table would be a more complete comparison, not only relaying on the Hungarian algorithm-based evaluation strategy in Tab. 1. When considering Fig. 9, the masks do get more fine-granular for higher tau. However, the masks do not always align with the object boundaries (e.g. the surfer, the roofs, or the sign). I am wondering whether this is because of using the low-dimensional resolution and only bi-linearly upsampling? While the previous method DiffSeg also pursued this strategy, I am unsure if the application of the Hungarian matching algorithm to assign predicted masks to a ground truth mask is fair when comparing with methods that do not apply this strategy for the evaluation. It appears not directly comparable. I would appreciate a discussion of this fact and an argumentation why the presented evaluation strategy is reasonable and comparable to the other methods (that do not apply the Hungarian algorithm). I like the additional baseline AutoSC, which is also seems to be very effective. While the recursive and adaptive formulation of the baseline seem so to be reasonable, the gap to DiffCut is, however, not very clear. The results are almost the same. I would appreciate a more complete ablation of this. The set of chosen exponents seems to be arbitrary (l.302). Did you study whether AutoSC outperforms DiffCut if more possible alpha values were chosen? I would appreciate such an analysis and discussion of it in the rebuttal. Also, I am wondering – which model is faster, AutoSC or DiffCut? Technical Quality: 3 Clarity: 3 Questions for Authors: ReCO is first listed training-free, in the open-vocabulary setting with extra-training, but this is not explained in the text? Regarding terminology: The section 4.4) model analysis is nicely to read. I would maybe consider replacing the term robustness with hyperparameter sensitivity (also in G). But I have no strong opinion on this. Tab. 3: Did you also study the results if only 64x64 resolution were used? l.339: Could you please discuss this strategy more comprehensively in the final manuscript (or refer to a related work if existing)? It is not very clear. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The method seems to be profound. However, some important evaluations are missing, as explained above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and appreciate the positive feedback. #### **Weaknesses** #### **1. [...] The proposed method is not applied with SD1.4, which I think would be mandatory. [...] to allow a direct comparison with the original DiffSeg method. [...].** As mentioned in the general response, we emphasize that the ablation study carried out in Tab. 2 uses the same diffusion backbone (SSD-1B) for DiffCut and Diffseg, making the comparison between both approaches fair. Therefore, the large gain of DiffCut over DiffSeg in Tab. 2 directly validates our two main contributions: the choice of the diffusion features and the clustering algorithm. Following the reviewer’s suggestion, we present in the general response a quantitative comparison between DiffCut with SD1.4 and the original DiffSeg method. #### **2. The DiffSeg results in Tab. 2 seem to be worse than in Tab. 1. I would appreciate a clarification of the effect of using SD-XL features.** The discrepancy between the results of DiffSeg in Tab. 1 and Tab. 2 can be explained by the different backbones used as well as the absence of the refinement module in Tab. 2 as mentioned at lines 293-294. As DiffSeg relies on an attention aggregation process and iterative attention merging, its performance depends on the architecture of the backbone, especially the number and placement of attention modules which significantly differ between SSD-1B and SD1.4. #### **3. [...] I assume, DiffSeg would again underperform DiffCut. However, having DiffSeg in this table would be a more complete comparison [...].** We agree that having DiffSeg in this table would be a more complete comparison with this particular method. Nevertheless, having demonstrated in previous sections the superiority of DiffCut for generating high-quality masks, the aim here is rather to demonstrate that a straightforward extension to an open-vocabulary setting is competitive with existing approaches targeting this task. #### **4. When considering Fig. 9, the masks [...] do not always align with the object boundaries [...]** The masks not always aligning perfectly with object boundaries can indeed be attributed to using low-dimensional resolution and relying solely on bi-linear upsampling. The low-dimensional resolution does not capture the fine boundaries of objects, and bi-linear upsampling does not refine the edges to match the true boundaries. Exploring more sophisticated feature upsampling techniques could mitigate this misalignment. #### **5. [...] I am unsure if the application of the Hungarian matching [...] is fair when comparing with methods that do not apply this strategy [...]** All baselines but ACSeg, ReCO and MaskCLIP are evaluated with Hungarian matching, which is the standard evaluation protocol for methods that provide unlabeled clusters. While ACSeg also evaluates with Hungarian matching, we reported scores with labeled clusters, which are significantly higher (47.1 vs. 53.9 on VOC and 16.4 vs. 28.1 on Stuff-27). Finally, while ReCO and MaskCLIP may not seem directly comparable, we believe that assigning labels to predicted masks can offer more advantageous performance. Hungarian matching performs a one-to-one assignment between each ground-truth label and the single predicted cluster with the greatest overlap, meaning some predicted clusters may not be matched with any ground-truth label. While predicting a class may seem more challenging, it allows assigning a label to every predicted cluster, giving each a chance to be correctly classified, which can ultimately lead to better performance. #### **6. I like the additional baseline AutoSC [...] which model is faster, AutoSC or DiffCut?** AutoSC is a variant of DiffCut where we replace the recursive clustering by the proposed automated spectral clustering that automatically estimates the number of clusters. Despite its simplicity, it is highly effective and demonstrates competitive performance. Differences mainly appear in datasets containing small objects (e.g., COCO-Object, Cityscapes, ADE20K). DiffCut's recursive partitioning offers flexibility in detecting small objects by allowing large segments to be further divided into smaller ones. In contrast, AutoSC is effective at uncovering the most salient clusters but may overlook smaller objects. For the set of exponents, we chose the same set we explored for DiffCut. We also examined AutoSC's behavior with higher alpha values, but this did not lead to performance improvements as the index of the largest relative eigen-gap generally corresponds to what is found for alpha in $\\{1, 5, 10, 15\\}$. Lastly, since AutoSC does not require a recursive partitioning of the graph, it benefits from lower runtime execution. #### **Questions** #### **1. ReCO is first listed training-free [...]?** We thank the reviewer for pointing this out. This is an oversight on our part; ReCO and MaskCLIP should be listed as training-free methods since the reported scores do not involve any training. We will update the table accordingly. #### **2. [...] I would maybe consider replacing the term robustness with hyperparameter sensitivity (also in G). [...]** We appreciate the reviewer's suggestion and agree that this terminology is more appropriate for these sections. We will update the paper accordingly. #### **3. [...] Did you also study the results if only 64x64 resolution were used?** We have not analyzed the results using only $64 \times 64$ resolution. Since the image semantics are concentrated in the layers near the UNet’s bottleneck, with the outer layers focusing more on details, we would expect the performance to be less favorable with only $64 \times 64$ resolution. #### **4. l.339: Could you please discuss this strategy more comprehensively in the final manuscript [...].** We thank the reviewer for this valuable feedback. We will discuss this strategy more comprehensively in the final manuscript with complementary details in Supplementary. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and additional experiments and clarifications. Considering the other reviews and the rebuttal of the authors, I am happy to increase my score 5->6. This paper presents solid work with a fair contribution that should be presented to the community.
Summary: This paper addresses harnessing semantic localization information in pretrained diffusion UNet models. With the proposed recursive normalized cut on the final self-attention features of the diffusion UNet encoder, this paper achieves better performance on unsupervised zero-shot segmentation comparing to previous methods utilizing other fundation vision encoders, with regulation to the granularity of details. Experiments on several benchmarks and ablation studies show the efficacy of the proposed method. Strengths: This paper addresses an interesting and important problem of analyzing semantic information in pretrained generative diffusion models comparing to other fundation models. Conducting semantic clusterting on the latents of UNet encoder to reduce computation cost and then applying concept assignment while upsampling seems reasonable. The experiments are overall comprehensive and informative. The emperical results significantly outperform pervious baseline method. Weaknesses: My main concerns consists of several aspects: 1. The writing seems a bit too vague, maybe with more visual illustraition of the observations? For example, can you demonstrate the "patch-level alignment" with visual examples? It's hard to imagine when reading this in Line 168. 2. Are there any visual verification and description for what kind of concepts you get in Section 3.3? 3. The experiments with the proposed DiffCut seems to mainly excute with SSD-1B. What about using other SD models? Regarding computation cost and performance benefit. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Line 223, what do you mean by "perform a many-to-one matching"? 2. Are the hyperparameters the same for DiffCut on different datasets? Since you seem to use the same hyperparameters for DiffSeg? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This author did not discuss the limitations in the paper, which potentially could include: 1. How is the computation efficiency of the proposed method comparing to other methods? Considering the recursive partitioning and concept assigning process. 2. The semantic clustering knowledge in pretrained diffusion models might not be able to transfer to data domains like biomedical images. 3. If LD, AX, or UA is given, would the proposed method still outperform the corresponding SOTAs? Or can it further improve its performance? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and appreciate the positive feedback. #### **Weaknesses** #### **1. Writing is a bit too vague [...]. For example, can you demonstrate the "patch-level alignment" with visual examples?** For vision tasks, we expect the vision encoder to be semantically coherent i.e to produce similar patch representations for regions belonging to the same semantic concept. We refer to it as the “patch-level alignment”. We show in Fig 4. and Fig 10. some qualitative results that illustrate the inner patch-level alignment for different vision encoders. We visualize the patch level alignment by encoding an image and computing the cosine similarity between a reference patch and all other patches. For SSD-1B, in Fig 4. the dog is uniformly and strongly highlighted, indicating a strong patch-level alignment. Following the reviewer’s remark, we will refer to Fig.4 at l.168 for better clarity on this point. #### **2. Are there any visual verification and description for what kind of concepts you get?** In section 3.3, each concept is obtained via a reduction of the spatial dimension of the corresponding segment (Masked SMM l.195) . This way, a single concept has the form of a feature vector that globally describes its corresponding segment. Providing a visual description of a single concept is not straightforward and would imply a form of “decoding” of this concept. #### **3. The experiments [...] execute with SSD-1B. What about using other SD models? [...]** Following the reviewer’s recommendation, we conduct a new evaluation to demonstrate that DiffCut provides competitive performance with other smaller diffusion backbones than SSD-1B. Specifically, we use SD1.4 and SSD-vega (another distilled version of SDXL). For SD1.4 the UNet encoder has 260M parameters (~ 30% of the UNet), for SSD-vega, the UNet encoder has 240M parameters (~32% of the UNet). | Model | VOC20 | Context | COCO-Object | COCO-Stuff | Cityscapes | ADE20K | |----------|-------|---------|-------------|------------|------------|--------| | SD1.4 | 57.5 | 52.8 | 30.0 | 45.2 | 24.5 | 36.7 | | SSD-Vega | 62.2 | 56.4 | 34.9 | 49.5 | 30.1 | 45.7 | | SSD-1B | 65.2 | 56.5 | 34.1 | 49.1 | 30.6 | 44.3 | | DiffSeg | 49.8 | 48.8 | 23.2 | 44.2 | 16.8 | 37.7 | The results obtained with these two backbones are consistent with those achieved using SSD-1B. Although there is a slight performance drop with the SD1.4 encoder backbone compared to SSD-1B, the method still significantly outperforms DiffSeg. Remarkably, DiffCut with SSD-vega UNet encoder achieves performance comparable to SSD-1B, despite being only half its size. We will be glad to add these results in the final paper. #### **Questions** #### **1. What do you mean by "perform a many-to-one matching"?** In datasets that include a background class, this label implicitly encompasses a variety of concepts related to "things" or "stuff," which vary depending on the dataset. Since our method generates a segment for every detected object in an image, a one-to-one matching between a single cluster ID and the entire background's ID does not accurately represent the model's true categorization capabilities. Therefore, in such cases, we use a many-to-one matching approach by associating the clusters that primarily overlap with the background to its ID. We will be glad to include this clarification in the final paper. #### **2. Are the hyperparameters the same for DiffCut on different datasets?** As highlighted in the general response and noted in the Implementation Details section (l.230), we used a fixed set of hyperparameters, $\tau$ and $\alpha$, across all datasets for our evaluations. This approach ensures a fair comparison against the baselines and is particularly important for our evaluation of DiffSeg, where we also maintain a single hyperparameter value across different datasets. #### **Limitations** #### **1. How is the computation efficiency of the proposed method comparing to other methods? [...]** The primary bottleneck in graph-cut methods is the eigensystem solving which requires $O(n^3)$ operations with $n$ being the number of nodes in the graph. In MaskCut, the graph size remains constant, whereas in DiffCut, it has its maximum size only during the first iteration of the recursive NCut. For each subsequent partition, only the relevant nodes from the original graph are selected, reducing the eigensystem's cost. The concept assignment essentially consists in multiplying the matrix of concepts with the feature map. Following your remark, we provide in the general response a table comparing the runtime execution of MaskCut, DiffCut and DiffSeg. We will be glad to include these results in the final manuscript. #### **2. The semantic clustering knowledge in pretrained diffusion models might not be able to transfer to data domains like biomedical images.** Since the diffusion backbone is not specifically trained to generate images in specialized domains like biomedical imaging, the method may struggle to transfer to out-of-distribution images. However, this issue could potentially be mitigated by fine-tuning the pre-trained diffusion models on the target data domain. As this limitation was not addressed in the main paper, we will discuss it in a Limitations section added to the Supplementary. #### **3. If LD, AX, or UA is given, would the proposed method still outperform the corresponding SOTAs? [...]** Since our method is entirely training-free and does not rely on LD, AX, or UA, we believe that incorporating any additional information could further enhance its performance. Specifically, given the superior quality of our masks compared to MaskCut, we believe that integrating our DiffCut method to generate high-quality pseudo-masks within an unsupervised training scheme, would lead to even better performance. --- Rebuttal Comment 1.1: Comment: I thank the author for the detailed response. After reading the reviews and the rebuttal, I agree this is a good work and would like to increase my rating.
Summary: This paper introduces an innovative unsupervised, zero-shot image segmentation method called DiffCut. This method leverages the encoder features of a pre-trained diffusion model within a recursive graph partitioning algorithm to create finely detailed segmentation maps without requiring labels from downstream segmentation datasets. DiffCut exploits features from the last self-attention block of a diffusion UNet encoder to perform image segmentation. This approach does not rely on paired image and text data, making it suitable for unsupervised and zero-shot learning tasks. The core algorithmic innovation in DiffCut is the use of a recursive Normalized Cut (NCut) that allows the model to regulate the granularity of detected objects and consequently adapt the number of segments to the visual content of each image. Compared to existing methods like DiffSeg and other graph-based object localization techniques, DiffCut significantly outperforms in terms of the quality of segmentation maps and alignment with semantic visual concepts. The effectiveness of DiffCut was validated across multiple standard benchmarks with a focus on mIoU scores, where it consistently outperformed the state-of-the-art unsupervised semantic segmentation methods. Strengths: 1. The paper is clearly written and logically structured, making it easy to understand. 2. The authors conducted thorough experiments to demonstrate the effectiveness of DiffCut, which achieves state-of-the-art performance in many downstream tasks. 3. In contrast to earlier works that employ graph-based clustering methods for unsupervised segmentation, such as TokenCut and MaskCut, this method introduces a soft thresholding technique for constructing the affinity matrix. This approach effectively maintains high affinity between highly similar patches while reducing the weights between dissimilar patches to near zero—an improvement that addresses a limitation often neglected in previous studies. Weaknesses: 1. **[Unsupervised Instance or Panoptic Segmentation Performance]** While the paper positions itself as effective in segmenting more objects within an image, it primarily focuses on semantic segmentation, potentially overlooking the model's capability for instance discrimination. I am interested in how the model performs in unsupervised instance or panoptic segmentation tasks, as these require differentiating individual objects or integrating both semantic and instance segmentation. 2. **[Technical Contribution and Comparison with MaskCut]** MaskCut is actually capable of segmenting multiple objects per image, with the ability to set a large number of masks per image. This raises questions about the novelty and technical contributions of this paper, particularly in the absence of quantitative comparisons with established methods like TokenCut [30] or MaskCut [32]. 3. **[Lack of Comparisons with Previous Works]** The paper does not compare its results with some previous state-of-the-art works in unsupervised panoptic and semantic segmentation, such as U2Seg [N1] and EAGLE [N2]. Notably, [N2] also explores the use of prototypes derived from eigenvectors of feature maps. 4. **[Adequacy of Low-Resolution Features]** The use of low-resolution feature maps to produce concept-embeddings may result in missing small-scale objects in the images, which could lead to the exclusion of these small objects during the "High-Resolution Concept Assignment" phase. This could limit the model’s effectiveness in capturing medium or small-sized objects, which are crucial for detailed image segmentation. 5. **[Recursive Normalized Cuts or Multi-class Spectral Clustering]** I am curious if the authors have explored to use spectral clustering for multi-entity/object segmentation. The granularity of the segmentation masks can be controlled by using a different cluster numbers. [N1] Niu, Dantong, Xudong Wang, Xinyang Han, Long Lian, Roei Herzig, and Trevor Darrell. "Unsupervised universal image segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22744-22754. 2024. [N2] Kim, Chanyoung, Woojung Han, Dayun Ju, and Seong Jae Hwang. "EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3523-3533. 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: Please check the weakness section. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and appreciate the positive feedback. #### **Weaknesses** #### **1. [Unsupervised Instance or Panoptic Segmentation Performance]** We have not yet evaluated DiffCut’s performance in instance or panoptic segmentation. The goal here is to demonstrate that the features of a diffusion UNet surpass those of other vision backbones in precise semantic segmentation. Given that our method can generate high-quality semantic masks under the most restrictive conditions (training-free and independent of LD, AX, and UA), future work could involve relaxing these constraints and incorporating DiffCut as a high-quality pseudo-label generator within an unsupervised training pipeline—such as proposed in U2Seg [1]. This could significantly enhance performance in unsupervised semantic, instance, and panoptic segmentation. #### **2. [Technical Contribution and Comparison with MaskCut]** Unlike TokenCut or MaskCut, our method can provide dense segmentation maps and adapt the number of detected segments based on the visual content of an image. MaskCut and TokenCut cannot do this because they can only detect a fixed number of segments. (l.101-104). Therefore, these methods are not well-suited for an image segmentation task. Technically, MaskCut uses an iterative graph partitioning method and masks the graph nodes associated with detected objects. As a result, each segment is treated as a single object that cannot be refined once detected, which in practice severely limits its ability to identify a large number of objects. To support this claim and following the reviewer’s suggestion, we provide below a comparison between the performance of MaskCut and DiffCut. Overall, the improvement of DiffCut highlights our two main contributions: the quality of our visual features for semantic segmentation, and the capacity of the recursive Ncut algorithm to adjust the number of segments to the visual content of each image. We would be glad to include these results in the final paper. | Model | VOC20 | Context | COCO-Object | COCO-Stuff-27 | Cityscapes | ADE20K | |----------------|-------|---------|-------------|------------|------------|--------| | DiffCut (ours) | **62.0** | **54.1** | **32.0** | **46.1** | **28.4** | **42.4** | | MaskCut $(k=3)$ | 53.7 | 42.3 | 30.9 | 41.8 | 18.0 | 33.7 | | MaskCut $(k=5)$ | 53.8 | 43.4 | 30.1 | 41.7 | 18.7 | 35.4 | | MaskCut $(k=20)$ | 53.8 | 43.5 | 30.0 | 41.5 | 18.0 | 35.6 | #### **3. [Lack of Comparisons with Previous Works]** We thank the reviewers for these recent references. In comparison with EAGLE [2], we propose a much simpler and effective training-free pipeline that reaches higher results on COCO-Stuff and Cityscapes due to the quality of the extracted features from the diffusion backbone. Their pipeline is based on DINOv1 which we proved in Fig. 3 has worse feature correspondence than our backbone. U2Seg [1] proposes a framework to unify unsupervised semantic, instance and panoptic segmentation. This method only evaluates on COCO-Stuff-27 for unsupervised semantic segmentation. We will be glad to add these baselines to our main Tab. 1. | | COCO-Stuff-27 | Cityscapes | |---------|:-------------:|:----------:| | DiffCut | **49.1** | **30.6** | | DiffSeg | 43.6 | 21.2 | | EAGLE | 27.2 | 22.1 | | U2SEG | 30.2 | - | #### **4. [Adequacy of Low-Resolution Features]** We empirically observe in Fig. 9, 11 and 12 that the excellent quality of the features provided by the diffusion model at a reasonably low spatial resolution $(32 \times 32)$ effectively enables the detection of medium to small-sized objects. Increasing the value of the hyperparameter $\tau$ can even further help to detect objects at a finer granularity. This spatial resolution of feature maps does not present a significant drawback for small object localization. #### **5. [Recursive Normalized Cuts or Multi-class Spectral Clustering]** In Tab. 2, we explore the use of spectral clustering for multi-object segmentation by introducing a variant of DiffCut called AutoSC. This variant adapts the number of segments based on the visual content using a heuristic called “relative-eigen-gap” which estimates the number of connected components in a graph. This estimated number of connected components, $k$, is then used to determine the number of clusters in $k$-way spectral clustering. We observe that AutoSC also achieves excellent results, often comparable to those obtained with DiffCut. We note that the main differences between AutoSC and DiffCut appear in datasets containing small objects (e.g., COCO-Object, Cityscapes, ADE20K). DiffCut’s recursive partitioning offers flexibility in detecting small objects by allowing large segments to be further divided into smaller ones, while the “relative eigen-gap” heuristic in AutoSC is effective at uncovering the most salient clusters, potentially overlooking smaller objects. [1] Niu, Dantong, Xudong Wang, Xinyang Han, Long Lian, Roei Herzig, and Trevor Darrell. "Unsupervised universal image segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22744-22754. 2024. [2] Kim, Chanyoung, Woojung Han, Dayun Ju, and Seong Jae Hwang. "EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3523-3533. 2024. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for addressing most of my concerns in the rebuttal. I will keep my rating as borderline accept. I encourage the authors to include the additional comparisons, especially the comparisons with MaskCut in the main paper, which can help other researchers to better understand the key distinctions and benefits of the proposed method.
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments and valuable suggestions. We would like to clarify here some key points raised by the reviewers and we then provide individual responses to each one. ### **Comparison with MaskCut** Reviewers jdn9 and 2hwn mentioned that although we positioned our method as an improvement over MaskCut, we did not quantitatively compare against it. In the specific answers to R.jdn9 and R.2hwn, we explain why MaskCut was not initially designed for semantic segmentation, in contrast to state-of-the-art methods evaluated in Tab. 1. To further analyze the potential of MaskCut in semantic segmentation and meet R.jdn9 and R.2hwn’s request, we evaluate this method on the semantic segmentation task. Specifically, as MaskCut requires a predefined number of iterations $k$ to detect a fixed number of segments per image, we evaluate MaskCut with $k$ in $\\{3, 5, 20\\}$. The quantitative comparison between DiffCut and MaskCut is shown below, and we would be glad to include it in the final paper. | Model | VOC20 | Context | COCO-Object | COCO-Stuff-27 | Cityscapes | ADE20K | |----------------|-------|---------|-------------|------------|------------|--------| | DiffCut (ours) | **62.0** | **54.1** | **32.0** | **46.1** | **28.4** | **42.4** | | MaskCut $(k=3)$ | 53.7 | 42.3 | 30.9 | 41.8 | 18.0 | 33.7 | | MaskCut $(k=5)$ | 53.8 | 43.4 | 30.1 | 41.7 | 18.7 | 35.4 | | MaskCut $(k=20)$ | 53.8 | 43.5 | 30.0 | 41.5 | 18.0 | 35.6 | We can see that DiffCut significantly and consistently outperforms MaskCut across all datasets for any chosen $k$. These additional experiments further highlight the contributions of our DiffCut method, i.e., the diffusion features used for segmentation, and the recursive NCut method able to adapt the number of segments to the visual content of each image. In addition, we observe no significant improvements for $k=5$ or $k=20$ in comparison with $k=3$, supporting the claim that MaskCut is inadequate to detect a large number of objects due to its iterative process. ### **Clarifications in Experiments** #### **1. Comparison to DiffSeg.** Reviewers qiwu and oADK raised concerns about the fairness in the comparison of our DiffCut method with respect to the DiffSeg baseline. First, R.qiwu asked if a common hyperparameter setting has been used for all datasets with DiffCut, as it is the case for DiffSeg. We highlight that we used a fixed set of hyperparameters for DiffCut across all datasets for our evaluations, ensuring a fair comparison with DiffSeg, as mentioned in lines 232-233. Additionally, R.oADK suggested that the comparison might be unfair due to the different backbones used by our method and DiffSeg. We emphasize that the ablation study carried out in Tab. 2 uses the same diffusion backbone (SSD-1B) for DiffCut and Diffseg, making the comparison between both approaches fair. Therefore, the large gain of DiffCut over DiffSeg in Table 2 directly validates our two main contributions: the choice of the diffusion features (vs self-attention maps in DiffSeg), and the clustering algorithm (recursive NCut vs iterative attention merging). To explicitly meet R.oADK’s request, we conducted an additional comparison between DiffCut and DiffSeg, using the SD1.4 backbone as employed in the original DiffSeg method. The results shown below confirm that even with SD1.4, DiffCut significantly and consistently outperforms DiffSeg across all datasets but ADE20K where the difference of only 1pt. We will be glad to add these new results to the final paper to emphasize on the fairness of the conducted evaluation. | Model | VOC20 | Context | COCO-Object | COCO-Stuff-27| Cityscapes | ADE20K | |------------------|-------|---------|-------------|------------|------------|--------| | DiffSeg w/ SD1.4 | 49.8 | 48.8 | 23.2 | 44.2 | 16.8 | **37.7** | | DiffCut w/ SD1.4 | **57.5** | **52.8** | **30.0** | **45.2** | **24.5** | 36.7 | #### **2. Evaluation Strategy.** On R.oADK’s concern about the fairness of our evaluation strategy, we highlight that all methods in Tab. 1 are evaluated using Hungarian matching but ACSeg, ReCO and MaskCLIP which attribute labels to predicted masks. For these latter methods, we clarify that the Hungarian matching may leave some predicted clusters unassigned to any ground truth labels whereas assigning text labels offers a chance to each cluster to be correctly classified. For example, STEGO and ACSeg, which also evaluate with label predictions, show significantly higher performance in this setup than with Hungarian matching evaluation. Our evaluation setup is thus reasonable, making the comparison with ReCO and MaskCLIP in Tab. 1 fair. #### **3. Runtime Comparison** Reviewers qiwu and 2hwn have asked for a runtime comparison between methods. Following their requests, we provide in the following table a runtime comparison between DiffCut, MaskCut and DiffSeg which are respectively the two main baselines when it comes to graph-based image clustering and diffusion-based zero-shot segmentation. | | MaskCut $(k=5)$ | DiffCut | DiffSeg - SD1.4 | DiffSeg - SSD-1B | |------------------------|---------------|---------|-----------------|-------------------| | Images / sec | 0.84 | 1.11 | 2.75 | 1.25 | | | | | | | MaskCut with $k=5$ is the slowest method, segmenting 0.84 images per second. Using SSD-1B with images at $1024 \times 1024$ resolution, DiffCut’s speed is slightly lower than DiffSeg’s based on the same architecture. With the SD1.4 backbone, DiffSeg demonstrates superior runtime performance due to the smaller size of the architecture and the input image size. We will be glad to include these results in the final manuscript.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation
Accept (poster)
Summary: The paper introduces a new benchmark Multi-object Demand-driven Navigation and trains various models on it. Strengths: 1. This paper introduces a new benchmark, which are historically undervalued 1. The ablation section is very clear and well written 1. There is adequate coverage of baselines to my knowledge, although my knowledge may not be current on this area. 1. The paper does achieve SOTA performance on many of the tasks relative Weaknesses: 1. Some of the figures are a bit hard to read 1. Requires pretrained foundation models for the task 1. The method section can be tough to follow at times 1. Frequent gramatically errors and typo 1. They are a variety of CLIP encoders available now, and it would be interesting to compare to newer models or other variants of OpenClip Technical Quality: 3 Clarity: 2 Questions for Authors: typo on line 82 incorrect capitailzation typo on line 240 missing a Why was that specific version of CLIP chosen? For the LLM component, why was the specific version chosen Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the time and effort you put into reviewing our paper! We are very grateful to you for appreciating our benchmarks and experiments! We hope the following clarification will ease your concerns, and hope to hear back from you if you have further questions! >Q1: Some of the figures are a bit hard to read A1: We apologize for the confusion caused by the figures. We will modify the content of these figures to make them easier to understand. Figure 1 shows an example of a MO-DDN task. The robot visits five different locations and finds multiple objects that fulfill the user's demand. Figure 2 shows how we train the attribute features. The ends of the arrows in the figure represent the targets for loss computation, and the colors of the arrows represent different ways of loss computation. Please read Figure 2 and Section 4.1.3 jointly. We have drawn a new figure in the PDF attached at Common Response that describes the switches between the coarse exploration phase and the fine exploration phase, and we hope that this will help you to understand the relationship between Figure 3 and 4. Please see **Common Response 1 for Method** for detailed description. We draw a new figure in the attached PDF to describe the switches between the coarse (Figure 3) and fine (Figure 4) exploration phase. >Q2: Requires pretrained foundation models for the task A2: Thanks for pointing this out. The usage of pre-trained models is common in navigation, e.g. EmbCLIP[1], LM-Nav[2], MOPA[3]. These pre-trained models have good generalization due to training on large-scale datasets. >Q3: The method section can be tough to follow at times A3: We apologize for the difficulty in understanding the method section. We will add more articulation and section summaries to enhance readability. Please see **Common Response 1 for Method** for detailed description and **Revision Plan** at the bottom of Common Response. We will also add the above modification in the video in the supplemental material that hopefully provide a better understanding. >Q4: Frequent gramatically errors and typo A4: We appreciate you pointing out these typo and grammatical errors. We'll revise the issues you've mentioned and double-check the entire paper to correct grammatical issues and typos. >Q5: they are a variety of CLIP encoders available now, and it would be interesting to compare to newer models or other variants of OpenClip. Why was that specific version of CLIP chosen? A5: Thank you very much for your advice. We are using the official model provided by OpenAI, ViT-L/14, which is **the most popular and downloaded** model on the hugging face website among all OpenAI's CLIP models. We argue that the shared semantic space of vision and text provided by the CLIP model can effectively migrate attribute features to vision, which is important for the end-to-end model in the fine exploration phase (a similar conclusion in DDN). We add an experiment named **Ours (ViT-H-14 Encoder)**, which uses the OpenCLIP's ViT-H-14 model as you suggested. See the attached PDF in Common Response for experimental results. Experimental results show that the larger CLIP model does slightly improve navigation performance, but the time and computational resources required for training are also increased. These experimental results still support the conclusion in our paper that attribute features can improve navigation performance at two different levels of the exploration phase. >Q6: For the LLM component, There seems to be unfinished questions here, and we look forward to your suggestions for LLM! ## References [1] Simple but Effective: CLIP Embeddings for Embodied AI [2] LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action [3] MOPA: Modular Object Navigation with PointGoal Agents --- Rebuttal Comment 1.1: Title: Update Comment: After reading the rebuttal and the other reviews, I am maintaining my score. --- Reply to Comment 1.1.1: Title: Looking Forward to Your Further Questions Comment: We greatly appreciate your response. If you have further questions, we are more than happy to answer them, and hopefully our responses will alleviate your concerns. Regarding the lack of clarity in the writing of the method you mentioned, we have described it in more detail in the Common Response. We will also add this section to the paper to improve the readability of the paper. We are very grateful for your suggestions on our paper, which largely improved the readability of the paper and the clarity of the description of the method. --- Reply to Comment 1.1.2: Comment: Dear Reviewer AuxK, As the discussion period is drawing to a close, we would like to kindly invite any further questions or concerns you might have. We are eager to address these and hope that our responses will alleviate any remaining concerns. We re-wrote a paper revision plan in common response in the hope that our revision plan will alleviate your concerns about the lack of clarity in our methods and the difficulty of reading it. Once again, we thank you for the time and effort you have devoted to reviewing our paper. We greatly cherish the opportunity to discuss with you. Sincerely, Authors of Submission 3733
Summary: The paper presents Multi-object Demand-driven Navigation (MO-DDN) which extends the DDN task to work with multi-object search and personal preference. DDN task aims to find an object in a navigation setting based on a given demand instruction. The paper proposes a new attribute model for multi-object DDN where the demand instructions and object categories are encoded into the same feature space. The attribute model is a VQ-VAE like model that uses attributes generated from GPT-4 as input. The attribute features are later used with a coarse-to-fine exploration agent for navigation and obtaining the solutions. The proposed method is evaluated on habitat with the HSSD dataset and the quantitative results show that both attribute training and coarse-to-fine exploration improve the performance. Strengths: 1. The paper has good motivation for the MO-DDN task. Ideally for robot navigation, the robot should be able to consider all possible solutions and prioritize the most feasible one. 2. Coarse-to-fine exploration is interesting and seems to be a good solution to the DDN task. Weaknesses: 1. In line 41, the authors claimed that personal preferences are considered. However, I don't think it was addressed in the paper. 2. It is still unclear to me that, how MO-DDN is superior to DDN other than the solutions are multi-object rather than single-object. If we train a DDN with preferred solutions, would it have superior performance on $SR_p$? 3. Overall, I feel like section 4, especially section 4.2 is a bit disconnected which makes the section hard to follow. Also, Figure 3 and 4 are hard to understand too. 4. It would be more helpful to show more examples of basic and preferred attribute features, and some corner cases where priotizing becomes challenging. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. For results in Table 1, are $r_b$ and $r_p$ in Equation 2 being adjusted to prioritize basic or preferred solutions? And how are the metrics being evaluated (Success rate for basic/preferred) considering the baseline only works on one solution? 2. The authors proposed attribute loss and matching loss in the attribute training. I am wondering how efficient are they and why is the weight of attribute loss larger than other loss terms. 3. In line 137, a simpler version of "Find" action is used in MO-DNN compared to DNN. Does this adjustment contribute to the performance improvement over the DNN method? 4. Why does the MLP branch have a similar or worse performance compared to MOPA+LLM? And how is the branch chosen between MLP and LLM during navigation? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations of their works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your time and effort in reviewing our paper! We appreciate your kind endorsement of our benchmark and method. We hope the following clarification will ease your concerns, and hope to hear back from you if you have further questions! >Q1: In line 41, the authors claimed that personal preferences are considered. A1: Thanks for pointing this out. Due to character limitations, please see **Common Response 3 for Perference** . >Q2: how MO-DDN is superior to DDN other than the solutions are multi-object rather than single-object A2: Thanks for your advice! A complete comparison between MO-DDN and DDN benckmark can be found in Table 6 in the Appendix. Our method can leverage knowledge from external large models to **evaluate the small areas** where objects are most likely to be found, and use end-to-end models to efficiently and quickly **explore within the small areas**. DDN, on the other hand, needs to rely on an end-to-end model to **explore in a large area**. What's more, the attribute features used by DDN are one-to-one between instructions and objects, which cannot be well applied to multi-object search. Using preferred solutions slightly improves the $SR_p$ of DDN, but does not exceed our proposed method. Please see **Common Response 2 for Experimet** for the DDN trained with preferred solutions. >Q3: Overall, I feel like section 4, especially section 4.2 is a bit disconnected which makes the section hard to follow. A3: Thank you for pointing this out. We apologize for not making the method clear. We'll go into more detail about section 4.2 in the **Common Response 1 for Method**. We have provided a video in the supplemental material that may hopefully provide a better understanding. We will add summarizing paragraphs in the paper to illustrate the connections between each section. >Q4: It would be more helpful to show more examples. A4: Thanks for the suggestion. We **visualize two point clouds in the attached PDF**. Each block is colored by its score, with darker colors representing higher scores. We find that adjusting the values of $r_b$ and $r_p$ changes the scores of the blocks, which in turn affects the behavior of the agent. $r_b$ and $r_p$ are hyper parameters that control the priority of basic and preferred solutions (See A1 and **Common Response 3**). >Q5: For results in Table 1,xxx A5: Thanks for pointing this out. Table 1 shows the results under $r_b=1$ and $r_p=1$. In a real deployment, these two hyper parameters can be modified by the user to flexibly prioritize the basic and preferred solutions. See Appendix A.1.2 for calculation of the success rate. We have made some modifications to baselines to make them trainable and testable on our task, see Appendix A.4.2 for how this was done. Briefly, for the fairness of the test, we modify all the baselines' action space to Moveforward, RotateLeft, RotateRight, LookUp, LookDown, and Find. It is note that the baselines' policy does not output Done and Done is automatically executed only after the number of Find reaches the $c_{ind}=5$ limit. >Q6: Attribute loss and matching loss in the attribute training. A6: Thanks for pointing this out. Attribute loss directly train the mapping of instructions and objects to attribute features. Matching loss directly guides the alignment of the attribute features of objects and instructions. In attribute feature training, we have two objectives: first, to train two MLPs (Ins MLP Encoder and Obj MLP Encoder), which serve to map the CLIP features of instructions and objects, to the CLIP features of attributes (referred to as attribute features), and second, to align the attribute features of instructions and objects to the same feature space. Attribute loss serves to train the first objective, while all other losses are in the training the second objective. We argue that the first objective's to be more important because the alignment only makes sense when the mapping is correct. Therefore, the weight of the attribute loss should be greater than the other losses. Without attribute loss and matching loss, attribute features would degenerate into CLIP features, since VQ-VAE Loss would only map features to the codebook's feature space which is initialized by CLIP features. We add an experiment named **CLIP Exploration** to show that CLIP features are not as good as attribute features in coarse exploration phase, proving the effectiveness of the two losses. >Q7: a simpler version of "Find" action is used in MO-DNN compared to DNN. A7: Thanks for pointing this out. This simpler version of Find action is used in all baselines and our method, including DDN. All baselines do not require the output of the bounding box of the target objects. As we said in A5, we keep the fairness in the action space. Moreover, the simpler Find action just removes the requirements of outputting the bounding box and does not affect the navigation success metrics. >Q8: Why does the MLP branch have a similar or worse performance compared to MOPA+LLM? A8: Thanks for pointing this out. We argue that MLP branch is weaker than MOPA+LLM largely because we provide ground truth semantic labels for MOPA+LLM for GPT-4 to make decisions on whether to perform Find. The powerful inference capability of GPT-4 and the availability of ground truth labels greatly enhance the decision correctness. MLP branch does not use GPT-4 at any time and does not use ground truth labels for decision making. MLP branch is designed with the intention of abandoning dependence on external LLM resources and focusing on lightweight, purely local running. In Table 1, we test the performance of the two branches separately, so that the branch chosen among an episodes is predetermined (of course they can also be switched freely during navigation, but we did not test this). When deployed in real life, one of the branches can be freely chosen to run according to compute resources and LLM availability. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses. After reading other reviews and rebuttals, I would like to keep my rating. --- Reply to Comment 1.1.1: Title: Looking Forward to Your Further Questions Comment: We do appreciate you reading the rebuttal. We greatly appreciate your review, which enhances the readability and completeness of our paper. We are always ready to answer your further questions, and we hope that our answers will alleviate your concerns. --- Reply to Comment 1.1.2: Comment: Dear Reviewer xf5s, As the discussion period is drawing to a close, we would like to kindly invite any further questions or concerns you might have. We are eager to address these and hope that our responses will alleviate any remaining concerns. We re-wrote a paper revision plan in common response in the hope that our revision plan will alleviate your concerns about the lack of clarity in our methods and the difficulty of reading it. Once again, we thank you for the time and effort you have devoted to reviewing our paper. We greatly cherish the opportunity to discuss with you. Sincerely, Authors of Submission 3733
Summary: The paper presents "MO-DDN," a novel benchmark and approach for Multi-object Demand-driven Navigation (MO-DDN), where an agent needs to find multiple objects to satisfy complex, user-specific demand instructions. The proposed approach leverages a coarse-to-fine attribute-based exploration strategy. The method includes the training of an attribute model using CLIP features and a VQ-VAE loss, followed by a dual-phase exploration process combining modular and end-to-end techniques. The experimental results demonstrate that this method outperforms several baselines on the HM3D ObjectNav datasets. Strengths: Novel Benchmark: The introduction of MO-DDN as a benchmark addresses real-life complexities in demand-driven navigation, considering multi-object searches and user preferences. Coarse-to-Fine Strategy: The paper presents an innovative coarse-to-fine exploration strategy that effectively combines the benefits of modular and end-to-end methods, optimizing both efficiency and performance. Comprehensive Evaluation: The method is evaluated rigorously against multiple baselines, showing significant improvements in success rates and navigation efficiency. Attribute Model: The use of attribute features trained with a discretized codebook and VQ-VAE loss is well-motivated and shows promising results in aligning demand instructions and object features. Weaknesses: Complexity and Implementation: The coarse-to-fine exploration strategy and the dual-phase approach add significant complexity to the implementation. This complexity might hinder the practical deployment and replication of the method in other settings. Generalization to Unseen Environments: While the method shows good performance on the HM3D dataset, its generalization to other datasets or real-world environments with different characteristics is not extensively evaluated. Fixed Attribute Numbers: The assumption of fixed attribute numbers (k1 and k2) for instructions and objects simplifies training but may limit the model's flexibility and applicability to real-world scenarios where attributes can vary widely. Dependency on Pre-trained Models: The method heavily relies on pre-trained models like CLIP and GPT-4 for attribute extraction and task generation, which might pose limitations in environments where these models do not perform well or are not available. Limited Discussion on Limitations: The paper provides limited discussion on the potential limitations of the proposed method and the challenges that might arise in different applications or under varying conditions. Technical Quality: 2 Clarity: 1 Questions for Authors: Scalability: How does the method scale with increasing complexity of demand instructions and the number of objects required to satisfy them? Is there a performance degradation when the complexity of the scene increases? Attribute Feature Space: The paper mentions using a discretized codebook for attribute features. How does the choice of the number of vectors (128) and their dimensions (768) affect performance? Would a different configuration yield better results? Adaptability to Dynamic Environments: How does the approach handle dynamic environments where objects might move or new objects might appear? Is the model capable of real-time adaptation in such scenarios? Robustness to Noise: How robust is the method to noisy or incomplete demand instructions? For example, how does it handle ambiguous or partially incorrect instructions from users? Energy and Time Efficiency: Given the dual-phase exploration strategy, what are the energy and time efficiencies of the proposed method compared to the baselines? Are there any trade-offs between computational cost and navigation performance? Confidence: 1 Soundness: 2 Presentation: 1 Contribution: 1 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your time and effort in reviewing our paper! We really value your recognizing the novelty of our benchmark and method and the comprehensive evaluation. We hope the following clarification will ease your concerns, and hope to hear back from you if you have further questions! >Q1: Complexity and Implementation A1: Our modular method is easy to deploy, with multiple models connected to each other by a simple pipeline. Moreover, the computational resource consumption of our proposed method in the deployment is low, requiring only a consumer-grade graphics card with 10G memory (such as a GTX1080Ti or a RTX3080) for complete deployment. We can also replace the object detection model with the more lightweight model in some computing resource constrained situations. Training consumption is also very low throughout the method, requiring only training the end-to-end agent in the fine exploration phase, which only requires about 12 hours of training on an RTX4090 graphics card. >Q2: Generalization to Unseen Environments A2: Thank you so much for pointing this out. We first need to point out that the scene dataset we use is HSSD, not HM3D. HSSD divides the scenes into training and testing scenes, and we follow their settings. We completely evaluate the generalizability of our method including unseen environments and unseen task instructions in Table 1. To test on other scene datasets we first need to build the set of task instructions for the corresponding dataset, which is very difficult due to time constraints. Generalization is an issue that needs to be addressed for all navigation tasks. We do not purposely design modules to improve generalizability in this paper, which will be left for future research. >Q3: Fixed Attribute Numbers A3: Thanks for the advice. This is indeed a limitation, and we have discussed this limitation in Section 6 of the paper. We argue that four attributes are sufficiently expressive for common demands and objects. The more flexible options of $k_1$ and $k_2$ are left for future research. >Q4: Dependency on Pre-trained Models A4: Thank you for pointing this out. The use of pre-trained models is common in navigation, e.g. EmbCLIP[1], LM-Nav[2], MOPA[3]. These pre-trained models generalize well in most scenarios after being trained on large-scale datasets. We are also proposing **MLP branch** in the coarse exploration phase, which can be run completely locally when GPT-4 is not available or computational resources are limited to run a LLM locally. >Q5: Scalability A5: Thank you for pointing this out. We argue that as the complexity of the demand instructions and the number of objects increase, the performance degradation is possible. Due to the limitation of the categories of objects in the dataset, it is difficult for us to construct more complex demand tasks in a short period of time, so verifying this fact may need to be done on more complex scene datasets in the future. >Q6: Attribute Feature Space A6: Thank you for pointing this out. We did some simple experiments (which were not put in the paper) for the number of vectors. In this simple experiment, we tested 64, 128, 256, 512 as the number of vectors. The metrics chosen are similar to the cosine similarity expressed by Line 310 in the paper. In the end, 128 achieved the best cosine similarity. 768 was chosen because we wanted this code book to be a subspace of the CLIP feature space, and therefore chose dimensions consistent with the CLIP ViT-L/14 version. Since code book only serves as a non-direct alignment between object attributes and instruction attributes, we did not perform complete ablation experiments on different configuration. Our ablation experiments demonstrate that the absence of VQ-VAE Loss and codebook initialization degrades the expression of attribute features, which in turn leads to degraded navigation performance. Please see **Common Response 2 for Experiment** for OpenCLIP ViT-H-14's results. >Q7: Adaptability to Dynamic Environments A7: Thank you for pointing this out. The research on dynamic environments is necessary and important in the real world but beyond the scope of this paper. We assume, as most Object Navigation tasks do, that the scenes are static during navigation. >Q8: Robustness to Noise A8: In the LLM branch, the reasoning of instructions is left to LLMs such as GPT-4, whose powerful reasoning capability also handles ambiguous instructions well. And our tasks are designed with the assumption that all the demand instructions are correct, otherwise it is difficult to design the correct solutions. We add the following experiments for illustrating the robustness of our method to noise. We add Gaussian noise to the RGB-D camera (add N(0,3) to RGB and N(0, 0.03m) to Depth). The experimental results show that our method does not suffer much performance degradation under noise, showing robustness to noise. Please see the attached PDF file in the Common Response for results. >Q9: Energy and Time Efficiency A9: As we said in A1, the model can be deployed on a consumer-grade graphics card. During our testing, inference with the RTX4090 resulted in an FPS of around 5 or so, which is acceptable for navigation (even in the real world). Compared to end-to-end baselines (VTN, DDN, ZSON), our method does increase the computational consumption, but it also brings large performance gains. Compared to FBE+LLM and MOPA+LLM, the computational consumption is almost the same, but our method performs better. ## References [1] Simple but Effective: CLIP Embeddings for Embodied AI [2] LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action [3] MOPA: Modular Object Navigation with PointGoal Agents --- Rebuttal Comment 1.1: Comment: Dear Reviewer 9oyu, As the discussion period is drawing to a close, we would like to kindly invite any further questions or concerns you might have. We are eager to address these and hope that our responses will alleviate any remaining concerns. We re-wrote a paper revision plan in common response in the hope that our revision plan will alleviate your concerns about the lack of clarity in our methods and the difficulty of reading it. Once again, we thank you for the time and effort you have devoted to reviewing our paper. We greatly cherish the opportunity to discuss with you. Sincerely, Authors of Submission 3733
Summary: The paper introduces a new task called Multi-Object Demand Driven Navigation where an agent is tasked with searching for multiple-objects that satisfy a specified demand instruction by a user. The demand instruction is a natural language instruction specified by a user which might directly or indirectly ask for a specific object. For example, a instruction “I’m thirsty” requires an agent to search for some object in the environment that can be a drink (softdrink or water) that can address the query from the user. In MO-DDN the authors extend the task of DDN to scenarios where an instruction might require the agent to search for multiple objects. In addition, the paper proposes a modular method that involves building a coarse to fine attribute based exploration agent by leveraging SLAM and multimodal models like CLIP. The key idea of the method is to finetune CLIP to map a demand instruction to features that capture relevant attributes (one or more) to satisfy the goal. For example, feature vector for instruction “Find me a comfortable spot to work on report” should contain relevant features for objects required for such a instruction which could be “desk, lounge chair, laptop/book”. To learn such feature space authors propose a attribute based finetuning scheme by leveraging synthetic data generated using GPT-4. Finally, the authors show that using this method enables learning effective MO-DDN agents Strengths: 1. The paper is well written and easy to follow. 2. The proposed task of formulating multi-object search as a demand driven navigation where the objects required for tasks are not necessarily explicitly specified is a interesting task at the intersection of common sense and embodied AI and is valuable to study. 3. The proposed method of distilling demand based attributes into CLIP feature space is intuitive, simple and effective based on results shown in table 1. Weaknesses: 1. The experiments section is missing one important baseline where we evaluate how good the pretrained CLIP features do zero-shot on this task without any attribute based finetuning. It is important to establish that baseline to show effectiveness of the proposed method that leverages CLIP finetuning for the task. I’d appreciate if authors can add those results. 2. The authors mention about flexibility of the approach which allows to switch from basic solution to preferred solution for a demand instruction easily. However the part about basic vs preferred solution and when is the agent expected to execute the preferred solution in unclear in the manuscript. It’d be good if authors can clarify if any of the basic/preferred solution is acceptable for each task in the dataset or there are specific instructions which require the agent to execute the preferred solution. If yes, how is that conveyed to the agent? Is that part of the instruction or the agent needs to implicitly infer it? 3. The details about fine exploration policy is unclear. Is the fine exploration policy only called for low-level navigation action execution when a object of specific attribute is detected on the map? Authors need to add more details in section 4.2.2. to clarify the role of this module Due missing experiment mentioned in 1 and clarifications needed for 2 & 3 I am currently giving the paper a borderline reject but I'd be happy to increase my rating if authors address my concerns. Technical Quality: 2 Clarity: 2 Questions for Authors: Mentioned in weaknesses section Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your time and effort in reviewing our paper! We are also very appreciative that you have recognized the writing, the task setting, and the method proposed. We hope the following clarification will ease your concerns, and hope to hear back from you if you have further questions! >Q1: The experiments section is missing one important baseline A1: Thanks for the valuable advice. We supplement this baseline experiment named **CLIP Exploration**. In CLIP Exploration, CLIP-Text-Encoder ViT-L/14 is used to obtain CLIP features for instructions and objects, and the sum of feature similarities is used to select the block in the coarse phase. Please see Common Response's attached PDF for the results of the experiment. The experimental results show that computing block scores with CLIP features is not as good as computing them with attribute features. >Q2: The authors mention about flexibility of the approach which allows to switch from basic solution to preferred solution for a demand instruction easily. A2: Thanks for pointing this out. In our task settings, both basic and preferred demands are informed to the robot at the beginning of the episodes. Then GPT-4 generates the basic and preferred attributes which are encoded into basic attribute features and preferred attribute features. These two kind of attribute features will be used to calculating the block scores in the equation (2) of the paper. The flexibility we mention is that when the robot actually serves a specific user, the user can tell the robot to prioritize basic or preferred demands by **easily and flexibly adjusting the two hyper parameters, $r_b$ and $r_p$ in the equation (2)** of the paper (please see Table 5 in the paper for the effects of the two hyper parameters). $r_b$ and $r_p$ can be set at the beginning of the episodes by users (of course they are adjustable during navigation, but we didn't try this in our experiments). The higher the value of $r_p$, the more the robot tends to look for preferred solutions. In Table 1, we set $r_b=r_p=1$ for our main experiment. However, it can be noticed from Table 1 and Table 5 that finding the preferred solution is more difficult compared to the basic solution; this flexible design allows users to adjust the probability of being satisfied according to their own situation. For example, a user who prefers to drink coke can increase $r_p$ to motivate the robot to find the preferred coke. However, when he is **very** thirsty, This user can adjust $r_p$ down and $r_b$ up , so that the robots can find basic solutions (e.g., water) with a higher success rate. Future work can consider letting the robot automatically select the appropriate $r_b$ and $r_p$ from the user's instruction content and intonation. For example, when the robot detects that the user is anxious by the user's intonation, the robot can automatically turn up $r_b$ and turn down $r_p$. Please see **Common Response 3 for Perference** for more detailed description. >Q3: Authors need to add more details in section 4.2.2. to clarify the role of this module. A3: Thank you very much for your advice. The fine exploration phase actually consists of an end-to-end navigation model. Once a target block is selected in the coarse exploration phase based on block scores, a habitat-sim built-in path planner generates a sequence of actions (such as $\mathrm{MoveAhead}$, $\mathrm{RotateRight}$ and $\mathrm{RotateLeft}$) for the robot to reach the selected target block (we make sure that the next action of the path planner is on the explored region). **When the robot reaches inside the selected target block, it enters the fine exploration phase.** The end-to-end model continuously generates actions based on the current RGB-D image. The actions are given to the robot to execute. When this end-to-end model outputs $\mathrm{Find}$, the habitat simulator records a list of objects in the RGB image, and then the robot returns to the coarse exploration phase to select the next block to explore. Such switches between the coarse and fine exploration phase do not stop until the $\mathrm{Find}$ count reaches the upper limit $c_{find}=5$ (we also discuss the limitations of doing so in Appendix A.5). Please see **Common Response 1 for Method** for more detailed description. --- Rebuttal Comment 1.1: Title: Replying to author rebuttal Comment: The authors have demonstrated that the proposed method of finetuning CLIP to capture task specific attributes indeed performs better than zero-shot evaluation results from base CLIP model which was the experiments I asked about. Authors have also clarified how the hyper parameters $r_b$ and $r_p$ are set as part of the task. They also provided additional details about fine exploration policy. There's still a bunch of details that are unclear in how coarse and fine exploration policies are being used or whether it is the right instantiation. I went through the original paper and added details in rebuttal and here are few things that are still unclear about exploration policies: 1. Coarse Exploration Policy: In this stage authors mentioned that a map is built using RGBD and pose information as the exploration policy navigates in an environment. One key detail missing/unclear from the description of this policy in the paper is: how is the exploration done in this phase? Is the policy using frontier exploration or there is a smarter heuristic that leverages the finetuned CLIP feature scores to select waypoints/frontiers? I would appreciate if authors add those details. Another concern I have is the coarse exploration policy uses the shortest path planner from habitat to navigate to frontiers/waypoints in coarse exploration phase. The path planner in habitat uses privileged information from simulator to plan shortest path. Have authors tried using a fast marching planner/A-star on the map they built to evaluate performance? I would appreciate if authors clearly mention in the description that they use a oracle path planner for navigation if the FMM/Astar planner on built map performs worse. 2. Fine Exploration Policy: The paper mentions that fine exploration policy is a end-to-end policy which is handed control after the coarse exploration policy navigates to a target block. The details regarding training of this module are quite fuzzy throughout the paper and appendix. It is unclear what is the horizon of steps the fine exploration policy operates on. Authors need to add details of the dataset used for training this policy, how it was collected, how long was the policy trained for, what are some statistics of dataset used for training this module (like avg. steps, etc). Another big concern this brings up is: why is the fine exploration policy a end-to-end policy? As the authors are already using object detection + mapping for coarse navigation phase wouldn't the agent always end up registering all relevant objects in the map during exploration that you can leverage and navigate to using shortest path planner with information from the map? --- Reply to Comment 1.1.1: Title: Look Forward to Your Reply! Comment: We look forward to receiving your valuable and inspiring feedback! --- Rebuttal 2: Title: Further Responses (Part 1/2) Comment: We are very happy to receive your inspiring feedback! We hope the following clarification will ease your concerns, and hope to hear back from you if you have further questions! >Q1: how is the exploration done in this phase? A1: In the coarse exploration phase, the goal of the robot is to build the scene's point cloud map and select a small area (called a block in the paper) that needs to be finely explored. We do not have an explicit exploration policy for efficiently building a point cloud of the scene, but rather we integrate this process in the selection of the target block. **Concretely, we build the scene point cloud well by setting a simple rule: choose blocks that have never been visited before.** When selecting blocks, we prioritize the never visited blocks with high scores (the scores are calculated as described in equation (2) below). > $s=\sum_{o \in block} (r_{p}\times \max_{i=1..k_2; j=1..k_1} f_{o}^{i}*f_{pref ins}^{j} + r_{b}\times \max_{i=1..k_2; j=1..k_1} f_{o}^{i}*f_{basic ins}^{j})$----(2) >Where $f_{o}^{i}$ denote $i^{th}$ attribute features of the object $o$ in the block, $f_{basic ins}^{j}$ denote $j^{th}$ basic attribute features of the instruction (so do $f_{pref ins}^{j}$), $*$ denotes cosine similarity, and $r_{b}$ and $r_{p}$ are adjustable weights for whether to find basic or preferred solutions. In the above equation (2), the score of the block is determined by the cosine similarity between the attribute features of the instruction and the attribute features of the objects within the block, i.e., **as you mentioned, we utilized the finetuned CLIP feature scores.** And again, because each selection is the highest scoring of the available blocks, this means that there is a high probability of demand objects appearing, so the fine exploration phase is also more likely to find the goal. We argue that this method balances exploration and exploitation, and our ablation experiments in Table 2 demonstrate that such an exploration approach that utilizes the finetuned CLIP features is superior to frontier based exploration (FBE). >Q2: Have authors tried using a fast marching planner/A-star on the map they built to evaluate performance? A2: Thank you very much for pointing this out. In fact, this was a concern when we chose to use the build-in planner. We would worry that the built-in planner would use points from some unknown areas for planning paths. But we found that with our processing, we can ensure that in the vast majority of cases (>98.7%), the build-in planner will not use privileged information from simulator. Since the built-in planner would only output the next action at a timestep (e.g. $\mathrm{MoveAhead}$, $\mathrm{RotateRight}$, $\mathrm{RotateLeft}$), we impose some restrictions on it planning, such as detecting that the next step can only be chosen at a location that has already been recorded in the point cloud. We then compared the built-in planner with our own written planner (see the bottom of this QA) and found to be almost identical in planning the next step (over 98.7% of the actions are identical in our pre-test about the planner). Considering the low planning speed of our own planner and the fact that the built-in planner only uses privileged information from the simulator **in a very small number of cases** when planning, the trade-off between efficiency and reality is that we ended up choosing the built-in planner. For a fair comparison, we also use the built-in planner for other baseline modular methods such as MOPA and FBE. We will also be more explicit in the paper about the usage of the built-in planner. >Here we briefly describe how our planner works. (1) Notate the points that are obstacle-free above in the point cloud map as navigable points; (2) Compress the navigable points into two dimensions xy and build an undirected graph according to the robot's action space; (3) Find the shortest paths on the undirected graph using BFS. --- Rebuttal 3: Title: Further Responses (Part 2/2) Comment: >Q3: The details regarding training of this module are quite fuzzy throughout the paper and appendix. A3: Thank you so much for pointing this out. We describe in Appendix A.3.2 the model architecture, how the trajectories were collected. We collected about 50,000 trajectories to train the fine exploration module under seen tasks and seen scenes. These trajectories were collected according to the following steps: 1) randomly select a scene and a task, 2) initialize agent within two meters (\emph{i.e.}, block size in coarse exploration) of a target object, 3) use habitat-sim's built-in planner to get the next step, and let the agent to execute it; 4) when the distance to the target object is less than 0.2 meters, turn left/right or look up/down according to the height and position of the object, 5) when the target object is in the field of view, execute $\mathrm{Find}$ and close this trajectory. The average length of these trajectories is 9.19. The standard deviation of the trajectory length is 5.62. The median trajectory length is 8. The plurality of trajectory lengths is 3. TThe maximum trajectory length is 51. The minimum trajectory length is 2. We trained the model on a single RTX 4090 using imitation learning and cross-entropy loss, i.e., considering the action prediction as a classification task, consuming about 12h. We will also add the above details about the fine exploration phase in A.3.2. >Q4: why is the fine exploration policy a end-to-end policy? As the authors are already using object detection + mapping for coarse navigation phase wouldn't the agent always end up registering all relevant objects in the map during exploration that you can leverage and navigate to using shortest path planner with information from the map? A4: We design the end-to-end policy for the fine exploration phase for two reasons below. **Reason one:** It's OK to use information about objects recorded in the point cloud and plan a shortest path to reach the objects, but there are a couple of drawbacks. (1) Demand objects may not be visible or recorded in the point cloud at the coarse exploration phase. (2) In demand-driven navigation, the target objects may be small and complicated to place, such as a book, a small jar, or a pen, so **it is difficult and inefficient to design a hard rule for localizing the target objects and recording them into the point cloud when the robot reaches the selected block.** To demonstrate this, we design a simple hard rule for replacing the fine exploration phase and do a quick experiment: (1) after reaching the selected block in the coarse exploration phase, the robot rotates in a circle and records the observed objects using the object detection model; (2) the recorded objects are told to the LLM, which determines whether or not there exists an object that satisfies the demand; (3) if it exists, the robot plans to go next to this object. Quick experimental results on seen tasks and seen scenes show that such a hard rule only yields $SR_b$ 15.20, $SPL_b$ 3.42, $SR_p$ 9.83 and $SPL_p$ 3.16. We observe that the in-place rotation of the robot is incomplete for the observation of objects inside the block due to some occlusions. Thank you very much for the reminder, we will also complete this quick experiment as an ablation experiment for Table 3 to demonstrate the need for fine exploration. **Reason two:** Since we consider hard rules to be difficult (as also evidenced by the quick experiments above), we try to learn an exploration policy. Attribute-feature based end-to-end policies have been shown to be effective in DDN, so we also train an attribute-based end-to-end policy in the fine exploration phase. The difference is that we cut down the transformer decoder in the DDN and use only the transformer encoder as a feature extractor to reduce graphics memory consumption and speed up inference and use new attribute features as input. --- Rebuttal Comment 3.1: Title: Reply to authors rebuttal Comment: Thanks to the authors for addressing my queries. I have another follow up question on coarse exploration policy. Authors mentioned >A1: In the coarse exploration phase, the goal of the robot is to build the scene's point cloud map and select a small area (called a block in the paper) that needs to be finely explored. We do not have an explicit exploration policy for efficiently building a point cloud of the scene, but rather we integrate this process in the selection of the target block. Concretely, we build the scene point cloud well by setting a simple rule: choose blocks that have never been visited before. When selecting blocks, we prioritize the never visited blocks with high scores (the scores are calculated as described in equation (2) below). Does this mean that the policy always knows how many "blocks" are in the environment i.e. are the authors assuming that agent already has access to 1 tour of the house where the "blocks" are constructed first? As for my other questions, the response from authors have addressed my major concerns. I'd appreciate if the authors can add these details in the paper and make it explicit in writing so that it's easier for the reader to understand these details. --- Rebuttal 4: Title: Reply for The Block Comment: We appreciate your timely and valuable feedback! >Q1: Does this mean that the policy always knows how many "blocks" are in the environment i.e. are the authors assuming that agent already has access to 1 tour of the house where the "blocks" are constructed first? A1: The robot doesn't know how many blocks are in the scene at first. These blocks are segmented from the point cloud that has been built so far, based on the xy coordinates of the points. As the robot moves within the scene, the scene's point cloud continues to add newly explored points based on the current RGB-D input and then the number of blocks increases. The choice of blocks in the coarse exploration phase is also limited to those that have already been discovered. Therefore, the robot does not have access to a tour of the house to build the point cloud. We visualize some runtime point cloud and block segmentation examples in Appendix A.3.1. As can be seen from the examples, the point clouds of the scene are incomplete and the blocks being segmented are limited to point clouds that have already been explored. Thank you for pointing these out, and we'll add those details to the paper. Your valuable and constructive suggestions have nicely enhanced the readability and completeness of our paper. --- Rebuttal Comment 4.1: Title: Reply to rebuttal Comment: Thanks for clarifying the details. My concerns have been addressed so I have increased my rating. --- Reply to Comment 4.1.1: Comment: We are very happy to hear that your concerns have been addressed. We also appreciate you raising your rating.
Rebuttal 1: Rebuttal: # Common Response We are very grateful to all the reviewers and AC for their time and effort. We highly thank the reviewers for their appreciation of our writing, benchmark, methods, and experiments. "The paper is well written and easy to follow.""The proposed task .... is a interesting task at the intersection of common sense and embodied AI and is valuable to study."(XjVA) "The paper presents an innovative coarse-to-fine exploration strategy that effectively combines the benefits of modular and end-to-end methods, optimizing both efficiency and performance"(9oyu) "The paper has good motivation for the MO-DDN task." "Coarse-to-fine exploration is interesting and seems to be a good solution to the DDN task."(xf5s) "The ablation section is very clear and well written" "The paper does achieve SOTA performance on many of the tasks relative" (AuxK) **(Common Response 1 for Method)** However, we notice that some reviewers (XjVA, xf5s, AuxK) have some confusion about our method and the figures. We next summarize our method section at a high level. In Secion 4.1, We train two MLPs (Ins MLP Encoder and Obj MLP Encoder in Fig.2) for mapping **the CLIP features of instructions and objects**, to **the CLIP features of their corresponding attributes** (referred to as attribute features). The purpose of Attribute Loss is to allow the MLPs to correctly map CLIP features to ground truth attribute features. The purpose of the other four losses is to align the attribute features of instructions and objects, guiding them to be in the same feature space. In Section 4.2, We describe the coarse-to-fine exploration phase. Specifically, the robot label the object categories in the RGB and builds a scene point cloud as it walks. The point cloud will then be segmented into a number of squares, called Blocks, based on the xy coordinates of the points. At the beginning of the episodes, the robot asks the GPT-4 about the task instructions to get the basic and preferred attributes of the instructions, which will then be encoded into the basic attribute features and preferred attribute features for the next computation of block scores. The robot calculates the score of blocks using equation (2) and chooses a highest-scoring and never-visited block as the target block. We use habitat-sim's built-in path planner to generate a series of action sequences that navigate the robot to the target block (we make sure that the next action of the path planner is in the explored area). **Please see our video in the supplementary material or appendix A.3.1 for the Block Visualizations.** Once the robot reaches the target block, it enters the fine exploration phase, where the robot invokes an end-to-end navigation model (Section 4.2.2) to generate a sequence of actions based on the current RGB-D. When the robot performs a Find action in the fine exploration phase, the objects in the current RGB are recorded by the habitat simulator. The robot then switches to the coarse exploration phase to plan the next target block. The switches between these two phases do not stop until the number of Find action executions reaches the upper limit $c_{find}=5$, followed by the execution of Done and the end of the current episode. **We draw a new figure in the attached PDF to describe the switches between the coarse and fine exploration phase.** **(Common Response 2 for Experiment)** We also note that the reviewers (XjVA, 9oyu, xf5s, AuxK) make valuable suggestions about our experiments. We add the following four experiments: (1) **DDN (Preferred Trajectories)**: training the DDN using trajectories with the preferred solutions, (2) **CLIP Exploration**: computing block scores using the CLIP features rather than attribute features, (3) **Ours (with Noise)**: adding Gaussian noise to the RGB-D camera (add N(0,3) to RGB and N(0, 0.03m) to Depth), and (4) **Ours (ViT-H-14 Encoder)**: replacing the OpenAI's CLIP ViT-L-14 model with the OpenCLIP's ViT-H-14 model in the Method Section in the LLM branch. See attached PDF for the results. **(Common Response 3 for Perference)** Some reviewers (XjVA, xf5s) have questions about the way we address personal preferences. We apologize for not explaining clearly the roles of the two hyper parameters $r_b$ and $r_p$. In our benchmark, we propose that personal preferences affect the behavior of the robot. In our method, we use two hyper parameters $r_b$ and $r_p$ to adjust the prioritization of the basic and the preferred solutions in the equation (2). The two hyper parameters will be set at the beginning of episodes. In Table 1, we set $r_b=r_p=1$ for our main experiment. In Table 5, we study the effect of these two hyper parameters on robot behavior. We make a visualization example for illustrating how $r_b$ and $r_p$ affect decision making in the attached PDF. From Table 1 and 5 we can note that, in general, the success rate of satisfying the basic demands is higher than the success rate of satisfying the preferred demands. When deployed in a real environment, the two hyper parameters can be flexibly adjusted by users, allowing users to decide whether to prioritize basic or preferred demands at the moment. For example, a user who prefers to drink coke can increase $r_p$ to motivate the robot to find the preferred solution. When he is **very** thirsty, This user can adjust $r_p$ down and $r_b$ up , so that the robots can find basic solutions (e.g., water) with a higher success rate. Future work can consider letting the robot automatically select the suitable $r_b$ and $r_p$ from users' instruction and intonation. # Revision Plan: 1. We will add a description of the switches between the coarse and fine exploration phases and an associated pipeline figure in section 4.2. We will explain in more detail how preferences are addressed in our method. 2. We will again double-check the paper for grammar and typos. 3. In the experiment section, we will add our supplementary ablation and baseline experiments. Pdf: /pdf/5ede166ecda4920ca951acb63d9cc7b092f30d07.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations
Accept (poster)
Summary: The paper proposes a general framework for accelerating first-order optimization methods. The framework leverages parallel computing by estimating the gradients using kernel methods and breaking iterative dependencies. The paper establishes theoretical guarantees that the method gives an acceleration rate of $\sqrt{N}$ for $N$ parallelism. Then the paper does extensive empirical studies to show the improvements of this framework. Strengths: 1. Very clearly written. It's immediately clear from the abstract and introduction what kind of work the authors are doing and in the main body the details are given in a very pleasant manner with beautiful plots and formula. The whole paper is well organized with theory and practice covered in style. 2. Novelty. I'm quickly convinced that the paper's contribution is novel. The related work session well summarizes previous work and it's clear to me that the paper investigates a direction that is distinctively different and important. 3. A complete story. The paper studies both theoretically and empirically the aspect, with both results agreeing with each other to a certain extent, forming a complete story. Weaknesses: The theoretical guarantees rely on assumptions that might not work for $N$ very large, thus limit the extent to which the framework can accelerate optimization. In the empirical part, $N$ are not very large, thus unable to show that the framework can accelerate things to great extent in general. Technical Quality: 4 Clarity: 4 Questions for Authors: Suppose that the step size is very small, can the $N$ potentially be very large? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to Reviewer YWBT for the positive and constructive feedback! We appreciate that the reviewer highly recognizes that our paper is **very well written**, its contribution is **novel**, and our paper forms a **complete story with both theoretical and empirical supports agreeing with each other**. We would like to address your comments as follows. --- > The theoretical guarantees rely on assumptions that might not work for $N$ very large, thus limit the extent to which the framework can accelerate optimization. > In the empirical part, $N$ are not very large, thus unable to show that the framework can accelerate things to great extent in general. > Suppose that the step size is very small, can the $N$ potentially be very large? Thank you for your insightful comments. We acknowledge that when a relatively large learning rate $\eta$ is applied, the parallelism $N$ of our OptEx framework should not be excessively large to ensure fast convergence to a stationary point, as indicated in line 550 and Equation (45). In this scenario, the error of our kernelized gradient estimation, which is related to $\rho$ that requires a smaller $N$ to achieve a smaller value, will dominate the convergence of OptEx (refer to the second term on the RHS in Equation 45). However, when the learning rate (i.e., the step size) is very small, $N$ can indeed be potentially very large. In this different scenario, $\frac{2\Delta}{\eta NT}$ (refer to the first term on the RHS in Equation 45) will dominate the convergence of OptEx. Consequently, we can choose a very large $N$ to enjoy a small value of $\frac{2\Delta}{\eta NT}$ and thus achieve fast convergence of OptEx according to Equation 45. We support this with additional experimental results following the same setup as in Section 6.1. Using Adam with $\eta=0.001$, $T_0=50$, and $N=100$ on Ackley function ($d=10^5$), we observe that when $\eta$ is small (0.001 vs. 0.1), a large parallelism $N$ (100 vs. 5) can be applied, and our OptEx remains enjoying a speedup of approximately $\sqrt{N}$. | | $T=50$ | $T=100$ | $T=150$ | $T=200$ | |---------|--------|---------|---------|---------| | Vanilla | 3.0844 | 2.1665 | 1.2413 | **0.5220** | | | $T=5$ | $T=10$ | $T=15$ | $T=20$ | | OptEx | 0.7578 | 0.7016 | **0.4696** | 0.2383 | --- Thanks for your insightful questions. We will incorporate these discussions above into our revised version. We hope our clarification will address your concerns and improve your opinion of our work. --- Rebuttal Comment 1.1: Title: Very Good Rebuttal Comment: Thanks the authors for their well-written rebuttal. The response on my confusion about N is very clear. Thanks a lot. --- Rebuttal 2: Comment: Dear Reviewer YWBT, Thank you so much for your prompt and positive response after our rebuttal! We are so happy to hear that your concerns have been well addressed. We will incorporate the discussions above into our revised version. Sincerely, Authors
Summary: The paper "OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations" introduces a novel framework, OptEx, designed to improve the efficiency of first-order optimization (FOO) algorithms by approximately parallelizing their iterations. Strengths: 1. The introduction of a general framework for parallelizing iterations in FOO is novel and addresses a significant inefficiency in traditional optimization methods. 2. The paper provides robust theoretical guarantees for the performance of the OptEx framework, including bounds on estimation error and iteration complexity. 3. The extensive experiments across various domains (synthetic functions, reinforcement learning, and neural network training) demonstrate the practical applicability and efficiency gains of OptEx. Weaknesses: 1. The kernelized gradient estimation and the associated computational techniques are complex, which might pose challenges for practical implementation and understanding by a broader audience. 2. The efficiency and scalability of OptEx in very large-scale optimization problems or in highly dynamic environments might be limited 3. The theoretical guarantees rely on certain assumptions (e.g., the Gaussian distribution of gradients), which may not hold in all practical scenarios, potentially limiting the generality of the results. Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 1m6n for recognizing that our OptEx framework is **novel**, it addresses a **significant inefficiency** in traditional optimization methods, and it has **robust theoretical guarantees** and **extensive empirical results** to support its **practical applicability** and **efficiency gains**. We would like to address your concerns below. --- > The kernelized gradient estimation and the associated computational techniques are complex, which might pose challenges for practical implementation and understanding by a broader audience. Thank you for your feedback. We would like to emphasize that our kernelized gradient estimation and the associated computational techniques are, in fact, quite simple and easy to understand. They involve computations similar to those found in kernel regression (as detailed in Proposition 4.1). Furthermore, our implementation of these techniques is concise, requiring only 16 lines of code (excluding blank lines), which we have provided in our supplementary materials. By following our implementations, we believe that the audience should be able to deploy our method to their problems without significant challenges. > The efficiency and scalability of OptEx in very large-scale optimization problems or in highly dynamic environments might be limited. Thank you for raising this point. The efficiency and scalability of OptEx in **relatively large-scale** optimization problems have been well demonstrated by our results on neural networks with millions of parameters in Section 6.3. These results adequately highlight the potential of the OptEx framework in accelerating first-order optimization by approximately parallelizing its iterations. Regarding the **very large-scale** optimization problems and **highly dynamic environments** you mentioned, we would appreciate it if you could provide specific examples for us to explore in future research since different practical scenarios often require specialized adaptations of our OptEx framework to achieve a compelling performance. Overall, we find these directions quite interesting and worthwhile to investigate further. Thank you for bringing them to our attention! > The theoretical guarantees rely on certain assumptions (e.g., the Gaussian distribution of gradients), which may not hold in all practical scenarios, potentially limiting the generality of the results. Thank you for your insightful comment. If you are referring to our Assumption 1, we would like to clarify that our assumption pertains to the **gradient noise** rather than the **gradient itself** following a Gaussian distribution when dealing with the stochastic optimization problem in Equation 1. This assumption has been widely recognized and applied in the literature [18, 19, 20]. However, if you are referring to our Assumption 2, we agree with you that Assumption 2 (i.e., $\nabla F$ is sampled from a Gaussian prior $GP(0, k(\cdot,\cdot))$) may not hold in all practical scenarios given a predefined kernel $k$, as $k$ may not be the ground truth. Nonetheless, our extensive experiments on synthetic functions, reinforcement learning tasks, and neural network training, as presented in Section 6, have demonstrated that our OptEx framework is in fact quite robust with a predefined kernel $k$ in practice. These experiments show that OptEx can achieve reasonably good performance across various real-world problems, even when Assumption 2 may not necessarily hold. We believe these empirical results support the generality of OptEx. If you have any further concerns regarding the generality, we would be happy to address them. --- We thank the reviewer for the valuable input and hope our answers can increase your opinions of our work. --- Rebuttal 2: Title: response to author Comment: Thanks for the detailed response from the author, my concern has already been addressed. --- Rebuttal Comment 2.1: Comment: Dear Reviewer 1m6n, Thank you so much for your positive response after our rebuttal! We are so happy to hear that your concerns have been well addressed. We will incorporate all the discussions above into our revised version. Sincerely, Authors
Summary: This paper presents OptEx, an approach to parallelize optimization methods by using gradients from previous iterations to predict gradient for subsequent iterations which in turn breaks the serial nature of standard stochastic optimization thereby enabling approximately parallel iterations. The gradient prediction is done using Kernel methods. The paper also presents experimental evidence towards the effectiveness of the proposed approach. Strengths: This appears to be an interesting contribution, albeit lacking clarity in terms of writing. Weaknesses: -- The question of gradient estimation converging to the true gradient seems fairly far-fetched, and there is every reason to believe modeling gradients would essentially carry a constant bias without very strong regularity and realizability conditions. -- Furthermore, when applied to problems in RL, this becomes even more confounding because of the necessity to explore and using data collected with exploration to in turn construct gradient estimates. I am not sure if I follow how this can be achieved within this framework. -- The paper's experimental comparison doesn't offer any sources of comparisons against the many different approaches to parallelization that have already been studied in the literature, e.g. mini-batching, model averaging etc. Technical Quality: 2 Clarity: 2 Questions for Authors: Can the authors clarify the speedup offered by utilizing their framework while using acceleration/momentum based methods? Standard mini-batch methods offer different trade-offs particularly wrt batch sizes used while still achieving linear parallelization speedups [1]. [1] Cotter et al: Better Mini-Batch Algorithms via Accelerated Gradient Methods. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer ieTR for taking the time to review our paper and appreciate the reviewer's feedback. We would like to provide the following response to address the concerns. --- > The question of gradient estimation converging to the true gradient seems fairly far-fetched, and there is every reason to believe modeling gradients would essentially carry a constant bias without very strong regularity and realizability conditions. Thank you for your insightful comment. We would like to clarify that Theorem 1 and Corollary 1 theoretically show that our kernelized gradient estimation **asymptotically** converges to the true gradient with respect to the number $T_0$ of gradient history, under our assumptions in Section 5. **This means the estimation error diminishes to nearly zero (not a constant bias) only when $T_0$ is sufficiently large.** If $T_0$ is large enough, thanks to the continuity of gradients, we typically is able to accurately approximate the gradient at any point in a local region. This also aligns with results in Bayesian optimization literature, where function estimation converges to the true function asymptotically without a constant bias, given enough function queries, as evidenced in [32]. We acknowledge that in practice, obtaining a sufficiently large $T_0$ can be challenging, leading to biased gradient estimation. However, this biased estimation generally provides useful update directions, resulting in the acceleration rate $\Theta(\sqrt{N})$ that is achievable by our OptEx framework, as verified by our empirical results in Section 6. We hope this clarification addresses your concerns and demonstrates the robustness and potential of our approach. > Furthermore, when applied to problems in RL, this becomes even more confounding because of the necessity to explore and using data collected with exploration to in turn construct gradient estimates. I am not sure if I follow how this can be achieved within this framework. As mentioned in our Appendix B.2.2, our OptEx framework is built on the Deep Q-Network (DQN) algorithm for our RL experiments. In this context, we treat RL as a standard first-order optimization problem, where OptEx aims to accelerate the sequential gradient-based optimization of DQN using parallel computing. The exploration component, which is inherent in the DQN algorithm, does not require additional consideration within the OptEx framework. The exploration strategy used (i.e., the ε-greedy policy) inherently will primarily introduce noisier gradients (i.e., a larger $\sigma^2$), leading to more biased gradient estimates in our OptEx. Interestingly, our empirical results show that our OptEx is still able to achieve enhanced convergence even in the presence of this increased noise, demonstrating the effectiveness and robustness of our OptEx framework. > The paper's experimental comparison doesn't offer any sources of comparisons against the many different approaches to parallelization that have already been studied in the literature, e.g. mini-batching, model averaging etc. Our OptEx framework is in fact a complementary (or orthogonal) approach to the existing parallelization methods, aiming to speedup the first-order optimization in a more general way especially when the other parallelization methods are **not applicable** or **underperforming** rather than replacing them within the scenarios where they already work well (refer to our Sec. 2). For example, mini-batch and model averaging methods are typically used within the context of machine learning and may not be applicable for speeding up other optimization problems where batch sizes and model averaging are not possible, such as the synthetic function in Section 6.1. In contrast, our OptEx framework can provide a compelling speedup in these cases, as evidenced by the results in our Section 6.1. Even within the context of machine learning in our Sec. 6.3, when batch size is already sufficiently large, data parallelism (i.e., mini-batching and model averaging) no long can provide noticeable speedup to the optimization whereas our OptEx can make its novel contributions, as evidenced by the results in our Sec. 6.3 where a large batch size of 256/512 is applied. Overall, since OptEx and other parallelization methods are targeting for different scenarios, it is typically hard to make a fair experimental comparison among these methods. > Can the authors clarify the speedup offered by utilizing their framework while using acceleration/momentum based methods? Standard mini-batch methods offer different trade-offs particularly wrt batch sizes used while still achieving linear parallelization speedups [1]. Thank you for your question. Our OptEx framework is designed to complement existing methods, including acceleration/momentum-based methods (refer to our Sec. 2). When combined with these methods, OptEx is still able to provide additional speedup by parallelizing iterations. For example, in our experiments on synthetic functions (Section 6.1), we used Adam as the baseline optimizer. Our OptEx framework further improved Adam's performance, achieving an acceleration of approximately $\sqrt{N}$ in practice. This demonstrates the superior effectiveness and wide applicability of OptEx when combined with other optimization techniques. We will make this clearer in our revised paper. --- We hope this clarification addresses your concerns and can increase your opinions of our work. We are happy to provide more clarifications. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ieTR, Thank you so much for taking the time to review our paper and for your insightful questions. While we have thoroughly addressed the concerns and questions raised by all the other reviewers, we sincerely hope that our clarifications and discussions above have effectively addressed your concerns as well. If you have any more questions or need more details, we are happy to answer them promptly within the discussion period. Best, Authors
Summary: This paper introduces a new approach for parallelizing stochastic gradient descent for unconstrained, nonconvex, smooth stochastic optimization. The approach is based on building a Gaussian Process surrogate for the true gradient (based on a history of observed stochastic gradients), and uses this surrogate to suggest potential 'future' iterates, by iterating gradient descent using the surrogate. Iterates are then adjusted based on new stochastic gradient estimates, computed in parallel. They present worst-case complexity theory that says that being able to compute $N$ stochastic gradients in parallel improves the iteration complexity by a factor of $\sqrt{N}$, with numerical results on a selection of synthetic problems, reinforcement learning problems, and neural network training problems. Strengths: 1. I think the overall approach is an interesting contribution. I particularly like the discussion about the quality of the GP surrogate for the true function and think these results are interesting (Section 5.1). I think this could have useful links with optimization of genuinely complex functions, similar to those typically considered in Bayesian and zeroth order optimization. 2. The numerical results show some promise for the method, and do indeed show promise for a framework for using parallel stochastic gradient evaluations, where this is practical. 3. The paper is clearly written and easy to follow. Weaknesses: 1. There are gaps in the literature survey of parallelized SGD methods. For example, there is no mention of the highly cited and relevant papers: - Recht et al. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. NeurIPS 2011 - Mnih et al. Asynchronous Methods for Deep Reinforcement Learning. ICML 2016 - Yu et al. Parallel Restarted SGD with Faster Convergence and Less Communication. AAAI, 2019 2. It is unclear to me whether the convergence theory (Theorem 2) actually supports the assertion of a $\sqrt{N}$ speedup from $N$ processes. The RHS gradient bound indeed is $O(1/\sqrt{N})$, but you are measuring the smallest gradient over $O(N)$ outer iterations. So, doesn't this say that you decrease your gradient by a factor of $O(1/\sqrt{N})$ after doing $N$ times extra work, which is exactly the convergence rate of SGD under your sub-Gaussian stochastic gradient assumption [33]? The work would also benefit from a direct comparison with just sample averaging of stochastic gradients (see question below). 3. It appears to me that Theorem 3 appears to contradict standard theory (e.g. Bottou, Curtis & Nocedal. Optimization Methods for Large-Scale Machine Learning. SIAM Review, 2018). You claim that the complexity bound for (nonconvex) SGD is tight when applied to the spherical function eq (47), which is strongly convex. However (stochastic) gradient descent with properly chosen stepsizes should converge linearly for this problem, not sublinearly as claimed. Can you explain this potential mismatch in results? 4. For very large problems, it is unrealistic that parallelism will be available, since storing the model and training data may already require parallelism (i.e. a single stochastic gradient evaluation is already computed in parallel). It would be helpful to include a comment about the specific regimes (dimension, dataset size, level of parallelism) where this method is intended to be used. Technical Quality: 3 Clarity: 3 Questions for Authors: How is the next iterate of OptEx defined? Line 10 of Algorithm 1 says it is the final iterate that is chosen, but Figure 1 suggests that the next iterate is somehow taken from a combination of the $\theta_t^1, \ldots, \theta_t^N$ values. Can you please clarify? The latter interpretation would make more sense, otherwise all but one of the parallel evaluations are discarded immediately. Is the $O(1/\sqrt{N})$ speedup exactly the same as what you would get by just doing basic sample averaging? That is, at each iteration, just get $N$ different stochastic gradient estimates at the same point and average them (reducing the variance of the estimates). How does your bound compare with this simple use of parallelism? You say that "$\nabla F$ is assumed to be sampled from the Gaussian Process, $\nabla F \sim GP(0, K(\cdot,\cdot))$". Can you please clarify? I read this to mean that this is the assumed prior of your GP surrogate for grad F, but as written it suggests that your actual objective function has - on average - zero gradient, meaning we would expect the 'average point' to be almost stationary? You say the "Target baseline is equivalent to Algo. 1 with [GP mean] being replaced with [stochastic gradient], indicating the desired parallelized iteration we aim to approximate". Does the 'target' implementation also have the parallel step? If so, we would expect that to give an unfair advantage to 'target', since it does $2N$ stochastic gradient evaluations per outer iteration rather than $N$ for OptEx. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations appear to have been addressed. The theoretical assumptions (e.g. smoothness) are clearly articulated and some practical limitations are addressed in Section 7. No potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer RYzW for recognizing the **interesting contribution, promise, and clarity** of our OptEx framework. We address your concerns below. --- ## Responses to Weaknesses 1. Thank you for pointing out this oversight. We appreciate you highlighting these relevant papers on data parallelism. We have referenced some of these works (e.g., [8]) in Section 2 to contrast iteration parallelization with data parallelism. We will include the additional related works you mentioned in our revised paper for a more comprehensive literature survey. 2. Thank you for your insightful comment. Measuring the smallest gradient over $N$ parallel processes is indeed **meaningful and necessary** because our OptEx framework aims to parallelize the $NT$ sequential iterations in SGD. To understand the speedup of OptEx, we follow the common practice in standard SGD to measure the smallest gradients over $NT$ updates (including $N$ parallel processes and $T$ sequential iterations) for a **clear and fair comparison** with standard SGD. Measuring the smallest gradient over $N$ does not decrease the gradient by a factor of $1/\sqrt{N}$. Instead, OptEx works by **decreasing the number of sequential iterations by a factor of $1/N$ with parallelism $N$** (refer to the first term on the RHS of Equation 45). 3. Thank you for your thoughtful comment. We would like to clarify that **stochastic** gradient descent (SGD) indeed enjoys only a **sublinear** lower and upper bound (i.e., $\Theta(1/T)$ for strongly convex functions, as presented in [R1], where the convergence is measured by $E[\left\|x_t - x^*\right\|^2]$. In our paper, the gap between $\Theta(1/T)$ in [R1] and $\Theta(1/\sqrt{T})$ arises because we measure the convergence of OptEx using $\left\| \nabla F(x_t) \right\|^2$, in line with the measurement applied in our Theorem 2 for non-convex functions $F$. When using this measurement for convergence, the $\Theta(1/\sqrt{T})$ result in our paper **aligns perfectly** with the existing results for SGD as presented in [34]. [R1] Nguyen, Phuong_Ha, Lam Nguyen, and Marten van Dijk. "Tight dimension independent lower bound on the expected convergence rate for diminishing step sizes in SGD." Advances in Neural Information Processing Systems 32 (2019). 4. Thank you for the constructive suggestion. OptEx is designed to **complement**, **not replace**, existing parallel approaches (data, model, pipeline) for additional speedup when sufficient computing resources are available (Section 2). In resource-limited scenarios, simpler approaches should be prioritized due to OptEx's complexity. OptEx is most effective when our kernelized gradient estimation is accurate. Empirical results in Section 6.3 show OptEx performs well in **moderate-scale** optimization problems ($d \approx 10^6$, $N \leq 10$), achieving significant improvements. We will add these discussions to our revised paper to help practitioners assess OptEx's suitability. ## Responses to Questions 1. Thank you for pointing out this potential confusion. In our main paper, we choose the next iterate $\theta_t$ of OptEx to be $\theta_t^{(N)}$, where all the gradient evaluations ( $\{\theta_t^{(i)}\}_{i=1}^N$) are **not discarded**. These evaluations are used to construct our kernelized gradient estimation for the next iterate (Section 4.3), aiming to reducing estimation error (Theorem 1) and improve convergence (Appendix B.3). Figure 1 aims to suggest that all $N$ processes are **necessary** in each iterate of OptEx to ensure its effective performance. We will add these discussions to our revised paper to make this clearer. 2. The $O(1/\sqrt{N})$ speedup achieved by OptEx matches that of basic sample averaging for stochastic optimization with noisy gradients. However, the speedup from OptEx comes from **reduced sequential iterations** (first term on the RHS in Equation 45), while sample averaging derives from **reduced gradient variance** (second term on the RHS in Equation 45). When gradient noise is already small or in deterministic optimization (Section 6.1), data parallelism may not provide noticeable speedup, but OptEx can still contribute significantly. Overall, OptEx works in a complementary direction to existing parallelization methods, including sample averaging, to speed up first-order optimization, especially when other methods are not applicable or underperforming (Section 2). 3. Thank you for your question. Similar to the common practice in the Bayesian optimization literature [21], $\nabla F \sim GP(0, k(\cdot, \cdot))$ means that $\nabla F$ can be regarded as **being sampled from** a GP prior $GP(0, k(\cdot, \cdot))$ (refer to line 213), which is only a prior for $\nabla F$ and can not infer $\nabla F=0$ as $\nabla F$ can be **any** function within this prior. Given an increasing number of gradient histories, we will gain a better understanding of $\nabla F$, i.e., the GP posterior mean can gradually better approximate $\nabla F$, as evidenced by our Thm.1. We will add these discussions to our revised paper to clarify this point. 4. Thank you for your question. The 'target' implementation does not have the parallel step due to the inherent iterative dependency in first-order optimization. We treat 'target' as an **ideal, yet impractical**, iteration parallelization (refer to Appendix B.1). This baseline highlights the gap between the ideal and our OptEx, showing that OptEx achieves an acceleration of $\sqrt{N}$ instead of $N$. We will add these discussions to our revised paper for clarity. --- With our elaboration and clarification, we hope our response has addressed your concerns and increased your opinions of our work. We are happy to provide more clarifications if needed. --- Rebuttal Comment 1.1: Comment: Thank you for these clarifications, they are very helpful. Pending seeing a new version of the manuscript, I think the responses given for all points above would be very helpful. I have one remaining concern (question Q2) and one point for clarification (weakness Q2) that I still think warrant more consideration. **Questions #2** I understand that you would like to potentially use OptEx in the exact gradient case ($\sigma=0$, e.g. Section 6.1), and your results make sense in this context. However, in the SGD context that seems to be your main focus, if I had the ability to compute $N$ stochastic gradients in parallel (each with a given noise level $\sigma$ as you assume), I could either use OptEx or compute $N$ stochastic gradients at the current iterate and average these, giving an effective variance of $\sigma^2/N$. In Theorem 1, your complexity bound (when $\sigma>0$) is $O(\sigma/\sqrt{NT})$ for $T$ iterations, which suggests that sample averaging applied to standard SGD (where $\sigma \mapsto \sigma/\sqrt{N}$) would achieve the same complexity as OptEx with essentially no overhead from managing the GP, etc. It would be useful if you could either discuss why/when OptEx should be preferred to simple sample averaging, as I'm not sure if your theoretical results show this. See, for example, the discussion "Trade-Offs of (Mini-)Batching" in Section 4 of Bottou, Curtis & Nocedal [your ref 26]. **Weaknesses #2** I think my confusion arose because your statement of Theorems 2 and 3 are not quite correct. Specifically, the LHS term in eqs (8) and (9) should be $\min_{\tau\leq T, s\leq N} \|\nabla F(\theta_{\tau,s})\|^2$ rather than $\min_{\tau\leq NT} \|\nabla F(\theta_{\tau})\|^2$ as given (i.e. reflecting that you do $T$ outer iterations with $N$ parallel evaluations in each, rather than $NT$ outer iterations with $N$ parallel evaluations in each, as currently stated). This seems to align with what is shown in the proofs. --- Rebuttal 2: Comment: Dear Reviewer RYzW, Thank you so much for your positive feedback and thoughtful suggestions! We sincerely appreciate your recognition and support. We are glad to hear that our clarifications are very helpful. We will add our clarifications above in the writing to make our paper clearer. We would like to address your remaining concerns below. --- ### Questions #2 We totally agree with you that sample averaging applied to standard SGD can achieve similar complexity as our OptEx, without the overhead of managing the GP. As we have justified above, these two speedup approaches stem from different principles: - OptEx Speedup: This comes from reduced sequential iterations (first term on the RHS in Equation 45). - Sample Averaging Speedup: This is derived from reduced gradient variance (second term on the RHS in Equation 45). So, when gradient noise $\sigma$ is already decreased to a small value through sample averaging with parallelism $N'$ (not $N$), increasing $N'$ further (i.e., $N'+N$) provides diminishing returns due to the sublinear rate of $O(1/\sqrt{N'+N})$. For example, the benefit of reducing from $1/\sqrt{10}$ to $1/\sqrt{15}$ is less significant compared to reducing from $1/\sqrt{1}$ to $1/\sqrt{6}$ with an additional parallelism $N=5$. However, in this scenario of a small gradient noise $\sigma$, our OptEx can still achieve noticeable speedup (e.g., from $1/\sqrt{1}$ to $1/\sqrt{6}$ with the same additional parallelism $N=5$) by leveraging additional parallel computing to reduce the sequential iterations. This reduction in sequential iterations is however unattainable by mini-batch SGD itself. In light of this, Equation 45, leading to our Theorem 2, in fact, will provide valuable insights into when OptEx is preferable to simple sample averaging. Overall, OptEx complements rather than replaces existing parallelization methods, including sample averaging, to accelerate first-order optimization. This is particularly effective when other methods are **inapplicable** or **underperforming**, i.e., further noticeable improvements cannot be achieved merely by increasing parallel computing resources in those methods, as we have justified above. ### Weaknesses #2 We apologize for the confusion caused by our notation $\tau$. Our intention was to use $\tau$ to denote all gradients evaluated during the optimization process more easily, including those in parallel processes, as mentioned in line 260. In Theorems 2 and 3, the expression $\min_{\tau \leq NT} \left|\nabla F(\theta_{\tau})\right|$ is exactly meant to represent $\min_{t \leq T, s \leq N} \left|\nabla F(\theta_{t,s})\right|$. Thank you for bringing this to our attention. We will correct this notation in our revised paper. --- We want to thank Reviewer RYzW for your positive feedback and recognition again! We sincerely hope our response has addressed your remaining concerns and can increase your opinions of our work. If you have any other questions, we would like to address them. Sincerely, Authors --- Rebuttal Comment 2.1: Comment: Thanks for this information. Regarding question #2, I think this is a very important aspect and warrants a discussion in any new version of the paper. However, I don't think the benefit is as good as suggested: if you want to do $N_1$ parallel samples for each stochastic gradient (to reduce the variance to $\sigma^2/N_1$), then if you run OptEx with $N_2$ parallel runs, then you will need $N_1\times N_2$ stochastic gradients in parallel at each outer loop, not $N_1+N_2$, so your speedup is still $O(1/sqrt{N})$ where $N=N_1\times N_2$ is the number of cores needed. This has been a helpful discussion, and pending the clarifications in a revised version as agreed by the authors above, I am happy to upgrade my scores of the paper in the initial review above. --- Rebuttal 3: Comment: Dear Reviewer RYzW, Thank you so much for your prompt and thoughtful response. We are pleased to hear that our clarifications have improved your opinion of our work, and we will incorporate all the clarifications and discussions mentioned above into our revised manuscript, as per your suggestion. We also appreciate your valuable insight regarding the parallel computing required by combining sample averaging and OptEx, in relation to your question #2. We agree with you that in this scenario, we will need $N=N_1 \times N_2$ parallel computations, leading to a speedup of $O(1/\sqrt{N})$. The benefits of our OptEx over sample averaging may be more straightforward by looking into our Equation 45, which mirrors the convergence results of sample averaging. Particularly, when the gradient noise $\sigma$ is already decreased to a small value, i.e., the second term on the RHS in Equation 45 is small, the first term on the RHS in Equation 45 that is related to sequential iterations will then dominate the convergence. Consequently, simply applying sample averaging with increased parallel computational resources yields only marginal improvements in convergence. In contrast, OptEx can achieve a noticeable improvement by reducing the first term on the RHS in Equation 45 by $O(1/N)$. We will also include this discussion in our revised paper. Should you have any other questions, we would like to address them. Sincerely, Authors
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering
Accept (poster)
Summary: This paper proposes an automated evaluation framework for Interactive Question Answering tasks based on LLM-based agent. The authors utilize LLMs to simulate the interaction between human and IQA models and use them to evaluate the interactions automatically. Additionally, they assign predefined personas to LLMs to better simulate the interaction characteristics of different groups. Their evaluation framework achieves a stronger correlation with human evaluations and eliminates the high costs of manual evaluation. Strengths: The strength of this paper is that it highlighted the importance of interaction in evaluation process for QA tasks and provided a solution for this problem, showing new insights into the evaluation for QA tasks. Weaknesses: The weakness of this paper is the lack of technical contributions and novelty, and its experimental analysis is not strong enough. They only used prompts to implement their methods, without any prompt engineering approach. Furthermore, their idea of using LLMs as evaluators is similar to some previous research papers, such as G-Eval [1]. [1] Y. Liu, D. Iter, Y. Xu, S. Wang, R. Xu, and C. Zhu. G-eval: Nlg evaluation using gpt-4 with better human alignment. CoRR, abs/2303.16634, 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: Major questions: 1. The paper utilizes LLMs as evaluators for Interactive Question Answering. However, according to some previous research papers [2], a systematic bias may be introduced by LLM evaluators and the results of evaluation may be influenced. Did you consider the impact of bias on the results in your research and try any methods to eliminate it? 2. You mentioned that accurate models are not necessarily preferred by humans in your paper, but you did not demonstrate this result in your experiments. Could you add a control group in your experiment? 3. There exists some errors in your analysis of LEA for stage 2. You stated that models’ evaluations highly correlate with human evaluation, while the Pearson correlation coefficient between 0.2 and 0.5 only indicates a weak or moderate correlation. 4. In your analysis of Table 1, you concluded that model gave higher scores for the “Fluency” metric than human because of the clarity and grammatical correctness of the response generated by IQA models. Could you please provide a detailed explanation of why you concluded that the model understands the concept of “Fluency” better than humans? Minor questions: 1. Could you please point out the LEA model used in the experiments shown in Table 3, like in Table 1 and 2? [2] P. Wang, L. Li, L. Chen, D. Zhu, B. Lin, Y. Cao, Q. Liu, T. Liu, and Z. Sui. Large language models are not fair evaluators. CoRR, abs/2305.17926, 2023. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you Reviewer RXxM for your reviews. Below we'd like to address your concerns. # Weaknesses ---- > **This paper's weakness is its lack of technical contributions and novelty, and its experimental analysis is not strong enough. It only used prompts to implement its methods without any prompt engineering approach.** Please refer to the generel rebuttal above. ---- > **Their idea of using LLMs as evaluators is similar to some previous research papers, such as G-Eval** G-Eval implements automatic evaluation for direct outputs from Large Language Models (LLMs) which is non-interactive. However, chat interactions involving multiple turns are more prevalent in real-world applications. Currently, no method is available to automatically evaluate the performance of models in such interactive settings [3]. Our approach, for the first time, addresses this gap by using LLMs to automatically assess the performance of assistant models during interactive scenarios. Moreover, we incorporate personas in our IQA-Eval to better represent the crowd, further significantly improving correlations. # Questions ---- > **A systematic bias may be introduced by LLM evaluators and the results of evaluation may be influenced. Did you consider the impact of bias on the results in your research and try any methods to eliminate it?** We acknowledge this is a common conern. Please refer to the "Concerns on self-favoring bias" section in the general rebuttal. ---- > **You mentioned that accurate models are not necessarily preferred by humans in your paper, but you did not demonstrate this result in your experiments. Could you add a control group in your experiment?** Thank you for pointing out this. We are sorry for this inaccurate statement. We will add this accurate claim in the paper: accurate QA models are preferred by humans in interaction-aware evaluations. The quote from our cited paper [3] is “[...] perception of helpfulness is not necessarily reflected in the overall interaction accuracy.” It describes the conclusion of multiple tasks in that paper (e.g. text summarization, social dialogue, QA). However, in the QA settings, Table 3 in Lee et al. (2023) shows that humans prefer accurate models on the QA task. We also conducted **new experiments** **(1)** using LEA to evaluate interactions between LEAs and IQA models (interactive) and **(2)** using LEA to evaluate direct answers generated by IQA models (non-interactive). Our experiments show that **LEA models prefer accurate models, which aligns well with the conclusion from human annotations.** (1) Evaluate interactions between LEA and IQA models. |||||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---| |5-point Likert|Helpfulness||||Fluency||||Accuracy|||| |LEA models|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha| |IQA-EVAL-GPT4|4.60|4.60|3.83|4.27|4.97|5.00|4.87|4.93|0.93|0.93|0.83|0.93| |IQA-EVAL-Claude|4.90|5.00|4.97|4.97|4.87|5.00|4.93|4.87|0.73|0.8|0.57|0.73| (2) Evaluate direct answers generated by IQA models |||||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---| |5-point Likert|Helpfulness||||Fluency||||Accuracy|||| |LEA models|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha|GPT-3.5|Claude-instant|Llama2-8b|Zephyr-Alpha| |IQA-EVAL-GPT4|4.33|4.17|2.70|3.53|5.00|4.97|4.13|4.33|0.83|0.80|0.47|0.57| |IQA-EVAL-Claude|4.97|5.00|4.53|4.87|4.97|4.97|4.47|4.97|0.83|0.80|0.47|0.57| ---- ---- > **There exists some errors in your analysis of LEA for stage 2. You stated that models’ evaluations highly correlate with human evaluation, while the Pearson correlation coefficient between 0.2 and 0.5 only indicates a weak or moderate correlation** Thank you for bringing this to our attention. We will modify the claimed relation to be weak to moderate in the paper. For interactions between humans and IQA models (Figure 2), LEA evaluations moderately correlate with human evaluations. However, for interactions between LEAs and IQA models (Table 2) which is the main focus of our paper, LEA evaluations highly correlate (around 0.6) with human evaluations. This indicates that **LEA models are helpful when participating in the whole evaluation process**, including both the interaction and evaluation. However, **when evaluating interactions between human and IQA models, LEAs focus differently from humans**, which causes moderate correlations in the “analysis for LEA for stage 2”. ---- > Unsupported paper claim: **the model understands the concept of “Fluency” better than humans?** Sorry for the confusion. We did not conclude that the model understands the concept of “Fluency” better than humans. Instead, we wanted to emphasize that **IQA-Eval scores on “Fluency” are close and highly correlated to human judgments**. Both scores given by human and LEA models show that IQA models provide fluent outputs. We will clarify and update this claim in the final version. ---- > **Could you please point out the LEA model used in the experiments shown in Table 3, like in Table 1 and 2?** In Table 3, the LEA model is GPT-3.5-turbo-1106. We will update it in the final version. ---- **References:** 1. Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2024). 2. Furniturewala, Shaz, et al. "Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models." arXiv preprint arXiv:2405.10431 (2024). 3. Lee, Mina, et al. "Evaluating human-language model interaction." arXiv preprint arXiv:2212.09746 (2022). 4. Liu, Yang, et al. "G-eval: Nlg evaluation using gpt-4 with better human alignment." arXiv preprint arXiv:2303.16634 (2023). --- Rebuttal Comment 1.1: Title: Shall we have more discussions? Comment: Dear Reviewer, Do you have any other questions of interest? In our rebuttal, we have added relevant explanations and several experiments, and we believe that these can to some extent address your concerns. We are eagerly looking forward to your response! --- Reply to Comment 1.1.1: Title: More discussions Comment: Dear Reviewer, The discussion period is ending. Do you have any other questions of interest? In our previous rebuttal, we have added relevant explanations and several experiments, and we believe that these can to some extent address your concerns. We are eagerly looking forward to your response!
Summary: This paper introduces IQA-EVAL, a framework for automatically evaluating interactive question-answering (IQA) systems using large language models. The authors propose using LLM-based Evaluation Agents (LEAs) to simulate human behavior in both generating interactions with IQA models and evaluating those interactions. The framework also incorporates persona assignments to LEAs to better represent diverse user groups. The authors demonstrate that IQA-EVAL achieves high correlation with human judgments and use it to benchmark several recent LLMs on complex question-answering tasks. Strengths: - Proposes a fully automated framework for evaluating interactive QA systems, addressing the need for more efficient evaluation methods - Incorporates persona assignments to LEAs, allowing for more nuanced and diverse simulations of user interactions - Demonstrates strong correlation with human judgments Weaknesses: - Limited novelty, as using LLMs as judges is quite common in evaluation tasks. This paper mainly focuses on a relatively new setting - the interactive QA setting. - The considered interaction scenarios could be more diverse to better reflect real-world user behaviors Technical Quality: 3 Clarity: 3 Questions for Authors: - Line 207~213: can you provide more explanations into why ChatGPT sometimes outperformed GPT-4 in your experiments? This seems counterintuitive given GPT-4's generally stronger capabilities. - Have you considered incorporating more diverse interaction patterns, such as clarification questions, ambiguous queries, or follow-up questions? How might this affect the evaluation results? - Have you explored how different persona distributions might impact the evaluation results? How sensitive is the framework to changes in persona assignments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have made some effort to address limitations, such as acknowledging potential biases in LLM-based evaluation and discussing the impact of persona assignments. However, the paper would benefit from discussing the limited diversity of interaction patterns considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful review. We would like to address the mentioned weakness and questions below. ---- # Weaknesses: > **Limited novelty, as using LLMs as judges is quite common in evaluation tasks. This paper mainly focuses on a relatively new setting - the interactive QA setting.** Previous works, like G-Eval, implement automatic evaluation for direct outputs from Large Language Models (LLMs) which is non-interactive. However, chat interactions involving **multiple turns are more prevalent in real-world applications**, as recognized by the HCI community [1]. Currently, **no method is available to automatically evaluate the performance of models in such interactive settings**. Our approach, for the first time, addresses this gap by using LLMs to automatically assess the performance of assistant models during interactive scenarios.  Additionally, **for the first time, we augment LLM-agent with personas** in our IQA-Eval to better represent the crowd, further significantly improving correlations. ---- > **The considered interaction scenarios could be more diverse to better reflect real-world user behaviors** Please see the reply to question 2 Below. ---- # Questions: > **Line 207~213: can you provide more explanations into why ChatGPT sometimes outperformed GPT-4 in your experiments? This seems counterintuitive given GPT-4's generally stronger capabilities.** Table 1 evaluates how close evaluators are to humans on absolute scores. Based on the results, although GPT-4 outperforms GPT-3.5 in accuracy and demonstrates stronger capabilities, scores given by GPT-3.5 are closer to humans than GPT-4. The main reason for low GPT-4 scores is its **more critical and strict evaluation** of “Helpfulness”. Although giving low scores, Table 2 shows that **GPT-4 mostly correlates to human evaluations** since its scores have the most similar trend as human scores. Given GPT-4’s generally strong capability, this result is not counterintuitive and supports the above reason. ---- > **Have you considered incorporating more diverse interaction patterns, such as clarification questions, ambiguous queries, or follow-up questions? How might this affect the evaluation results?** In section 5, we discussed the effect of different types of persona and more diverse interaction patterns (i.e. clarification, ambiguous, complex, follow-up, expert, detailed, straightforward, knowledgable, etc.). For example, each persona produces different interaction patterns which are diverse. The persona “Clarity-Seeker” prefers to clarify questions, terminologies, proper nouns, etc. The persona “Adaptability-Seeker” always proposes ambiguous questions and prefers IQA models that can understand its questions. The persona “Critical-Seeker” always follows the original question and asks critical questions to IQA models to answer the question. Our different personas have covered a variety of interaction patterns. Results are shown in Tables 3 and 4. **By adding personas and having more diverse interactions, IQA-Eval achieves higher correlations with human evaluations and better represents the crowd**. Moreover, in section 6.2 (LLM to Benchmarks), we benchmark IQA models using our IQA-EVAL **on question answering data and ambiguous queries**, showing our IQA-EVAL can be applied to different types of questions and interaction patterns and provide useful feedback. ---- > **Have you explored how different persona distributions might impact the evaluation results? How sensitive is the framework to changes in persona assignments?** In practice, the distribution of personas should be thoroughly surveyed. Given those distributions, the evaluation performance of IQA-Eval should always be similar to human evaluations (high correlation scores). **We expect that the distribution of personas should have little effect on IQA-Eval since all scores well in correlation with human raters (0.634-0.690)**. ---- ## References: 1. Lee, Mina, et al. "Evaluating human-language model interaction." arXiv preprint arXiv:2212.09746 (2022). --- Rebuttal Comment 1.1: Title: More clarification on the reply to question 3 Comment: We are sorry for the minor inaccuracy in the original reply to question 3. We expect that IQA-EVAL is sensitive to incorrect persona assignments. We conduct two new experiments to study the effects of changing persona assignments. When the persona distribution is incorrect (such as 20% Expert in the table below), the performance of IQA-EVAL shows a lower correlation with human evaluations. Moreover, the last two lines in the following table describe the correlation between human evaluations and IQA-EVAL within a sub-group only containing pure experts. The correlation results in line “IQA-EVAL (Pure Expert)” represent that (1) our personas accurately represent the pure expert group, as its correlation with the line “Human (Pure Expert)” remains nearly consistent with those in line “IQA-EVAL (Expert)” and (2) given this completely correct persona distribution, our IQA-EVAL correlates well with human evaluations. | 5-point Likert | Helpfulness | | | | Fluency | | | | | ----------------------------- | ----------- | ---- | ---- | ----- | ------- | ---- | ---- | ----- | | LEA models | TDA | TB | DA | ρ | TDA | TB | DA | ρ | | Human | 4.60 | 3.84 | 3.52 | | 4.35 | 3.84 | 3.22 | | | IQA-EVAL (Expert) | 4.17 | 3.08 | 3.12 | 0.756 | 4.47 | 3.84 | 3.40 | 0.787 | | IQA-EVAL (20% <br><br>Expert) | 4.31 | 3.26 | 3.44 | 0.708 | 4.62 | 4.09 | 3.65 | 0.741 | | IQA-EVAL (40% <br><br>Expert) | 4.21 | 3.14 | 3.23 | 0.751 | 4.49 | 3.88 | 3.44 | 0.779 | | IQA-EVAL (60% <br><br>Expert) | 4.11 | 3.01 | 3.00 | 0.725 | 4.43 | 3.77 | 3.34 | 0.734 | | IQA-EVAL (80% <br><br>Expert) | 4.02 | 2.90 | 2.79 | 0.680 | 4.30 | 3.56 | 3.12 | 0.703 | | Human (Pure Expert) | 4.69 | 4.00 | 3.73 | | 4.36 | 3.96 | 3.26 | | | IQA-EVAL (Pure Expert) | 4.37 | 3.57 | 3.33 | 0.778 | 4.20 | 3.40 | 2.97 | 0.786 | If you think we misunderstood your question, please let us know and feel free to ask any follow-up questions. --- Reply to Comment 1.1.1: Title: shall we have more discussions? Comment: Dear Reviewer, do you have any other questions of interest? In our previous rebuttal, we have added relevant explanations and several experiments, and we believe that these can to some extent address your concerns. We are eagerly looking forward to your response!
Summary: The authors introduce a novel method to simulate a human conversation when evaluated on an interactive question answering (IQA) and evaluate the simulated interaction according to some predefined metrics. Strengths: - The presentation of the paper is clear and easy to follow - The paper is well-motivated. It recognizes how costly and time-consuming it is to use human evaluations - The idea is reasonable Weaknesses: When using LLMs to evaluate the outputs from LLMs, recent research has shown LLMs to be biased in preferring their own generations compared to generations from other models, even if their generation wasn’t better [0]. This may bias the IQA-EVAL metrics if the same model is used for both IQA-EVAL and the IQA model. The authors provided limited evidence for their claims in section 4.3. Could the authors provide a more robust justification of why “ChatGPT’s assessments align well with human evaluations”, and other claims made in this section? The authors did not recognise that this method could cause negative societal impacts, instead stating “Evaluation works bear little risks for negative scoeital impacts”. LLMs have been repeatedly shown to exhibit strong biases as a result of their training procedure. Using LLMs for evaluation opens the risk of reinforcing these biases, whether within training of future models, or benchmark answers, or whatever is downstream of this method. [0] https://arxiv.org/abs/2404.13076 Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful review. We would like to address the mentioned weakness and questions below. ---- > **When using LLMs to evaluate the outputs from LLMs, recent research has shown LLMs to be biased in preferring their own generations compared to generations from other models, even if their generation wasn’t better. This may bias the IQA-EVAL metrics if the same model is used for both IQA-EVAL and the IQA model.** Please refer to the answer in the general author response on the top of the page. ---- > **The authors provided limited evidence for their claims in section 4.3. Could the authors provide a more robust justification of why “ChatGPT’s assessments align well with human evaluations”, and other claims made in this section?** The content in section 4.3 contains our findings after reading LEA free-form outputs about “helpfulness”. The ChatGPT in section 4.3 only indicates GPT-3.5. For the claim ``ChatGPT’s assessments align well with human evaluations’’. In Table 1, GPT-3.5 always gives high sores, indicating it is more positive generally. Its scores are the closest to human scores in Table 1, and its correlation is also high in Table 2. Thus, we claim GPT-3.5 assessments align well with human evaluations. For the other claim, compared to GPT-3.5, Claude assigns lower scores. These results align with the claim that Claude flags more issues. ---- > **The authors did not recognise that this method could cause negative societal impacts, instead stating “Evaluation works bear little risks for negative societal impacts”. LLMs have been repeatedly shown to exhibit strong biases as a result of their training procedure. Using LLMs for evaluation opens the risk of reinforcing these biases, whether within training of future models, or benchmark answers, or whatever is downstream of this method.** We apologize for the previous statement that “Evaluation works bear little risks for negative societal impacts”; this claim is inaccurate. Large Language Models (LLMs) have been repeatedly shown to exhibit strong biases due to their training procedures, which can result in significant societal impacts. These biases can perpetuate stereotypes, disadvantage certain groups, and influence downstream applications in harmful ways. To partially  address these concerns, we have implemented several measures to reduce bias in our evaluation process. For instance, we have fine-tuned prompts to minimize self-enhancement bias. Our experiments on the MMLU dataset demonstrate that bias does not significantly affect our evaluation results. Specifically, LEAs mainly evaluate the performance of IQA models in conversations. In IQA-Eval, each LEA model evaluates all IQA models, and since LEA models and IQA models are distinct, the impact of bias, such as LLMs favoring their own outputs, is mitigated in the evaluation results presented in sections 4 and 5. ---- ## References: 1. Panickssery, Arjun, Samuel R. Bowman, and Shi Feng. "Llm evaluators recognize and favor their own generations." arXiv preprint arXiv:2404.13076 (2024). 2. Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2024). 3. Furniturewala, Shaz, et al. "Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models." arXiv preprint arXiv:2405.10431 (2024). --- Rebuttal Comment 1.1: Comment: I am grateful for the further analysis presented by the authors. Given their sound arguments, I am going to raise my score to a 7.
Summary: This research addresses the evaluation methodology for multi-turn conversation using Large Language Models (LLMs), a topic of active research recently. The study proposes an evaluation framework called IQA-Eval, which consists of the target model (IQA model) and an agent (LEA model) that engages in conversation with the IQA model and evaluates each turn. The primary challenge in multi-turn evaluation is to 1) interactively converse with the IQA model and 2) evaluate each generated turn in a cost and time-efficient manner. This research aims to overcome this challenge by leveraging LLMs that consider personas to facilitate both conversation and evaluation. Through various experiments, the study confirms that applying LLMs for conversation and evaluation results in a high correlation with Human Evaluation. Additionally, it includes an analysis of the differences in the number of turns due to the varying capabilities of each IQA model. Strengths: 1.Compared to existing LLM multi-turn evaluations, this approach enables faster and more economical assessments. 2.Examining the human correlation suggests that the evaluation is reliable. It is very promising in that it allows for automatic evaluation of interactive multi-turn conversation capabilities that resemble real-world scenarios. 3.The evaluation framework is very simple, making it adaptable for assessing various multi-turn situations. Weaknesses: - The evaluation dataset consists entirely of multiple-choice questions, making it unsuitable for generating and evaluating multi-turn conversations. This is evident in Tables 3 and 5, where the average length of conversation turns is less than 2. Such a situation suggests that the process might be more about reasoning rather than simulating genuine interactive multi-turn conversations. It is necessary to analyze whether the performance shown in Table 6 is due to multi-turn interactions or the influence of the reasoning path. Alternatively, an evaluation using datasets that assume genuine multi-turn dialogue scenarios is needed. - The verification process for personas is lacking. There is no substantial evidence to consider that the performance differences by persona shown in Table 4 are reflective of the actual characteristics of those personas. - A self-preference issue appears to have arisen in Table 5, and there is no control in place to address this. Technical Quality: 3 Clarity: 1 Questions for Authors: - How does the Free Form Feedback in this study differ from the explanations provided by existing LLM-based evaluation methods (such as LLM-Eval and G-Eval)? - What is the total number of repetitions for the experiments mentioned in section 5.2? Could you provide the variance values for each experiment? - Why are the results for AmbigQA not included? Even if the performance of the IQA models is not optimal, the results should still be presented. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: - The limitations are stated in the Appendix, clearly outlining the limitations of the experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. We would like to address your concerns as follows: # Weaknesses: >1. The evaluation dataset consists entirely of multiple-choice questions, making it unsuitable for generating and evaluating multi-turn conversations. This is evident in Tables 3 and 5, where the average length of conversation turns is less than 2. Such a situation suggests that the process might be more about reasoning rather than simulating genuine interactive multi-turn conversations. It is necessary to analyze whether the performance shown in Table 6 is due to multi-turn interactions or the influence of the reasoning path. Alternatively, an evaluation using datasets that assume genuine multi-turn dialogue scenarios is needed. We focus on task-driven dialogues. In the IQA setting, humans interact with assistant models to answer questions. This is a type of multi-turn interaction. In our work, the number of turns of interactions between human and IQA models is similar to those between LEA and IQA models. We leave other forms of dialogues, potentially less well-defined and also lacking annotated public datasets, for future research. Table 6 accuracy results come from a non-interactive, Chain-of-thought enabled setting, and as a result, this setting already allowed the reasoning path to be fully fledged by the non-interactive model itself. Table 5 accuracy results are from LEA models after interacting with IQA models. Since both CoT outputs and interactions include reasoning processes, the difference between Table 5 and Table 6 naturally arises from the LEA-IQA models’ interaction process, suggesting that multi-turn interactions help LEA models reach higher accuracy scores.  >2. The verification process for personas is lacking. There is no substantial evidence to consider that the performance differences by persona shown in Table 4 are reflective of the actual characteristics of those personas. We conduct a new experiment that shows the standard deviation in Table 4. The new table is in reply to question 2. Standard deviations show that personas affect evaluation performances. Additionally, we conduct an experiment demonstrating that personas impact LEA's performance. Using the "Expert" persona, we calculated the correlation between human evaluations and IQA-EVAL within a sub-group of pure experts. The results indicate that our personas accurately represent the pure expert group, as the correlation remains nearly consistent. In other words, if there is a dramatic change in correlation within the sub-group, it suggests that the proposed persona fails to represent the current group or the broader crowd. | 5-point Likert | Helpfulness | | | | Fluency | | | | | ---------------------- | ------------ | ------------- | ------------ | ----- | ------------ | ------------ | ------------ | ----- | | LEA models | TDA | TB | DA | ρ | TDA | TB | DA | ρ | | Human (Pure Expert) | 4.69 | 4.00 | 3.73 | | 4.36 | 3.96 | 3.26 | | | IQA-EVAL (Pure Expert) | 4.37 | 3.57  | 3.33 | 0.778 | 4.20 | 3.40 | 2.97 | 0.786 | | Human | 4.60 | 3.84 | 3.52 | | 4.35 | 3.84 | 3.22 | | | IQA-EVAL (Expert) | 4.17 | 3.08 | 3.12 | 0.756 | 4.47 | 3.84 | 3.40 | 0.787 | Based on the results, personas affect the performance of large language models. >3. A self-preference issue appears to have arisen in Table 5, and there is no control in place to address this. Please refer to the answer in the general author response on the top of the page. # Questions: >1. How does the Free Form Feedback in this study differ from the explanations provided by existing LLM-based evaluation methods (such as LLM-Eval and G-Eval)? The main difference is that the free-form feedback In this work evaluates interactions between LEA and IQA models, especially the performance of IQA models, while previous LLM-based automatic evaluation methods, like G-Eval, only evaluate direct outputs from Large Language Models (LLMs), which are non-interactive. Moreover, most related papers only provide numeric analysis, while we manually analyze feedback texts and correlate them to other results. >2. What is the total number of repetitions for the experiments mentioned in section 5.2? Could you provide the variance values for each experiment? We run the experiment in section 5.2 five times. The standard deviations are in the table below. | | | | | | | | |---|---|---|---|---|---|---| |5-point Likert|Helpfulness| | |Fluency| | | |LEA models|TDA|TB|DA|TDA|TB|DA| |Human|4.60|3.84|3.52|4.35|3.84|3.22| |IQA-EVAL|4.30 (±0.06)|3.87  (±0.11)|3.93 (±0.13)|4.47 (±0.05)|3.67 (±0.08)|3.97 (±0.06)| |IQA-EVAL (Expert)|4.17 (±0.08)|3.08 (±0.09)|3.12 (±0.11)|4.47 (±0.02)|3.84 (±0.04)|3.40 (±0.04)| |IQA-EVAL (Critical-Thinker)|4.44  (±0.08)|4.02 (±0.13)|4.08 (±0.17)|4.64 (±0.06)|3.97 (±0.08)|4.10 (±0.08)| |IQA-EVAL (Adaptability-Seeker)|4.24 (±0.05)|3.67 (±0.11)|3.75 (±0.11)|4.52 (±0.08)|3.84 (±0.07)|3.884 (±0.09)| |IQA-EVAL (Clarity-Seeker)|4.45 (±0.07)|3.77 (±0.15)|3.80 (±0.12)|4.60 (±0.04)|3.85 (±0.04)|3.94 (±0.06)| >3. Why are the results for AmbigQA not included? Even if the performance of the IQA models is not optimal, the results should still be presented.  The main reason for “-” in IQA-EVAL benchmarking (Table 5) is that weak IQA models cannot assist LEA models in answering questions at all. In most turns of interactions, these weak IQA models, for the task of outputing a disambiguated sentence, only repeat meaningless sentence pieces or questions proposed by LEAs. Thus, We use “-” instead of “0” in Table 5 to mark this inability of these models to complete the task. --- Rebuttal Comment 1.1: Title: More Responses to Weakness 1: Comment: Both datasets, HotpotQA and AmbigQA, used in Table 5 are not multi-choice datasets. Their inputs and outputs are texts. HotpotQA requires LEA modes to output a short text including an answer to the given question and contexts. AmbigQA requires LEA models to disambiguous questions first and then answer that question by giving a short text. As you’ve suggested, we also benchmark IQA models in another dataset called Natural Questions [1]. This dataset comprises authentic questions posed by users about Wikipedia articles, demanding true multi-turn dialogues for resolution, akin to the setup in QuAC [2]. The experiment results are as follows: ---- LEA: Claude-3 | IQA models | Helpfulness | Fluency | # Queries | Accuracy | | ---------- | ----------- | ------- | --------- | -------- | | GPT3.5 | 4.86 | 4.88 | 2.82 | 0.42 | | Claude | 4.88 | 4.90 | 3.02 | 0.38 | | Llama2 | 4.90 | 4.84 | 3.18 | 0.34 | | Zephyr | 4.84 | 4.90 | 3.02 | 0.28 | ---- LEA: GPT-4 | IQA models | Helpfulness | Fluency | # Queries | Accuracy | | ---------- | ----------- | ------- | --------- | -------- | | GPT3.5 | 4.12 | 5.00 | 2.76 | 0.44 | | Claude | 4.02 | 5.00 | 2.76 | 0.40 | | Llama2 | 3.20 | 4.84 | 3.08 | 0.32 | | Zephyr | 3.30 | 4.86 | 2.92 | 0.36 | ---- All numbers of queries in the first two tables are around 3, and each response to a query from IQA models contains an average of 2 sentences. Similar to Table 6 in the paper, we conduct non-interactive experiments on the following IQA models. | IQA models | # Sentences | Accuracy | | ---------- | ----------- | -------- | | GPT3.5 | 4.66 | 0.38 | | Claude | 3.16 | 0.34 | | Llama2 | 5.21 | 0.30 | | Zephyr | 4.68 | 0.24 | Given the above number of sentences in each IQA model’s response, the non-interactive outputs are roughly equivalent to about two interaction turns, less than three turns in interactive outputs. Thus, **The interaction process of IQA-EVAL involves not only reasoning processes but also simulating genuine interactive multi-turn conversations. This suggests that the performances shown in the first two tables above are driven more by multi-turn interactions than by reasoning processes.** Furthermore, these **interactions lead to enhanced accuracy**, as demonstrated by the superior results in the first two tables compared to those in the last table (non-interactive). If you think we misunderstood your question and if you have further questions, please let us know. Reference: 1. Kwiatkowski, Tom, et al. "Natural questions: a benchmark for question answering research." Transactions of the Association for Computational Linguistics 7 (2019): 453-466. 2. Choi, Eunsol, et al. "QuAC: Question answering in context." arXiv preprint arXiv:1808.07036 (2018). --- Reply to Comment 1.1.1: Title: shall we have more discussions Comment: Dear Reviewer, do you have any other questions of interest? In our previous rebuttal, we have added relevant explanations and several experiments, and we believe that these can to some extent address your concerns. We are eagerly looking forward to your response!
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their insightful reviews. Below we address common concerns by topics. # Insufficient technical contribution, especially on the lack of prompt engineering. Inspired by G-eval [4], we did conduct prompt engineering and designed our prompts by combining detailed instructions for different functions, such as task description, role description, metrics instructions, and evaluation instructions (we will add these information in the final version of the main paper). Each prompt is tuned based on a general question not included in IQA-Eval. We combine those instruction prompts flexibly based on evaluation stages. We believe that the prompts in the paper are well-designed. To be detailed by our new experiments below, adding new tricks of prompting, such as using few-shot prompting or otherwise adding details and de-bias instruction does not show significant improvements. # Concerns on LLM self-favoring bias The impact of bias is low in our IQA-Eval. During the develpoment of the prompts, we considered the impact of bias and manually evaluated all model interactions. Based on our evaluations, we tuned prompts as mentioned above to instruct our model to best reduce the impact of potential bias. To prove that the bias effect is low, **we conduct a new experiment using a more complex prompt and few-shot, including effective debiasing prompts** (following FastChat [1] and Shaz et al [2]) to evaluate interactions. In the following table, the scores assigned by LEA models in response to our prompts align more closely with human evaluations. ||Helpfulness |||Fluency |||Accuracy ||| |----------------------------- |----------- |---- |---- |------- |---- |---- |-------- |---- |---- | |LEA models |TDA |TB |DA |TDA |TB |DA |TDA |TB |DA | |Human |4.60 |3.84 |3.52 |4.35 |3.84 |3.22 |0.69 |0.52 |0.48 | |IQA-EVAL-GPT4 (our prompting) |3.67 |2.30 |2.10 |4.77 |3.87 |3.03 |0.87 |0.83 |0.67 | |IQA-EVAL-GPT4 (New) |3.50 |2.23 |2.10 |4.40 |4.07 |3.53 |0.87 |0.83 |0.67 | In our experiments on the MMLU dataset (Section 4&5), LEAs mainly evaluate the performance of IQA models in the interactions. Each LEA model evaluates all IQA models. **LEA models and IQA models are not the same models.** Thus, the impact of bias, as LLMs prefer themselves, is not a concern in the evaluation results presented in sections 4 (Meta Evaluation of IQA-EVAL Framework) and 5 (Effect of Assigning Persona to LEA). Finally, in our benchmarking of different IQA models (Section 6). The impact of self-preference bias is low when using IQA-EVAL for benchmarking different IQA models. In IQA-EVAL benchmarking results (Table 5), line GPT3.5 appears to be the only one impacted by self-preference bias since it serves as both the LEA and IQA models. To prove that the self-preference bias effect is low, **we run the experiment using again the more complex prompt and few-shot like above**. The experiment results are highly similar to the line “GPT3.5” in Table 5, indicating that the self-preference bias issue has little impact on IQA-Eval. **HotpotQA**: |LEA models |Helpfulness |Fluency |# Queries |Accuracy | |-------------------------- |----------- |------- |--------- |-------- | |IQA-EVAL-GPT3.5 (In paper) |4.72 |4.95 |1.49 |0.63 | |IQA-EVAL-GPT3.5 (New) |4.68 |4.91 |1.35 |0.60 | We will add these results and corresponding prompts to the paper. ---- [1] Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2024). [2] Furniturewala, Shaz, et al. "Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models." arXiv preprint arXiv:2405.10431 (2024). [3] Lee, Mina, et al. "Evaluating human-language model interaction." arXiv preprint arXiv:2212.09746 (2022). [4] Liu, Yang, et al. "G-eval: Nlg evaluation using gpt-4 with better human alignment." arXiv preprint arXiv:2303.16634 (2023).
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Quantifying the Gain in Weak-to-Strong Generalization
Accept (poster)
Summary: This paper studies the phenomena of weak-to-strong generalisation (WTSG) from a theoretical angle, aiming to explain why the phenomena occurs. The paper's main theoretical results show that, in a regression setting with a convex function class, the decrease in MSE from the weak to weak2strong model is bounded below by the error of the weak2strong model on the weak labels (the "misfit"). The proof of the theorem relies on the pythagoreon theorem for projectsions onto a convex set. The paper then performs synthetic (simulated guassian data) and realistic (molecular property prediction) experiments to test out the theoretical prediction, and find that the decrease in MSE is roughly linearly proportional to the misfit on the weak labels in both settings. They show that this relationship can be used to select which weak2strong model to choose (the one with the highest misfit) correctly. They finally show that in a low-sample regime, "weak" models aren't necessarily smaller ones, as larger models may overfit to the small number of samples and hence perform worse. In this setting, their predicted relationship continues to hold, but with the small model being the strong model. Strengths: The weak-to-strong generalisation (WTSG) phenomena is an important one to study and understand. The theoreticaly explanation put forth in this paper is easy to understand, original, and produces novel insight into why WTSG happens. The confirmation of those findings empirically, including in a more realistic setting, strengthens the work's quality and significance. The paper is generally well-written and clear and easy to read. Weaknesses: It's unclear why the authors chose the regression setting, when the original work and likely scenario of use is classification setting. I think this difference from the original work should be made clearer. The empirical confirmation in realistic data is limited to one setting, which is an unfamiliar dataset to me and likely most of the LLM and NLP community. It would be beneficial to demonstrate the phenomena across more settings, especially more standard ones. Giving some intuition as to *why* the theoretical result holds from the proof would be beneficial. In particular, pointing out that the inequality achieved relies on the distance measure not being a metric, otherwise the triangle inequality for metrics would force the inequality to be the other way round, if my understanding is correct? It's likely that in the setting from the original paper, the weak labelling is within the function class of fine-tuned strong models (where fine-tuning is over all parameters), at least on a naïve interpretation. It would be beneficial to discuss whether the authors think their theoretical results is the reason why WTSG happens in this setting as well (given the assumptions of their theory break), or whether they think WTSG happens in that scenario for a different reason. ### Summary In general, I'm in favour of the paper getting accepted, and am giving it an accept (7). I'd be willing to raise my score if additional empirical justification of theoretical claims was made in more varied settings. Technical Quality: 4 Clarity: 4 Questions for Authors: - Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have adequately discussed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your review and comments. We are really glad that you like our work! With regards to the points you bring up: > It's unclear why the authors chose the regression setting, when the original work and likely scenario of use is classification setting. I think this difference from the original work should be made clearer. Regression with the least squares loss, while arguably being one of the most basic tasks in statistics and machine learning, also turns out to be a setting where we can extract the essence of the weak-to-strong generalization phenomenon in an intuitive, geometric manner — as we show, the phenomenon in this setting effectively boils down to the Pythagorean theorem for projection onto convex sets (a well-studied and standard fact from convex analysis). We do believe that a similar intuition and explanation (in terms of “projections” onto “convex” sets) may also apply to the classification setting (with say the binary cross-entropy loss). It is also worth mentioning that the theory in several previous studies [1,2] on out-of-distribution finetuning is also focused on regression. Nevertheless, we will emphasize more on the difference in our setting (regression vs classification) in the next revision. [1] Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. Fine tuning can distort pretrained features and underperform out-of-distribution. [2] Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical fine-tuning improves adaptation to distribution shifts. > The empirical confirmation in realistic data is limited to one setting, which is an unfamiliar dataset to me and likely most of the LLM and NLP community. It would be beneficial to demonstrate the phenomena across more settings, especially more standard ones. The molecular prediction task seemed natural for our setting and also stood out to be a standardized task [1] in computational chemistry for regression over sequential data, with a tractable pre-training dataset, and a good variety of fine tuning tasks to demonstrate the applicability of our results. Additionally, the community had also previously developed specialized transformer architectures for these tasks [2]. That is why we decided to go with this dataset. As can also be seen, for example, in Burns et al., 23, a major bulk of NLP/LLM tasks seem to be tailored to classification/generation; nevertheless, we would be happy to include results on a suitable NLP regression task (suggestions welcome!) in the next version. [1] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning [2] Benedek Fabian, Thomas Edlich, Héléna Gaspar, Marwin Segler, Joshua Meyers, Marco Fiscato, and Mohamed Ahmed. Molecular representation learning with language models and domain-relevant auxiliary tasks. > Giving some intuition as to why the theoretical result holds from the proof would be beneficial. In particular, pointing out that the inequality achieved relies on the distance measure not being a metric, otherwise the triangle inequality for metrics would force the inequality to be the other way round, if my understanding is correct? You are indeed right that if the distance measure (average squared distance) was a metric, we would get the reversed inequality. An intuitive reason for the inequality being the in other direction is the textbook cosine law on the sides of a triangle: $a^2 = b^2 + c^2 - 2bc \cos(\theta)$, where $\theta$ is the angle between sides $b$ and $c$. Convexity guarantees that this angle is obtuse, which makes the cosine negative. Hence, $c^2 \le a^2 - b^2$. Here, $a^2$ is the weak model loss, $b^2$ is the weak-to-strong misfit, and $c^2$ is the strong model loss. > It's likely that in the setting from the original paper, the weak labelling is within the function class of fine-tuned strong models (where fine-tuning is over all parameters), at least on a naïve interpretation. It would be beneficial to discuss whether the authors think their theoretical results is the reason why WTSG happens in this setting as well (given the assumptions of their theory break), or whether they think WTSG happens in that scenario for a different reason. This is a great question, as in such a case, our theory (which works in the setting of regression) would indicate that no weak-to-strong generalization should be exhibited. This is also what the authors of Burns et al, 23 refer to as *perfect student-supervisor agreement/imitation*. To mitigate this (Section 4.3.2 in Burns et al., 23), notice that the authors of that paper suggest incorporating an auxiliary "confidence" loss in their finetuning optimization objective to enable weak-to-strong generalization—this serves as a regularizer, and avoids the student (strong) model to naively overfit the weak labels. Note however, that our theory heavily uses the fact that the minimizer of the (unregularized) mean squared error is in fact the *projection* of the weak model onto the convex set—this is no longer true with a regularization term. However, there is a natural duality between regularized objectives and constrained optimization problems. An interesting future direction then would be to characterize the regularization terms that constrain the space of optimization to still contain the target function, but exclude the weak model itself, so as to provably mitigate student-supervisor imitation. --- Please let us know if we can help answer any other questions! --- Rebuttal Comment 1.1: Comment: Thanks for your clarification and response. In response to your last point, while Burns et al do propose the auxiliary confidence loss, even without this loss (i.e. naïve WTSG), the w2s model outperforms the weak model, which is anti-predicted by your theory (which as you note would predict no performance boost on top of the weak model). Ignoring the auxiliary confidence loss entirely Burns et al's results still don't match your theory, so I'll ask again whether you have any ideas as to why that is the case: Do you think your theoretical results are the reason why WTSG happens in this setting as well (given the assumptions of your theory break), or do you think WTSG happens in that scenario for a different reason? I agree that it would be interesting to extend your theory to explaining the auxiliary confidence loss' boost in performance, but you would first need to ensure it explains the naïve WTSG effect in the full fine-tuning regime, as it currently does not. Regardless, I am still happy to recommend acceptance of the paper, and will maintain my score. --- Rebuttal 2: Title: Response to comment Comment: Thank you for your comment. We do believe that the performance gains via WTSG in classification settings ought to also be characterized to some extent by the misfit between weak and strong models; however, other terms also seem necessary (capturing, among other factors, the case where the strong model can represent the weak model). Indeed, the gains in accuracy with naive finetuning without auxiliary losses in the classification settings considered by Burns et al. 23 already suggest that such terms should be necessary, and that the phenomenon of accuracy gain via WTSG in classification is more nuanced than in regression (e.g., not only the quantity, but even the *quality* of misfit can matter here). Thus, our theory does not directly explain the gains in performance in these cases, and a different analysis seems to be required. As for initial thoughts about a theoretical explanation for gains in classification: the phenomenon in classification that we want to capture is the following: the weak model is presenting the strong model with labeled data; however, the labels are really only "pseudolabels", and not true labels. The strong model, while learning from the weak model, is nevertheless still *rectifying* the wrong "pseudolabels". The self-training framework of [1], where the student and teacher are the same model, does indeed capture the case where the student model can in principle fully represent the weak model. Under certain assumptions (like expansion and separation), and along with consistency losses, this theory does explain performance gains of the self-trained student model, and thus seems to be a promising avenue to explain gains in WTSG as well. It would be interesting to see if the high-level analysis in our work (projection onto the space of strong models that are already implicitly biased towards the ground truth) could be combined with such an analysis to explain performance gains, even when no consistency losses are involved. Again, all these are really fascinating directions for future study! [1] Colin Wei, Kendrick Shen, Yining Chen, and Tengyu Ma. Theoretical analysis of self-training with deep networks on unlabeled data. --- Rebuttal Comment 2.1: Comment: Thanks for the response and discussion. My point was that main difference between your analysis and Burns et al. is not classification vs regression, but that your analysis assumes probing pretrained models, as opposed to fine-tuning all the weights. If you were to perform full fine-tuning in the regression setting then I would expect you would still see WTSG, but your theory would not predict that, as I think full fine-tuning of the strong model should be able to completely fit the labels of the weaker model (i.e. weak model function is inside the strong model function class), as it is a bigger model. Your theory would not predict WTSG here I think? If you ran WTSG for regression with full fine-tuning, what would you predict would happen, and how might you extend your theory to cover that case? --- Reply to Comment 2.1.1: Title: Response to comment Comment: Thank you so much for the clarification. We really appreciate your time engaging in this discussion, and acknowledge that our analysis assumes linear probing of pretrained models, whereas the gains in the classification settings of Burns et al. hold even when all the parameters of the strong model are finetuned. Here are a few thoughts: 1) You are right that if we allow finetuning of all the parameters of the strong model in the weak supervision stage, the weak model function is inside the strong model function class. However, we would like to note that in this case, the class of strong model functions ($f \circ h_s$ where both $f$, $h_s$ are free) is no longer a convex set---thus, our theory doesn't extend to this setting, and doesn't exactly predict whether we should/should not see WTSG. 2) Nevertheless, we ran a quick experiment on synthetic data (as in Figure 2(a), 2(b) of the paper), where we finetuned all the parameters in the strong model in the weak supervision stage. We would like to remark that we didn't see a clear WTSG trend in these experiments. It still might be true that on large-scale, real data, WTSG is observed in the regression setting, even if we were to finetune all the strong model parameters (like what we see in Burns et al.). If this is so, it would be interesting to characterize why this is happening, given that the underlying convexity assumption from our theory is broken (e.g., does the number of finetuning examples matter in the weak supervision stage?) On the other hand, if no clear WTSG is seen even in large scale experiments (like in the synthetic data), this would also be very interesting and suggest that 1) the conclusion of our analysis may be extendable to non-convex spaces of strong model functions, and 2) even further differences in WTSG between classification and regression.
Summary: The paper provides a geometric perspective on weak-to-strong generalization [[Burns et al., 23](https://arxiv.org/abs/2312.09390)]. Specifically, the authors show that if the set of strong model-representable functions the following holds: $$MSE(\phi^*, \phi^{ws}) \le MSE(\phi^*, \phi^w) - MSE(\phi^w, \phi^{ws}),$$ where $\phi^*$ is the ground truth labeling function, $phi^{w}$ is the the weak model labeling function and $\phi^{ws}$ is the weak-to-strong model labeling function. By labeling function I mean a mapping from inputs $x$ to real-valued labels $y$. The authors use this result to argue that the gain in MSE loss in weak-to-strong generalization is at least equal to the _misfit_ term $MSE(\phi^w, \phi^{ws})$. There are some experiments verifying this result in practice. Strengths: 1. The core result is in fact very simple and intuitive: it's a geometric argument about a projection on a convex set. 2. To the best of my knowledge, this main result was not reported previously in the context of weak-to-strong generalization. It provides a toy model where it's trivial to see that weak-to-strong generalization will hold. 3. The authors show some simple experiments where their theory is applicable: synthetic regression tasks and molecular property prediction. Weaknesses: W1. My main concern with the paper is that in my opinion the presentation is very confusing. Specifically, the core result is very simple, but it is presented in a way that took me a while to understand. My first issue is that the authors use notation $d_{\mathcal{P}}$ to denote the mean-squared distance, i.e. $\mathbb{E}(f(x) - g(x))^2$ and refer to it as distance (line 133). If $d_{\mathcal{P}}$ was in fact a distance, then Eq. (2) is the inverse of the triangular inequality. This confused me for a while. Moreover, Fig 1 shows a triangle with sides labeled as $A$, $B$, $C$, and states $C \ge A + B$. The resolution is that $d_{\mathcal{P}}$ is a square of a distance, not a distance. Then, Theorem 1 is just a combination of (1) law of cosines and (2) and the fact that the angle between the vectors connecting a point to its projection on a convex set, and a vector connecting its projection to any point in the convex set is $\ge 90$ degrees. The proof and the statement of the theorem are in my opinion significantly complicated by the explicit use of representations $h$ throughout. I don't understand what value they add: throughout the paper, the authors always use the same compositions e.g. $f^s \circ h^s$. Denoting the entire mapping with one letter would simplify the presentation. Currently, the authors also state that they are "employing a representation-theoretic perspective" (line 308), but I don't think there is anything added by the parameterization of the model as a function on top of some representations. Possibly the only thing is Claim 3, but it doesn't affect other parts of the paper much. W2. The result of the paper holds specifically for the MSE loss. Indeed, if $d_{\mathcal{P}}$ is a distance metric, then Eq.2 is the opposite of the triangle inequality and doesn't hold. Moreover, for the qualitative conclusion > Thus, the 148 inequality directly says that the weakly-supervised strong model improves over the weak model by (at least) an amount equal to the misfit. clearly doesn't hold for the classification settings considered by [Burns et al., 23]: it is possible for the strong model to differ from the weak model on datapoints where the weak model is correct, which would not make it more accurate with respect to the ground truth labels. W3. In fact, [Burns et al., 23] provide a relevant discussion of _simulation difficulty_ in Appendix E (especially E.1). They also note that in order for the strong model to improve upon the weak supervisor, the strong model should be unable to simulate the weak labels well. A related result is also reported in Fig 8 of that paper. I think it would be good for the authors to comment on how their results connect to these results. W4. The practical proposition of using the weak model with the highest misfit will not work without further assumptions. This idea is again quite related to the result in Appendix E.1 of [Burns et al., 23]: if the errors of the weak model are not simulatable by the strong model, but the signal is easy to simulate, then we will get very good weak-to-strong generalization. But in general, high weak-to-strong disagreement will not always imply good weak-to-strong generalization. Even under the assumptions of convexity etc of theorem 1, I believe it is not true that for a given $d_{\mathcal{P}}(\phi^*, \phi^w)$ higher $d_{\mathcal{P}}(\phi^w, \phi^{ws})$ always lead to lower $d_{\mathcal{P}}(\phi^*, \phi^{ws})$. Is that correct? Technical Quality: 3 Clarity: 1 Questions for Authors: Please comment on W1-W4. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: Limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We address your concerns ahead: > W1..presentation very confusing..notation $d_P$ to denote the mean-squared distance.. The only reason we introduced the notation $d_P(f,g)$ was for ease of reading: it is convenient to have some notation for the long-form term $E_P(f(x)-g(x))^2$. We explicitly state in line 133 that $d_P$ is the “avg squared distance wrt $P$" (this is already not a valid "distance" in the context of metric spaces, which indeed must satisfy the triangle inequality). We want to assert that, if anything, we have tried to be as explicit and upfront about the simplicity and intuitive nature of our result as possible. There was no intent to make anything confusing. Nevertheless, we will add a line clarifying that $d_P$ does not satisfy the triangle inequality. We will also consider changing $d_P$ to $MSE$,just as you use in your review, and denoting the sides of the triangle as $A^2, B^2, C^2$ in Fig 1. > Thm 1 just a combination of law of cosines and convexity This is indeed the right intuition; formalizing it to the space of functions requires a proof, which is what we include. To us, the fact that weak-to-strong generalization (WTSG) (a seemingly complex, and increasingly important phenomenon going ahead) in the concrete setting of regression can be directly traced to the Pythagorean thm for convex sets (a std fact in convex analysis) is both surprising and satisfying. The main contribution of our work is in modeling this phenomenon and formalizing the question. In this capacity, we view simple explanations as a strength and not a weakness, as they help us understand the system better. > ...proof complicated by explicit use of representations throughout The only reason we frame our results as “finetuning on top of representations” is because this is predominantly the language used for understanding AI models today. Your suggestion about clubbing $f_s \circ h_s$ to $\phi_s$ is well-taken; however, if we simply denote the composition by a single function, the result appears more abstract, and the link to WTSG is obfuscated. If anything, our intention was for the reader to appreciate that the working of these models can be elicited at a level where one can instantiate standard mathematical tools. > W2. result of the paper holds only for the MSE loss. Indeed, and this is something we repeatedly make clear to be the scope of the paper (line 67, 133, 309..). This is a first step towards understanding the phenomenon of WTSG, and extending to other settings is an interesting future direction. > ...qualitative conclusion clearly doesn't hold for classification...possible for strong model to differ from weak model on points where weak model is correct It is a priori not clear that the qualitative conclusion of our work (that the gain in WTSG is effectively characterized by the WTS misfit) does not extend to settings beyond regression. It may very well be possible that an inequality like Eq 2 (with suitable error terms) holds in the classification setting, with $d_P$ measuring (say) the avg binary cross-entropy loss. Moreover, in such a case, it may still be possible that the strong model makes a mistake on a point where the weak model does not; the overall gain in performance will likely stem from the fact that the probability under the data distribution on such points is low, whereas the probability on points where the strong model improves is high. In any case, this is an interesting direction beyond the scope of the present work. > W3...Appendix E in Burns et al., 23, where they note that the strong model improves upon the weak supervisor when it is unable to simulate the weak labels This is precisely what our main theorem is also saying! Any strong model that suffers a large misfit error when trained (without auxiliary losses) on weak labels (i.e., it is unable to simulate the weak labels well) exhibits non-trivial WTSG! Relatedly, Fig 8 in their paper shows that adding auxiliary losses in the objective helps reduce student-supervisor agreement (alternatively, increase misfit), and thereby improve WTSG. > W4...in general, high WTS disagreement will not always imply WTSG The study in Appendix E.1,E.2 in Burns et al. 23 is more to do with the *qualitative* structure of WTS disagreement in the setting of classification. Namely, they ask: do different forms of disagreement (e.g., random errors vs correlated errors), for the same weak model accuracy, lead to differing WTSG? While this nuance arises in the setting of classification, in the setting of regression that we consider, projection onto the convex set only results in movement *towards the true function* assuming realizability---in this sense, any misfit is "in the correct direction”, and its full quantitative benefit is realized. Thus, the qualitative difference among different forms of misfits does not manifest for us. While it is true that in other settings, the nature of disagreement might matter, the general principle of decreasing student-supervisor imitation (alternatively, increasing misfit) to foster WTSG, either via auxiliary losses or otherwise, does constitute a significant theme in the work of Burns et al., 23. > Even under the assumptions of convexity etc of Thm 1... As our theorem asserts, for the regression setting under the squared loss and with a convex space of finetuning functions, it is (mathematically) true that for a given $d_P(\phi^*, \phi^w)$, a higher $d_P(\phi^w, \phi^{ws})$ (where $\phi^{ws}$ is the projection/minimizer of loss) will indeed lead to lower $d_P(\phi^*, \phi^{ws})$ (this is verbatim the inequality). In fact, this is also (nearly exactly) corroborated by all our experiments. --- We will definitely add a summary of the discussion above, in relation to Appendix E in Burns et al., 23 in the final revision. We hope that our response helps address your concerns. Please let us know if we can answer any more questions. --- Rebuttal Comment 1.1: Title: Checking in.... Comment: Thank you so much again for taking the time to read our manuscript! As the deadline for the discussion period is approaching, we would really appreciate hearing from you, and ask if any further clarifications are required!
Summary: This paper provides bounds for weak-to-strong generalization, where a strong student model is trained on the labels produced by a weaker teacher model. The authors prove that in a regression setup, under certain assumptions, the strong model gains over the weak model's accuracy by an amount equal to the *disagreement* between the strong and weak model. This gives a natural generalization bound for weakly-supervised regression models and also gives a selection criterion for which weak model to choose in practice, which the experiments show leads to good empirical performance. Strengths: - Proves a clean and intuitive theory for weak-to-strong regression. - Empirical results showing that the proposed bounds are tight (in fact, almost exact). Weaknesses: - The results of WSCM20 are not properly contextualized. Their analysis is *not* limited to a self-training scenario and applies for any student model learning from an arbitrary teacher, including a student that is more powerful than the teacher. - The paper is missing a discussion of and citations to relevant work in other semi- or un-supervised settings that bound generalization error in terms of the disagreement between two classifiers, such as [1], [2], and especially [3]. [1] https://arxiv.org/abs/1708.04403 [2] https://arxiv.org/abs/1901.04215 [3] https://papers.nips.cc/paper_files/paper/2001/file/4c144c47ecba6f8318128703ca9e2601-Paper.pdf Technical Quality: 3 Clarity: 3 Questions for Authors: - L71 "Next, we imagine that the weak model sees data labeled by the target function $f^* \circ h^*$, and after finetuning, learns some arbitrary function $f_w \circ h_w$." Is this assumption really necessary? Can't we just assume we are given a weak predictor with some error rate? This limits the setup significantly to weak teacher models that are fine-tuned on ground-truth data. - What happens in the current theory if technically the strong model hypothesis class $\mathcal{F}_s$ technically contains a function that can exactly fit the weak classifier $f_w \circ h_w$, but due to regularization this doesn't occur? This is often the case in practice, where regularization terms are required to avoid overfitting to the weak labels. Can the theory be modified to include a regularized version of $\mathcal{F}_s$? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper already discusses what are in my view its main limitations: (1) it only applies to regression and (2) it only applies to settings where the strong model hypothesis class $\mathcal{F}_s$ is convex. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your review and comments. We are really glad that you like our work! With regards to the points you bring up: > The results of WSCM20 are not properly contextualized. Their analysis is not limited to a self-training scenario and applies for any student model learning from an arbitrary teacher, including a student that is more powerful than the teacher. Thank you for pointing this out. We will clarify that the analysis of [WSCM20] applies to arbitrary student-teacher settings (albeit under expansion assumptions and a consistency loss) in the updated version! > missing a discussion of and citations to relevant work in other semi- or un-supervised settings that bound generalization error in terms of the disagreement between two classifiers Thank you for pointing out these references on disagreement-based analyses. We will be sure to include them in the revision. > L71 "Next, we imagine that the weak model sees data labeled by the target function $f^* \circ h^*$, and after finetuning, learns some arbitrary function $f_w \circ h_w$." Is this assumption really necessary? Can't we just assume we are given a weak predictor with some error rate? This limits the setup significantly to weak teacher models that are fine-tuned on ground-truth data. You are right, our results broadly hold under access to any weak predictor. Note that we do mention in our statement of Theorem 1 that $h_w$ is any weak representation, and $f_w$ is any (arbitrary) predictor (so that $f_w \circ h_w$ is any arbitrary predictor). Nevertheless, we will rephrase the sentence in the introduction to make it clear that the weak predictor can really be any arbitrary predictor. > What happens in the current theory if technically the strong model hypothesis class $\mathcal{F}_s$ technically contains a function that can exactly fit the weak classifier $f_w \circ h_w$, but due to regularization this doesn't occur? This is often the case in practice, where regularization terms are required to avoid overfitting to the weak labels. Can the theory be modified to include a regularized version of $\mathcal{F}_S$? This is a great question. In the setting considered in our paper, if $f_w \circ h_w$ can be represented exactly as $f_s \circ h_s$ for some $f_s \in \mathcal{F}_s$, and we were to perform naive finetuning without any regularization, the strong model will exactly converge to $f_w \circ h_w$, and we will see no weak-to-strong generalization. As also suggested in the work of Burns et al., 23, auxiliary “confidence” losses serve as regularizers to mitigate this phenomenon. Note however, that our theory heavily uses the fact that the minimizer of the (unregularized) mean squared error is in fact the *projection* of the weak model onto the convex set—this is no longer true with a regularization term. However, there is a natural duality between regularized objectives and constrained optimization problems. An interesting future direction then would be to characterize the regularization terms that constrain the space of optimization to still contain the target function, but exclude the weak model itself, so as to provably mitigate student-supervisor imitation. --- Please let us know if we can help answer any other questions! --- Rebuttal Comment 1.1: Title: Checking in.... Comment: Thank you so much again for taking the time to read our manuscript! As the deadline for the discussion period is approaching, we would really appreciate hearing from you, and ask if any further clarifications are required!
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Boosting Weakly Supervised Referring Image Segmentation via Progressive Comprehension
Accept (poster)
Summary: Aiming for the WRIS task, the work proposes a novel framework to leverage target-related textual cues from the description for progressively localizing the target object. The authors use a Large Language Model (LLM) to decompose the input text description into short phrases which are taken as target-related cues and fed into a Conditional Referring module. Besides, Region-aware Shrinking (RaS) loss and Instance-aware Disambiguation (IaD) loss are proposed to facilitate fine-grained cross-modal alignment. Strengths: 1. The motivation is clear and easy to understand. The idea of leveraging target-related textual cues decomposed by LLM for progressive localization makes sense. 2. This work develops multiple consecutive Conditional Referring Modules. RaS and IaD two loss objectives are seamlessly integrated into the framework. 3. It is novel to implement RaS and IaD two loss objectives by harnessing the capabilities of segmentation foundation models. The SAM[1] does not include strong semantic prior knowledge like GroundingDINO and Grounded-SAM[2], and it is suitable for the weakly-supervised setting. 4. The work shows outstanding object localization ability and outperforms counterparts on three common benchmarks. Especially after refined by the SAM, the results are promising. [1] Segment anything ICCV2023 [2] Grounding dino: Marrying dino with grounded pre-training for open-set object detection Weaknesses: 1. In table 1, only the results refined by SAM[1] are reported. Considering that FreeSOLO [2] is an unsupervised segmentation model, I am curious about the results refined by FreeSOLO. 2. This work introduces multiple stages for progressive localization. It may increase the training and inference time of the model. The time comparison of model training and inference may be needed. [1] Segment anything ICCV2023 [2] Freesolo: Learning to segment objects without annotations CVPR2022 Technical Quality: 4 Clarity: 4 Questions for Authors: 1. The mask proposals by SAM are often part of the whole instance. It will affect the refined results. Does this work do anything special to alleviate the problem? The author can give more details about this issue. 2. Table 1 shows the results of the pseudo labels obtained at the wealy-supervised training stage. Can the authors give the results after pseudo labels based on supervised training? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: NaN Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **[W1]**: In table 1, only the results refined by SAM[16] are reported. Considering that FreeSOLO [40] is an unsupervised segmentation model, I am curious about the results refined by FreeSOLO. **[Ans]**: Thanks for your advice. Here we present the **mIoU** after applying FreeSOLO refinement, as shown in the table below. The first row displays the results for TRIS[25], while the second row presents the results for our proposed method. Although FreeSOLO's extracted mask proposals are not as accurate as those from SAM, leading to reduced performance, our method still exhibits clear and substantial improvements across three established benchmarks. | Method | RefCOCO(val) | RefCOCO+ (val) | RefCOCOg_goolge(val) | | :-: | :-:|:-:|:-:| | TRIS[25] | 29.7 | 27.2 | 30.5 | | PCNet | **33.1** | **30.3** | **33.8** | > **[W2]**: This work introduces multiple stages for progressive localization. It may increase the training and inference time of the model. The time comparison of model training and inference may be needed. **[Ans]**: Thanks for your comment. Our method incorporates a multi-stage refinement process, which naturally leads to increased training and inference times. Below, we provide a comparative analysis of the time costs associated with various methods. Notably, the overall time expenditure of our method with three stages remains well within acceptable limits. | Methods | Training-Time | Inferring time | | :-: | :-: | :-: | | SAG[14] | 36.0 h | 63 ms | | TRIS[25] | 3.0 h | 35 ms | | PCNet | 6.0 h | 42 ms | > **[Q1]**: The mask proposals by SAM are often part of the whole instance. It will affect the refined results. Does this work do anything special to alleviate the problem? The author can give more details about this issue. **[Ans]**: You are correct that SAM's mask proposals often represent only a portion of the complete instance. Consequently, even when our method achieves accurate localization, it may not necessarily translate to precise instance masks. To demonstrate this, the last row ("Oracle") of Table 1 presents the mIoU scores achieved by selecting the mask proposal that best aligns with the ground-truth mask. These results support our observation and suggest that integrating a more advanced instance detector, such as GroundSAM [A], could further enhance our method's mask prediction accuracy (mIoU). > **[Q2]**: Table 1 shows the results of the pseudo labels obtained at the weakly-supervised training stage. Can the authors give the results after pseudo labels based on supervised training? **[Ans]**: Thanks for your question. We present additional results below. In our main paper, we focused on the quality of pseudo masks generated by our method (i.e., one stage). Here, we further investigate their effectiveness by training the LAVT [44] network using these pseudo masks as supervision. We report the resulting PointM (first row) and mIoU (second row) metrics on three benchmarks, along with the performance achieved when refining mask predictions with SAM (third row). These findings highlight the potential for further improvement in mask prediction through fully supervised training based on our generated pseudo masks. We will incorporate these results into the revised version of our paper. | Metric | RefCOCO(val) | RefCOCO+ (val) | RefCOCOg_goolge(val) | | :-: | :-: | :-: | :-: | | **PointM** | 66.1 | 60.4 | 62.4 | | **mIoU** | 44.6 | 40.3 | 41.1 | | **mIoU** (refined by SAM) | **54.5** | **49.9** | **51.1** | References: [A] Grounding dino: Marrying dino with grounded pre-training for open-set object detection ECCV2024 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses. All my concerns were thoroughly addressed. Overall, this paper presents a reasonable and well-explained approach to mimicking the progressive process of understanding language instructions. By leveraging visual localization and segmentation tools (e.g., SAM), the method achieves superior language comprehension and target localization, supported by solid experiments and extensive visualization analysis. This work provides clear insights and significant contributions to the field. Therefore, I will maintain my original rating. I also recommend that the authors include suitable explanations and new results in the revision. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks very much for your positive feedback and further suggestions! We'll include more explanations and convincing results in our revision as you suggested.
Summary: This paper proposes the Progressive Comprehension Network (PCNet) for the WRIS task. this model achieves visual localization by progressively incorporating target-related textual cues for visual-linguistic alignment. Although experimental results have demonstrated the effectiveness of this paper, several aspects of the paper lack clarity. Strengths: 1. The corresponding experiments validate the effectiveness of the proposed method on three popular benchmarks. the proposed method outperforms existing methods. 2. The two loss functions introduced in this paper effectively enhance the accuracy of the response maps. Weaknesses: 1. The Conditional Referring Module (CRM) is implemented through multiple cross-attentions, which is a relatively common approach, lacking novelty. 2. What is the general idea of classification loss mentioned in the paper? Please explain the fundamental research insights of this work. 3. In the Region-aware Shrinking (RaS) loss, what does the ⊙ symbol represent in Equation 5? Please clarify its meaning. 4. This article only conducted ablation experiments on the RefCOCOg dataset, so the ablation experiments are insufficient. It is recommended to validate on multiple datasets (such as RefCOCO+, RefCOCO) to comprehensively evaluate the effectiveness of our method. 5. The order of the tables is somewhat confusing, with Table 9 preceding Table 6. It is recommended to reorder the tables to make them more in line with logical sequence or reading habits. 6. Which datasets were the ablation experiments in Tables 7-9 conducted on? Please specify. Technical Quality: 3 Clarity: 2 Questions for Authors: None Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See the weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Sincerely thanks for useful comments. To the weaknesses, our response is as follows: >**[W1]**: Conditional Referring Module (CRM) is implemented through multiple cross-attentions, which is a relatively common approach, lacking novelty. Thanks for your comments. We would like to address the concerns about its novelty and highlight its importance in our approach: - Firstly, the proposed CRM module is well-motivated. In our method, we observe that referring text descriptions typically contain detailed cues on how to localize the target object. By progressively integrating these target-related textual cues from the input description, we hope to enhance the target localization step-by-step. Therefore, we introduced the CRM module to model this progressive comprehension process. More importantly, the module is combined with the `RaS` loss to implement instance-level supervision, which has not been explored in previous approaches. - The CRM module includes two cross-attentions, each playing a crucial role. The first, vision-to-text attention, aims to obtain vision-attended cue features by incorporating visual context information. The second, text-to-text attention, modulates the referring query using the target-related textual cue embeddings. Through the interaction between referring query and different textual cue embeddings, this module is designed to achieve a more discriminative referring embedding. In Table 4, we provide ablation experiments for the module, which also validate the effectiveness of our CRM design. >**[W2]**: the general idea of classification loss mentioned in the paper. We sincerely apologize for the less detailed presentation about the Classification loss $\mathcal{L}\_{\texttt{cls}}$ in **TRIS** [25] due to space restriction. The loss $\mathcal{L}\_{\texttt{cls}}$ aims to establish a global alignment between the visual content and the referring texts by formulating a classification process. We provided a detailed explanation of the idea and implementation of the loss in the **general response**. We will include this in the revision. >**[W3]**: what does the ⊙ symbol represent in Equation 5. **Ans**: The $\odot$ denotes the hadamard product. In Equation 5, we multiply the binary mask obtained from SAM and the response map element by element. Thanks for your comments. We will add the explanation in the updated version. >**[W4]**: this article only conducted ablation experiments on the RefCOCOg dataset, so the ablation experiments are insufficient. In our main paper, we follow previous works, e.g., SAG[14] and TRIS[25], to conduct the ablation experiments on the representative RefCOCOg dataset. As suggested, we conducted more ablations on other datasets (i.e., refcoco, refcoco+ datasets) and shown the results in the following tables. The results on these datasets lead to the same conclusions as those on RefCOCOg, further validating the necessity and effectiveness of each component of our method. - RefCOCO | | | | | | | | :---------- | :--------- | ----------- | -------- | ------ | ------ | | $\mathcal{L}_{\texttt{CLs}}$ | ${\texttt{CRM}}$ | $\mathcal{L}_{\texttt{RaS}}$ | $\mathcal{L}_{\texttt{IaD}}$ | **PointM** | **mIoU** | | &#10003; | | | | 50.3 | 24.6 | | &#10003; | &#10003; | | | 53.5 | 26.4 | | &#10003; | &#10003; | &#10003; | | 56.1 | 28.7 | | &#10003; | &#10003; | | &#10003; | 56.8 | 28.0 | | &#10003; | &#10003; | &#10003; | &#10003; | $\textbf{60.0}$ | $\textbf{31.3}$ | - RefCOCO+ | $\mathcal{L}_{\texttt{CLs}}$ | ${\texttt{CRM}}$ | $\mathcal{L}_{\texttt{RaS}}$ | $\mathcal{L}_{\texttt{IaD}}$ | PointM | mIoU | | :--------- | :-------- | ------- | ---------- | -------- | --------- | | &#10003; | | | | 44.5 | 22.1 | | &#10003; | &#10003; | | | 48.8 | 24.6 | | &#10003; | &#10003; | &#10003; | | 55.3 | 27.1 | | &#10003; | &#10003; | | &#10003; | 52.4 | 25.9 | | &#10003; | &#10003; | &#10003; | &#10003; | $\textbf{58.7}$ | $\textbf{29.2}$ | >**[W5]**: The order of the tables is somewhat confusing, with Table 9 preceding Table 6..... Thanks for pointing out the problem. We will reorder the two tables to make them more in line with logical sequence or reading habits in the updated version. >**[W6]**: Which datasets were the ablation experiments in Tables 7-9 conducted on? Please specify. We conduct the ablation experiments on **Refcocog** (google split) dataset as described in `lines 275-276`. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My concerns are addressed, therefore I increase my score. --- Reply to Comment 1.1.1: Comment: Thanks very much for your positive feedback! We deeply appreciate that you decided to increase the score. We'll include the explanations and more convincing results in our revision according to your careful comments.
Summary: Inspired by human's step-by-step cognitive process for localizing a target object in an image, the paper proposes Progressive Comprehension Network (PCNet) for the task of weakly-supervised referring image segmentation (WRIS) where text is the only supervision signal. PCNet first decomposes a long, complex text description into multiple target-related cues using a large language model (LLM), and the specialized designed module, Conditional Referring Module (CRM), progressively refines the text-to-image response map using the generated text cues in multiple stages. Region-aware Shrinking (RaS) is introduced for constraining the latter stage to have a more compact, target-related response map, and Instance-aware Disambiguation (IaD) loss is proposed for differentiating between the target object and the other objects in the input image. The experiment results show the superiority of the proposed method. Strengths: - The paper provides extensive experiment results proving the validity of the proposed method, PCNet, and also ablation studies showing the contributions of main components of the model. - Figures help readers understand the proposed method. Weaknesses: - The proposed method uses the contrastive loss of TRIS, but not generates independent response maps from a positive cue and L negative cues. In Equation (3) and (4), it integrates all the text cue features and generates a single response map R_n that contains information from both positive and negative cues. This does not make sense. How did the authors differentiate between positive and negative cues when computing the classification loss L_Cls? (This is why I rated "poor" for presentation). - IaD, the loss to differentiate between the target object and other objects in the same image is similar to the calibration loss used in TRIS; the idea is the same while the calibration loss uses CLIP image-text similarity for differentiation. The authors should have discussed about this loss in the paper, and also done an ablation study comparing the two losses. - The PCNet uses the mask proposals generated by a pre-defined segmentation mask generator such as FreeSOLO and SAM. We cannot ignore the possibility of knowledge being distilled from the mask generator into PCNet, especially through RaS and IaD losses, main contributions of the paper. More specifically, we cannot be sure that the performance gains from using RaS and IaD do not come from the mask generator's knowledge. Technical Quality: 2 Clarity: 1 Questions for Authors: Please rebut the above weaknesses. I also have minor questions: - In Figure 2, the arrow comes from Q_{n+1} to Cls. How is Q_{n+t} used for computing the classification loss? - How did the authors sample L negative textual cues? one cue from each of the examples in the same mini-batch? - I am curious about the rationale behind Equations (2) and (3). Why did the authors use residual connection only after the mlp layer, not also after the self-attention layer like in Transformer? - Does the order in which text cues are used affect the performance of the model? E.g., is there a performance difference between feeding into the CRM module in the order of q_0, q_1, q_2 and in the order of q_0, q_2, q_1? Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The paper addressed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Sincerely thanks for your useful comments. To the weaknesses and questions, our response is as follows: **[W1]**: Thank you for you careful comment. We sincerely apologize for the less detailed presentation regarding the loss $\mathcal{L}_{\texttt{cls}}$ in **TRIS** [25] due to space restrictions. We provided a detailed explanation of the idea and implementation of this loss in the **general response**. We will include this loss explanation in the revision. **[W2]**: The idea of two loss functions is different and there are two essential differences between them: - **Motivation**. The calibration loss in TRIS is used to suppress noisy background activations and thus help to re-calibrate the target response map. In contrast, in our method, we observe that, multiple referring texts corresponding to different instances in one image may locate the same (or we say overlapping) instances (described in `lines 210-212`), due to the lack of instance-level supervision in WRIS. - **Implementation**. The calibration loss adopts the global CLIP score of image-text to implement a simple contrastive learning for revising the response map. Differently, we simultaneously infer the response maps of different referring texts from the same image, and obtain the **instance-level** localization results by choosing the mask proposal with max alignment score. `IaD` can achieves better localization accuracy with instance-level differentiation. - To further verify the superiority of our loss, we conduct **two-groups ablations** on the **RefCOCO (val)** dataset. The first ablation used `PCNet` without `IaD` loss as the baseline, and the second used the TRIS without calibration loss. We then separately introduced these two loss functions for comparison. Both ablations demonstrate that the `IaD` loss not only refines the response map (mIoU metric) but also significantly improves the localization accuray (PointM metric). - baseline (CLS+CRM+RaS) | Baseline | Calibration loss | IaD loss | PointM | mIoU | | - | :-:|:-:|:-:|:-:| | &#10003; | | | 56.1 | 28.7 | | &#10003; | &#10003; | | 56.9 | 29.9 | | &#10003; | | &#10003; | 60.0 | 31.3 | - baseline (CLS) | Baseline | Calibration loss | IaD loss | PointM | mIoU | | - | :-:|:-:|:-:|:-:| | &#10003; | | | 50.3 | 24.6 | | &#10003; | &#10003; | | 51.4 | 26.4 | | &#10003; | | &#10003; | 54.7 | 26.3 | **[W3]**: We would address your concerns from the following four aspects: - Mask generator does not contain any semantic information. Though it can be used as post-processing method to improve the completeness of the mask, here we aims to improve our localization accuracy by introducing suitable constraint based on the mask proposals. - Our core idea is to improve the understanding of textual cue information by progressive process. The ablation experiments (Table 2) demonstrate that even without **RaS** and **IaD** loss, our approach improves localization to some extent. However, relying only on global classification loss is suboptimal due to its lack of modeling at the instance level. Therefore, we introduce **RaS**, which leverages mask proposals to model multi-stage refinement and enhance instance-level cross-modal alignment. - In lines `200-204`, we point out that a complete mask proposal can improve the mask quality to some extent, but it is not the core of our `RaS` loss. In the following, we conducted a quantitative analysis on **RefCOCO** val set. - The first line denotes the results of baseline (include classification loss and CRM module). In the second line, we utilize the mask proposal generated by SAM (refer to `line-189`) as pseudo mask to calculate the `IoU` loss between it and the response map. The third line denotes the results of our `RaS` loss. - While the guidance of the mask proposal can improve the accuracy of the response map to some extent, there is a significant gap with the `RaS` loss on the localization performance. Besides, as shown in Table 1, even with the less accurate mask proposal by FreeSOLO, our methods still achieve superior localization performance. These results show that the performance improvement of our method is not exactly due to the introduction of mask generator. | Methods | PointM | mIoU | | - | :-:|:-:| | Baseline (CLs +CRM) | 53.5 | 26.4 | | Baseline + IoU_loss | 54.5 | 27.9 | | Baseline + RaS | 56.1 | 28.7 | - In addition, the `IaD` loss only utilizes the mask proposal as a localization result to derive the loss formula, there is no exploitation of mask knowledge. **[Q1]:** I guess the question concerns how $Q_{n+1}$, not $Q_{n+t}$ (absent in Fig. 2), is used for calculating the Classification loss. As explained in the **general response**, $Q_{n+1}$ in stage $n$ contains one positive and $L$ negative referring embeddings per image. Given this known text-image correspondence, computing the cross-entropy loss is straightforward. **[Q2]:** Leveraging the **refer_id** and **image_id** annotations in datasets, which link texts to unique instances, we randomly sample texts referring to different instances from the original. Simultaneously the cues for each text in the batch is sampled (one cue is used for referring embedding modulation at each stage). **[Q3]**: About the residual design, we draw inspiration from the previous works (like TRIS[25] and DenseCLIP[A]). The design can integrate useful cross-modal information while preserving the original modal information from degradation. **[Q4]**: The order of the text inputs would not distinctly affect the performance of our method. In the table below, we conducted a quantitative comparison on the **RefCOCO** (val) to verify this. | Order | PointM | mIoU | | :-: |:-: | :-: | | q1, q2, q3 | 60.0 | 31.3 | | q1, q3, q2 | 59.7 | 31.2 | References: [A] Denseclip: Language-guided dense prediction with context-aware prompting CVPR2022 --- Rebuttal 2: Comment: Thank the authors for the detailed response. The original formulation of the classification loss was quite confusing, but now it makes sense. Also, the comparison with the baseline + IoU loss convinces me of the contribution of the RaS loss. The authors also addressed my other concerns. It depends on further discussion with other reviewers, but I am leaning towards raising my rating. Please add the conducted comparison experiments with the calibration loss in the final draft. --- Rebuttal Comment 2.1: Title: Thanks for your response Comment: Thanks very much for your positive feedback that we have addressed your concerns through our response. Your decision to upgrade our score is deeply appreciated. Please let us know if you have further questions. We'll also include more explanations and new results in our revision as you suggested. --- Rebuttal 3: Comment: Thanks for your response and your willingness to consider raising the score. We genuinely appreciate your time and the valuable feedback. We are pleased that our rebuttal has addressed your concerns. However, we notice that your current rating still suggests that our work is still not qualified to be accepted. Considering that the discussion period is ending soon, if there are any remaining issues or concerns, we would greatly appreciate the opportunity to discuss them further with you.
Summary: This paper proposes a Progressive Comprehension Network for weakly-supervised referring image segmentation, which mimics the human process of progressive understanding by breaking down sentences into segments and gradually narrowing down the target range. The main contributions include: A multi-stage Conditional Referring Module to progressively comprehend text cues. A Region-aware Shrinking Loss to constrain the target region to gradually shrink. An Instance-aware Disambiguation Loss to eliminate overlap between different instances. Strengths: 1. Presentation: The majority of the paper is well-written and easy to follow, providing clear explanations of the key concepts. 2. Motivation: The motivation is reasonable and convincing. By mimicking the human process of understanding concepts from coarse to fine, the method progressively refines the final mask. Directly requiring the mask to continuously shrink could lead to trivial solutions, and two losses are introduced to further support this pipeline. The proposed method aligns well with the motivation. 3. Experiments: Comprehensive ablation experiments demonstrate the effectiveness of the proposed modules Weaknesses: 1. The concept of mimicking the human process of understanding concepts from coarse to fine-grained, along with multi-stage refinement for different parts of sentences, has been explored in prior REC research [1]. This approach involves parsing sentences into multiple parts and conducting multi-stage refinement. Applying REC's ideas [1] to WRIS can also be considered a contribution, but it should be thoroughly discussed in comparison to [1], highlighting any differences, and cited appropriately. 2. There is a lack of discussion and citation of recent WRIS work, such as [2], in related work and the main text. It is necessary to discuss and cite the latest WRIS work in both the related work section and the main table. 3. Section 3.4 is not entirely understandable. I suggest the authors re-describe this section to make it clearer and easier to understand. 4. The top of Fig.2 is not clear enough, especially the subscripts a and d, I suggest the author re-design this part. [1] Yang S, Li G, Yu Y. Dynamic graph attention for referring expression comprehension[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 4644-4653. [2] Dai Q, Yang S. Curriculum Point Prompting for Weakly-Supervised Referring Image Segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 13711-13722. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why is the range of Equation 6 set to [0,1]? Can the combination of other proposals (denoted as 𝑚_𝑏 ) be simply approximated by the complement of the foreground, since the proposals segmented by SAM nearly cover the entire image? In this case, if IoU(𝑅_𝑛,𝑚_𝑓) < IoU(𝑅_𝑛,𝑚_𝑏), wouldn't Equation 6 be greater than 1? 2. Does 𝑚_𝑓 change at different stages in Equation 7, for instance, when different proposals are selected at different stages? What issues could this cause? In Equation 7, are 𝑛 and 𝑛+1 reversed? According to Section 3.3, the ambiguity score should be lower at later stages. 3. Section 3.1 mentions that there are five phrases for each sentence, and Section 3.2 states that each stage uses one text cue, so there should be five stages. Why do the implementation details mention only three stages? 4. In Section 3.4, how do you sample extra 𝑁_𝑑 texts? Are they randomly sampled from multiple text descriptions corresponding to the same image? How do you ensure that the sampled text descriptions correspond to different target referents? Because in refcoco series dataset, one referent may have multiple text expressions. 5. In the bottom part of Tab.1, are the numbers all obtained by using a single peak point as the prompt for SAM? Did you try using the response map as the prompt for SAM? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Sincerely thanks for useful comments. To the weaknesses and questions, our response is as follows: **[W1]**: Thank you for your advice. Although both DGA [A] and our method adopt multi-stage refinement, there are significant differences: - The motivation is different. DGA focuses on the fully-supervised visual grounding task and aims to model visual reasoning on top of the relationships among the objects in the image. In contrast, our work addresses the weakly RIS task and aims to alleviate the localization ambiguity by progressively integrating fine-grained attribute cues. - The implementation is different. In the absence of mask, we build the relation between the response map of different stages using the proposed RaS loss. - We will discuss and include their differences in the revision. **[W2]**: Thank you for your advice. PPT [B], appearing on Arxiv near the NeurIPS deadline, is considered contemporaneous work. It introduces a parameter-efficient point prompting framework. The Table below presents a comparison on the val sets using **mIoU**. While our method utilizes down-sampled SAM mask proposals, we also tested with full-resolution masks for fairness, aligning with PPT. In both scenarios, our method demonstrates superior target localization compared to PPT. We will discuss and include their differences in the revision. | Methods | val (RefCOCO) | val (RefCOCO+) | val (RefCOCOg_Google) | | -|:-:|:-:|:-:| | PPT | 46.7 | 45.3 | 42.9 | | Ours (down-sampled SAM mask) | **48.6** | 43.3 | **45.4** | | Ours (raw SAM mask) | **52.2** | **47.9** | **48.3** | **[W3 & W4]**: - Explanation about `Sec. 3.4` - **Motivation**. Considering that each image generally contains multiple instances, each paired with its reference description, the weak discriminative capability of the model in WRIS often leads to multiple texts for different instances in a single image activating the same region (i.e, response map overlap in Line-209). Thus, we propose the **IaD** loss in `Sec. 3.4` to encourage referring texts for different instances (in the same image) to activate different regions. - **Implementation**. Since this loss function is applied individually to each image sample, for clarity, we omitted the batch concept and consider only one sample, and omitted the subscript `a` used in the Line 215-Line 227. Specifically, given one image-text pair $\\{\mathbf{I}, \mathbf{T}\\}$, we sample $N\_d$ (By default, $N\_d=1$) texts for current image sample, which means that we have two referring texts (i.e., $\mathbf{T}$ and $\mathbf{T}\_{d}$ ) for the image $\mathbf{I}$. Then, we can obtain response maps $\mathbf{R}$ and $\mathbf{R}\_{d}$ according to the description in `Sec 3.2`, and get the alignment scores $S$ and $S\_{d}$ according to `Eq. (5)`. Afterwards, we get the instance indexes $\texttt{argmax}(S)$ and $\texttt{argmax}(S\_{d})$ as the proposal predictions by them. Considering $\texttt{argmax}(\cdot)$ is a non-differentiable operation during gradient backward, we adopt a differentiable implementation by `Eq. (8)`. Finally, we can impose MSE loss on the indexes to encourage the two texts (in the same image) to activate different regions by `Eq (9)`. - We will revise the figure to make the subscripts clear in the revision. **[Q1]**: - Thanks for pointing out this issue. This is a typo, and the range shold be [0,2], as in some cases, it may occur that $\text{IoU}(𝑅_𝑛,𝑚_𝑓) < \text{IoU}(𝑅_𝑛,𝑚_𝑏)$. - It is less feasible. The shrinking loss constraint the response map toward the compact orientation while maintaining located in the foreground area. $𝑚_𝑏$ include less foreground part and more background than the $m_f$. It would lead to more background activation when choosing the $𝑚_𝑏$ as the complement of the foreground. **[Q2]**: It is possible to have different instances for different stages. According to our statistical result, this issue occurs in a small number of cases (less than 10%) and has minimal impact on loss optimization. We will include this analysis and revise the Eq. (7) in the revision. **[Q3]**: LLM outputs for referring texts often vary in attribute cue count (2-5). To enable parallel training, we standardize them to 5 cues via repetition padding (`Sec. 3.1`). The Tab. 3's ablation study on stage numbers shows optimal performance with 3 stages, thus our framework's implementation utilizes three stages. **[Q4]**: In addition to the standard training batch ($B$ image-text pairs), we sample $N_d$ extra texts ($N_d=1$) per image. Leveraging the **refer_id** and **image_id** annotations in refcoco datasets, which link texts to unique instances, we randomly sample texts referring to different instances from the original. **[Q5]**: All results utilize a single peak point as the SAM prompt. As suggested, we compared different prompts on the validation set (see table below). Using the response map as the prompt proved sensitive to thresholds, impacting noise ratios and yielding inferior performance compared to the single peak point prompt. | Methods | val (RefCOCO) | val (RefCOCO+) | val (RefCOCOg_Google) | | -| :-:|:-:|:-:| | single peak + SAM | **52.2** | **47.9** | **48.3** | | pseudo mask (thre=0.8) + SAM | 37.7 | 33.4 | 38.4 | | pseudo mask (thre=0.5) + SAM | 41.1 | 35.6 | 39.3 | | pseudo mask (thre=0.2) + SAM | 39.4 | 35.3 | 35.6 | **References** [A] Dynamic graph attention for referring expression comprehension--ICCV 2019 [B] Curriculum Point Prompting for Weakly-Supervised Referring Image Segmentation--CVPR. 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, which addressed most of my concerns. I would like to further discuss the IaD loss and Q4. In GroupViT, a similar operation is used to backpropagate gradients, but they do this to achieve hard assignment during inference, while the gradients are computed in a soft form. However, in this paper, there is no need for hard assignments during inference. Is there a simpler equivalent form to replace the current IaD loss? Regarding the answer to Q4, can I understand that you utilize the prior of different texts corresponding to different instances within the same image using **refer_id** ? Is it used in TRIS? I will raise my score if you can provide an explanation. Thank you very much! --- Rebuttal 2: Comment: Thanks very much for your positive feedback that we have addressed your most concerns and your willingness to raise the score. Here we give further explanations about IaD loss and Q4. - IaD loss. Thanks for your careful comments! In GroupViT, the Grouping Block groups similar semantic regions using the hard assignment strategy during the group tokens training and inferring process. In our IaD loss, we also adopt a similar hard assignment for deriving the loss function. The reason is that we aims to get the pseudo mask prediction by the accurate peak value point (i.e., the hard assignment results) instead of relying on whole score distribution $S(\cdot)$ (e.g., $S_{a}$, $S_{d}$ in Section 3.4). Thus utilizing the hard assignment to derive the IaD loss well matches our purpose, which helps rectify the ambiguous localization results. If we use the soft assignment (e.g., measuring KL divergence between $S_{a}$ and $S_{d}$), though the equivalent may be simpler, it not only does not match our purpose but also introduces more tricky components for optimization (e.g., extra distribution regularization is required). In order to verify the argument, we conduct a comparison on RefCOCOg(G) val dataset as follows. The KL_Loss even causes a slight decline, while the IaD loss brings a clear improvement on localization. | Methods | PointM | mIoU | | :-------------------------------------- | ------ | ---- | | $\mathcal{L}_{\texttt{CLs}}$ | 51.7 | 25.3 | | $\mathcal{L}_{\texttt{CLs}}$ + KL_loss | 51.2 | 24.8 | | $\mathcal{L}_{\texttt{CLs}}$ + IaD_loss | 53.1 | 26.6 | - Q4. Yes, you are right. `refer_id` is important information for the data_loader sampler and adopted in TRIS and SAG. Here, we further consider and utilize the prior (i.e., different texts corresponding to different instances within the same image should activate different regions) for localization optimization. If there are any additional clarifications needed, please do not hesitate to reach out. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed response, which addressed all my concerns. I decide to raise the score. Please include the previously mentioned discussions in the final revision. --- Reply to Comment 2.1.1: Title: Official Comment by Authors Comment: We thank Reviewer oLx4 for reviewing our work and for raising the review score. We really appreciate it. We will include the discussions in our revision.
Rebuttal 1: Rebuttal: **To Reviewers and AC:** We extend our sincere gratitude to all the reviewers (**R1**-oLx4, **R2**-kTSb, **R3**-Tfcw, and **R4**-K7SG) for your time and insightful reviews, which help us emphasize the contributions of our work and revise the presentation. We are encouraged to hear that the reviewers found the work is well-motivated with good presentation and contribution (**R1**, **R4**), as well as the comprehensive experimental evaluation and commendable performance (**R1**, **R2**, **R3**, **R4**). We have methodically addressed each point in our individual responses, hoping that we can address your concerns. Here we first address broader questions: **Explanation about the idea and implementation of classification loss $\mathcal{L}\_{\texttt{cls}}$ in TRIS** [25]: - Our work consists of multiple stages and utilizes $\mathcal{L}\_{\texttt{cls}}$ in TRIS at each stage independently for response maps optimization. Here, we omit the index of stage n for clarity. - $\mathcal{L}\_{\texttt{cls}}$ formulates the target localization problem as a classification process to differentiate between positive and negative text expressions. While the referring text expressions for an image are used as positive expressions, the referring text expressions from other images can be used as negative expressions for this image. Thus, given a batch (i.e., B) of image samples , each image sample is mutually associated with one positive reference text (i.e., a text describing a specific object in the current image) and mutually exclusive with $L$ negative reference texts (texts that are not related to the target object in the image). Note that the number of batches is equal to the sum of the positive samples and the negative samples (i.e., $B = 1 + L$). - Speficially, in each training batch, $B$ (i.e., $1+L$) image-text pairs $\\{ \mathbf{I}\_i, \mathbf{T}\_i \\}\_{i=1}^{B}$ are sampled. Through the language and vision encoders, we can get referring embeddings ${\mathbf{Q}} \in \mathbb{R}^{B \times C}$ and image embeddings ${\mathbf{V}} \in \mathbb{R}^{B \times H \times W \times C}$. Then, we obtain the response maps ${\mathbf{R}} \in \mathbb{R}^{B \times B \times H \times W}$ by applying similarity calculation and normalization operation. After the pooling operation as done in TRIS, we further obtain the alignment score matrix ${\mathbf{y}} \in {\mathbb{R}}^{B \times B}$. According to the $\mathcal{L}\_{\texttt{cls}}$, for $i\_{th}$ image in the batch, there is a prediction score $\mathbf{y}{[i, :]}$, where $\mathbf{y}{[i, i]}$ predicted by the corresponding text deserves a higher value (i.e, **1** positive one) and the others deserve lower values (**L** negative ones). Then `Classification` loss for the $i\_{th}$ image from the batch can be formulated as cross-entropy loss: $$ \mathcal{L}\_{\texttt{cls}, i} = - \frac{1}{B} \sum\_{j=1}^{B}\left( \mathbb{1}\_{i=j} \log\left( \frac{1}{1+e^{-\mathbf{y}{[i, j]}}} \right) + (1-\mathbb{1}\_{i=j}) \log \left(\frac{e^{-\mathbf{y}{[i, j]}}}{1+e^{-\mathbf{y}{[i, j]}}} \right) \right), $$ ​ and the `Classification` loss for the batch can be formulated as: $$ \mathcal{L}\_{\texttt{cls}}= \frac{1}{B} \sum\_{i=1}^{B} \mathcal{L}\_{\texttt{cls}, i} $$ ​ The $i$ denotes the index for the visual image and the $j$ denotes the index for the text. We address the raised concerns below and will revise our paper according to all comments. Please let us know if you have further questions. Regards, Authors (paper 891)
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Equivariant spatio-hemispherical networks for diffusion MRI deconvolution
Accept (poster)
Summary: The paper presents a convolutional neural network for spherical deconvolution of DWI data to estimate fiber orientation distribution. The main contributions over previous approaches include the introduction of Spatio-Hemispherical Equivariant Convolution, Dense Matrix Multiplication, and the use of Pre-computed Chebyshev Polynomials. These innovations improve overall efficiency. Evaluation was conducted on simulated data, assessing efficiency in terms of memory and runtime, and accuracy, demonstrating reduced false positive rates and angular error. Strengths: 1. Enhanced efficiency for DWI spherical deconvolution using deep neural networks. 2. Improved accuracy compared to current methods, demonstrated with simulated data designed for such experiments. 3. Provided code for reproducibility. Weaknesses: 1. Limited Novelty: The primary innovations over previous approaches focus on the network's efficiency (Section 3.2). Consequently, the main factors contributing to the reported improvement in accuracy remain unclear. 2. Clarity of Presentation: The paper is highly detailed, making it difficult to follow and understand the main contributions that actually lead to improvements in performance (Section 3.2) and accuracy. 3. Limited Demonstration of Impact: The practical impact beyond accuracy on simulated Tractometer data is not clearly demonstrated. It remains uncertain whether the proposed approach offers any significant clinical or scientific applications. 4. Mixed Results: Figure 6D is particularly disappointing, as the results produced by the proposed method noticeably differ from the reference. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Please clearly describe the main novel contributions of the paper and how they contributed to the results. 2. Please thoroughly discuss Figure 6D, where the results of the proposed approach appear to deviate substantially from the reference. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors discuss the limitations of the paper. However, the lack of demonstration of clinical applications and the generalization to multiple clinical/scientific DWI acquisition settings should be discussed more thoroughly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and for highlighting our improved efficiency and accuracy in comparison to current work! We believe that there may have been some miscommunications on our part and hope to clarify them below: # Unclear reasons for improvement > _"Limited novelty as presentation focused on efficiency. The main factors contributing to the reported improvement in accuracy remain unclear."_ To clarify, the accuracy gains primarily come from using physically motivated priors to solve highly ill-posed fODF reconstruction problems. To recap, previous work either - Does **voxel-by-voxel** iterative optimization (CSD) or unsupervised equivariant network training (ESD), accounting for only the spherical nature of diffusion signals. - More recent work (RT-ESD) does unsupervised equivariant network training accounting for both spatial and spherical symmetries. However, it is **too computationally intense** to run practically, which is half of our motivation. Our accuracy gains come from two perspectives, 1. We now correctly model the **antipodal symmetry** of diffusion signals, broadly seen in clinical in vivo dMRI, and directly incorporate it into our network kernels. This allows the network to focus on the reconstruction task instead of also trying to learn the antipodal symmetry of the data. This convolution is detailed in Section 3.2/”Spatio-hemispherical equivariant convolution”. 2. Further, we incorporate a physically-motived regularizer encouraging low **spatial total variation** into the unsupervised network training. To our knowledge, this regularizer has not been used in previous deep dMRI deconvolution networks. Its inclusion significantly improves fiber localization over previous methods, particularly when working with clinically used low-angular resolution images. Both contributions, alongside the low-level network kernel analyses, are entirely novel and contribute to the observed efficiency and accuracy gains. We would be happy to provide any additional information on these contributions will directly emphasize these perspectives in the revision. > _"The paper is highly detailed, making it difficult to follow and understand the main contributions that actually lead to improvements in performance and accuracy."_ Thank you for raising this concern. We have outlined the main contributions that led to the reported gains in the answer above. Regarding the contribution presentation, we will incorporate your feedback (and that of Reviewer `zoNj`) to revise the presentation to be easier to follow, thank you for raising this concern. For reference, the information regarding the causes of performance increases are currently presented in the following sections: - The theoretical and technical explanation of the improved efficiency is provided in Section 3.2, with a quantitative analysis of each contribution detailed in Section 4.1 and illustrated in Fig. 4. - Section 3.3 covers the contributions to improved fODF estimation accuracy. A detailed quantitative analysis of our method is presented in Section 4.2 and depicted in Figs. 6 and 7, emphasizing the importance of total variation regularization during training. We will streamline this in the revision. # Experiments > _"The practical impact beyond accuracy on simulated Tractometer data is not clearly demonstrated. It remains uncertain whether the proposed approach offers any significant clinical or scientific applications."_ We respectfully disagree. Our experiments include quantitative analyses on two benchmark datasets widely used by the dMRI community and also include qualitative analyses on in vivo HCP data, widely used in a variety of dMRI papers, including at NeurIPS \[[1](https://arxiv.org/pdf/2011.01355),[2](https://arxiv.org/pdf/2306.00854)\]. For context, fiber orientations in human diffusion MRI at clinical resolution have no ground truth. Acquiring this ground truth would require post-mortem dissection-based analyses and we are not aware of any publicly available dMRI dataset of this kind. As a result, the dMRI analysis community focuses on highly realistic simulated benchmark datasets to measure algorithmic progress. This is consistent with the challenge datasets of DiSCo and Tractometer used in our paper, where we achieve state-of-the-art fODF reconstruction results at more practical speeds. Further, our method is entirely generic and can be plugged into any existing dMRI analysis pipeline for analyzing the connectivity of the human brain. Lastly, the most accurate of previous work (RT-ESD) was unsuitable for clinical and scientific applications due to its computational load. Instead, our work is highly scalable due to its efficiency gains and can be practically deployed and matches or exceeds its accuracy. We would welcome any further discussion on this matter and will clarify the text to emphasize these points. > _"Fig6D is disappointing, results produced by the method noticeably differ from the reference"_ To clarify, Fig. 6D is a qualitative result of a single deconvolved voxel at clinical resolution. The actual **dataset-wide results are presented in Fig. 6B**, wherein we achieve much fewer false positives with lower angular error when looking at the entire dataset. For context, this is a highly undersampled dataset that is challenging for all baselines and we cannot expect reconstructions from low-angular resolution to match the performance on high-angular resolution. However, the baseline CSD produces an entirely different fiber orientation and ESD and CNN produce spurious fibers. Only SHD-TV has the correct number of major fibers with a lower angular error to the ground truth. Again, these are highly undersampled inputs so no method will achieve perfect reconstruction due to the ill-posedness, but dataset-wide, we measure a 30% decrease in angular error, and a 27% decrease in missing estimated fibers (false negative rate), when using SHD-TV in comparison to CSD, for example. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The comments and discussion provided by the authors indeed clarify their contribution. Yet, the contribution is very specific to dMRI. It is probably a better match to a dMRI / Neuroimaging-related conference than to the broader audience in neurips. I therefore keep my score. --- Rebuttal 2: Comment: Thank you for engaging in the discussion phase! We are happy to see that the reviewer has no remaining technical and experimental concerns w.r.t. their original review. Their new concern pertains purely to scope. > _“Yet, the contribution is very specific to dMRI. It is probably a better match to a dMRI / Neuroimaging-related conference than to the broader audience in neurips.”_ We respectfully disagree and believe that there may be a misunderstanding as, `` ### **1. NeurIPS has already published several papers on dMRI** dMRI has been explored extensively by both machine learning and neuroscience researchers due to its rich geometric structure and use in measuring neural connectivity. As a result, NeurIPS has already featured several papers on dMRI analysis with machine learning, demonstrating its relevance to the NeurIPS audience. For example, - [NeurIPS’23](https://proceedings.neurips.cc/paper_files/paper/2023/file/294de0fa7149adcb88aa3119c239c63e-Paper-Conference.pdf) - [NeurIPS’20](https://proceedings.neurips.cc/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-Paper.pdf) - [NeurIPS’19](https://papers.nips.cc/paper_files/paper/2019/file/0bfce127947574733b19da0f30739fcd-Paper.pdf) - [NeurIPS’17](https://proceedings.neurips.cc/paper/2017/hash/ccbd8ca962b80445df1f7f38c57759f0-Abstract.html) - [NeurIPS’14](https://proceedings.neurips.cc/paper_files/paper/2014/file/215a71a12769b056c3c32e7299f1c5ed-Paper.pdf) Additionally, similar conferences and journals in machine learning and computer vision have all featured machine learning work on dMRI data, further supporting its broad relevance to the machine learning community. For example, - [ICLR’23](https://openreview.net/forum?id=0vqjc50HfcC) - [CVPR’24](https://openaccess.thecvf.com/content/CVPR2024/html/Fadnavis_Patch2Self2_Self-supervised_Denoising_on_Coresets_via_Matrix_Sketching_CVPR_2024_paper.html) - [PAMI’22](https://www.computer.org/csdl/journal/tp/2022/02/09247263/1oslcBeZ3l6) `` ### **2. Our work is a general geometric deep learning contribution for spatio-spherical data** NeurIPS/ICML/ICLR and similar venues are strongly interested in geometric deep learning and deep learning on manifolds. However, such manifold structure only arises in specialized applications that may seem niche at first but later become of wide interest to the machine learning community (e.g. geometric deep learning for [molecular docking](https://openreview.net/forum?id=kKF8_K-mBbS)). While we focus on dMRI in our paper, our work is generically beneficial to the analysis of spatio-spherical signals as it finds several avenues for efficiency gains and builds a framework for sparse non-negative spatio-spherical deconvolution. We foresee several potential benefits in contexts where spatio-spherical data arises: [robotics](https://www.roboticsproceedings.org/rss14/p23.pdf), [neural rendering](https://openaccess.thecvf.com/content/CVPR2022/papers/Fridovich-Keil_Plenoxels_Radiance_Fields_Without_Neural_Networks_CVPR_2022_paper.pdf), gaussian splatting, [molecular dynamics](https://openreview.net/forum?id=dPHLbUqGbr), etc. `` ### **3. NeurIPS invites interdisciplinary work in its call for papers** NeurIPS explicitly encourages interdisciplinary submissions in its [Call for Papers](https://neurips.cc/Conferences/2024/CallForPapers). Our work lies at the intersection of the core “_Machine learning for sciences (life sciences)_” and “_Neuroscience and cognitive sciences_” areas mentioned in the call as it directly contributes: - New geometric equivariant deep learning methods for a core life sciences imaging modality (dMRI). - A novel self-supervised non-negative deconvolution formulation on a spatial graph of spherical signals. - Enhanced neuronal fiber recovery, which is crucial for the core neuroscience task of understanding brain connectivity. `` ### **4. Neuroscience is a core topic at NeurIPS and dMRI is the main tool for understanding neural connectivity** Neuroscience has been central to NeurIPS from its inception and understanding the [structural connectivity](https://www.sciencedirect.com/science/article/pii/S1053811913005351) of the brain _in vivo_ relies entirely on dMRI, among having other applications such as [surgical planning](https://www.nature.com/articles/s41593-024-01570-1). Therefore, in addition to the geometric deep learning community, we foresee our work being of interest to NeurIPS’ neuroscience community as well as our work contributes new deep learning methods that significantly advance the analysis of such data. In turn, these deconvolution advancements lead to more accurate neural pathway estimation that can potentially improve downstream neuroscientific and biomedical analyses. Thanks again for your engagement!
Summary: The authors extend previous [work done](https://proceedings.mlr.press/v227/elaldi24a.html) in the diffusion MRI (dMRI) fibre orientation distribution function (fODF) domain with an efficient $\mathbf{E}(\mathbf{3}) \times \mathbf{SO}(\mathbf{3})$ equivariant network. The proposed model directly leverages the antipodal symmetry of dMRI data to reduce computation time by 65%, as compared to previous work. The authors demonstrate the efficacy of this approach in a number of experiments; including an analysis of fODF estimation in simulated data and real world data, as well as in a downstream tractography task. They find that their method often performs best, whilst maintaining relatively high compute and memory efficiency. Strengths: The writing and format of this work is excellent. The attention to detail, as provided within the main text and the appendix, rivals that of full length journal articles within this domain. Weaknesses: This work represents a continuation of a previous method developed within [Elaldi et al](https://proceedings.mlr.press/v227/elaldi24a.html). Whilst the authors here present a significant increase in computational efficiency, this study is an iterative improvement on previous approaches, rather than a leap forward. I caveat that by acknowledging the importance of iterative improvements within scientific research. Overall, the writing is of an excellent standard. However, I have a small number of suggestions/mistakes enumerated below. - Line 98 you state that "trainable models have the advantage of decreasing the reliability of the method...". Here, _reliability_ evokes “reliable, as in you can count on it” rather than “rely on, as in this is a prerequisite”. I would maybe switch to “need for” or similar. - Line 151 I think you're missing a word at the end of the sentence "sparse matrix multiplication significant computational..." - Line 162 "Fig. 3 overviews", I would use "presents an overview" rather than using overview as a verb. - Line 269 I think "unsupervisedly" sounds a little clunky, would swap for "in an unsupervised manner" or similar. - Line 270 you state "...to extract fODFs and then use the estimated fODF to investigate the effect of improved local fODF estimation on...". I think this could be reworked to use the words fODF and estimat(ed/ion) a little less, perhaps by swapping "the estimated fODF" for "them", or swap "investigate the effect of improved local fODF estimation" for "investigate their effect". Technical Quality: 4 Clarity: 4 Questions for Authors: Given that your experiments either involve simulated data or healthy patient data, when tasked with reconstructing fODFs for subjects with significant brain pathologies, would you reasonably expect to see a drop in performance? Or do you suspect that the regularisation enforced via the equivariant properties and loss functions would be enough such that the difference in performance would be minimal?More generally, how would you expect this method to perform when tasked with prediction on out-of-distribution data, as compared to the iterative per-subject CSD method? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging feedback! Broadly, we will incorporate all of the detailed suggestions and address the remaining high level questions/comments below: > _”Whilst the authors here present a significant increase in computational efficiency, this study is an iterative improvement on previous approaches, rather than a leap forward. I caveat that by acknowledging the importance of iterative improvements within scientific research.” We agree that we are building extensively on previous work, in particular the spatio-spherically equivariant RT-ESD method. However, RT-ESD is too computationally intensive for scalable use in either research or the clinic. As a result, we develop a principled framework that leverages a physically motivated assumption about dMRI to build new equivariant layers that retain the equivariance, but are much faster to compute and store in memory. Further, we conducted an extensive analysis of these network kernels and identified key avenues for improvement (eg, pre-computing the Chebyshev polynomials) in order to make deep learning on very high dimensional dMRI data tractable. We agree that this is not necessarily a paradigm shift methodologically, however, our contributions have significant potential for real-world application which was not possible with previous work. > _"I have a small number of suggestions/mistakes enumerated below."_ We greatly appreciate the detailed enumeration of typos and awkward phrasings! We will make all suggested changes. > _"[...] When reconstructing fODFs for subjects with significant brain pathologies, would you reasonably expect to see a drop in performance? [...] how would you expect this method to perform on out-of-distribution data [...]?"_ Thank you for raising this question. We do agree that our model can be further validated under more diverse clinical conditions. As there is no _in-vivo_ microstructural ground truth for the human brain, especially for brains with pathologies, validating novel methods is challenging and ill-posed. However, as testing data distributions shift, we believe our additional proposed priors benefit ill-posed reconstruction. Our priors of spatial smoothness, spatio-spherical equivariance, sparse fibers, and antipodally symmetric signals are all valid for brains with pathologies as well. We therefore do not expect degradation on brains with lesions. As a preliminary exploration, in Fig. 1 of the rebuttal PDF, we now perform an initial fODF analysis on a brain with a tumor using the recently released dataset from \[[1](https://www.nature.com/articles/s41597-024-03013-9)\]. We find that the additional priors from our method help fODF estimation and fiber tracking substantially yielding smooth fibers, whereas the baseline CSD method has a hole in its estimated fiber tracks. However, we emphasize that this analysis is preliminary and that tractography on brains with lesions is a highly active area of research that requires substantial modifications to tractography algorithms \[[2](https://www.sciencedirect.com/science/article/pii/S1053811921009241)\], which are outside the scope of our fODF estimation work. We will mention this as a limitation and area for future work and add these new results to the appendix. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The comments and discussion provided by the authors is sufficient, and I therefore keep my positive score.
Summary: This work introduces a novel framework for fODF estimation through equivariant spatio-hemispherical networks that achieve dMRI deconvolution. Experiments on simulated dMRI datasets with known ground truth, as well as on real in vivo dMRI data are conducted, showing promising results while improving over previous methods. Strengths: 1. The paper improves upon previous methods in both processing time, and quantitative results. 2. The evaluation is sound and the experiments nicely show results on synthetic datasets with known ground truth. Weaknesses: The main weakness of this manuscript is in the way the contributions section is written (at the end of the Introduction section, lines 57--71). The authors would potentially increase readability of their paper by making this paragraph as clear and as sound as possible. It would also help readers quickly identify if they wish to continue reading this paper and if it is of interest to their own research. My suggestions are: 1) Introduce this paragraph by restating what the main aims of this paper are (similar to what you wrote on lines 20--23). 2) Clearly introduce the technical contributions as they are backed by the experiments / results section with a short description of what was achieved. 3) Some of the contributions (specifically, the in vivo qualitative results) are not present in the main manuscript, but are part of the appendix. The exception is Figure 2 which does not have enough description in the main text (lines 254--256), and appears in the middle of a paragraph discussing the synthetic data results. I believe that the experiments section should be clearly reflected in the main aims and contributions of the paper, and the appendix should be used for optional / additional results which do not take away from the main contributions. I understand that there is a limit of 9 content pages to the paper, and I am happy to discuss this further. Technical Quality: 3 Clarity: 3 Questions for Authors: Please find below some questions and general suggestions: 1. Figure 1 introduces the readers to an example of how dMRI data looks, and it is an important prelude towards understanding the problem statement of your paper. For this reason, I suggest the authors include further explanations in this figure, in either visual form or in the captions, such as: 1. How does gradient 25 differ from gradient 288 (maybe try to explain / show that these are different gradient directions and/or strengths instead of the 1/25/288 indices which have no specific meaning in this context)? 2. I think it is also important to show a zoomed-in version of the T1w image, to not confuse the reader that the spatio-spherical signal is also present in the structural data. 2. Please be consistent with referencing your figures in the manuscript: you sometimes write Figure x, and sometimes write Fig. x 3. I suggest you introduce the name of your proposed spatial-hemispherical deconvolution (SHD) framework in the contributions section (lines 57--71) as on the next page Figure 2 shows examples of your model. 4. In Figure 6B, could you discuss whether the low-resolution input to high-resolution output experiments could produce unrealistic reconstructions in the presence of noise / in a real dMRI data setting, as high-angular resolution is needed for higher contrast in the angular domain? I am wondering if for crossing fibers, for example, as shown in Figure 6D, none of the methods can accurately reconstruct the ground truth then maybe we cannot trust these reconstructions for low-angular resolution data? 5. Can you also please label the x-axes in Figures 6A and 6B to make it clear that the values are in degrees? 6. In Figure 6C can you explain the “narrowness” of your result as compared to the ground truth or CSD? 7. Can you provide a short description (in section 4.2.1) of how the peak angular error and false positive rates are calculated? I understand that these are described in A.3, but to improve readability I suggest that they are introduced a bit sooner with the details left for the appendix, or at least to make it more clear that the details are in A.4. Moreover, it would be interesting to understand the slight increase in FPR in both Figures 6A and 6B when comparing SHD(-TV) with RT-ESD. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have attempted to address some potential limitations, but would be nice to see a lengthier discussion on how their proposed method would perform on other in vivo clinical datasets, under different noise levels, patient motion, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the highly detailed and valuable feedback! We will incorporate the suggestions and address the other concerns below: # Weaknesses > _"The main weakness of this manuscript is in the way the contributions section is written at the end of the Introduction. [...]"_ Thank you for the detailed suggestions on editing this paragraph! We will incorporate them into the revised paper. > _"Some of the contributions (specifically, the in vivo qualitative results) are not present in the main manuscript, but are part of the appendix."_ As the reviewer mentions, we combined the qualitative in-vivo fODF estimation and quantitative synthetic analysis in Section 4.2.1, as both address the same question. We clarify that this section details both results and is not solely focused on quantitative outcomes. However, we agree that our complete qualitative and quantitative analysis was not entirely contained in the main paper due to space limitations. In our revision, we will expand the caption of Fig. 2 and bring in more qualitative in vivo results into the main paper from the appendix. To do so, we will abbreviate the front matter of the introduction and move some lower level experimental details to the appendix. # Questions > _"Could you discuss whether the low-resolution input to high-resolution output experiments could produce unrealistic reconstructions in the presence of noise / in a real low-resolution dMRI data setting [...]?"_ Thank you for raising this important discussion point. We agree with the reviewer that hallucination is a concern in all undersampled reconstruction methods, including previously proposed fODF estimation methods. As demonstrated in our experiments, in low angular settings, the current widely-used Constrained Spherical Deconvolution (CSD) method provides inaccurate fiber estimates with high angular error. Our motivation is precisely to mitigate these unrealistic reconstructions by considering the spatial correlation of the diffusion signal and leveraging the network's equivariance property. In our DiSCo experiments with clinically relevant noise, as shown in Fig.6.B, using both the spatially-informed network and spatio-spherical equivariance leads to a significant decrease in hallucinated false positive fibers and angular error. **Importantly, the FPR achieved with our method on low-angular resolution input is competitive with the results on high-angular resolution input**. However, we entirely agree that undersampled reconstruction results must be approached with caution, as the lower angular resolution increases angular error and the likelihood of missing estimated fibers, which can be qualitatively observed in Fig.2.B. This caution should be considered within the broader context of the tissue microstructure estimation field. For example, Diffusion Tensor Imaging, which uses only a few spherical samples to estimate the underlying fiber configuration, can sometimes create spurious local reconstructions but remains a primary tool in clinical applications and research. Our method, with its additional priors and self-supervised learning strategy, represents a significant improvement over methods currently used in clinical settings. We will clarify the results to include this discussion. > _"[...] lengthier discussion on how the proposed method would perform on other in vivo clinical datasets, under different noise levels, motion, etc."_ We agree that this is an important discussion. In the attached rebuttal PDF, we perform an additional experiment using dMRI images of brains with pathologies \[[1](https://www.nature.com/articles/s41597-024-03013-9)\]. As also suggested by reviewer `JrFU`, we agree that our method would benefit from further specific validation under varying image qualities and the presence of artifacts. Briefly, we find that additional priors of spatio-spherical equivariance and spatial smoothness improve fiber estimation on in vivo anomalous clinical data, with the caveat that such analyses need to be performed on much wider scale in a clinical followup for certainty. Further, we believe that as image quality degrades, that is precisely where additional priors help for ill-posed reconstruction. Due to space limitations in the rebuttal, please see our discussion with Reviewers `JrFU` and `F5CU` for further details. These limitations and avenues for further work will be added to the revision. > _"[...] slight increase in FPR in both Fig 6A and 6B when comparing SHD(-TV) with RT-ESD."_ The reviewer is correct that there is a slight increase in the FPR. Regarding the SHD versus RT-ESD comparison, we speculate that as FPR and Angular Error have a tradeoff, they might require slightly differing regularization weights. SHD also has the prior of antipodal symmetry whereas RT-ESD does not, and the bias-variance tradeoff might be causing this slight increase. For the proposed SHD-TV model, the increase in FPR can be attributed to the smoothing effect of the total variation regularization, which occasionally extends a fiber into neighboring voxels where it might not be appropriate. However, this increase in FPR is offset by the benefits of much lower angular error and higher spatial coherence, which enhances robustness against noise and improves fiber localization. > _"In Fig6C can you explain the “narrowness” of your result as compared to the ground truth or CSD?"_ We use a prior of sparse fibers using a sparse regularizer during training, this is consistent with previous work such as ESD and RT-ESD and the wider fODF literature. This leads to increased narrowness in comparison to the CSD method, which does not use any sparsity regularization. # Writing improvement suggestions Thanks again for all of the detailed writing and presentation suggestions. We will incorporate all of them into the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The clarifications added by the authors to all questions raised have addressed most of my concerns. I therefore keep my positive score.
Summary: This paper introduces a novel method for analyzing diffusion MRI data, leveraging convolutional network layers equivariant to the E(3)×SO(3) group, which respects the physical symmetries of dMRI data. The proposed spatio-hemispherical graph convolutions reduce computational complexity while maintaining high deconvolution accuracy. Strengths: This paper presents a novel method for dMRI deconvolution by introducing equivariant convolutional network layers that account for the physical symmetries in dMRI data. The use of spatio-hemispherical graph convolutions, leveraging the antipodal symmetry of neuronal fibres, reduces computational complexity while maintaining accuracy. The proposed method addresses important challenges in dMRI analysis, focusing on the need for accurate deconvolution at clinically feasible resolutions. The theorical foundation and empirical validation is sufficient, and the methodology is well-presented with clear explanations. The results are consistently validated, showcasing the method's efficiency and accuracy improvements. Additionally, The clarity of the paper is good with well-organized structure. Weaknesses: 1. The reliance on specific assumptions may limit the scope of the model. It would be better to improve the flexibility of the model so that it could be applied to diverse scenarios. 2. Diverse clinical conditions can be considered in future studies, involving varying levels of image quality and pathological changes. Conditions such as specific noise or artifacts are not fully explored. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors clarify the limitations of the antipodal symmetry assumption? Are there specific scenarios where this assumption might not hold, and how might this impact the model's performance? 2. Can the authors discuss the generalizability of their method across different clinical conditions and patient populations? How adaptable is the model to varying clinical data qualities? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation and for highlighting the importance and quality of our methodology, presentation, and experiments. # Antipodal symmetry assumption > _"It would be better to improve the flexibility of the model so that it could be applied to diverse scenarios."_ We agree that the antipodal spherical symmetry assumption limits our approach to antipodally symmetric spatio-spherical data. While previous methods \[[1](https://arxiv.org/pdf/2310.02970),[2](https://arxiv.org/pdf/2304.06103),[3](https://arxiv.org/pdf/2102.06942)\], have developed spatio-spherical equivariant networks without this assumption, they are far too computationally intensive to scalably process high-dimensional data such as diffusion MRI. We believe that this is partly why the diffusion MRI community primarily still uses conventional iterative methods. We instead build on previous works by incorporating more knowledge of the underlying task directly into the network layers to significantly increase computational efficiency. However, we agree with you that future work should aim to relax these assumptions while maintaining our efficiency gains and this is mentioned in our future work. > _"Can the authors clarify the limitations of the antipodal symmetry assumption?"_ Thank you for initiating this important discussion! The antipodal symmetry assumption is widespread in dMRI analysis due to the symmetric nature of the diffusion process. Consequently, only a symmetric fODF can be estimated from a single voxel-wise diffusion signal, and **mainstream deconvolution methods widely use this assumption**. For instance, the most widely-used fODF estimation method, Constrained Spherical Deconvolution \[[4](https://www.sciencedirect.com/science/article/abs/pii/S1053811907001243)\], computes only the even-order spherical harmonics of the fODF, thus making the antipodal symmetry assumption. The one limitation is that at a _microscopic_ level, fODFs can have antipodal asymmetry. However, this is only visible in ex vivo dissection studies and we focus on in vivo clinical dMRI where antipodal symmetry is a valid assumption. This assumption is also reflected in common dMRI acquisition strategies in large-scale studies that only sample hemispherical signals at every voxel \[[5](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8317510/),[6](https://onlinelibrary.wiley.com/doi/10.1002/mrm.21646),[7](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7065087/)\]. We will mention this in the Discussion of the revised paper. # More diverse clinical conditions > _"Diverse clinical conditions can be considered in future studies, involving varying levels of image quality and pathological changes [...]."_ Thank you for suggesting this possible future direction! We do agree that our model can be further validated under more diverse clinical conditions. However, as there is no _in-vivo_ microstructural ground truth for the human brain, validating novel methods is challenging and ill-posed. In our paper, we rely on community-developed standardized benchmarks and datasets. As mentioned by the reviewer, our paper presents comprehensive experiments to quantitatively validate the performance improvements of our method by performing: - fODF estimation analyses with diverse image qualities, quantitatively on the DiSCo and Tractometer datasets, as well as qualitatively on the HCP dataset. - Downstream tractography task analyses on the Tractometer dataset. - Speed and memory analyses. Regarding the specific scenarios mentioned by the reviewer, ## Brains with lesions In Fig. 1 of the rebuttal PDF, we perform an initial fODF analysis on brains with anomalies using the recently released dataset from \[[8](https://www.nature.com/articles/s41597-024-03013-9)\]. We find that the additional priors from our method help fODF estimation and fiber tracking substantially, filling in a hole in the fiber tracks recovered using the baseline CSD method. However, we note that this analysis is preliminary and that tractography on brains with lesions is a highly active area of research that requires substantial modifications to tractography algorithms \[[9](https://www.sciencedirect.com/science/article/pii/S1053811921009241)\], which are outside the scope of our fODF estimation work. We will mention this as a limitation and area for future work and add these new results to the appendix. ## Varying image qualities Regarding image qualities, all of the datasets in our paper have substantially different characteristics and SNRs. Further, we believe our model generalizes well across **different clinical conditions and patient populations** because: - It is trained to reconstruct the input diffusion signal directly, rather than recognizing specific tissue structures that may or may not be present at clinical deployment. - The self-supervised training framework allows the model to be fine-tuned for any new image and imaging setup. We agree that testing performance under varying amounts of patient motion and imaging artifacts would be highly interesting. However, we are not aware of any publicly available in vivo datasets that provide such data. Our limitations and future work will now further emphasize that future work should investigate performance under various amounts of head motion or other clinical artifacts. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The rebuttal has addressed most of my concerns. I will keep my scores.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and their encouraging feedback. Their comments and suggestions have made the revision a stronger paper. We were happy to find that they found the submission to be theoretically sufficient \[`JrFU`\], well presented and organized \[`JrFU, F5CU`\], extensively validated with sound evaluation \[`JrFU`, `zoNj`\], yielding better efficiency and accuracy \[`JrFU`,`F5CU`,`rCgS`\], with a high attention to detail \[`F5CU`\]. Common questions and concerns revolve around the clarity of our contributions paragraph in the Introduction \[`zoNj`, `rCgS`\], our antipodal symmetry assumption \[`JrFU`\], the limited analysis in the main text of the in-vivo experiments \[`zoNj`\], and a lack of clinical applications \[`rCgS`\]. The reviewers would also like to see a longer discussion of our method in more diverse clinical settings, such as pathological changes, noise, and motion \[`JrFU`, `zoNj`, `F5CU`\]. ## Changes and clarifications To improve our submission and address the reviewers' concerns, we make the following major revisions and clarifications, briefly recapped below: - \[`JrFU`\] We motivate the physical reasons behind the assumption of antipodal symmetry of diffusion MRI signals and describe the computational benefits below. - \[`zoNj`,`rCgS`\] We will use the suggestions to rewrite our contribution paragraph in the revision so as to make the paper accessible and clearly understandable. - \[`zoNj`,`rCgS`\] We clarify our experiments on the highly undersampled qualitative example visualized in Fig 6D. - \[`zoNj`,`rCgS`\] We provide additional descriptions of our quantitative and qualitative results, regarding the narrowness of the predictions, slight increases in false positive fibers on a specific experiment, and the challenges of prediction on low-angular resolution undersampled images. - \[`JrFU`,`zoNj`,`F5CU`\] We now provide a qualitative analysis of the outcome of our method on dMRI of a brain with pathologies and discuss how our model adapts to new clinical and imaging setups. All details and remaining comments are addressed in the individual responses below. Additional results are attached as a PDF. We appreciate the reviewers' comments on our contribution and thank them again for their time and expert feedback. We would be happy to address any additional concerns and incorporate any further suggestions. Pdf: /pdf/1585d8e29912e194103cc8c893cfb640f8265975.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
SlimGPT: Layer-wise Structured Pruning for Large Language Models
Accept (poster)
Summary: This paper presents a novel SlimGPT framework to conduct structured pruning for LLMs in a fast and low-cost way. Specifically, SlimGPT modifies the Optimal Brain Surgeon (OBS) framework, and proposes a Batched Greedy Pruning to enhance the performance of head-wise pruning through Cholesky decomposition. SlimGPT also improves the FFN pruning efficiency via Dynamic Group Size. Besides, SlimGPT employs an Incremental Pruning Ratio in order to mitigate the error accumulation problem in layer-wise pruning. Experiments on the LLaMA, LLaMA-2, and other popular LLMs demonstrate that SlimGPT achieves a new SOTA on LLM pruning. Strengths: 1. This paper presents a proper way to extend the OBS framework to structured pruning with strong theoretical foundation. The technical details are thorough and convincing. 2. Extensive experiment results demonstrate that SlimGPT successfully achieves a new SOTA, surpassing all related works in this field. Weaknesses: 1. SlimGPT employs an Incremental Pruning Ratio strategy. In L220, the article specifies that a logarithmic increasing strategy performs well and is employed in all experiments. Actually, this should be considered more carefully. Pruning can be viewed as a method to get rid of unnecessary information in the activations and only preserve necessary components for later layers. From this perspective, it resembles that of Token Merging [1,2]. It is already demonstrated in [1,2] that the performance loss caused by an aggressive pruning schedule in the first layers can be mitigated by re-training. Therefore, I suggest the authors test more increasing strategies, instead of the logarithmic strategy, to fully utilize the power of SlimGPT. 2. Pruning at a $p\\%$ sparsity does not usually lead to an $\frac{1}{1-p\\%}$ throughput speedup [2]. To demonstrate that SlimGPT actually helps to deploy LLMs, the authors should carefully compare the **throughput** (eg., tokens per sec.) of pruned models generated by SlimGPT and the competing baselines. [1] Token Merging: Your ViT But Faster (ICLR 2023) [2] PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation (ECCV 2024) Technical Quality: 4 Clarity: 4 Questions for Authors: Under the same total sparsity value, does different increasing schedules affect the model inference speedup? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: See Weaknesses and Questions. I do appreciate the theoretical contributions. So my rating may be adjusted after carefully checking the replies and other reviews. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly appreciate the reviewer for the constructive comments. > W1: Further explanation about Incremental Pruning Ratio strategy. In Section 5.3.2, we have discussed the impact of different pruning ratio strategies on performance, with detailed results presented in Table 5 (for convenience, Table 5 is reproduced below). Specifically, for the Incremental Pruning Ratio strategy, both logarithmic and linear approaches are employed. Each of these strategies offers distinct benefits: the linear approach achieves better Zero-shot Avg results but incurs a slight loss in PPL. Nevertheless, overall, whether the strategy is linear or logarithmic, the incremental scheme significantly outperforms uniform pruning or decrementing strategies. | | Model Size | PPL | Zero-shot Avg. | | ---------------------- | ---------- | ------ | -------------- | | log increase (SlimGPT) | 3.40b | 38.83 | 52.23 | | linear increase | 3.34b | 46.57 | 53.45 | | uniform | 3.50b | 123.05 | 44.34 | | log decrease | 3.40b | 380.69 | 36.73 | | linear decrease | 3.34b | 932.64 | 35.62 | Please note that our experiments are conducted under **low-resource conditions**. After extensive fine-tuning on large-scale data, the differences in performance resulting from various pruning ratio strategies will diminish further (Sheared-LLama[1], even with uniform pruning, mitigated performance impacts through subsequent training on a large data scale). However, under resource-limited conditions, selecting an appropriate pruning ratio strategy can reduce the reliance on subsequent training, thus minimizing performance loss. This is particularly crucial for LLMs, as a complete training cycle demands substantial resources and time. [1]. Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. > W2: Compare the throughput (eg., tokens per sec.) of pruned models generated by SlimGPT and the competing baselines. Thank you for your valuable suggestion. About the additional experiments on inference speed, we provide the experimental results and analysis in "global response" at the top. Please refer to that reponse for detailed information. > Q: Under the same total sparsity value, does different increasing schedules affect the model inference speedup? In general, under the same number of parameters, deeper models (with more layers) tend to have slower inference speeds. On the other hand, models with the same number of layers but different widths primarily experience variations in instantaneous computational load, which impacts speed differently depending on the performance and optimization of the GPU used. Since our current method typically does not affect the number of layers, as long as there isn't an extreme variation in width distribution, inference speed should not be significantly impacted. This is supported by the experimental results from the previous question. --- Rebuttal 2: Comment: Dear Reviewer 8hG2: We sincerely appreciate your valuable and insightful comments. With less than 24 hours remaining in the discussion period, we anticipate further opinion from you. We would like to further discuss the LLM inference speed. Since most operators are computed in parallel, the acceleration from pruning does not arise from a reduction in computational load but rather from a decrease in parameter access time, which is a significant bottleneck in large model inference. Therefore, the inference time is not linearly related to the parameter number. In our experiments, we find that pruning 50% of the parameters result in an inference speed that is 63% of the original. Best Regards, Authors of submission 8375
Summary: This paper presents SlimGPT, a method for structured pruning of LLMs to balance performance with efficiency. The method is based on the OBS framework and introduces Batched Greedy Pruning to enhance pruning accuracy and speed. The authors also propose the Incremental Pruning Ratio strategy to mitigate performance loss due to error accumulation. Experimental results on LLaMA and other models demonstrate that SlimGPT achieves state-of-the-art results with significant improvements in performance retention compared to existing methods. Strengths: 1. The paper introduces a novel method for structured pruning based on the OBS framework to accelerate large language models. 2. The method is validated through comprehensive experiments on various models, showing improved performance and efficiency. Weaknesses: 1. Since Table 9 illustrates the significant impact of the calibration dataset on the pruning performance, I doubt whether the selection of calibration samples is more important than the design of the pruning criterion. The experimental results compared in the paper are with different sampling strategies of the calibration dataset, so it is hard to evaluate the superiority of the proposed method. 2. The design of the layer-wise sparsity is empirical with no theoretical analysis. Since the pruning within a layer is a greedy pipeline, it is unclear why the layer-wise sparsity design is not in a greedy pipeline. 3. The inference speed should be provided for comparisons. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In Figure 1., from my view, it seems that the output elements of the third head are all the smallest, so they are all reordered to the first head for pruning. My question is what if not all elements of a head are in the same magnitude order? In this case, how to conduct batched greedy pruning? 2. Can you provide some insights about the reason why the pruning results with 2048 samples and 2048 sequence length start to decrease in zero-shot average metric? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly appreciate the reviewer for the constructive comments. > W1: The impact of the calibration dataset on the pruning performance. Thank you for the valuable comment. Table 9 displays the pruning results of SlimGPT **without fine-tuning** (for convenience, Table 9 is reproduced below). When the calibration data is switched from C4 to Alpaca/GPT4-Alpaca, the PPL results indeed degrade (48.26/47.06 vs. 42.06). However, the Zero-shot Avg scores improve significantly (54.44/54.66 vs. 52.70), surpassing the **fine-tuned results** of all baselines. | Dataset | PPL↓ | Zero-shot Avg.↑ | | ------------ | ----- | --------------- | | C4 (SlimGPT) | 42.06 | 52.70 | | Alpaca | 48.26 | 54.44 | | GPT4-Alpaca | 47.06 | 54.66 | This observation highlights a trade-off between PPL and Zero-shot Avg. Due to SlimGPT's inherent parameter compensation mechanism, it is sensitive to the quality of the input data, so there is a distinct difference in impact between pre-trained data and instruction-following data. Additionally, our experiments show that random open-source pre-trained data can achieve SOTA results. Therefore, we believe that using higher-quality data tailored to specific domains provides SlimGPT with greater potential for improvement compared to other methods. > W2: The design of the layer-wise sparsity is empirical. The core concept of SlimGPT is derived from the OBS framework, which addresses global model pruning by breaking it down into layer-wise subproblems. Each layer is optimized **sequentially** from shallow to deep. Previous studies, such as OBC[1] and GPTQ[2], have demonstrated that this method yields excellent results across various domains. And our primary objective is to apply this framework to the structured pruning of LLMs. We would like to discuss the viability of the layer-wise greedy pipeline. Due to the unidirectional influence between layers, the current layer is impacted solely by the preceding layer and remains unaffected by subsequent layers. If the pruning process is not executed sequentially, the local optimality at each step would be compromised. We believe this would complicate the task and significantly increase the computational time required. [1]. Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. [2]. GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers. > W3: The inference speed should be provided. Thank you for your valuable suggestion. About the additional experiments on inference speed, we provide the experimental results and analysis in "global response" at the top. Please refer to that reponse for detailed information. > Q1: Further explanation on Figure 1. As a structured pruning scheme, SlimGPT prunes attention blocks with the smallest pruning granularity at the head level. Consequently, heads are treated as indivisible units. We evaluate each head by summing the errors of all columns within it. For those interested in the specifics, our detailed algorithmic process is outlined in Algorithm 1. > Q2: Can you provide some insights about the reason why the pruning results with 2048 samples and 2048 sequence length start to decrease in zero-shot average metric? Thank you for the comment. The Zero-shot Avg is essentially a mean value, which can be easily influenced by sub-tasks with significant fluctuations. To facilitate analysis, we provide detailed results for the commonsense reasoning tasks below. | Experiment | Sample Size | Sequence Length | BoolQ | PIQA | HellasW | WinoG | ARC-e | ARC-c | OBQA | Avg. | | ---------- | ----------- | --------------- | ----- | ----- | ------- | ----- | ----- | ----- | ----- | ----- | | (a) | 512 | 64 | 63.12 | 68.82 | 51.34 | 57.62 | 46.17 | 29.35 | 35.00 | 50.20 | | | 512 | 1024 | 67.22 | 71.06 | 55.23 | 59.27 | 53.45 | 31.14 | 36.60 | 53.42 | | | 512 | 2048 | 66.73 | 71.60 | 55.35 | 57.30 | 53.11 | 32.42 | 35.60 | 53.16 | | (b) | 128 | 512 | 65.66 | 69.37 | 54.07 | 58.33 | 50.04 | 30.55 | 35.2 | 51.89 | | | 256 | 512 | 63.76 | 70.29 | 54.47 | 58.88 | 53.07 | 30.89 | 34.6 | 52.28 | | | 1024 | 512 | 65.60 | 71.65 | 54.35 | 57.46 | 53.07 | 30.63 | 35.40 | 52.59 | | | 2048 | 512 | 63.30 | 71.71 | 54.79 | 57.38 | 53.2 | 31.66 | 34.60 | 52.38 | As the sample length increases from 64 to 1024, there is a noticeable improvement in the metrics across various tasks. However, when the length is further increased to 2048, the rate of improvement slows down, and some metrics even decline, particularly for WinoGrande (59.27 vs. 57.3). We believe that since most commonsense reasoning tasks involve short text data, the impact of SlimGPT on these tasks may diminish when the sequence length of the calibration set exceeds a certain threshold. In fact, it could even weaken the model's understanding of short texts. When the sample size increases, the changes across the various subtasks are less consistent and exhibit smaller magnitudes. As shown in Figure 3 of the paper, the fluctuations in Zero-shot Avg in subplot (a) are significantly smaller than those in subplot (b), and there are two instances of decline. Both of these declines result from fluctuations in the BoolQ dataset (refer to the table above). Thus, we hypothesize that the BoolQ dataset is particularly sensitive to random sampling in the calibration set, which may result in the drop in Zero-shot Avg. --- Rebuttal 2: Comment: Dear Reviewer QvFZ: As the discussion period comes to a close, we sincerely look forward to your feedback on our rebuttal. Your further opinions would be essential for us to improve our work. Regarding your concern that the calibration set is more important than the pruning method, we include additional experiments with LLM-Pruner (our baseline). We perform pruning without fine-tuning at the same pruning ratio, yielding an evaluation result of `PPL=136.19`. In contrast, the worst result for SlimGPT is `PPL=48.26`. Therefore, we believe that while the calibration set may have a slight influence on the bias of model pruning, it does not fundamentally affect the pruning results. Best Regards, Authors of submission 8375
Summary: The authors proposed a layer-wise pruning approach called SlimGPT that follows the Optimal Brain Surgeon framework but with a batched pruning procedure utilized to make it feasible on large models while remaining structured. The authors claim near-optimal pruning performance on commonsense reasoning datasets and wikitext ppl. Strengths: Models pruned via structured pruning approaches can naturally gain efficiency benefits, and the proposed method supposedly inherits this important property (though missing some efficiency reports). The task- and relatively architecture-agnostic nature of SlimGPT also makes it score well in terms of adaptability. The experiment reports indicate SlimGPT beats three other SOTA methods by a healthy margin (though its alignment requires some additional polish). Weaknesses: The main weakness of the paper is its eval, both in terms of alignment and coverage. 1. Unaligned evaluation: Many of the compared baselines utilized a different calibration dataset and procedure, but it looks like only the LLM-Pruner is replicated in an aligned setup, while all results for other methods are copied from their original literature. This needs to be controlled, especially with only three methods to compare. 2. Overly emphasis on the Llama family: The presented evaluation is solely conducted on various Llama 1/2 and Vicuna models, which are all Llama family-based. More coverage on other popular LLMs should be included. I'd recommend a healthy selection from Mistral, Phi, Gemma, Qiwen, and something along the line. 3. Only on zero-shot commonsense reasoning tasks: As mentioned around line 243, the real task evaluation of this paper is conducted *"under a zero-shot setting on the Commonsense Reasoning datasets, which encompass seven diverse subtasks..."* This is not comprehensive enough. Common intelligence datasets like MMLU and GSM8k in typical few-shot setups should also be reported. Additionally, given the weak generation/long context performance in some recent layer pruning (but not layer-wise structured pruning in finer granularity) work like ShortGPT, I'd like to see SlimGPT evaluated on tasks like LongBench, InfinityBench, Needle-In-A-Haystack retrieval, and HumanEval on models with longer context window (e.g., mistral 32k). 4. Incomplete efficiency report: There are no throughput or latency results, which are key efficiency metrics for conducting structured pruning in the first place and must be reported in efficiency literature, especially because different structured pruning methods can reflect different inference efficiencies, leading to work like ShearedLlaMA proposing targeted structured pruning. Also, the authors claim that *"low-cost, low-resource, and time-efficient compression scheme"* as their contribution in line 65, but there is no runtime or memory report on its pruning procedure. 5. Not really a weakness, but the authors should consider giving a more detailed introduction of the compared baselines either in related work or around line 246. Despite my score of 4, I believe many of the raised concerns are addressable as there are mostly just more evals, and I am open to improving my rating upon proper rebuttal. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Where is the plotting for the unpruned model in Figure 2? What pruning method is applied here? How much performance can fine-tuning recover in this setup? 2. Is a pruned model superior to a smaller pretrained model? Like can a SlimGPT-pruned 13B model with a pruning ratio set around 45% be more performant than a 7/8B model? From the look of Tables 1 & 2, it seems a 50% pruned 13B model is significantly inferior to a 7B, and a 50% pruned 30B model is required to provide similar zero-shot task performance to an unpruned 7B, so this is unlikely. If confirmed, I am afraid this massively discounts the contribution of this work, as one can often just adopt a smaller pretrained model with no pruning or calibration necessary. Though I understand different application scenarios may call for models with different sizes, where the pretrained models can't cover them all. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have provided a limitation and checklist section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly appreciate the reviewer for the constructive comments. Due to text limitations, we try to answer your questions concisely and convincingly. > W1: Unaligned evaluation. We acknowledge your concerns. Achieving a fully aligned experimental setup is challenging since pruning tasks differ from tasks like model optimization, and various pruning methods have unique principles and data requirements. For instance, `LLM-Pruner` has two stages: pruning and fine-tuning, while `Compresso` and `LoRAPrune` involve only one stage of sparse training. Our method, `SlimGPT`, also follows a two-stage process, aligning it with `LLM-Pruner`. Given the differing principles of `Compresso` and `LoRAPrune`, we make a compromise in terms of aligning the experimental environment. However, we believe that this does not affect the conclusions of our paper. In fact, we achieve better pruning results using fewer data resources or fewer iterations. > W2: Overly emphasis on the Llama family. Thank you for your valuable comments. In our model selection, we reference previous works and the influence of LLama and Vicuna models but overlooked Vicuna's derivation from LLama. Due to time constraints, we choose Baichuan-7b, a prominent model in the Chinese community, for our experiments. This model includes a comprehensive MMLU and C-Eval evaluation script and is supported by LLM-Pruner, aiding our quick verification process. Using the same setup as the paper, we prune Baichuan-7b by 20% with LLM-Pruner and SlimGPT. SlimGPT achieves slightly better PPL results (19.73 vs. 19.85) and significantly outperforms LLM-Pruner in all commonsense reasoning tasks (58.8 vs. 56.38). Prune%|Method|#Params|PPL↓|BoolQ|PIQA|HellasW|WinoG|ARC-e|ARC-c|OBQA|Avg -|-|-|-|-|-|-|-|-|-|-|- -|-|7B|13.25|70.09|76.93|70.06|64.09|67.05|40.53|38.20|60.99 20%|LLM-Pruner|5.7B|19.85|62.87|74.48|63.03|60.93|60.31|36.86|36.20|56.38 20%|SlimGPT w/o tune|5.7B|20.01|69.17|75.03|65.25|61.25|64.94|35.15|36.60|58.20 20%|SlimGPT|5.7B|19.73|66.21|75.03|66.74|63.85|62.84|38.91|38.00|58.80 > W3: Only on zero-shot commonsense reasoning tasks. Thank you for your suggestions. We evaluated our model based on the official LLama report and previous research, focusing on language modeling performance (PPL) and zero-shot commonsense reasoning. While we believe these tasks are convincing, we recognize they may be insufficient for a complete assessment. To improve our evaluation, we perform 5-shot tests on Baichuan-7b using MMLU and cross-lingual task C-Eval, as seen in the table below. The 5-shot evaluation results for MMLU show SlimGPT outperforming the baseline (35.4 vs. 24.3), and it also excels in the cross-lingual C-Eval dataset (28.7 vs. 22.4), despite reduced performance after finetuning on Alpaca. Dataset|Prune%|Method|#Params|Humanities|Social Sciences|STEM|Other|Avg -|-|-|-|-|-|-|-|- MMLU|-|-|7B|38.1|49.2|35.2|47.7|42.1 ||20%|LLM-Pruner|5.7B|24.8|23.4|21.9|26.8|24.3 ||20%|SlimGPT w/o tune|5.7B|29.7|38.0|31.3|38.9|34.0 ||20%|SlimGPT|5.7B|32.2|39.2|31.1|40.2|35.4 C-Eval|-|-|7B|47.2|50.5|36.4|45.5|43.3 ||20%|LLM-Pruner|5.7B|23.5|24.9|21.1|21.2|22.4 ||20%|SlimGPT w/o tune|5.7B|32.9|34.3|28.3|30.9|31.0 ||20%|SlimGPT|5.7B|31.8|33.1|24.2|29.9|28.7 Regarding your suggestion for experiments on long-context performance, we recognize their importance but, due to time constraints, we won’t be able to conduct them at this moment. We plan to include them in future work for a more comprehensive study. > W4: Incomplete efficiency report. Thank you for your suggestion. We include the experimental results and analysis on inference speed and pruning efficiency in the "global response" section, as other reviewers have similar questions. Please refer to that for detailed information. > W5: Detailed introduction of the compared baselines. Due to space constraints, we have omitted the baseline introduction in the final paper version. We apologize for any inconvenience and will include it in the updated version. > Q1: Further explanation on Figure 2. Figure 2 shows the output errors of various pruned models relative to the unpruned model, which has an error of zero and is not displayed (its PPL is 12.63). In our pruning approach, we utilize SlimGPT with the Alpaca dataset, pruning only the first layer to minimize output errors. To assess performance recovery after finetuning, we present the PPL results below, where all models are finetuned with LoRA for one epoch using Alpaca. We observe that the PPL does not improve after finetuning, even for the unpruned model. We believe this may be due to the distinct distribution bwtween the instruction-following dataset Alpaca and test dataset Wikitext2, potentially causing overfitting of the unpruned weights and poorer performance on the test data, which requires further validation. Model|PPL (w/o tune)|PPL (w/ tune) -|-|- LLama-7B|12.63|15.63 Layer0-prune-25%|12.86|16.86 Layer0-prune-50%|13.98|17.31 Layer0-prune-75%|21.49|31.27 > Q2: Is a pruned model superior to a smaller pretrained model? This topic deserves discussion. Under **low-resource conditions**, pruned models after finetuning generally perform worse than smaller pre-trained ones, as shown in our works and previous research(LLM-Pruner). However, with **full training**, pruned models can outperform their smaller versions, like in Sheared-LLama[1]. Thus, pruning may serve as a high-benchmark initialization method to lessen the need for extensive training. Our aim is to focus specifically on the task of LLM pruning itself. Under constrained resource conditions, such as when a more compact version of the model (e.g., less than 1B parameters) is required for edge-side deployment, LLM pruning and compression provide a cost-effective solution. [1]. Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. --- Rebuttal 2: Title: Bumping the score to 5, but would need more evals and comparsions to keep increasing my rating. Comment: I thank the authors for the detailed rebuttal. The new results — especially on the efficiency front — look decent and thus warrant a slight bump to 5. However, to fully convince me, I'd like to see SlimGPT applied on truly challenging tasks like GSM8k, as well as long context tasks like LongBench and Needle-in-a-Haystack (on llama3 and mistral v0.2). I am particularly interested in the long context front due to the known drawbacks of ShortGPT. With the unaligned nature discussed in W1, it looks like the proposed method is mostly compared to LLMPruner. While I recognize that LLMPruner is an established method, I wonder how SlimGPT would perform against some of the more advanced methods, like APT [2]. p.s. While I appreciate the addition of Baichuan, I would still like to see the MMLU report on a more mainstream model, like llama2-7b, for better cross-referencing needs. May the authors supply that? --- [2] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference --- Rebuttal Comment 2.1: Comment: Dear Reviewer jqTK: We truly appreciate the reviewer's recognition of our work. In response to your suggestions regarding extra experiments, we add the following evaluations: > Evaluation of LLama2-7b on MMLU and mathematical tasks GSM8k. | Prune% | Method | #Params | Humanities | Social Sciences | STEM | Other | Avg | GSM-8k-Acc | | ------ | ---------------- | ------- | ---------- | --------------- | ---- | ----- | ---- | ---- | | - | - | 6.7B | 43.3 | 51.6 | 36.3 | 52.1 | 45.6 | 13.8 | | 20% | LLM-Pruner | 5.4B | 25.7 | 23.6 | 24.2 | 26.8 | 25.2 | 2.3 | | 20% | SlimGPT w/o tune | 5.4B | 36.0 | 45.2 | 33.5 | 44.1 | 39.4 | 4.2 | | 20% | SlimGPT | 5.4B | 35.3 | 42.2 | 31.5 | 43.0 | 37.8 | 6.0 | The first few columns display the results for MMLU, while the last column shows the evaluation results for GSM-8k. SlimGPT demonstrates significant improvements over the baseline in both tasks. Notably, for the challenging task of GSM-8k, LLM-Pruner retains 16.7% of the performance (2.3 vs. 13.8) after pruning, whereas SlimGPT retains 43% of the performance (6.0 vs. 13.8), achieving more than a twofold increase. Since we employ a very basic and low-cost fine-tuning method, we believe there is room for further performance enhancement. > Evaluation of Mistral-7B-Instruct-V2.0 on LongBench Due to the more representative 32k context of Mistral, we will present the evaluation results for Mistral-7B-Instruct-V2.0 on LongBench, hoping this will serve as a valuable reference. The model evaluation takes longer than we anticipate, so we primarily provide results for the single-document QA task, as shown in the table below. The performance of the pruned model varies on English tasks, with the capability retaining as much as 97% (30.54 vs. 31.47) in some instances. However, its performance on the cross-language task MultiFieldQA-zh is comparatively modest. We find that this can be largely attributed to the Alpaca fine-tuning, which results in a bias towards answering questions in English, consequently lowering the overall scores. For example, consider the following case. `{'pred': 'The appeals court determined that the defendant should pay a compensation amount of RMB 57,081.86.', 'answers': ['人民币57081.86元。'], 'all_classes': None, 'length': 2975}` | Prune% | Method | #Params | NarrativeQA | Qasper | MultiFieldQA-en | MultiFieldQA-zh | | ------ | ------- | ------- | ----------- | ------ | --------------- | --------------- | | - | - | 6.7B | 27.33 | 31.47 | 48.59 | 58.17 | | 20% | SlimGPT | 5.4B | 18.89 | 30.54 | 40.47 | 24.75 | > SlimGPT vs. APT. The APT paper actually describes a sparse training pruning method. This approach manually inserts masks before and after specific modules and measures the importance of channels/heads using outliers. Its advantage lies in an end-to-end design; however, its task-specific nature and strong coupling with LoRA complicate its application to other tasks. Based on the results provided in the paper, we conduct a rough comparative analysis. While the different pruning ratios make it difficult to directly judge the evaluation results, it is evident that SlimGPT, with 1 epoch naive LORA tuning, offers a more lightweight and straightforward setup without requiring alterations to the model structure. | Method | Prune% | Tuning Data | Tuning Epoch | LORA rank | HellaSwag Eval | MMLU Eval | | ------- | ------ | ----------- | ------------ | --------- | -------------- | --------- | | APT | _30%_ | Alpaca | 15 | 8-256 | 71.1 | 36.9 | | SlimGPT | _20%_ | Alpaca | **1** | **8** | 74.9 | 37.8 | Best Regards, Authors of submission 8375 --- Rebuttal 3: Title: Why can't you prune SlimGPT 30% to be comparable with your APT report? Comment: I'll take a closer look at the rest soon, but may the author please address the question as titled? Just want to post earlier so that you can have a chance to reply. I'd also like to see APT on GSM8k under a comparable setting. --- Rebuttal Comment 3.1: Comment: Dear Reviewer jqTK: We would like to thank the reviewer for the comments. > Why can't you prune SlimGPT 30% to be comparable with your APT report? Our analysis is based on the results provided in the APT paper and the existing results from the SlimGPT paper. Unfortunately, we do not have sufficient time to conduct additional experiments to prune the model by 30% for comparison now (The experiment is ongoing). We hope you can understand this limitation. Furthermore, the APT paper do not provide evaluation results for GSM8K, so we are currently unable to present GSM8K comparison results. > More evaluation of long-context tasks on LongBench. We update our evaluation of long-context tasks on LongBench, which includes 4 task types, as shown in the table below. The current version of the model evaluation has fixed bugs related to input format compared to the previous version, leading to a slight change in performance. According to the table, we find that the performance of the pruned model varies on **English** tasks; in some cases, it even exceeds the results of the original model (2WikiMQA: 28.34 vs. 26.32). The majority of tasks maintain over 80% of the original performance. - Evaluations on Single-Doc QA tasks | Prune% | Method | #Params | NarrativeQA | Qasper | MultiFieldQA-en | MultiFieldQA-zh | | ------ | ---------------- | ------- | ----------- | ------ | --------------- | --------------- | | - | - | 6.7B | 27.32 | 31.47 | 48.57 | 49.06 | | 20% | SlimGPT | 5.4B | 19.33 | 30.37 | 42.64 | 26.65 | - Evaluations on Multi-Doc QA tasks | Prune% | Method | #Params | HotpotQA | 2WikiMQA|Musique|DuReader (zh)| | ------ | ---------------- | ------- | ----------- | ------ | --------------- | --------------- | | - | - | 6.7B | 43.11 | 26.32 | 18.81 | 30.57 | | 20% | SlimGPT | 5.4B | 38.13 | 28.34 | 15.16 | 13.98 | - Evaluations on Summarization tasks | Prune% | Method | #Params |GovReport|QMSum|MultiNews|VCSUM (zh)| | ------ | ---------------- | ------- | ----------- | ------ | --------------- | --------------- | | - | - | 6.7B | - | 22.92 | 25.46 | 14.91 | | 20% | SlimGPT | 5.4B | - | 19.95 | 22.78 | 12.49 | - Evaluations on Few-shot Learning tasks | Prune% | Method | #Params |TREC|TriviaQA|SAMSum|LSHT (zh)| | ------ | ---------------- | ------- | ----------- | ------ | --------------- | --------------- | | - | - | 6.7B | 68.50 | 86.98 | 42.00 | 39.00 | | 20% | SlimGPT | 5.4B | 56.00 | 82.59 | 41.46 | 22.75 | Best Regards, Authors of submission 8375 --- Rebuttal 4: Title: Thanks. Bumping to 6 but please include the additional results, as well as tone down your claim a little. Comment: Thank you for being resourceful and adding many requested experiments during the rebuttal time. **The added result confirms my intuition: that cheap, non-ShreadLlama-like LLM pruning techniques do not perform well under rigorous evaluation.** Your added results on GSM8k and LongBench confirm that, as there are visible drops with just 20% pruned. Note that we usually don't observe such a performance drop with techniques like weight-only quantization at a much more aggressive rate; even with vanilla group-wise quantization with no finetuning. That being said, I recognize that pruning LLMs is much harder than quantifying LLMs. There surely are some benefits unique to the pruning way, and overall, pruning is without a doubt a school of efficiency worth developing; especially knowing its gap with quantization. **The proposed method is better or at least on par with the established/recent baselines, so I recommend an acceptance with score 6 & confidence 5.** But I urge the authors to: * Tone down the claim a bit, e.g., the "98% performance" claim in your abstract is slightly misleading. Most common-sense reasoning tasks are easy and do not represent LLM's true capability, so it is almost an overstatement based on cherry-picked results. * Highlight the results that are not perfect (MMLU, GSM8k, LongBench, etc.) so that future works will have a clear direction for improvement, instead of always muddling those easy tasks. * Add a proper section to discuss the pros/cons of pruning compared to other efficiency techniques (e.g., quantization) and their unique challenges. --- Rebuttal Comment 4.1: Comment: Dear Reviewer jqTK: Thanks for your valuable feedback. We sincerely appreciate your time to review our submission and response. We will revise the paper accordingly and incorporate the above results into the updated version. Best Regards, Authors of submission 8375
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate your valuable and insightful comments. Here I would like to address the concerns regarding inference speed or pruning efficiency raised by all reviewers. > Inference speed and memory usage report. As the inference speed is primarily influenced by the final model structure and is not specifically tied to the pruning algorithm used (typically, the number of layers does not decrease), we initially omitted the inference runtime report. To demonstrate that SlimGPT actually helps to deploy LLMs, we provide the inference speeds for LLama-7b with 20% and 50% pruning, as shown in the table below. The batch size is set to 1, the maximum output limit is 512, and the average value is taken from 50 inference results. Additionally, we examine the impact of two different pruning ratio strategies on inference speed: uniform pruning and the Incremental Pruning Ratio strategy. All supplementary experiments were conducted in an environment with NVIDIA H20. In the case of pruning 50% of the parameters using the log increase strategy, the model's memory usage during inference is reduced to 51% (14297MB vs. 27737MB), and the inference latency decreases to 63% (9.21ms vs. 13.51ms). When employing uniform pruning, both memory usage and latency experience slight reductions, although it is important to note that the parameter counts are not entirely equivalent between the two methods. | Prune% | Strategy | #Params | Memory | Avg Latency (per token) | | ------ | ------------ | ------- | ------- | ----------------------- | | - | - | 6.7B | 27737MB | 13.51ms | | 20% | log-increase | 5.4B | 22497MB | 11.89ms | | 50% | log-increase | 3.4B | 14297MB | 9.21ms | | 50% | uniform | 3.4B | 13793MB | 9.05ms | > Pruning runtime and memory usage report. Regarding the runtime and memory usage during the pruning procedure, we have mentioned briefly that all pruning processes can be completed within 1 GPU hour (using A100 hardware). Specifically, the memory usage varies depending on the model size and the calibration size, and the pruning speed is additionally influenced by the pruning ratio. We present the pruning efficiency results from our paper experimental setup. Since SlimGPT operates in a layer-wise manner, we don't need to load the entire model but only load the parameters of the current layer and the corresponding input features at one time, which significantly reduces memory usage. For the task of pruning the 13B model by 50%, we only require 12 GB of GPU memory and 41 minutes to complete the process. | Model | memory | prune-20%-runtime | prune-50%-runtime | | --------- | ------ | ----------------- | ----------------- | | LLama-7b | 7375M | 678.4s | 1073.9s | | LLama-13b | 11601M | 1417.1s | 2475.3s |
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Why are Visually-Grounded Language Models Bad at Image Classification?
Accept (poster)
Summary: This article notes that current top-performing VLMs, like GPT-4V and LLaVA, are unable to perform image classification tasks like the CLIP model. Despite having a larger number of parameters and incorporating a vision encoder from a pre-trained CLIP model. This author thinks the main reason for this underperformance is data-related. Critical information for image classification is encoded in the VLM's latent space but can only be effectively decoded with enough training data. The author proposes to incorporate classification-specific datasets (such as ImageNet) into VLM training; its performance in classification and complex visual tasks can be significantly improved. For example, after fine-tuning ImageNet, VLM's performance on the ImageWikiQA dataset improved by 11.8%. Strengths: The article clearly points out the performance issues of the current Visual Language Model (VLM) in image classification tasks. It systematically analyzes the possible reasons, filling the research gap in this field. This author thinks the main reason for this underperformance is data-related. By incorporating classification-specific datasets (such as ImageNet) into VLM training, the performance of classification and complex visual tasks can be significantly improved. For example, after fine-tuning ImageNet, VLM's performance on the ImageWikiQA dataset improved by 11.8%. Weaknesses: The novelty is limited. This article assesses how well current VLMs perform on classification datasets and fine-tunes them using the same datasets. These contributions are insufficient to warrant publication of this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: I would like to know how well the fine-tuned VLMs perform on the original benchmarks such as MME, TextVQA, and others. Can we achieve similar results if a classification dataset is included in the second stage of training with LLaVA? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the author has addressed relevant limitations and discussed the broader impacts adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer CuMn for providing thoughtful feedback on our work. We address Reviewer CuMn’s questions below. --- **Limited novelty** > The novelty is limited. This article assesses how well current VLMs perform on classification datasets and fine-tunes them using the same datasets. These contributions are insufficient to warrant publication of this paper. > We want to clarify that our paper is not a method paper proposing new techniques to achieve better performance on standard benchmark leaderboards. Instead, it is an analysis paper. Object recognition is fundamental to the general reasoning capability of VLMs, yet current VLMs perform poorly in this area. We are the first to thoroughly investigate this critical issue and propose a potential solution to improve it. **Our primary contribution lies in identifying the problem (i.e., VLMs are inadequate image classifiers, a significant weakness that has been overlooked) and conducting an in-depth investigation (i.e., understanding why VLMs are poor image classifiers and how to address this issue).** This contribution goes far beyond merely assessing current VLMs on classification datasets and fine-tuning them using the same datasets. In summary, our contributions are threefold: 1. **Thorough Evaluation to Identify the Problem:** We thoroughly evaluated current public and proprietary VLMs on four common classification benchmarks and discovered that their performance significantly lags behind CLIP. This finding is counterintuitive because VLMs often use CLIP as their vision encoders. **This finding reveals a weakness in VLMs that previous works have not noted. Understanding the limitations of VLMs is crucial given their increasing deployment in various scenarios.** 2. **Hypothesis Testing to Understand the Problem:** Given the poor performance of VLMs in classification, we investigated the underlying reasons. We considered multiple plausible hypotheses related to VLM inference, training, and data perspectives. For example, **essential information for classification could be lost during the vision encoder’s propagation through multiple LLM layers; VLMs might be inherently poor at classification due to their text generation training objective compared to the standard cross-entropy objective.** Our thorough investigation ruled out these reasons but revealed the major issue: the lack of alignment data. **This finding is counterintuitive because other alternative hypotheses also seemed plausible.** 3. **Improving VLMs based on Our Understanding:** Based on our findings, we explored ways to enhance VLM performance. We believe that classification is foundational to more advanced capabilities. For example, if a VLM cannot accurately classify mushroom species, it will also struggle with follow-up questions, such as whether a particular mushroom is poisonous. Indeed, **we found that simply adding classification data not only improves VLMs’ classification performance but also their general capabilities, demonstrating that accurately identifying objects is a prerequisite for answering complex questions about these objects.** **Impact:** recent work from Google DeepMind, PaliGemma [1], supports our main conclusion that data is the critical factor in improving VLM performance. They also found that most VLM tasks benefit significantly from longer pre-training with more data (Figure 4, Appendix K). **This demonstrates that our analysis can inspire researchers to build better VLMs in the future.** [1] PaliGemma: A versatile 3B VLM for transfer. --- **Fine-tuned VLM performance on other benchmarks** > I would like to know how well the fine-tuned VLMs perform on the original benchmarks such as MME, TextVQA, and others. Can we achieve similar results if a classification dataset is included in the second stage of training with LLaVA? > Thank you for your question. **Following your suggestion, we evaluated both the original VLM and our fine-tuned VLM on three additional benchmarks and found that the VLM’s performance remain the same.** The table below shows the performance of the original LLaVA1.5-7B and the fine-tuned LLaVA1.5-7B on TextVQA, POPE, and MMVet. After fine-tuning, the LLaVA1.5-7B achieves the same accuracy as without fine-tuning. This result is intuitive because we included all the original LLaVA training data during our instruction tuning. We did not include MME because the dataset is not publicly available, and our access request has not yet been approved. We will release all the codes and model checkpoints to reproduce these results and include them in the revised paper. | | TextVQA | POPE Popular | POPE Adverse | MMVet | | --- | --- | --- | --- | --- | | LLaVA1.5-7B (Official Released) | 58.2 | 86.1 | 84.2 | 31.1 | | LLaVA1.5-7B (Further Finetuned) | 58.0 | 86.3 | 84.5 | 31.1 | --- Thank you again for your feedback. Please let us know if you have further questions or concerns! --- Rebuttal 2: Title: Gentle request for discussion Comment: Dear Reviewer, We kindly request your feedback on our rebuttal. We believe we have thoroughly addressed your concerns regarding other VLM benchmarks and the novelty of our work. If our rebuttal has resolved your concerns, we would greatly appreciate it if you could reconsider your scores for our paper. Should you have any further questions or concerns, we are eager to discuss them. Thank you for your time and consideration! --- Rebuttal Comment 2.1: Comment: Thank the authors for the response. After reading the rebuttal, I would like to keep the original rating. --- Reply to Comment 2.1.1: Comment: Thank you for your response! Could you clarify which of your concerns remain unaddressed? If you have any other questions or concerns, please feel free to let us know.
Summary: This paper analyzes the issue of large vision language models (VLMs) that perform poorly in common image classification datasets such as ImageNet. The authors analyze different perspectives on the problem, including trying different inference and training methods. For inference, the authors tried using different prompt variations, shrinking the number of classes, and computing conditional probabilities of class names. All these methods still leads to performance gaps between VLMs and CLIP. Furthermore, the authors confirm that sufficient class information is encoded in the output features of the visual encoder in VLMs, via linear probing. The authors also discover VLMs can be trained to generate class labels with a comparable accuracy as their visual encoders. Finally, they analyze the frequency of different classes in the original datasets that were used to train the VLMs, and find that the frequency of samples for a class is positively correlated to the accuracy of the VLMs on that class. This reveals that the poor performance of VLMs in image classification is due to the lack of class labels in their instruction-tuning data. They also construct a new question-answering dataset, ImageWikiQA, to test the model ability of answering questions related to the finegrained image class. The authors then finetune VLMs using a dataset that consists of both the image classification data and the original nstruction-tuning data. The resulting VLMs can perform well on the ImageWikiQA dataset. Strengths: - This paper provides a relatively extensive analysis of why SOTA public VLMs perform poorly in image classification. The results generally look correct but there are some issues (as detailed in Weaknesses and Questions) that make them less sound. - The authors create a new dataset, ImageWikiQA, to evaluate the question-answering capacity of VLMs related to the fine-grained classes in ImageNet. - The authors show that incorporating the ImageNet classification data into VLM's instruction-tuning data can improve the model performance on ImageWikiQA. Weaknesses: - It seems that the authors did not consider there may be multiple textual labels for many ImageNet classes. For example, for class with ID n01496331, the class name can be electric ray, crampfish, numbfish, or torpedo. When calculating VLM classification accuracy, the authors can match the model-generated text with any of these class names. - It may harm the VLM's instruction-following capacity to incorporate the ImageNet 1.28M classification data into the original LLaVA instruction-tuning data to train the VLM, because the classification data has a single fixed template "What type of object is in this photo? ASSISTANT: <Class Name>." Did the authors perform any evaluation of the VLM's instruction-following capacity? - It is expected that the information necessary for classification is largely preserved in the visual encoders in the VLMs, as these visual encoders are typically pre-trained on ImageNet and are kept frozen during the integration into VLMs. If the instruction-tuning data for VLMs do not contain class-specific information, it is unlikely that the VLMs can automatically align the class information in the visual encoder's output to the text generation. This explains why inference methods do not work in the paper. - Section 4.2 is a bit unclear. For example, it mentions that the authors randomly sampled at most 3 ImageNet images for each question, but then it mentions there are 2000 multiple-choice questions, each with an image. So how many images are there per question? Technical Quality: 2 Clarity: 3 Questions for Authors: - Can the authors confirm the results in Table 3 (the right sub-table) are the accuracy on the validation/test sets? The accuracy on ImageNet looks very high to me. I have the same question for the accuracy mentioned in line 279. - In Table 4, LLaVA1.5-7B with GT Class provide still has a relative low accuracy. Why is that? - Can the authors confirm there are no training data in ImageWikiQA (i.e., it is only a testing dataset)? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please see the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 8fB5 for providing detailed and thoughtful feedback on our work. We address Reviewer 8fB5’s questions below. --- **Multiple textual labels** > There may be multiple textual labels for many ImageNet classes. For example, n01496331 can be electric ray, crampfish... Thank you for your suggestion. **We have now considered ImageNet label synonyms [1] in our evaluation process.** When evaluating with synonyms, **the accuracy only improves by 1%-3%**, which still results in a significant performance gap compared to CLIP. We will include these results in the revised paper. [1] https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a | Model | Accuracy w/o Synonyms | Accuracy w/ Synonyms | Delta | | - | - | -| - | | BLIP2-2.7B | 25.3 | 27.8 | 2.5 | | IBLIP-7B | 14.6 | 16.5 | 1.9 | | IBLIP-13B | 14.7 | 16.6 | 1.9 | | LLaVA1.5-7B | 22.8 | 24.6 | 1.8 | | LLaVANeXT-V7B | 29.4 | 32.2 | 2.8 | | LLaVA1.5-13B | 24.3 | 26.0 | 1.7 | | LLaVANeXT-M7B | 32.3 | 35.1 | 2.8 | | Claude3 | 53.6 | 56.3 | 2.7 | | GeminiPro | 39.2 | 42.5 | 3.3 | | GPT4 | 48.5 | 51.1 | 2.6 | --- **Instruction-following capacity** > It may harm the VLM's instruction-following capacity to incorporate the ImageNet 1.28M classification data into the original LLaVA instruction-tuning data. Great question! **We have now evaluated the original VLM and our fine-tuned VLM on three additional instruction-following benchmarks:** TextVQA, POPE, and MMVet. We found that **fine-tuned VLM achieves the same accuracy as without fine-tuning**. We will release all the codes and models to reproduce these results and include them in the revised paper. | | TextVQA | POPE Popular | POPE Adverse | MMVet | | --- | --- | --- | --- | --- | | LLaVA1.5-7B (Official Released) | 58.2 | 86.1 | 84.2 | 31.1 | | LLaVA1.5-7B (Finetuned) | 58.0 | 86.3 | 84.5 | 31.1 | --- **Expected conclusion** > It is expected that the information necessary for classification is largely preserved in the visual encoders in the VLMs. We agree that it is apparent that the information necessary for classification is preserved in visual encoders. However, **it is not unclear whether the information still remains after propagating through all the LLM layers** (e.g., 32 layers for Vicuna-7B). Our results highlight that information is preserved after the LLM propagation rather than preserved in visual encoders. > If the instruction-tuning data for VLMs do not contain class-specific information, it is unlikely that the VLMs can automatically align the class information in the visual encoder's output to the text generation **This claim is actually unknown and controversial.** The key question is: if both vision encoders and language models have seen a specific class during their uni-modal training, do VLMs need to see the exact class in a multi-modal format to align them during multi-modal training, and how much data is required? **Many previous works have shown that this alignment stage is very data-efficient and even unnecessary [1, 2].** **Our paper demonstrated that 1) multi-modal data is necessary for alignment, and 2) increasing the data amount leads to linearly improved performance.** This provides a critical data-centric view for VLM training. **Recent work from DeepMind echoes our findings.** They found that most VLM tasks benefit significantly from longer pre-training with more data, as shown in Figure 4 and Appendix K [3]. We will add these clarifications in the revised paper. [1] Visual Instruction Tuning [2] Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [3] PaliGemma: A versatile 3B VLM for transfer --- **ImageWikiQA clarification** > It mentions that the authors randomly sampled at most 3 images for each question, but then it mentions each with an image. How many images are there per question? **There is only one image per question** (see Appendix Table 11 for examples). The confusion arises from our two-stage creation pipeline: - **1st stage: generate questions at the class level.** For any given class out of the 1000 ImageNet classes, we create questions such as “What is the native region of <class name>?”. - **2nd stage: generate questions at the image level.** Each ImageNet class has 50 images, from which we randomly sample 3 images and create 3 questions like “What is the native region of <image i>?” We will revise the text for clarity. --- **Data split clarification** > Are the results in Table 3 and line 279 are the accuracy on the validation/test sets? The accuracy on ImageNet looks very high to me. **Both Table 3 and Line 279 are evaluated on the validation set.** The data splits use the official data split (e.g., ImageNet contains 1.28M training images and 50K validation images). Detailed data splits are provided in Appendix Table 6. **The very high accuracy on ImageNet is one of the significant contributions of our paper.** We found that, after fine-tuning the VLM on ImageNet, there is no more gap between VLM and CLIP, with VLM now being the state-of-the-art classifier. --- **Low accuracy of LLaVA1.5-7B with GT class** > In Table 4, LLaVA1.5-7B with GT Class still has a relative low accuracy. Why? Table 4 reports the accuracy on ImageWikiQA. **These questions come from Wikipedia and require extensive knowledge, which is very challenging even for humans** (e.g., what is the native region of the guinea pig?) **The low accuracy (55.9%) is because LLaVA1.5-7B lacks sufficient world knowledge for these ImageNet classes.** It is well known that smaller LMs like Vicuna-7B lack world knowledge compared to larger LMs like GPT4. --- **ImageWikiQA test only** > No training data in ImageWikiQA? **Yes, ImageWikiQA only contains a test set.** This dataset serves as an important resource to evaluate VLM’s fine-grained classification capabilities as well as its knowledge and reasoning abilities. --- Thank you again for your feedback. Please let us know if you have further questions! --- Rebuttal Comment 1.1: Comment: Thank you for the responses. Most of my concerns are addressed. I have one follow-up question. How many unique questions are there in the ImageWikiQA dataset? For example, a question like “What is the native region of <image i>?” counts only once even though it can be asked for different images and classes. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We are glad to hear that our response has solved most of your concerns. > How many unique questions are there in the ImageWikiQA dataset? For example, does a question like “What is the native region of ?” count only once, even if it’s asked for different images and classes? The ImageWikiQA dataset contains 2,000 unique questions. A question like “What is the native region of <image i>?” is counted multiple times if it is asked for different images or classes. For example, if the question is asked for 2 cat images and 2 dog images, it would count as 4 questions in total. To avoid over-representing any particular question, a question like “What is the native region of <image i>?” is limited to a maximum of 3 occurrences across the 2,000 questions. Please let us know if you have any further questions! --- Rebuttal 2: Title: Gentle request for discussion Comment: Dear Reviewer, We kindly ask for your feedback on our rebuttal. We have conducted all the requested experiments (ImageNet multiple textual labels and instruction following capacity) and clarified concerns that due to misunderstanding. If our rebuttal has addressed some of your concerns, we would appreciate it if you could reconsider your scores for our paper. Should you have any further questions, please do not hesitate to reach out to us. Thank you for your time and consideration!
Summary: The paper presents an interesting observation of VLMs lagging in image classification performance as compared to the visual encoders lie CLIP used within them. Several hypothesis are explored to explain this observation including train-time (information loss, training objective used), inference-time (prompt variations, label set size) and data related reasons. Their analysis shows the primary reason for the observed gap to be data prevalences, showing a correlation between class prevalence during VLM training and performance in those classes for image classification. The paper then simply proposes inclusion of these datasets into VLM training as a way to fix this issue, showing image classification improvements. Strengths: 1. The paper is nicely structured. An interesting performance trend is observed, hypothesis are explored to explain the phenomena, and conclusions drawn from the experiments are further tested. The evaluation and experimental details, the different hypothesis spanning both training and inference, and the well crafted control experiments make it a good study. 2. The paper is also well motivated and studies a relevant failure mode of VLMs. The results show that even though these modes might encode object or image level concepts, they still suffer from an inability to classify images which is supposed to be a more simpler, and more importantly a more fundamental visual task. This makes it a valuable research problem. 3. The specific experiments to explore each hypothesis are interesting in their own right. For example studying the effects of prompt ordering or CoT to image classification or different variations of inference or label set size, type of objective used, and linear probing results, all provide useful cues towards the hypothesis, but also showcase interesting VLM behavior. 4. The paper proposes a very simple and effective way to bridge the VLM performance gap by fine-tuning on the classification data. It shows how doing so leads to not just regaining high image classification performance, but also improves VLM-specific tasks on the classification dataset, for which they curate a new dataset. Weaknesses: 1. I'm not completely convinced with the paper's final conclusion of data being the reason why CLIP models are superior to VLMs. The fact that linear probes can extract good classification performance from VLMs shows that its a decoding problem since even these models were trained with the same data having skewed prevalences for certain badly performing classes, but the linear probe tuning manages to bring that out to the same degree as CLIP. This implies that either what's missing is a task-aligned fine-tuning (which is what a linear probe does, and which is what the paper does when they fine-tune) or perhaps even a modification of how these are inferred on for image classification which can lead to a more a task-aligned inference (CLIP is trained for text/concept alignment and is evaluated using knn whereas VLMs are trained for vision-text conditioned text generation, but evaluated in a very specific way by attaching tokens of the different options for image classification). In either case, the data argument for explaining the gap might not be the only, or even the strongest reason. 2. The paper proposes inclusion of the image classification datasets as a solution for this problem. If this VLM limitation stems for other reasons such as multimodal confusion and text feature interference or hallucination problem specific to the generated text space, adding more data for different tasks as a solution might not generalize to similar VML issues. In this case, since the new dataset involves tasks which necessitate good image classification, performance after fine-tuning goes up on both sets of tasks. This might not hold true for other equally fundamental visual tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Figure 2, the label set size does seem to have some effect on the performance gap b/w CLIPs and VLMs. Since this relates to the way VLMs are evaluated for image classification (attaching class options as tokens in the prompt which grow with the label set), do you think this experiment shows the evaluation difference between the two methods (and also how they are originally trained), can explain some of the gap? Also, why do you think the Caltech and ImageNet datasets behave a little differently than Cars and Flowers in Figure 2? 2. Why do you think CLIP does not suffer from the class prevalence bias but LLava does in Figure 3? What happens if we plot this for linear probed LLava models? What happens if we plot this for linear probed Llava models which are fine-tuned on similarly biased data having similarly skewed frequencies? If those models do not show this strong correlation, it might be more about the fine-tuning and less about the data distribution. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper has discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer SHnR for their positive comments and thoughtful feedback. We address Reviewer SHnR’s questions below. --- **Main reason and solution** > I'm not completely convinced with the paper's final conclusion of data being the reason why CLIP models are superior to VLMs… The data argument for explaining the gap might not be the only, or even the strongest reason. > We clarify that data is not the **primary reason**. The primary reason is that the information required for classification is encoded in the VLM’s latent space but cannot be effectively decoded. Data serves as an **effective solution** to decode this information, as our experiments show that with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. > This implies that either what's missing is a task-aligned fine-tuning or perhaps even a modification of how these are inferred on for image classification which can lead to a more a task-aligned inference. Thank you for the thought-provoking question. We agree that task-aligned fine-tuning or inference, such as adding a linear probing loss on the VLM output space for training and using KNN in the VLM output space for evaluation, can improve VLM classification performance. **However, our focus is on enhancing VLM’s general capabilities in solving a variety of tasks rather than the specific classification task.** The only natural inference interface for different tasks is text generation. Task-aligned inference, like KNN, is only applicable to classification and not other tasks. For task-aligned fine-tuning, such as adding a new token and a linear probing loss, we observed no improvement when evaluating this fine-tuned VLM on ImageWikiQA. This shows that task-aligned fine-tuning does not improve VLM’s overall capabilities. In summary, without changing VLM training or inference to maintain its general capabilities for different tasks, adding data is the most promising and effective approach. We will add these clarifications in the revised paper. --- **Solution to other problems** > If this VLM limitation stems for other reasons such as multimodal confusion and text feature interference or hallucination problem specific to the generated text space, adding more data for different tasks as a solution might not generalize to similar VML issues. Thank you for your question. We agree that for issues like multimodal confusion or hallucination, adding data might not resolve the problem, but **addressing these is beyond the scope of our paper.** The main contribution of our paper is the formulation of the problem that VLMs are poor image classifiers and a thorough investigation into why this is the case and how to solve it. Our paper demonstrates that the poor classification performance of VLMs is due to the inability to decode the information encoded in VLMs. **Given this reason, data serves as an effective solution to decode the information.** --- **Label set size analysis** > Do you think the label set size experiment shows the evaluation difference between CLIPs and VLMs can explain some of the gap? Thank you for your question. **The label set size experiment can partially explain the gap but not entirely.** By reducing the number of labels for classification, we can narrow the gap between VLM and CLIP, but a gap remains across all label set sizes, even with just two labels (two-way classification). Moreover, we find that while the absolute gap between VLMs and CLIPs narrows with reduced label size, the relative gap increases (Appendix Figure 4). For example, in two-way classification on ImageNet, VLMs have a 5.7% error rate, while CLIP has a 0.2% error rate, resulting in a 28.5x gap; for 20 classes, VLMs have an 18.0% error rate, while CLIP has a 2.5% error rate, resulting in a 7.2x gap. **These results indicate that the VLM-CLIP gap cannot be fully explained by label set size.** > Why do you think the Caltech and ImageNet datasets behave a little differently than Cars and Flowers in Figure 2? The absolute gap on Flowers and Cars is larger than ImageNet and Caltech when reducing the label set size. **This may be because Flowers and Cars are more fine-grained classification datasets, while ImageNet and Caltech are more coarse-grained**. VLMs are weaker in fine-grained classification compared to CLIP, resulting in a larger gap. We will add these clarifications in the revised paper. --- **Class prevalence bias** > Why do you think CLIP does not suffer from the class prevalence bias but LLava does in Figure 3? Great question! **In Figure 3, the class frequency on the x-axis is computed based on the LLaVA pre-training and instruction-tuning dataset, not the CLIP training dataset.** If we change the x-axis to the CLIP pre-training data frequency, CLIP should also show prevalence bias. Please refer to Figure 2 in [1]. [1] No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance. > What happens if we plot this for linear probed LLava models? **We plotted the line for the linear probed LLaVA model, which aligns with the fine-tuned LLaVA (see PDF in general response).** The zero correlation is expected because fine-tuning or probing on the ImageNet dataset equalizes class frequencies, as ImageNet classes are evenly distributed. We will include this line in the updated paper. > What happens if we plot this for linear probed Llava models which are fine-tuned on similarly biased data having similarly skewed frequencies? We have not yet trained a new model on skewed data, but **we found that training on the ImageNet classification dataset does not improve performance on the Flowers dataset, which has drastically different class frequencies.** This further verifies that data, rather than fine-tuning, is the key factor determining performance. We will add this in the revised paper. --- Thank you again for your feedback. Please let us know if you have further questions! --- Rebuttal Comment 1.1: Comment: Thanks you for the elaborate discussion and answering all my questions related to label set size and class prevalences. The new experiments you added were very helpful. Some responses: > We clarify that data is not the primary reason. The primary reason is that the information required for classification is encoded in the VLM’s latent space but cannot be effectively decoded. Data serves as an effective solution to decode this information, as our experiments show that with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. Thanks for clarifying this! The paper mentions it in multiple places that data is the reason why VLMs are bad at classification. Here's one of the prominent concluding statement at the end of Section 3: *"These results suggest that data is the primary cause of the poor classification performance of VLMs."* If the main reason is the inability to decode the latent information, and the solution to it is to fine-tune with classification data, then I would advise the authors to explicitly state that in the introduction to prevent confusion. Currently Figure 1 simply says "Not Enough Data" as the reason which is not true. It's more about being able to align the VLM features using the right kind of data. The paper successfully shows that VLMs hold the information needed for performing well at image classification (so do their CLIP-based visual encoders), but require some kind of data-aligned fine-tuning on classification or instruction tuning to do well on those tasks. This kind of tuning being necessary is not surprising since you need some way to align the VLM to classes seen at classification time. It's a known observation and often seen in other models as well, but the paper conducts hypothesis-driven experimentation to rule out other possibilities, and in the process generates new ablative results and a new dataset, which is valuable to the community. Illuminating the general problem of VLM needing alignment for classification is a useful direction for future work. I will increase my rating. --- Reply to Comment 1.1.1: Comment: We are pleased to hear that our response has addressed your concerns, and we will certainly revise the related text as you suggested. Thank you for your time and consideration! --- Rebuttal 2: Title: Gentle request for discussion Comment: Dear Reviewer, We kindly request your consideration of our rebuttal. We believe we have thoroughly explained why the data presents a promising solution rather than the root of the problem, and why altering the training or inference objective would not be appropriate. Additionally, we have addressed the other questions raised. Thank you for your time and consideration! --- Rebuttal 3: Comment: Dear Reviewer SHnR, With just 1 day left in the response period, we would greatly appreciate it if you could kindly review our responses soon. We are still looking to discuss any remaining concerns. Thank you for your time!
Summary: In this paper, the authors explored why Vision-Language Models (VLMs) significantly underperform as image classifiers. They compared several publicly available with proprietary VLMs on several classification benchmark datasets, including ImageNet, Flowers102, StanfordCars, Caltech101, and their newly collected ImageWikiQA dataset. The paper presented multiple hypotheses regarding the underperformance of VLMs, addressing questions related to inference, training objectives, and training/finetuning data. The authors conducted empirical studies and concluded that the performance of VLMs in classification is determined by the data used. They fine-tuned LLaVa1.5-7B on ImageNet and the LLaVa instruction-tuning dataset to enhance classification accuracy. Strengths: 1. The paper is well-organized and easy to follow. 2. The motivation is clear, and the empirical study is thorough. 3. The authors analyzed the training data of LLaVA1.5 models to show the strong correlation between class frequency over accuracy. Weaknesses: 1. The authors did not provide much analysis on the proprietary VLMs. Can the authors address the hypotheses related to inference, training objectives, and data for these VLMs? 2. The authors introduced “open-world setting” and “closed-wold setting” as evaluation protocols in section 2.3. However, the protocol seems to be only applied to Table 1. It is not clear which settings are being used for sections 3 and 4. 3. The authors need to provide more explanation for Table 2. What is the difference between the “Closed-World Setting” performance of Table 1 and the “Base Prompt w/ Label (Fixed Order)” in Table 2? Why are their accuracies different? 4. The LLaVA-7B and BLIP2-7B models are not large enough to demonstrate intrinsic capabilities such as chain-of-thought reasoning [1]. The authors could include the prompt variation results of Table 2 for one of the proprietary VLMs to show the impact on performance. 5. In Lines 178-179, the authors observed that the information necessary for classification is preserved in VLMs' latent space; however, it cannot be effectively decoded. If the VLMs have the essential information, then why are they not able to decode it? The authors should explain the possible reasons for this. 6. In Table 4, the finetuned VLM performance is worse than the zero-shot performance of GPT4 on ImageWikiQA. What are the possible reasons for this? Why is the “Finetuned on ImageNet” model performance (30.6) worse than the non-finetuned model performance (37.8)? Does finetuning always enhance the general capabilities of the VLMs? [1] Zhang, Zhuosheng, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. "Multimodal chain-of-thought reasoning in language models." arXiv preprint arXiv:2302.00923 (2023). Technical Quality: 2 Clarity: 2 Questions for Authors: Please respond to the points of weakness I mentioned above. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer fPsb for their positive comments and thoughtful feedback. We address Reviewer fPsb’s questions below. --- **Analysis of proprietary VLMs** > Can the authors address the hypotheses related to inference, training objectives, and data for proprietary VLMs? Thank you for your question. **Proprietary VLMs haven’t released the training details and data usage, so it is impossible to directly address training-related or data-related hypotheses.** For inference-related hypotheses, we conduct additional experiments with GPT4 (**see PDF in general response**), and the conclusion is the same as the public VLMs. **Recent work from DeepMind** [1] shows that most VLM tasks benefit significantly from longer pre-training with more data (in Figure 4 and Appendix K of [1]). This **aligns with our main conclusion that data is the critical factor in improving VLM performance.** [1] PaliGemma: A versatile 3B VLM for transfer. --- **Open-world vs closed-world setting** > It is not clear “open-world” vs “closed-world” settings are being used for sections 3 and 4. Great question! **Sec. 3 and 4 all use “open-world” settings (not providing the label set in the prompt), except for prompt variation analysis and label set size analysis in Sec. 3.1, using “closed-world” settings (providing the label set in the prompt).** Naturally, the “closed-world” setting narrows the model’s generation space and increases accuracy. When the model can easily predict the class in the label set by modifying the inference space (probabilistic inference in Sec. 3.1 and probing VLM in Sec. 3.2) or training VLMs with more classification data (Sec. 4), **the advantage of the “closed-world” setting doesn’t exist**. Thus, for these experiments, we use the “open-world” setting. We will clarify this in the revised paper. --- **Clarification for Table 2** > What is the difference between the “Closed-World Setting” performance of Table 1 and the “Base Prompt w/ Label (Fixed Order)” in Table 2? Why are they different? **Table 1’s “Closed-World Setting” is equivalent to Table 2’s “Base Prompt w/ Label (Random Order)”.** In this setting, we concatenate label candidates in a random order for each question: “What’s in the image? Choose one from random_shuffle([cat, dog, pig])”. The accuracy for these two settings is identical between Table 1 and Table 2. Table 2’s “Base Prompt w/ Label (Fixed Order)” concatenates label candidates in a fixed order for each question: “What’s in the image? Choose one from [cat, dog, pig]”. This setting is used to rule out the possibility that the order of labels might affect model performance. --- **CoT with larger VLMs** > The LLaVA-7B and BLIP2-7B models are not large enough to demonstrate intrinsic capabilities such as chain-of-thought reasoning [1]. Thank you for your suggestion! We have now evaluated GPT-4’s performance on ImageNet using chain-of-thought (CoT) reasoning. In Table 2, GPT-4 achieves 60.6% accuracy on ImageNet without CoT. **With CoT, GPT-4 achieves 62.2% accuracy**, demonstrating that CoT has a limited impact on image classification, even for large VLMs. We will add these results and citations in the revised paper. --- **Cannot decode information** > If the VLMs have the essential information, then why are they not able to decode it? Great question! **One possible reason is that the VLM’s decoding space is too large and not aligned with the visual features.** Specifically, VLM decoding is performed through next-word prediction, which usually involves a vocabulary of over 10,000 words. The output text embedding is not aligned with the visual features due to insufficient data to align these spaces. **In general, having information in a model does not necessarily mean the model can express that information.** This phenomenon is also observed in other research areas. For example, after training ResNets or ViTs with self-supervised learning methods like SimCLR, the model acquires discriminative information for different classes. However, this information can only be decoded by adding and training a linear layer on a new dataset. Similarly, we demonstrate that VLMs possess classification information, but it is not readily expressible. **Adding classification-related data helps bridge the gap between “information possession” and “information expression”** (refer to Table 3 in our paper). We will include this discussion in the revised paper. --- **Fine-tuning results** > In Table 4, why is the “Finetuned on ImageNet” model performance (30.6) worse than the non-finetuned model performance (37.8 [should be 38.0])? Does finetuning always enhance the general capabilities of the VLMs? **Fine-tuning often improves in-distribution performance but does not necessarily enhance out-of-distribution or general capabilities.** **Table 4 presents results of models fine-tuned on ImageNet and evaluated on ImageWikiQA, which are vastly different datasets.** ImageNet questions only require classification, such as “Q:  <image> What is in the image? A: dog,” while ImageWikiQA questions demand both classification and knowledge/reasoning, such as “Q:  <image> What is the native region of this object? A: South America.” Fine-tuning solely on ImageNet trains the model to classify, but it can lead to a loss of general capabilities like reasoning and knowledge, resulting in lower performance on ImageWikiQA (30.6 fine-tuned vs. 38.0 pre-trained). This phenomenon is known as “catastrophic forgetting” [1]. **However, when we fine-tune on both ImageNet and instruction tuning data, the model learns classification while retaining its original capabilities, leading to a significant performance improvement on ImageWikiQA (49.8 fine-tuned vs. 38.0 pre-trained).** We discussed this in Lines 281-285 and will add further clarification. [1] Overcoming catastrophic forgetting in neural networks. --- Thank you again for your feedback. Please let us know if you have further questions! --- Rebuttal 2: Title: Gentle request for discussion Comment: Dear reviewer, We would kindly ask for a response to our rebuttal. We believe that some of your concerns are due to a misunderstanding and are not weaknesses, and we would appreciate it if you would revisit your evaluation of our work. Thank you for your time and consideration! --- Rebuttal 3: Comment: Dear Reviewer fPsb, With just 1 day left in the response period, we would greatly appreciate it if you could kindly review our responses soon. We are still looking to discuss any remaining concerns. Thank you for your time!
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback on our manuscript. Below, we provide individual responses to each reviewer. Please let us know if you have any further questions or concerns! **We have also attached a PDF for reviewers fPsb and SHnR, which includes visualizations to further address their questions.** Pdf: /pdf/b1fbc233c6955dc0ca81a3ba7996b918ff735ac2.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning
Accept (spotlight)
Summary: This paper proposes DiffTOP as a model-based approach to reinforcement learning and behavior cloning. DiffTOP learns a cost function and a dynamics model through differentiable trajectory optimization, and then uses the learned model during inference for online optimization. For model-based RL tasks, DiffTOP is built on a recent work of TD-MPC and resolves the objective mismatch issue that TD-MPC failed to address. For behavior cloning, the DiffTOP can be used with a different loss function tailed to BC, with an additional consideration of learning a multimodal policy which is achieved by the use of CVAE. The proposed DiffTOP framework is evaluated on 15 model-based RL tasks and 35 imitation learning tasks with high-dimensional observation spaces. Strengths: * The authors demonstrate broad applicability of the proposed DiffTOP framework in both reinforcement learning and behavior cloning settings, with varying problem complexity ranging from simple continuous control (such as cartpole) to complex object manipulation tasks (such as robomimic or maniskill). Weaknesses: [EDIT] The concerns were resolved throughout the rebuttal. In particular, my initial assessment of the paper regarding the ignorance of TD-MPC2 was inaccurate. Despite the thorough results presented in the paper, I have to say that it is not well contextualized relative to the existing literature. * First and foremost, the idea of leveraging model-based differentiable optimization for reinforcement learning and learning-based control is not new, and the field is rapidly expanding with new methods emerging in multiple domains, including control theory, robotics, and machine learning. For instance, Nikishin et al. [1] proposes an approach to differentiable optimization of trajectories (for an infinite horizon) by leveraging implicit function theorem and deep neural networks. Cheng et al. [2][3] perform differentiable trajectory optimization to learn a feedback policy, although they assume a known model-based function class for the dynamics and the policy. Sacks et al. [4] propose to optimize the inner-loop of an MPC optimization algorithm, though their method is based on MPPI. Given the abundance of similar ideas in the literature, the authors are encouraged to spend time to conduct a more thorough literature review. * Most critically, the authors of TD-MPC recently published a new paper titled TD-MPC2 [5] at ICLR 2024, in which the improvement from TD-MPC includes the mitigation of objective mismatch. Since the motivation is very similar to that of this paper and the authors of this paper completely disregard the new work, the contribution of this paper is highly questionable (even though the algorithmic details are still different between TD-MPC2 and DiffTOP). In particular, the following motivational statement made in Section 4 is no longer true: “Existing model-based RL algorithms such as TD-MPC suffer from the objective mismatch issue … DifftTOP addresses this issue …” [1] Nikishin, Evgenii, Romina Abachi, Rishabh Agarwal, and Pierre-Luc Bacon. "Control-oriented model-based reinforcement learning with implicit differentiation." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 7, pp. 7886-7894. 2022. [2] Cheng, Sheng, Minkyung Kim, Lin Song, Chengyu Yang, Yiquan Jin, Shenlong Wang, and Naira Hovakimyan. "Difftune: Auto-tuning through auto-differentiation." arXiv preprint arXiv:2209.10021 (2022). [3] Cheng, Sheng, Lin Song, Minkyung Kim, Shenlong Wang, and Naira Hovakimyan. "DiffTune $^+ $: Hyperparameter-Free Auto-Tuning using Auto-Differentiation." In Learning for Dynamics and Control Conference, pp. 170-183. PMLR, 2023. [4] Sacks, Jacob, Rwik Rana, Kevin Huang, Alex Spitzer, Guanya Shi, and Byron Boots. "Deep model predictive optimization." arXiv preprint arXiv:2310.04590 (2023). [5] Hansen, Nicklas, Hao Su, and Xiaolong Wang. "TD-MPC2: Scalable, Robust World Models for Continuous Control." In The Twelfth International Conference on Learning Representations (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: * Regarding the use of CVAE for capturing multimodal action distributions for behavioral cloning, it seems that the distribution representation in the latent space is still unimodal Gaussian. Although it should work just fine, I wonder if the authors have considered to leverage a discrete categorical latent variable for explicitly modeling multi-modality even in the latent space to possibly improve the performance (see e.g., [6]) [6] Ivanovic, B. and Pavone, M., 2019. The trajectron: Probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2375-2384). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: [EDIT] The concerns were resolved throughout the rebuttal. In particular, my initial assessment of the paper regarding the ignorance of TD-MPC2 was inaccurate. * As mentioned above, the contribution of this paper is quite limited especially in light of the recent development of TD-MPC2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to extend our heartfelt gratitude for taking the time to review our paper. Thank you for all valuable comments and suggestions on improving the quality of the paper. Below we respond to each of your comments in detail. **Q: ... Given the abundance of similar ideas in the literature, the authors are encouraged to spend time to conduct a more thorough literature review.** We thank the reviewer for bringing up these related works. We agree that the field of using differentiable optimization for end-to-end learning is rapidly expanding, and we have already discussed and cited many relevant works in the “Differentiable optimization” paragraph in the related work section. We are sorry for the omission of these 4 related works and we have updated section 2 of our paper to include a thorough discussion of them, detailed as follows: Nikishin et al. [1] proposes to learn a dynamics and reward model in model-based RL, and they derive an implicit policy as the softmax policy associated with the optimal Q function under the learned dynamics and reward. The dynamics and reward model are learned by back-propagating the RL loss through this implicit policy via implicit function theorem. In contrast, we derive the implicit policy as the optimal solution from performing trajectory optimization with the learned dynamics, reward and Q function. Nikishin et al. only tested their methods in simple tasks with ground-truth low-level states (e.g., CartPole with only 2d action space and 4d state space), while we show our methods work with much more complex tasks in DeepMind Control Suite with high-dimensional image observations, and it outperforms several prior state-of-the-art methods. Cheng et al. [2, 3] proposes to tune the parameters of a controller by unrolling the controller and system dynamics into a computation graph and optimizing the controller parameters via gradient descent with respect to the task loss, with applications in tracking trajectories for drones. The method assumes a known dynamics and policy class (e.g., a PID controller), and only optimizes some parameters of the controller. Our method does not assume any prior knowledge on the dynamics or policy class, instead, the dynamics and policy are all neural networks which we learn end-to-end using the task loss. Instead of representing the policy as a predefined controller and learning its parameters, our policy is represented as performing trajectory optimization with the learned dynamics, reward and Q functions. Sacks et al. [4] proposes to learn the update rule in MPPI (the mean and variance used to sample the actions), represented as a neural network, using reinforcement learning, with known dynamics and cost functions. To apply RL for learning the update rule, they design an auxiliary MDP and treat the MPPI process as part of the dynamics of this auxiliary MDP. Instead of learning the update rule, we learn the dynamics, reward, Q function used in trajectory optimization to generate the actions. We perform differentiable trajectory optimization instead of RL to optimize the parameters of these functions. We thank the reviewer again for introducing us to these 4 related works. As discussed above, we believe our proposed method is quite different from these 4 related works and our contributions remain valid. **Q: The authors of TD-MPC recently published a new paper titled TD-MPC2 [5] at ICLR 2024, in which the improvement from TD-MPC includes the mitigation of objective mismatch. Since the motivation is very similar to that of this paper and the authors of this paper completely disregard the new work, the contribution of this paper is highly questionable.** We thank the reviewer for raising this issue. We respectfully disagree with this assessment. TD-MPC2 [5] does not include any improvements over TD-MPC [6] that mitigate the objective mismatch issue. TD-MPC2 inherits the same training objective from TD-MPC, as evidenced by Equation (3) in TD-MPC2 and Equations (8), (9), and (10) in TD-MPC. This means that TD-MPC2 still suffers from the objective mismatch issue, where the dynamics model is optimized to predict future states in latent space (via the latent state consistency loss), which is not necessarily aligned with the goal of achieving high task return when using the dynamics model for planning. The primary improvements in TD-MPC2 over TD-MPC are related to exploring different architectural variations and new algorithmic design choices for more stable training in multi-task settings, rather than addressing the objective mismatch issue. Therefore, we believe our contribution and motivation remains valid. We have updated section 2 of our paper to include a discussion of TD-MPC2. **Q: I wonder if the authors have considered to leverage a discrete categorical latent variable for explicitly modeling multi-modality even in the latent space to possibly improve the performance** Thank you for this great suggestion. We chose CVAE due to its simplicity, and as shown in our experiments, it achieves good performance and is able to capture the multi-modality well (Fig. 4). We believe that a more advanced technique, such as the discrete categorical latent variable as suggested, would further improve the performance. We look forward to exploring this in future work. [1] Nikishin et al., Control-oriented model-based reinforcement learning with implicit differentiation, AAAI Conference on Artificial Intelligence. 2022. [2] Cheng et al., Difftune: Auto-tuning through auto-differentiation, IEEE Transactions on Robotics, 2024 [3] Cheng et al., DiffTune$^\dagger$: Hyperparameter-Free Auto-Tuning using Auto-Differentiation, Learning for Dynamics and Control Conference, 2023. [4] Sacks et al., Deep model predictive optimization, arXiv preprint, 2023. [5] Hansen et al., TD-MPC2: Scalable, Robust World Models for Continuous Control, ICLR 2024 [6] Hansen et al., Temporal Difference Learning for Model Predictive Control, ICML 2022 --- Rebuttal Comment 1.1: Title: Response to Authors and Acknowledgement of Inaccurate Feedback in Initial Review Comment: I sincerely thank the authors for their time and effort to prepare the rebuttal. Please find below my response. * > We thank the reviewer for bringing up these related works. We agree that the field of using differentiable optimization for end-to-end learning is rapidly expanding, and we have already discussed and cited many relevant works in the “Differentiable optimization” paragraph in the related work section. We are sorry for the omission of these 4 related works and we have updated section 2 of our paper to include a thorough discussion of them, detailed as follows: * Thank you for conducting the thorough literature review. It should further clarify the uniqueness of DiffTOP when compared against prior approaches that share similarity. * > We thank the reviewer for raising this issue. We respectfully disagree with this assessment. TD-MPC2 [5] does not include any improvements over TD-MPC [6] that mitigate the objective mismatch issue. TD-MPC2 inherits the same training objective from TD-MPC, as evidenced by Equation (3) in TD-MPC2 and Equations (8), (9), and (10) in TD-MPC. This means that TD-MPC2 still suffers from the objective mismatch issue, where the dynamics model is optimized to predict future states in latent space (via the latent state consistency loss), which is not necessarily aligned with the goal of achieving high task return when using the dynamics model for planning. The primary improvements in TD-MPC2 over TD-MPC are related to exploring different architectural variations and new algorithmic design choices for more stable training in multi-task settings, rather than addressing the objective mismatch issue. Therefore, we believe our contribution and motivation remains valid. We have updated section 2 of our paper to include a discussion of TD-MPC2. * Thank you for your response. I appreciate the level of details provided in the exposition and apologize for mistakenly making an inaccurate claim in my original review; I had been confused in part by the following statement made in Section 3.1 of TD-MPC2 paper, which I cite below for transparency: * > However, accurately predicting raw future observations (e.g., images or proprioceptive features) over long time horizons is a difficult problem, and does not necessarily lead to effective control (Lambert et al., 2020). Rather than explicitly modeling dynamics using reconstruction, TD-MPC2 aims to learn a maximally useful model: a model that accurately predicts outcomes (returns) conditioned on a sequence of actions. Here, Lambert et al. (2020) is the original objective mismatch paper. It implicitly states that TD-MPC2 takes care of the objective mismatch issue. However, based on your response (and upon closer look at Appendix A of TD-MPC2), I agree that the model objective of TD-MPC2 is largely the same as that of TD-MPC. Thus, the results presented in Figure 3 of the present paper already confirms the superiority of DiffTOP over TD-MPC/TD-MPC2 in terms of directly optimizing the task performance. In the revised paper, the authors may want to explicitly counter-argue the above statement made by the TD-MPC2 authors, rather than just stating that TD-MPC suffers from the objective mismatch issue (e.g., in Section 4.1), since these two statements are indeed conflicting with each other and may cause confusion for other people as well. Again, I appreciate the authors for taking the review comments seriously. I have updated my review and revised the score. --- Rebuttal 2: Title: Thank you for your response! Comment: We want to sincerely thank the reviewer for the prompt response, and the detailed explanation. We are glad our rebuttal has addressed your concerns, and we really appreciate the update of the review and the revision of the score. > It should further clarify the uniqueness of DiffTOP when compared against prior approaches that share similarity. Thank you for this suggestion! In addition to the discussion to the 4 related works in the initial rebuttal, we will also update the paper to further clarify the uniqueness of our method when compared to prior approaches. We want to thank the reviewer again for taking the time to review our paper and read our rebuttal. Your suggestions and feedback have greatly helped improve the quality of the paper.
Summary: The paper introduces DiffTOP, an approach leveraging differentiable trajectory optimization as the policy representation to enhance performance in deep reinforcement learning (RL) and imitation learning (IL). By utilizing the advancements in differentiable trajectory optimization, DiffTOP addresses the "objective mismatch" issue prevalent in prior model-based RL algorithms by optimizing the dynamics model to directly maximize task performance. The method is benchmarked across various robotic manipulation tasks with high-dimensional sensory inputs and demonstrates superior performance over state-of-the-art methods in both model-based RL and IL domains. Strengths: 1. The paper is well-written, structured, and intuitive, making it easy to understand. 2. The experiments are well-executed with various benchmarks. The comparison with baselines is thorough, including state-of-the-art methods like DP3. 3. The paper provides experiments in both imitation learning and reinforcement learning settings, which is commendable. 4. DiffTOP is combined with various existing approaches to demonstrate the generalization of the method. Weaknesses: 1. The work is not well-contextualized. While the paper makes a general claim about a policy class using differentiable trajectory optimization, using such technique is not new in the field, as the authors point out in the related work section. Despite the authors mention some differences in implementation (e.g., whether the dynamics model is learned), the paper presents this differentiable trajectory optimization as a new contribution. It would be beneficial for the introduction and diagrams to highlight the specific differences between the proposed work and other differentiable trajectory optimization approaches, rather than focusing on the differences among EBM Diffusion policy and TD-MPC. Additionally, calling the method Differentiable Trajectory Optimization might be inaccurate since it encompasses a broad range of works and could be misleading, failing to capture the differences. 2. While the execution is commendable, the design choices could be better justified, and more analysis would strengthen the paper. Particularly, as mentioned in the related work section, there are closely related works like [2][19][42], but there is no direct comparison with them in the experiments. More analysis on the design choices, such as using TD-MPC for model-based RL and learning dynamic functions, would also be appreciated. 3. The idea is not particularly novel. The combination of TD-MPC and differentiable trajectory optimization is rather straightforward. 4. It would also be beneficial to have more validation on real robots. Technical Quality: 3 Clarity: 2 Questions for Authors: see the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to extend our heartfelt gratitude for taking the time to review our paper. Thank you for all valuable suggestions and comments on improving the quality of the paper. Below we respond to each of your comments in detail. **Q: The work is not well-contextualized … It would be beneficial for the introduction and diagrams to highlight the specific differences between the proposed work and other differentiable trajectory optimization approaches … calling the method Differentiable Trajectory Optimization might be inaccurate since it encompasses a broad range of works and could be misleading, failing to capture the differences.** Thank you for this suggestion. We have updated the introduction and diagrams of our paper to highlight the differences between our method and other differentiable trajectory optimization approaches: “1) We are the first to show how differentiable trajectory optimization can be combined with deep model based RL algorithms, training dynamics, reward, Q function, and the policy end-to-end using task loss. In contrast, prior work focuses on imitation learning [2, 42], assumes known dynamics and reward structures and learns only a few parameters [2], or first learns the dynamics model with the dynamics prediction loss (instead of the task loss), and then uses the fixed learned dynamics for control [19]. 2) We are the first to show that the policy class by differentiable trajectory optimization can scale up to high-dimensional sensory observations like images and point clouds, achieving state-of-the-art performances in standard RL and imitation learning benchmarks. In contrast, prior works [2, 19, 42] only test their methods in customized tasks with ground-truth low-level states, such as CartPole or Pendulum, and do not report performance on standard benchmarks with more complex tasks and high-dimensional observations. ” Thank you for the suggestion on the name of the method, and we will revise it to better reflect our unique contributions. **Q: ... there is no direct comparison with closely related works [2][19][42] in the experiments.** Thank you for the feedback. We did not compare to [2][19][42] because we target different experiments. These related works all conduct experiments on customized tasks with ground-truth low-level states. In contrast, we test our method on standard RL and robotic imitation learning benchmarks, with high-dimensional sensory observations like images and point clouds. As these prior works have not been demonstrated on high-dimensional observations or more complex tasks, we originally compared to more recent state-of-the-art methods on these benchmarks, e.g., 3D Diffusion Policy [1]. We have now included a comparison with Amos et al. in one of their tasks (pendulum swing-up with ground-truth low-level states) under imitation learning settings. Unlike Amos et al., who assumes known dynamics and reward structures and only learns 10 parameters, our method uses neural networks to represent both dynamics and reward functions without such assumptions. The metric is the cost of the learned policy. As in Amos et al., we test in two settings, pendulum without damping and with damping. Following Amos et al., their method does not model the damping effect in the assumed dynamics, so the ground-truth dynamics model is not realizable in the damping case. We also compared to an additional baseline in Amos et al., which uses a LSTM to predict the expert action. The results (Table 1 in the PDF uploaded via global rebuttal) show our method performs slightly worse in the no damping case but noticeably better in the damping case. This is because Amos et al. assumes correct dynamics in the no damping case and learns only 10 unknown parameters, whereas the assumed dynamics structure is incorrect in the damping case; we use fully-connected neural networks to represent the dynamics function, avoiding such assumptions. It is generally difficult to know the exact correct dynamics function structure, especially for tasks with complex dynamics (e.g., with contacts) and high-dimensional observations (images and point clouds). **Q: More analysis on the design choices, such as using TD-MPC for model-based RL and learning dynamic functions, would also be appreciated.** Thank you for the suggestion. We have updated section 5 of our paper to include more discussion and experiments on these design choices: “We choose TD-MPC for its simplicity and state-of-the-art performance. However, our method is compatible with any model-based RL algorithm that learns a dynamics model and a reward function. To show this, we have added experiments implementing our method on top of Dreamer-V3 [3], another state-of-the-art image-based model-based RL algorithm, and tested it on 4 Deepmind Control Suite tasks. The results (Fig. 1 of the PDF uploaded in the global rebuttal) show that integrating our method with Dreamer-V3 improves performance in 3 out of 4 tasks, indicating that our method can enhance other model-based RL algorithms as well. In our experiments, we have also combined our method with 3D Diffusion Policy for imitation learning, and it achieves significant improvements. We choose to learn the dynamics function as we work directly with high-dimensional sensory inputs like images and point clouds, where manually specifying the analytic dynamics function is challenging. This is in contrast to learning from ground-truth low-level states where the dynamics model can be derived using physical laws. Therefore, we need to learn a dynamics model in the latent space, similar to prior work such as TD-MPC [4]. ” **Q: It would also be beneficial to have more validation on real robots.** Thank you for the suggestion. While real-world validation is beyond the scope of this paper, which focuses on standard benchmarks in simulation, we believe that validation of our method on real robots would be valuable for the robotics community. we leave this as important future work. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and well-written response! I particularly appreciate the authors' effort in conducting additional experiments within such a short period. Overall, I remain positive about the paper. One suggestion I would like to make is that DiffTop should be clearly distinguished from the previous works mentioned in the related work section. Currently, the paper’s organization doesn’t make this distinction immediately clear. A dedicated section, perhaps with a simple diagram highlighting the conceptual differences, would be helpful. --- Rebuttal 2: Title: References for the rebuttal Comment: [1] Ze, Y., et al., 3d diffusion policy: Generalizable visuomotor policy learning via simple 3d representations, RSS, 2024 [2] Amos, B., et al., Differentiable mpc for end-to-end planning and control. NeurIPS, 2018 [3] Hafner, D., et al., Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2013 [4] Hansen N., et al., Temporal Difference Learning for Model Predictive Control, ICML 2022 [19] Jin, W., et al., Pontryagin differentiable programming: An end-to-end learning and control framework.NeurIPS, 2020 [42] Xu, M., et al., Revisiting implicit differentiation for learning problems in optimal control, NeurIPS, 2023 --- Rebuttal 3: Title: Thank you for your response! Comment: We sincerely thank the reviewer for reading our rebuttal and for the prompt response. > One suggestion I would like to make is that DiffTop should be clearly distinguished from the previous works mentioned in the related work section. Currently, the paper’s organization doesn’t make this distinction immediately clear. A dedicated section, perhaps with a simple diagram highlighting the conceptual differences, would be helpful. Thank you for this suggestion! We agree that dedicated section or a diagram would be especially helpful to clearly distinguish our method from prior works. We will certainly incorporate this in the final version of the paper, in addition to the updated discussion of the closely related works in the initial rebuttal. We want to thank you again for taking the time to review our paper and read our rebuttal. Your suggestions and feedback have greatly helped improve the quality of our paper.
Summary: The paper introduces DiffTOP, a novel policy class for reinforcement learning (RL) and imitation learning (IL) that utilizes differentiable trajectory optimization to generate policy actions. DiffTOP leverages recent advancements in differentiable trajectory optimization, allowing end-to-end learning of cost and dynamics functions through gradient computation. The approach addresses the "objective mismatch" problem in model-based RL by optimizing dynamics and reward models to directly maximize task performance. For imitation learning, DiffTOP optimizes actions with a learned cost function at test time, outperforming previous methods. The authors benchmark DiffTOP on 15 model-based RL tasks and 35 imitation learning tasks with high-dimensional inputs like images and point clouds. The results demonstrate that DiffTOP surpasses prior state-of-the-art methods in both domains. The paper includes analysis and ablation studies to provide insights into DiffTOP's learning procedure and performance gains. Strengths: This paper has several strengths: - The paper presents a robust and technically rigorous study with extensive experiments and ablation studies across a wide range of challenging environments. The proposed method DiffTOP achieves superior results in both RL and IL tasks with high-dimensional sensory observations. - The approach effectively tackles the important "objective mismatch" problem inherent in model-based reinforcement learning. - The ability to compute policy gradients directly with respect to the parameters describing the observation and transition model is a significant advancement, eliminating the need for sample-based estimates. - DiffTOP alleviates the model mismatch problem by learning the model concurrently with optimization, similar to TD-MPC. Weaknesses: While the paper has many strengths, there are also some potential weaknesses or areas that could be improved: - The paper attempts to condense a large amount of information into limited space, which may affect readability and clarity, particularly for readers less familiar with TD-MPC. - The trajectory optimization solver used (Theseus) does not support constraint optimization, requiring manual unrolling of dynamics instead of presenting it as a constraint to the optimizer. Technical Quality: 3 Clarity: 2 Questions for Authors: - For ManiSkill tasks, have the authors tried more advanced BC baselines (e.g., Diffusion Policy)? It seems the performance of baselines is not very strong. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: The authors have discussed a few limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to extend our heartfelt gratitude for taking the time to review our paper. Thank you for all valuable comments and suggestions on improving the quality of the paper. Below we respond to each of your comments in detail. **Q: The paper attempts to condense a large amount of information into limited space, which may affect readability and clarity, particularly for readers less familiar with TD-MPC.** We thank the reviewer for bringing this to our attention. Indeed, the model-based RL part of DiffTOP builds upon TD-MPC, thus requiring some knowledge of TD-MPC for understanding the paper. Following the reviewer’s suggestion, we have updated section 3.2 with more explanations of TD-MPC to add more buildup for introducing our algorithm, and for a better understanding of the paper. We have also updated our paper to include more background information and details on TD-MPC in the appendix for readers less familiar with it. **Q: The trajectory optimization solver used (Theseus) does not support constraint optimization, requiring manual unrolling of dynamics instead of presenting it as a constraint to the optimizer.** We thank the reviewer for this feedback. Indeed, Theseus does not support constrained optimization at the time of our paper submission, and thus we have to unroll the dynamics when solving the trajectory optimization problem. The original Theseus paper has the following discussion about constrained optimization in their limitation section: “The nonlinear solvers we currently support apply constraints in a soft manner (i.e., using weighted costs). Hard constraints can be handled with methods like augmented Lagrangian or sequential quadratic programs [99, 100], and differentiating through them are active research topics”. Based on this, it seems that the support of constrained optimization might be added in the future. We leave integrating DiffTOP with constrained optimization as important future work. **Q: For ManiSkill tasks, have the authors tried more advanced BC baselines (e.g., Diffusion Policy)? It seems the performance of baselines is not very strong.** We thank the reviewer for this question. ManiSkill tasks all use point cloud as the policy inputs; the original Diffusion Policy paper only tested their method with image inputs, and has not been implemented and tested to work with point cloud inputs. The baseline we compared to in the ManiSkill tasks was the best method that was introduced by the ManiSkill authors, and we have further tuned this baseline method to make the results stronger than presented in their original paper. The improvement of DiffTOP over the ManiSkill baselines in Table 2 provides evidence to the effectiveness of our proposed method. We also note that we have compared to more advanced BC baselines such as Diffusion Policy and 3D Diffusion Policy (DP3) in other benchmarks where these baseline methods are originally tested, i.e., MetaWorld and RoboMimic, and DiffTOP outperforms them in both benchmarkings, showing the effectiveness of our proposed method. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I don't have further concerns. --- Rebuttal 2: Title: Thank you for your response! Comment: We sincerely thank the reviewer for reading our rebuttal and for the prompt response. We are glad that our rebuttal has addressed your concerns. We want to thank you again for taking the time to review our paper and read our rebuttal. Your suggestions and feedback have greatly helped improve the quality of our paper.
Summary: The paper presents a method that uses Differentiable Trajectory Optimization as a policy representation. The proposed method extends the work of Temporal difference learning for model predictive control (TD-MPC) by incorporating a policy-gradient loss for which analytical backpropagation is possible thanks to the differentiable properties of the used trajectory optimization scheme. The paper evaluates the effectiveness of trajectory optimization as a policy representation for both model-based RL and Imitation Learning, benchmarking the approach in the DeepMind control tasks, the MetaWorld benchmark, the Robomimic benchmark, and the ManiSkill benchmark. Furthermore, the work presents comparisons against other policy representations like traditional feed-forward policies, Energy-based methods (EBMs), and Diffusion Policies. Strengths: - The paper is well-written, the work is interesting and it presents a simple yet effective extension of TD-MPC. - The method enables the use of trajectory optimization using high-dimensional observations and it is also able to deal with multimodality in the solution space. - The method is evaluated thoroughly in different setups (Model-based RL, IL) and against other classes of policy representations, outperforming most baselines in terms of sample efficiency and final reward. Weaknesses: [Minor] - The main weakness of the approach is the high computational cost of solving trajectory optimization at test time. The appendix reports 0.052 seconds to infer the action for one timestep (20 Hz), despite the fact that the time horizon is considerably short (1 to 3 steps). Such inference speed might hinder the deployment in certain real-world scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is there any specific reason why the Levenberg-Marquardt solver was used? Using a second-order solver might lead to fewer iterations to get to a solution, but the time per iteration might also be higher. A simpler solver (SGD) could do the job and be faster, improving the overall training time and inference speed. - Are there any insights on why increasing the time horizon for the trajectory-optimization policy leads to the slight decrease in performance reported in Appendix A2.3? Typically, a longer time horizon leads to trajectories of better quality. Does the trajectory optimization problem reach convergence for the longer time horizons that were tested? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Limitations of the method are mentioned and I agree with the author's claims that there are no direct potential societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to extend our heartfelt gratitude for taking the time to review our paper. Thank you for all valuable comments and suggestions on improving the quality of the paper. Below we respond to each of your comments in detail. **Q: [minor] The main weakness of the approach is the high computational cost of solving trajectory optimization at test time. The appendix reports 0.052 seconds to infer the action for one timestep (20 Hz), despite the fact that the time horizon is considerably short (1 to 3 steps). Such inference speed might hinder the deployment in certain real-world scenarios.** We thank the reviewer for bringing up this issue. We have updated section 5 of the paper to include a more detailed discussion on the inference speed of our method for real-world applications: “Our current inference speed is 0.052 seconds, which is a control frequency of 20 Hz. We note that such inference speed is comparable or higher to other deep robot learning algorithms that take high-dimensional image or point cloud observations as inputs, e.g., Diffusion Policy [1] reports a control frequency of 10 Hz, and PerAct [2] reports a control frequency of 2.23 Hz. As these methods are shown to be able to be deployed with real-world robots for many tasks, we believe our method’s inference speed of 20 Hz would work well for most real-world robot tasks as well. ” We would also like to clarify that since we also learn a value function that predicts the future accumulated rewards and use it during the planning process, the policy is able to reason more than 3 steps into the future at inference time. **Q: Is there any specific reason why the Levenberg-Marquardt solver was used? Using a second-order solver might lead to fewer iterations to get to a solution, but the time per iteration might also be higher. A simpler solver (SGD) could do the job and be faster, improving the overall training time and inference speed.** We thank the reviewer for this insightful question. We use the Theseus [3] library for differentiating through the trajectory optimization algorithms, and the solvers supported by Theseus include second-order solvers like Gauss-Newton, Levenberg–Marquardt, and Dogleg, and linear solvers such as CHOLMOD. We choose Levenberg-Marquardt as we find it to perform better in early experiments. We didn’t use SGD since it is not supported by the Theseus library, possibly due to its slower convergence rate when dealing with highly non-linear problems. Although implementing SGD to solve the trajectory optimization problem itself is not difficult, robustly differentiating through it may not be trivial and is out of the scope of the current paper. We look forward to trying more solvers, such as SGD, in future work. **Q: Are there any insights on why increasing the time horizon for the trajectory-optimization policy leads to the slight decrease in performance reported in Appendix A2.3? Typically, a longer time horizon leads to trajectories of better quality. Does the trajectory optimization problem reach convergence for the longer time horizons that were tested?** We thank the reviewer for this interesting question. We do notice a slight performance drop with longer prediction horizons. The reason could be as follows: since there will always be errors in the learned dynamics function and the reward/value function, there is a tradeoff between the errors in the dynamics model and the errors in the reward/value function when choosing the prediction horizon. With a longer prediction horizon, compounding errors in the learned dynamics model will dominate, whereas with a shorter prediction horizon we expect errors in the learned value function to be more prevalent [4]. The optimal value for this parameter is highly likely to be application-dependent, but as shown in Appendix A2.3, our method can demonstrate robustness to different prediction horizons. When solving the trajectory optimization problem, we run the Levenberg-Marquardt solver for 100 iterations and it has reached convergence within 100 iterations. [1] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, Shuran Song, “Diffusion Policy: Visuomotor Policy Learning via Action Diffusion”, RSS 2023 [2] Mohit Shridhar, Lucas Manuelli, Dieter Fox, “Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation”, CoRL 2022 [3] Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam, “Theseus: A Library for Differentiable Nonlinear Optimization”, NeurIPS 2022 [4] Harshit Sikchi, Wenxuan Zhou, David Held, “Learning Off-Policy with Online Planning”, CoRL 2021
Rebuttal 1: Rebuttal: Dear reviewers, We want to extend our heartfelt gratitude for taking the time to review our paper. Thank you for all valuable comments and suggestions on improving the quality of the paper. We respond to each of your comments in detail in the individual rebuttal. Following the reviewer's suggestions, the attached PDF contains two additional experiments: - Comparison to closely related work Amos et al. (Reviewer epML) - Combining DiffTOP with Dreamer-V3 as an ablation study on the underlying model-based RL algorithm (Reviewer epML) [1] Amos, B., et al., Differentiable mpc for end-to-end planning and control. NeurIPS, 2018 Best, Authors Pdf: /pdf/9a44fc226b07a6229d4e38ed2534b61f06565a38.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Free-Rider and Conflict Aware Collaboration Formation for Cross-Silo Federated Learning
Accept (poster)
Summary: This paper introduces a strategy named FedEgoists designed to enhance collaboration in cross-silo federated learning (FL) scenarios, particularly in business sectors where participants (FL-PTs) are often competitive and self-interested. The proposed FedEgoists strategy presents a sophisticated and theoretically sound approach to managing collaborations in cross-silo federated learning, and its strengths lie in its ability to handle self-interest and competition effectively. Strengths: FedEgoists ensures that the formed coalitions are optimal. No coalition can improve its utility by merging with another coalition, making the solution stable and efficient. The strategy effectively addresses the problem of free riders, ensuring that all FL-PTs contribute to and benefit from the FL ecosystem proportionately. By preventing FL-PTs from contributing to their competitors or their supporters, the strategy minimizes conflicts of interest. This paper validates FedEgoists using the benchmark experimental datasets. Weaknesses: The paper lacks rigorness as pointed out in the Questions part. Technical Quality: 2 Clarity: 2 Questions for Authors: (1) FedEgoists requires a central server to coordinate and enforce the coalition formation, which introduces a single point of control and potential failure. This centralization might conflict with the decentralized nature of federated learning and violate the fundamental motivation of FL. (2) FedEgoists assumes that FL-PTs will accurately report their competitive relationships to the central server (CS). However, this can easily lead to privacy leaks and potential attacks. (3) While the FedEgoists strategy focuses on collaboration benefits and competition avoidance, it does not explicitly address data complementarity. (4) Since the FedEgoists strategy relies on coalition formation based on reported benefits and competition, there is a risk of strategic manipulation by participants. (5) The experiments are not sufficient, and there are too few datasets and baselines. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: It seems limitations are not clearly or explictly elaborated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comments 1.** FedEgoists requires a central server to coordinate and enforce the coalition formation, which introduces a single point of control and potential failure. This centralization might conflict with the decentralized nature of federated learning and violate the fundamental motivation of FL. Since the FedEgoists strategy relies on coalition formation based on reported benefits and competition, there is a risk of strategic manipulation by participants. $\color{blue}{Response.}$ In the classic federated learning (FL) framework, there is a central server that is assumed to be trustable. The decentralized nature of FL refers to the data of different FL-PTs (i.e., FL participants) being decentralized since the local data of FL-PTs don’t need to be transferred to the central server. As introduced in Section 1, in the vanilla FedAvg framework, multiple FL-PTs train a shared model locally with their own dataset, and upload their local model updates to a central server trusted by all FL-PTs, which then aggregates these model updates and distributes the model updates to each FL-PT (i.e., client) [22]. Following such a client-server architecture of FedAvg, many important techniques have been proposed to improve the FL performance while facing the statistical heterogeneity of data over FL-PTs (e.g., FedProx [18], SCAFFOLD [14], pFedHN [26], pFedMe [28], FedDisco [40], pFedGraph [39]). Such client-server architectures are especially applicable to cross-silo FL considered in this paper; here, FL-PTs are typically companies or organizations with reliable computational resources and communication channels [9,10]. In such architectures, the central server has authority to determine the contribution relationships of FL-PTs (i.e., the ways of aggregating the model updates for FL-PTs) in the FL training process (e.g., [5,6,30]). There is a development process for a technique to be implemented in real worlds and adopted by organizations. Business sectors are an important FL application domain; addressing better the concerns of FL participants in business sectors can facilitate the participation of potential organizations into a FL ecosystem. The theoretical framework of this paper sticks to the current FL technical practices and serves as a starting point to construct a FL ecosystem where FL-PTs have both self-interest and competition features and a FL manager needs to construct collaboration relationships of FL-PTs without free-riding and conflicts of interests between FL-PTs. The reviewer also raised an excellent point about a single point of control and potential failure. In some important application domains of FL such as wireless network systems, it is necessary to consider such issues with a single point of failure; as shown by the survey below, in such domains, blockchain-based federated learning can be promising where there is no central server and peer-to-peer coordination between FL-PTs is needed: - Nguyen, Dinh C., Ming Ding, Quoc-Viet Pham, Pubudu N. Pathirana, Long Bao Le, Aruna Seneviratne, Jun Li, Dusit Niyato, and H. Vincent Poor. "Federated learning meets blockchain in edge computing: Opportunities and challenges." *IEEE Internet of Things Journal* 8, no. 16 (2021): 12806-12825. In blockchain-based FL without a central server, it is an interesting research problem to design proper negotiation protocols to guarantee that there are no free-riding and conflicts of interests between FL-PTs. This will be highlighted as a nice future work in the final version of our manuscript. **Comments 2.** FedEgoists assumes that FL-PTs will accurately report their competitive relationships to the central server (CS). However, this can easily lead to privacy leaks and potential attacks. **$\color{blue}{Response.}$** Following the standard technical practices in literature, the central server is assumed to be trustable, as explained above. In real worlds, the central server may represent an impartial and authoritative third-party (e.g., the industry association) [5]. Then, FL-PTs can report their competitive relationships to the third-party in person and confidentiality agreements can be signed between the third-party and FL-PTs. We will clarify this in the final version of the manuscript. **Comments 3.** While the FedEgoists strategy focuses on collaboration benefits and competition avoidance, it does not explicitly address data complementarity. **$\color{blue}{Response.}$** Yes, this paper doesn’t address the technical issues related to data complementarity. It uses the existing techniques [5,30] to evaluate the data complementarity between FL-PTs, which is represented by a benefit graph. The competitive relationships between FL-PTs form a competing graph. According to the benefit graph and the competing graph, this paper aims to construct the contribution relationships between FL-PTs to guarantee no free-riding and conflicts of interests between FL-PTs, which is desired in business sectors. **Comment 4.** The experiments are not sufficient, and there are too few datasets and baselines. **$\color{blue}{Response.}$** Due to the space limitations, please refer to global rebuttal. --- Rebuttal Comment 1.1: Comment: Not fully clearing my concern but I think I can increase my rating --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer 2xqy, we thank you sincerely for your time to check our rebuttal and your positive feedbacks.
Summary: This paper studies an interesting topic, which is about cooperation and competition in federated learning. It can be used to describe or simulate the real-world scenarios. The authors propose to use graph-related techniques to formulate the relation between the local clients. They test the algorithm on the benchmark datasets and the medical dataset. Strengths: 1, The research question in federated learning may have potential practical impact in the real-world setting. The cross-silo setting fits what the authors explore. Considering the cooperation and competition from the graph perspective in FL is an interesting point. 2, Statements are supported with experiment results and theory. 3, The presentation is helpful for readers including the algorithm description, diagram, and figures. Weaknesses: 1, The novelty of each module, especially the graph part, may need to be clarified clearly. Also, the two principles are commonly used, if the reviewer understand correctly, an optimal formation is proposed based on these two, is that correct? 2, Most designed part is at the server side. The reviewer may not be able to observe the strong connection with federated learning. The setting itself may tell that each client is a participant in this framework. However, the introduction (motivation) and the method could be a little bit fragmented. 3, Ablation study and hyperparameter study may help verify the algorithm more deeply and soundly. 4, A little bit concerned about the baseline selection. Some contribution-related work in FL like [1], if comparable, should be considered or at least discussed. [1] Contribution-Aware Federated Learning for Smart Healthcare Technical Quality: 3 Clarity: 3 Questions for Authors: 1, Could you please address the concerns in the weakness part? 2, Could you clarify how the local model information is leveraged and updated in the graph part more clearly? 3, How to validate or verify the meaning of the graph construction part? Do we need to have the groundtruth of the relation of all the clients? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1, The connection of the proposed techniques should be closely associated with the setting of FL, which focuses on the iterative updates between the server and the clients. 2, The original novelty of contribution should be emphasized a little bit more. 3, More comprehensive experiments may help demonstrate the effectiveness and understand the generalization of the algorithm better. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses 1 \& Limitations 2** Overall, the original novelty of contribution includes multiple aspects: (1) identifying an interesting question to study, (2) proposing a desirable solution concept for this new problem, and (3) proposing an optimal solution. Specifically, business sectors are a key FL application domain. Firstly, the concerns of FL participants are identified (Section 1). Although the principles used to address these concerns are intuitive or used in FL literature, they are simultaneously considered for the first time. Secondly, a solution concept (or a proper problem formulation) is proposed (Section 3.3) such that the resulting solution can well satisfy the needs of FL participants and help FL participants achieve the best possible ML model performances. Thirdly, an algorithm is proposed to give an optimal solution to the defined problem (Section 4). The aim of developing FL techniques is to make FL able to be implemented in real worlds and adopted by organizations. The proposed theoretical framework of this paper serves as a starting point to construct a FL ecosystem where FL-PTs have both self-interest and competition features and a FL manager needs to construct collaboration relationships of FL-PTs without free-riding and conflicts of interests between FL-PTs. Yes, the solution concept of this paper is proposed based on these two principles. Of the greatest relevance to this paper is Ref. [30] that only considers Principle 1 in this paper and simply gives a heuristic solution to realize Principle 1. In this paper, we are motivated by the scenario under study and consider simultaneously realizing Principle 1 as well as Principle 2. Also, a refined solution concept is proposed in Section 3.3 to guarantee that no coalitions can collaborate together and be merged into a larger one to achieve a higher utility. **Weaknesses 2 \& Limitations 1** This paper is about the application of federated learning (FL) to business sectors and addresses the related issues about free-riders and conflicts of interests, which are the key to enabling the application of FL in the scenario under study. It is interdisciplinary in nature and may relate to the topics “Applications” and “Social and economic aspects of machine learning” in the Call for Papers of NeurIPS-2024. Its focus is not on the FL’s technical details or how to improve the potential technical flaws in the current FL study; in contrast, its focus is on the social and economic effect of FL. Applications are the destinations of a technique; business sectors are an important domain to apply FL; addressing better the concerns of FL participants in business sectors can facilitate the application of FL, thus amplifying its social and economic effect. On the other hand, in order to validate the effectiveness of the proposed theoretical framework, related FL techniques have to be implemented in the experimental evaluation part (i.e., Section 5). **Weaknesses 3** Motivated by your suggestions, more experiments have been conducted to verify the algorithm better. The experiments aim to verify the effectiveness of the proposed theoretical framework in the scenario under study. The scenario features self-interest for every client and competition between a part of clients. The collaboration relationships between clients are built according to the competing graph and the benefit graph; the latter graph depends on the data heterogeneity/complementarity of clients. Thus, the variable factors in the scenario/experimental environment include the intensity of competition between clients and the non-IID setting. Following your suggestions, we vary the related parameters in these two factors to verify the algorithm better. Firstly, two clients are either independent of each other, or compete against each other. For each pair of clients, there is a probability $\alpha$ that these two clients compete; here the probability that these two clients are independent of each other is $1 - \alpha$. The value of $\alpha$ determines the intensity of competition among clients and a larger value reflects a higher level of competing intensity among clients. In the benchmark experiments on CIFAR-10 and CIFAR-100, we better verify the effect of the competition intensity on the effectiveness of the proposed solution where we vary the value of $\alpha$ that takes different values in {0.05,0.1,0.2,0.3,0.4} and conducted the corresponding experiments. The related experimental results are presented in Tables 1 & 2. All the tables can be found in the PDF document of the global rebuttal. Secondly, the data heterogeneity is typically simulated by a pathological or Dirichlet distribution. e.g., the pathological distribution is used in [5,30]. In this paper, we conducted experiments on pathological and Dirichlet distributions, respectively. Also, in this rebuttal, we better verify the effect of data heterogeneity on the effectiveness of the proposed solution. Specifically, let $m$ denote the number of classes. In Dirichlet distribution, there is a distribution vector $q_{c}\in \mathbb{R}^{m}$ is drawn from the Dirichlet distribution $Dir_{m}(\beta)$ for each class $c$ and client $v_{i}$ is allocated a $q_{c,i}$ proportion of data samples of class $c$; smaller $\beta$ value results in higher data heterogeneity. We vary the value of $\beta$ that takes different values in {0.01,0.1,0.5} like [39] and conducted the corresponding experiments. The related experimental results are presented in Table 4. Besides the added experiments, we better clarify the type or the research focus of this paper. In the experiments, since this paper doesn’t focus on researching the FL technique itself, we simply follow the current best technical practices in FL literature. Like [5,30], ablation study and hyperparameter study are not taken and this paper doesn’t focus on the design of network architectures. --- Rebuttal 2: Comment: **Weaknesses 4** The two papers have different focuses. There is a central server that coordinates the FL training process. In [1], a global ML model is built for all clients and the paper aims to evaluate the individual contribution of each client/FL-PT to this global model. In our paper, a personalized ML model is produced for each individual client $v_{i}$ and the central server determines which clients can contribute to this client $v_{i}$ in the FL training process. While determining such contribution relationships between clients, we aim to avoid free-riding and conflicts of interests between clients. --- Rebuttal 3: Comment: **Questions 2 \& 3** This paper has two graphs (i.e., the benefit graph $\mathcal{G_b}$, and the competing graph $\mathcal{G_c}$) that are assumed to be known to a FL manager; $\mathcal{G_b}$ is obtained by collaborative model training, while the information on $\mathcal{G_c}$ is reported by clients to the FL manager (or the central server). Below, we detail the ways of obtaining them and how the local model information is leveraged in the graph part. Firstly, data heterogeneity/complementarity between clients is characterized by a benefit graph $\mathcal{G_b}$. Like [5,30], this paper uses the hypernetwork technique in [23] to evaluate the data complementarity, thus obtaining the benefit graph. In the graph part, the local model information is mainly leveraged in the calculation of $\mathcal{G_b}$. We also refer readers to Section 4.2 of Ref. [5] for the technical details on the way of obtaining $\mathcal{G_b}$. Specifically, there are $n$ clients. Each client $v_{i}$ has a risk/loss function $\ell_{i}$: $\mathbb{R^n}\rightarrow \mathbb{R_+}$. Given a learned hypothesis $h\in$ $\mathcal{H}$, let the loss vector $\mathbf{\ell}(h)=[\ell_{1}, \dots, \ell_{n}]$ represent the utility loss of the $n$ clients under the hypothesis $h$. The hypothesis $h$ is considered a Pareto solution if there is no other hypothesis $h^{\prime}$ that dominates $h$, i.e., $\nexists h^{\prime}\in \mathcal{H}, s.\,t.\, \forall i: \ell_{i}(h^{\prime})\leqslant \ell_{i}(h) \text{ and } \exists j: \ell_{j}(h^{\prime}) < \ell_{j}(h). $ Let $r=(r_{1}, \dots,r_{n})$ $\in \mathbb{R^n}$ denote a preference vector which denotes the weight of the objective local model loss that is normalized with $\sum_{k=1}^{n}{r_{k}}=1$ and $r_{k}\geq 0, \forall k\in \{1, \dots, n\}$. The hypernetwork $HN$ takes $r$ as input and outputs a Pareto solution $h$, i.e., $h\gets HN(\phi, r)$, where $\phi$ denotes the parameters of the hypernetwork [23]. For each client $v_{i}$, linear scalarization can be used. Like [5,30], an optimal preference vector $r_{i}^{\ast}=\left(r_{i,1}^{\ast}, r_{i,2}^{\ast}, \dots, r_{i,n}^{\ast}\right)$ is determined to generate the hypothesis $h_{i}^{\ast}$ that minimizes the loss with the data $\hat{\mathcal{D_i}}$. This is expressed as $h_{i}^{\ast}=HN(\phi, r_{i}^{\ast}), \text{ where } r_i^* = argmin_r \hat{\mathcal{L_i}}(HN(\phi, r)).$ For each client $v_{i}$, the value of $r_{i,j}^{\ast}$ is used as an estimate to the weight of $v_{j}$ to $v_{i}$ [5,30]. $r_{1}^{\ast}, r_{2}^{\ast}, \dots, r_{n}^{\ast}$ define a directed weighted graph, i.e., the benefit graph $\mathcal{G_b}$. Secondly, following the standard technical practices in literature, the central server is assumed to be trustable, as explained above. In real worlds, the central server may represent an impartial and authoritative third-party (e.g., the industry association) [5]. Then, FL-PTs can report their competitive relationships to the third-party in person and confidentiality agreements can be signed between the third-party and FL-PTs. The above information will be clarified in the final version of our manuscript. --- Rebuttal 4: Comment: **Limitations 3** Due to space limitations, please refer to the global rebuttal. --- Rebuttal Comment 4.1: Comment: Thanks for the author's comments. I think the comments have addressed my concerns and answered the questions. I will increase the score. --- Reply to Comment 4.1.1: Title: Thank you! Comment: Dear Reviewer gR3Y, we thank you sincerely for your time to check our rebuttal. We are glad that our response has addressed your concerns.
Summary: The business sector is a main domain where cross-silo federated learning (FL) has many promising applications in various scenarios. The authors simultaneously consider the self-interest and competition features in the business sector. They develop a novel framework to both address the resulting free-riding problem and avoid the conflict of interest between any two competing FL participants (FL-PTs), which results in an interesting optimization problem. Finally, the authors find an optimal solution where one coalition cannot increase the utility of any of its members by collaborating with other coalitions. Extensive experiments are conducted to show the effectiveness of the proposed solution, and its ability to establish efficient collaborative networks in cross-silo FL with FL-PTs that engage in business activities. Strengths: S1: The authors consider a timely and interesting question in cross-silo FL where the free-riding and competition issues are considered simultaneously. S2: The authors propose a new problem formulation and definition and develop a novel theoretical framework of practical importance to address the identified problem. The resulting optimization problem is solved optimally. The paper is technically sound. S3: The paper is well organized and easy to follow. The background and related work are well introduced to understand the importance of the problem of this paper. S4: Extensive experiments have been done to show the effectiveness of the proposed solution. Weaknesses: W1: The proposed algorithm is not polynomial-time solvable. However, this seems reasonable since there are typically a limited number of FL-PTs (e.g., 2 to 100) for cross-silo FL in business sectors. The authors propose a novel application of the classic algorithms in graph theory to solve their problem optimally. The algorithm complexity depends on these classic graph algorithms, which work well in reality. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1: Can the proposed technical framework be applied to other types of tasks or datasets, e.g., NLP? In the experimental evaluation part, the authors have already shown its application to some interesting tasks commonly seen in literature. Q2: To keep consistency, it is better that “selfish” in Figure 1 is changed to “self-interest”. Q3: In this paper, all FL-PTs are partitioned into mutually disjoint coalitions/groups. It seems that this is similar to clustered federated learning where FL-PTs are partitioned into mutually disjoint clusters (i.e., groups). The reviewer understands that the authors consider a problem that is clearly different from the typical clustered federated learning. Q4: In your experimental results, how many trials have you conducted to obtain the standard deviation? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to thank you sincerely for your overall positive comments on the manuscript, including your positive acknowledgement of the question under study, the theoretical soundness of the proposed framework, and the effective experimental validation of the proposed solution. **Q1:** The method itself is general and is expected to be applicable to NLP field, which will be shown in a future journal version. **Q2:** In the final version, we will update the manuscript as you suggested. **Q3:** In the final version, we will better clarify in Section 2 the connection and differences of our work with clustered federated learning. **Q4:** In the experiments, five trials are conducted to obtain the standard deviation. We will clarify this in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. I'm generally fine with the results. Thus, I will keep my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer BcXD, we thank you sincerely for your time to check the rebuttal and your recognization of our work.
Summary: This paper focuses on client selection in cross-silo federated learning. The authors propose FedEgoists. In particular, FedEgoist participate clients into different clusters to avoid free riders and conflict of interests. Theoretical analysis is provided to validate the theoretical soundness of the proposed FedEgoist. Experiments validate the effectiveness of proposed FedEgoist. Strengths: * The discussed topic, client selection in cross-silo federated learning, is important in practical setting. * The writing is easy to follow. Weaknesses: * The evaluation setting on CIFAR-10 and CIFAR-100 cannot simulate conflict of interests in the real world. Further consideration needed. * The meaning of propositions 1 & 2 is unclear. Further explanation is expected. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is no potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to thank you sincerely for your overall positive comments, including the acknowledgement of the importance of the discussed topic in practical setting and the theoretical soundness of the proposed framework. Your comments are also constructive to help improve the manuscript. Below, we made a response to the two questions in the weakness part, indexed as W1 & W2 respectively. **W1: Experiments on CIFAR-10 & CIFAR-100** In this rebuttal, we better clarify the setting in our experiments. The setting here is the same as the setting in [30] where the same competitive relationships between clients are considered in the FL context. Two clients (i.e., FL-PTs in the paper) are either independent of each other, or compete against each other. For each pair of clients, there is a probability $\alpha$ that these two clients compete; here the probability that these two clients are independent of each other is $1 - \alpha$. The value of $\alpha$ determines the intensity of competition among clients and a larger value reflects a higher level of competing intensity among clients. Compared with [30], we define a new metric to better validate the performance of the proposed approach and conducted more experiments to show the robustness of the proposed approaches in terms of the level of competing intensity between clients. + Firstly, we show the performance of the proposed approach when $\alpha$ takes different values in {0.05,0.1,0.2,0.3,0.4}, representing different levels of competing intensity between clients. We conducted five trials to show the average performance. For the $l$-th trial, a particular competing graph $G_{c,l}$ is randomly generated for the $l$-th time with the given $\alpha$; then, the experiments for the baseline and proposed approaches are run; the performance of the proposed approach is denoted as $r_{\alpha, l, p}$ while the performance of the $i$-th baseline approach is denoted as $r_{\alpha, l, i}$. Given the value of $\alpha$, we show the average performance of the five trials (i.e., $\sum_{l=1}^{5}{r_{\alpha, l, p}}/5$ and $\sum_{l=1}^{5}{r_{\alpha, l, i}}/5$, $i=1,\dots, 9$). The related experimental results are presented in Tables 1 and 2. All tables can be found in the uploaded PDF document of the global rebuttal. For CIFAR-10, the proposed FedEgoists achieves the best performance in all cases when compared with all the basedline approaches. For CIFAR-100, the proposed FedEgoists achieves the best performance on average when compared with all the basedline approaches. + Secondly, suppose we are given a specific value of $\alpha$. We define a new metric to show the worst-case performance of the proposed approach compared with the baseline approaches by the following ways: - Firstly, we find an integer $l^{\ast}$ such that under the $l^{\ast}$-th trial, the performance of the baseline approaches is the best compared with the proposed approach, i.e. $l^{\ast} = \arg\max\limits_{l\in [1, 5]}\{ (\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}) - r_{\alpha, l, p} \}$ where $\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}$ is the best performance of all the baseline approaches in the $l$-th trial and $(\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}) - r_{\alpha, l, p}$ is their performance improvement (or the difference) to the proposed approach, which may be negative if the proposed approach achieves a better performance. - Secondly, after finding such $l^{\ast}$, we show the performance of the proposed and baseline approaches under the $l^{\ast}$-th trial. Specifically, given the value of $\alpha$, we compute the value of $(\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}) - r_{\alpha, l, p}$; these values under different $\alpha$ are presented in Table 3. $\color{blue}{Conclusion.}$ Table 3 shows that in the worst case, the proposed FedEgoists has a performance very close to the best performance of all the baseline approaches. Tables 1 and 2 show that on average, the proposed FedEgoists achieves a significant performance improvement when compared with all the baseline approaches. **W2: The Meaning of Propositions 1 & 2** You raised an excellent point about the clarity. In the final version, we will better clarify the meaning of Propositions 1 \& 2 and better show how Algorithm 1 gives an optimal solution to the problem of this paper defined in Section 3.3 (lines 183-185 on page 5). Specifically, clients are partitioned into multiple coalitions/groups, and this paper aims to find a solution that satisfies Principles 1 \& 2 and $\Pi=\emptyset$. Principles 1 \& 2 guarantees that there are no free-riders and conflicts of interests in a FL ecosystem; the requirement of $\Pi=\emptyset$ guarantees that no coalitions can collaborate together and be merged into a larger one to achieve a higher utility. In such context, the meaning of Propositions 1 \& 2 is as follows: + Proposition 1 shows that Eq. (1) holds and Principle 2 is realized in the solution given by Algorithm 1. Eq. (1) is given in Section 3.2.1 where we say that _Principle 1 is realized when Eq. (1) is satisfied_, which will be highlighted as a lemma in the final version and used in the proof of Proposition 1. Also, Proposition 1 will be updated to directly say that ``_Upon completion of Algorithm 1, Principles 1 \& 2 are realized._". + Proposition 2 shows that the requirement of $\Pi=\emptyset$ in Eq. (3) is satisfied. In the final version of the manuscript, at the end of Section 4, the physical meaning of the conclusions in Propositions 1 & 2 will be clarified better as explained above. --- Rebuttal 2: Comment: **Notes on other added experiments.** In this rebuttal, besides the experiments conducted for the above comments in W1, more experiments are also conducted (a) to verify the robustness of the proposed solution in terms of the level of data heterogeneity, and (b) on a new dataset and with a new baseline approach. **Experiments (a)** Specifically, this paper is about the application of the current FL techniques to the business sectors, and the experiments aim to verify the effectiveness of the proposed theoretical framework in the scenario under study. The scenario features self-interest for every client and competition between a part of clients. The collaboration relationships between clients are built according to the competing and benefit graphs; the benefit graph depends on the data heterogeneity/complementarity of clients. Thus, the variable factors in the scenario/experimental environment include the intensity of competition between clients and the non-IID setting. In the experiments, we vary the related parameters in these two factors to better verify the robustness of the proposed algorithm. Above, the experiments for the intensity of competition have been introduced. Data heterogeneity is typically simulated by a pathological or Dirichlet distribution. For example, the pathological distribution is used in [5,30]. In this paper, we conducted experiments on pathological and Dirichlet distributions, respectively. Also, in this rebuttal, we better verify the effect of data heterogeneity on the effectiveness of the proposed solution. Specifically, let $m$ denote the number of classes. In Dirichlet distribution, there is a distribution vector $q_{c}\in \mathbb{R}^{m}$ is drawn from the Dirichlet distribution $Dir_{m}(\beta)$ for each class $c$ and client $v_{i}$ is allocated a $q_{c,i}$ proportion of data samples of class $c$; smaller $\beta$ value results in higher data heterogeneity. We vary the value of $\beta$ that takes different values in {0.01,0.1,0.5} and conducted the corresponding experiments. The related experimental results are presented in Table 4. **Experiments (b)** We additionally conducted experiments on the synthetic data used in [30] where the experimental setting is also the same as the setting in [30]. The related experimental results are presented in Tables 5-6, which can be found in the PDF document. We add one more approach in the paper below, called FEDORA, as the baseline: Jun Wu, Wenxuan Bao, Elizabeth Ainsworth, and Jingrui He. "Personalized federated learning with parameter propagation." In ACM KDD, 2023. FEDORA is only for image classification tasks; thus, we only conducted experiments on CIFAR-10 and CIFAR-100. The related experimental results are presented in Tables 1,2,4. --- Rebuttal 3: Title: Discussion Period & Thank you! Comment: Dear Reviewer jdA3, we thank you for your time and detailed comments. Since the discussion period will end soon, we will appreciate if you could check the rebuttal and let us know whether our response has addressed your concerns. Thank you sincerely.
Rebuttal 1: Rebuttal: More experiments have been conducted to verify the effectiveness of the proposed solution: 1. More experiments are conducted to verify the robustness of the proposed algorithm. 2. We define a new metric to better validate the performance of the proposed approach. 3. New datasets and baselines are considered. **Added Experiments Part 1: Robustness** The paper is about the application of the current FL techniques to the business sectors, and the experiments aim to verify the effectiveness of the proposed theoretical framework in the scenario under study. The scenario features self-interest for every client and competition between a part of clients. The collaboration relationships between clients are built according to the competing graph and the benefit graph; the latter graph depends on the data heterogeneity or complementarity of clients. Thus, the variable factors in the scenario or experimental environment include the intensity of competition between clients and the non-IID setting. We vary the related parameters in these two factors to verify the robustness of the proposed algorithm. Below, we explain the related parameters in these two factors and the added experiments: Firstly, two clients are either independent of each other or compete against each other. For each pair of clients, there is a probability $\alpha$ that these two clients compete; here the probability that these two clients are independent of each other is $1 - \alpha$. The value of $\alpha$ determines the intensity of competition among clients and a larger value reflects a higher level of competing intensity among clients. In the benchmark experiments on CIFAR-10 and CIFAR-100, we better verify the effect of the competition intensity on the effectiveness of the proposed solution where we vary the value of $\alpha$ that takes different values in {0.05,0.1,0.2,0.3,0.4} and conducted the corresponding experiments.Given a specific value of $\alpha$, we conducted five trials to show the average performance. For the $l$-th trial, a particular competing graph $G_{c,l}$ is randomly generated for the $l$-th time with the given $\alpha$; then, the experiments for the baseline and proposed approaches are run; the performance of the proposed approach is denoted as $r_{\alpha, l, p}$ while the performance of the $i$-th baseline approach is denoted as $r_{\alpha, l, i}$. Given the value of $\alpha$, we show the average performance of the five trials (i.e., $\sum_{l=1}^{5}{r_{\alpha, l, p}}/5$ and $\sum_{l=1}^{5}{r_{\alpha, l, i}}/5$, $i=1,\dots, 9$). The related experimental results are presented in Tables 1 and 2. Secondly, the data heterogeneity is typically simulated by a pathological or Dirichlet distribution. For example, the pathological distribution is used in [5,30]. In this paper, we conducted experiments on pathological distribution(PAT.) and Dirichlet distribution(Dir), respectively. Also, in this rebuttal, we better verify the effect of data heterogeneity on the effectiveness of the proposed solution. Specifically, let $m$ denote the number of classes. In Dirichlet distribution, there is a distribution vector $q_{c}\in \mathbb{R}^{m}$ is drawn from the Dirichlet distribution $Dir_{m}(\beta)$ for each class $c$ and client $v_{i}$ is allocated a $q_{c,i}$ proportion of data samples of class $c$; smaller $\beta$ value results in higher data heterogeneity. We vary the value of $\beta$ that takes different values in {0.01,0.1,0.5} like [39] and conduct the corresponding experiments. The related experimental results are presented in Table 4. **Added Experiments Part 2: A New Metric** Suppose we are given a specific value of $\alpha$. We define a new metric to show the worst-case performance of the proposed approach compared with the baseline approaches by the following ways: 1. We find an integer $l^{\ast}$ such that under the $l^{\ast}$-th trial, the performance of the baseline approaches is the best compared with the proposed approach, i.e. $l^* = \arg \max_{l \in [1,5]} \left( \max_{i \in [1,9]} r_{\alpha,l,i} - r_{\alpha,l,p} \right) $ where $\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}$ is the best performance of all the baseline approaches in the $l$-th trial and $(\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}) - r_{\alpha, l, p}$ is their performance improvement (or the difference) to the proposed approach, which may be negative if the proposed approach achieves a better performance. 2. After finding such $l^{\ast}$, we show the performance of the proposed and baseline approaches under the $l^{\ast}$-th trial. Specifically, given the value of $\alpha$, we compute the value of $(\max\limits_{i\in [1,9]}{r_{\alpha, l, i}}) - r_{\alpha, l, p}$; these values under different $\alpha$ are presented in Table 3. $\color{blue}{Conclusion.}$ Table 3 shows that in the worst case, the proposed FedEgoists has a performance very close to the best performance of all the baseline approaches. Tables 1 and 2 show that on average, FedEgoists achieves a significant performance improvement when compared with all the baseline approaches. **Added Experiments Part 3: A New Dataset and a New Baseline** More experiments are conducted on a new dataset and using a new baseline. Specifically, we additionally conducted experiments on the synthetic data used in [30] where the experimental setting is also the same as the setting in [30]. The related experimental results are presented in Tables 5-6. We add one more approach in the paper below, called FEDORA, as the baseline: - Jun Wu, Wenxuan Bao, Elizabeth Ainsworth, and Jingrui He. "Personalized federated learning with parameter propagation." In *ACM KDD*, 2023. FEDORA is only for image classification tasks; thus, we only conducted experiments on CIFAR-10 and CIFAR-100. The related experimental results are presented in Tables 1,2,4. $\color{blue}{Remark.}$ All experimental results (tables) can be obtained from the uploaded PDF document. The results are presented in the form of mean ± std. Pdf: /pdf/70a7b1481a50dd40f42597a0659299e7ea5fec39.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Generating compositional scenes via Text-to-image RGBA Instance Generation
Accept (poster)
Summary: This study enhances text-based image generation diffusion models by introducing multi-layer noise blending and transparency-aware training procedures, enhancing control over the generated image content. This technique allows for finer control over the elements composing the image and improves the quality and consistency of the generated images through conditional sampling. Strengths: 1. The approach allows for detailed control of object appearance and placement within images. 2. Images are produced with greater detail and consistency through the use of conditional sampling and transparency handling. 3. The method is straightforward and easy to understand. Weaknesses: 1. Novelty: The field of layered-based image generation, which includes both depth-based and instance-based layering, has been thoroughly investigated. It remains unclear what theoretical or methodological innovations the author's approach offers compared to existing instance-layering methodologies. The capability for fine-grained control, as mentioned, seems achievable by most contemporary methods. 2. Evaluation Concerns: Recent developments in instance-based layering methods appear to have been neglected. An analytical discussion or empirical benchmarking against these newer methods is warranted. Pertinent literature includes: Huang, Runhui, et al. "LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model." arXiv preprint arXiv:2403.11929 (2024). Zhang, Lvmin, and Maneesh Agrawala. "Transparent image layer diffusion using latent transparency." arXiv preprint arXiv:2402.17113 (2024). 3. Experimental Assessment: There appears to be a lack of quantitative evaluations for GLIGEN and MultiDiffusion methodologies. It would be helpful if the authors could clarify why quantitative comparisons with these methods were not included. Additionally, presenting visual results for entire scenes from non-layer-based methods, such as the SD series models, would be valuable in assessing whether layout-independent methods enable more coherent or effective inter-instance interactions. 4. The independent generation of instances introduces significant complexity and poses challenges in assembling them into a coherent scene, as noted by the authors. Technical Quality: 2 Clarity: 3 Questions for Authors: How does the model manage unexpected object interactions or complex overlapping in scene compositing? Providing in-the-wild results that showcase complex object interactions and instance occlusions, such as scenarios depicting a person riding a motorcycle and a person wielding a wand, would help clarify the model’s capabilities in intricate scene compositions. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors mention limitations briefly at the end of the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments that increase the quality of our work, and address main concerns below. **Novelty:** while we agree with the reviewer that layered-based representation are receiving increasing attention, we argue that our proposed methodology has several unique components which paves the way for multi-layer innovation. Firstly, transparent image generation is an emerging topic, with very few related works dedicated to the task. To tackle this task, we propose a novel modelling strategy with a disentangled latent space and mutual conditioning. In addition, we provide several training insights to achieve better results with this type of generation task. Secondly, we disagree with the reviewer that the level of fine-grained control we are providing can be achieved with contemporary method, as evidenced by our experiments. While common layered methods allow control over location and appearance of objects, they often struggle with patterns and fine-grained instance details, as all components are generated at the same time. We have investigated competing methods with publicly available implementations, and consistently observed difficulties in achieving the right level of details, and/or accurate instance positioning for the type of complex prompts we are showing in our manuscript. We highlight that we are working with substantially more complex image descriptions than e.g. T2I Compbench (reference [16]). In contrast, by generating instances separately, we can control fine-grained attributes and focus on scene coherence at blending time. Lastly, our multi-layer scene composition is also different from pre-existing methodology, due to our pre-generated instances. Here, instead of focusing on generating the right instance prior to a global merger, we focus on instance integration within the scene, through iterative instance addition layer by layer, at each timestep. This allows more natural integration of individual instances in the scene. **Related works:** While we focused our evaluation on published work, we agree with the reviewer that LayerDiffusion [1] is highly relevant recent related work, and update our manuscript to add comparisons to this work, both in related works and experiments. We highlight that this work was developed in parallel to ours, and our methodology and handling of transparency is entirely different. Furthermore, we highlight that the multi-layer scene generation process of LayerDiffusion is substantially more restricted, as instance location cannot be specified and is solely controlled by background appearance. We provide quantitative and qualitative comparisons in this rebuttal. Experimental details, and discussion are available in the global rebuttal. LayerDiff [2] is contemporaneous work to ours, and is closer to a multi-layer extension to Text2Layer than our work. We note an important limitation of this work, which is that occluded components are not generated. This makes scene manipulation tasks more challenging, and substantially reduces flexibility. Unfortunately, due to the complex dataset building required and lack of publicly available code, we are not able to provide comparisons to this work, but will integrate it in our related works section. **Experiments:** We agree with the reviewer that quantitative experiments would be a great addition. Unfortunately, as discussed in sec 4.2 (Baselines), quantitative evaluation is highly challenging due to the interactive nature of our methodology, as well as the highly complex nature of image prompts we are working with. This would require the development of a novel benchmark, which we leave to future work as quantitative evaluation remains an open problem for controllable generation tasks. We have made efforts to provide many examples highlighting different properties and benefits of our work, and to thoroughly analyse our method’s behaviour in our supplementary experiments. We note that we do provide a non-layered baseline with the PixArt-alpha model (reference [6]) and are happy to additionally provide a Stable Diffusion baseline as well if needed. **Instance interactions:** We agree with the reviewer that independent instance generation is a limitation of our work, and an important focus of our future work. Our method in its current state can mitigate these issues with precise instance prompts, and stronger blending freedom at composition time. Highly intricate interactions can be achieved without altering our main methodology, by fine-tuning our RGBA generator with higher quality, fine-grained training data, or making it capable of generating grouped instances. [1] Zhang, Lvmin, and Maneesh Agrawala. "Transparent image layer diffusion using latent transparency." arXiv preprint arXiv:2402.17113 (2024). [2] Huang, Runhui, et al. "LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model." arXiv preprint arXiv:2403.11929 (2024).
Summary: This paper introduces a novel method for generating complex images from textual descriptions with a high degree of control over object attributes and scene composition. They introduce a new training paradigm for adapting a diffusion model to generate isolated objects with transparency, using a disentangled representation for RGB and alpha channels. Moreover, a multi-layer scene compositing strategy based on noise blending is developed, enabling the generation and manipulation of complex scenes from multiple instance images. Overall, the task is interesting and the proposed method achieves fantastic generation ability. Strengths: 1. The paper presents a framework for fine-grained controllable generation, providing user control over instance appearance and location with intrinsic scene manipulation capabilities. The results show superior controllability of their approach. Weaknesses: 1. I think this paper ignores a similar and pioneer work: LayerDiffusion [1], which achieves the RGBA generation by finetuning the VAE and diffusion models. It is necessary that this paper should compare with this prior work. 2. It seems that the Multi-Layer Noise Blending would cause unnatural image generation. I hope you can provide more discussion about the composition of foreground and background. 3. A deep and clear literature review of prior works is very important for an excellent work. More discussions about the LayerDiffusion[1] should be included in this paper. [1] Zhang L, Agrawala M. Transparent image layer diffusion using latent transparency[J]. arXiv preprint arXiv:2402.17113, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments that improve the quality of our work. We are grateful for the positive remarks (interesting work, fantastic generation ability, superior controllability), and address the reviewer’s main concerns: comparison to LayerDiffusion and the impact of multi-layer noise blending on realistic generation. **LayerDiffusion:** Firstly, we highlight that our work was developed in parallel to LayerDiffusion [1], which is nearly contemporaneous to ours, and our methodological decisions were not inspired from this work. This can be evidenced by our entirely different methodological decision : we explicitly model the transparent channel, and disentangle the latent space; while LayerDiffusion implicitly incorporates transparent information. However, due to the very similar objective of transparent generation, we agree with the reviewer that comparisons are important, and extend our evaluation to include a comparison to LayerDiffusion. Experimental details, results and discussion are available in the global rebuttal. Besides transparent generation, LayerDiffusion also offers multi-layer generation properties by generating instances conditioned on a background image. We point out here that our blending strategy affords substantially better controllability, as LayerDiffusion does not allow to specify the location of these instances (see Figure 2 of this rebuttal). We will update our manuscript to incorporate discussions on LayerDiffusion in related work, and add experiments provided in this rebuttal. **Multi-layer noise blending** can lead to unnatural generation, as evidenced in our supplementary experiments (see Supp. F.2 and Figure 12), when noise injection constraints are too strong. Our method is designed with this challenge in mind, with two important components to ensure natural looking images: 1) iterative integration of instances in the global scene, and 2) imposing constraints in early timesteps only. The latter ensures naturalness of generated scenes by leveraging the intrinsic biases of diffusion models. Imposing constraints only at early timesteps ensures instance locations and key attributes are integrated, while giving enough freedom to the model to generate realistic images. The impact of increasing or decreasing constraints is discussed in more details in our supplementary section F.2. The former component, unique to our multi-layer approach, iteratively integrates new instances in separate layers. Generating separate images allows to smoothly integrate each instance in the scene individually, simplifying the task in comparison to methods that aim to blend all instances in the scene at once. [1] Zhang, Lvmin, and Maneesh Agrawala. "Transparent image layer diffusion using latent transparency." arXiv preprint arXiv:2402.17113 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the response! I still think that the novelty is limited and the experimental results are not very convincing. I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, we thank you for your quick response. We have made efforts to address all your comments: we have provided a comparison to LayerDiffusion where we clearly outperform the method quantitatively with markedly different methodology, and have thoroughly discussed the limitations of multi-layer noise blending and our key innovations to address them. We are not aware of alternative works proposing to pre-generate RGBA instances for multi-layer scene composition with fine-grained control. Please let us know what aspects of our work remain unsatisfactory so we can address them.
Summary: This paper proposes to use a diffusion model to generate separate objects and then apply multi-layer noise blending to build a composite scene. A RGBA generator is finetuned from a latent diffusion model to generate alpha transparency for objects in addition to RGB. A transparency-aware training procedure is designed for both VAE and diffusion models. Then the generated objects are iteratively injected into a scene with a background according to a specific layout. Strengths: 1. To adapt the VAE and diffusion model for RGBA transpancy generation, they make several key changes in training and model, such as using disentangling latents. These changes are nontrivial. 2. Proposed a novel background blending in addition to multi-layer composition process, which ensures the consistency between objects and the background. 3. Extensive experiments have been conducted on state-of-the-art models and have shown good results. Weaknesses: The multi-layer scene composition method requires generating K + 1 images for composition, which is inefficient. Since RGBA objects have already been generated, there might be more efficient ways to fuse them in one step or reduce the cost of each iteration. Technical Quality: 3 Clarity: 3 Questions for Authors: A more detailed algorithm for 3.3 might be necessary to clarify how all the mechanisms work together, since the diffusion processes in the first n steps and b steps are different. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments that increase the quality of our work and are encouraged by the positive feedback. As discussed in our limitation section in supp. D, we agree that our more expensive generation is a limitation of our work. However, we expect this cost to be heavily amortised once a critical mass of RGBA assets has been generated. The same principle is also the current practice for 3D digital assets. We further highlight that it is a common limitation of layered representations, where layers require separate generation processes, and consider that the added flexibility and blending quality can outweigh additional generation cost. Nonetheless, we intend to explore more efficient solutions in future works, notably by exploring ways to combine instance generation and blending while maintaining fine-grained control. We thank the reviewer for highlighting clarity issues in section 3.3. We will update our manuscript to include the following algorithmic description of the section in supplementary materials to facilitate understanding. Multi-layer scene composition algorithm: > **Inputs**: $K$ pre-generated RGBA instances $\{I_k,M_k\}$, A bounding box based scene layout $\mathcal{L}=\{[cx_k, cy_k, w_k, h_k ], \text{for } k \in 1,\cdots,K\}$, A latent diffusion model (VAE $\mathcal{E}$, diffusion model $\mathcal{D}$) T generation timesteps Random noise $\eta \in \mathcal{N}(0,I)$ Number of blending timesteps $n$, background blending timesteps $b$, cross-layer consistency timesteps $n_s$ **Output**: $K+1$ multi-layer images (background, $K$ images with increasing number of instances) > *Generation set-up* >>Rescale $\{I_k,M_k\}$ according to $[cx_k, cy_k, w_k, h_k ]$ $x^0_k \leftarrow \mathcal{E}(I_k)$ $m_k \leftarrow$ Downsample $M_k$ to latent space dimension Compute $x^k_t$ at all diffusion timesteps $t \in [1,T]$ using DDIM inversion Initialise all images with same Gaussian noise: $y^k_T \leftarrow \eta$ >*Composite scene generation* >>**For** $t=T$ **to** 0: *Loop over timesteps* >>> *Predict noise update with diffusion mode*l: $\epsilon_{\theta}(y^k_t,t) \leftarrow \mathcal{D}(y^k_t,t)$ $y^k_{t-1} \leftarrow$ Update according to noise scheduler and $\epsilon_{\theta}(y^k_t,t)$ **If** $t \geq b$: *Background blending optional step* >>>> $y^0_{t-1} \leftarrow$ $y^0_{t-1} \cdot m^*+\frac{1}{2} (y^0_{t-1}+ y^K_{t-1})\cdot (1-m^*)$ >>>**If** $t \geq n$: *Scene composition* >>>>**For** $k=1$ **to** K: >>>>> $y^k_{t-1} \leftarrow$ $y_{t-1}^{k-1} \cdot(1-m_k)+x^k_{t-1}\cdot m_k$ >>>**Elif** $t \geq n+n_s$: *cross-layer consistency optional step* >>>>**For** $k=0$ **to** K: >>>>> $ y^K_{t-1} \leftarrow y^K_{t-1} \cdot(1-m_k)+y_{t-1}^{k-1} \cdot m_k$ --- Rebuttal Comment 1.1: Comment: Thanks for the response. I think this is an interesting work. I will keep the current rating.
Summary: This paper describes a diffusion model based method for layered text-to-image generation with RGBA masks (transparency information). This is a useful approach when aiming to generate complex images with many objects. To that end, the base VAE as well as the latent diffusion model are adapted to handle another channel and fine-tune to a corresponding dataset. The method is evaluated against base models and visual results show convincing results. Strengths: - The method is sound and well-engineered for this problem using sota building blocks. - The paper is well-written and structured. - It is easy to follow the paper. - The visual results are convincing that the method works. Weaknesses: My main concern is regarding evaluation. While the method demonstrates that it works (visually), and quantitative evaluation shows that it performs better than off-the-shelf text-to-image models, there is a complete lack of comparison against methods that are more relevant to that particular task. The paper mentions many related works, but evaluation is only against base models, and comparisons to alternative methods are missing. - Naive Text2Layer extension to multiple layers instead of just two - No editing / inpainting methods (discussed in related work) are used for comparison - LayerDiff https://arxiv.org/abs/2403.11929 - No comparison to controllability methods such as ReCo, BoxDiff etc. - No comparison to Layered rendering diffusion model for zero-shot guided image synthesis Because of this, it is difficult to judge the technical contributions and novelty of the proposed method. Mainly because it seems like a straightforward adaptation of an existing pipeline through fine-tuning. More justification and experimental evidence are required to demonstrate and ablate the necessary design choices to go from an existing backbone to the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: - Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 1 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments that increase the quality of our work. We are encouraged by the positive comments that our work is useful, our methodology is sound, and our visual results are convincing. We now address the main limitations raised: 1) limited contribution and novelty, and 2) experimental comparisons. **Novelty:** we have aimed to design a novel controllable generation method with fine-grained control and interactivity in mind. To this end, we propose to first generate RGBA assets, and then assemble them in a composite scene. Generating images with transparency information is far from trivial (as mentioned by RwSp1) as the concept of transparency is foreign to diffusion models. We introduce novel modelling and training methodology for this purpose. In contrast with LayerDiffusion [1], which injects transparency information in the null space of the VAE to facilitate fine-tuning, we explicitly model transparency through disentanglement of RGB and alpha channels. Explicitly modelling transparency allows for more precise and nuanced generation of RGBA images, and guarantees that we generate images with a transparent background, which is not the case with LayerDiffusion. Text2Layer (reference [55]) is a substantially different approach as well, requiring triplets of background, foreground and saliency maps for training; and predicting all components together. Crucially, this approach doesn’t generate individual instances, but a foreground comprising all instances. Our scene composition leverages these RGBA assets in a novel multi-layer approach. In contrast with existing multi-layer methods which generate layers in parallel before merging them, we build a multi-layer composition process where each layer is dependent on the lower image layer, by integrating new instances in an increasingly complex image one layer at a time. This process allows to smoothly integrate each component individually (rather than trying to blend all instances together at once) and is uniquely afforded by our RGBA asset pre-generation. This further allows scene manipulations very easily, as pre-generated RGBA instances can easily be moved, replaced, and have their appearance frozen. **Evaluation:** We have made efforts to compare to highly relevant, widely used, state of the art works, for both RGBA generation and scene composition, at the time of submission. For the former, we reimplemented Text2Layer as the closest published work capable of RGBA generation. We highlight that extension of this work to multiple layers is far from straightforward, as disentanglement of foreground instances through their saliency segmentation approach is non trivial, and is beyond the scope of this work. We agree with the reviewer that nearly contemporaneous work LayerDiffusion is an important comparison, and provide quantitative and qualitative comparisons in this rebuttal. Experimental details, results and discussion are available in the global rebuttal. Regarding scene composition evaluation, we have made efforts to compare to open-source, widely used state of the art works designed for compositional generation. As discussed in section 4.2 (Baselines), we have compared to layout-based generation method GLIGEN, which in our experiments, more consistently outperformed boxdiff and ReCO (references [47] and [50]) and therefore constituted our best baseline bounding box based method. Similarly, for multi-layer scene generation, we have chosen multidiffusion, due to its strong performance and available implementation. Unfortunately, we were not able to reimplement Layered rendering diffusion model for zero-shot guided image synthesis (LRDiff, reference [36] in our manuscript) for comparison, due to missing implementation details and no public code. We highlight that LayerDiff[2] is contemporaneous work without available opensource code. Due to the expensive data construction and training costs, we are unable to provide comparisons to this work. While our focus is on generating all image components, including occluded areas, LayerDiff only generates visible pixels. This in turns, strongly limit scene manipulation abilities, which is one of the key benefits of our representation. Finally, while inpainting and editing techniques are relevant related work, we highlight that the focus of our work is not editing, but interactive and controllable scene generation. Our scene manipulation experiments highlight intrinsic benefits of using RGBA assets to easily manipulate scene content with strong content preservation. However, we do not aim to directly compete with editing methods, as we have not introduced any explicit editing mechanisms. [1] Zhang, Lvmin, and Maneesh Agrawala. "Transparent image layer diffusion using latent transparency." arXiv preprint arXiv:2402.17113 (2024). [2] Huang, Runhui, et al. "LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model." arXiv preprint arXiv:2403.11929 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the respones. I have updated my score to BR because I still think that the technical contributions are limited and the experimental comparisons are not convincing enough to lean towards acceptance. --- Reply to Comment 1.1.1: Comment: Dear reviewer, we thank you for upgrading your score. We have provided comparisons with all relevant state of the art open-source works, including now LayerDiffusion which we outperform, and discussed all modern closed source works in our related works. Our adaptation to transparent generation is entirely novel with no alternative works proposing disentangled latent spaces and mutual conditioning. Our second contribution, our multi-layer blending method, proposes novel ideas such as our iterative layer-wise instance integration process. Please let us know what aspects of our work remain unsatisfactory so we can address them.
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed reviews, insights and comments that improve the quality of our work. We are encouraged by the positive comments: the soundness of our method (RGbJP), the quality of our results (RGbJP, RwSp1, RS8j5), the non triviality of our work (RwSp1), and detailed control afforded by our method (RtSA3). We have made efforts to address all reviewers concerns in individual rebuttals, through additional experiments and clarifications. The reviewers main concerns were comparison of our method with LayerDiffusion [1] and novelty of our work. We restate our key innovations below, and discuss our experimental details and results on LayerDiffusion. **Novelty:** We designed our approach with interactivity and controllability in mind. Our key objective is the development of a controllable generation solution were the user can easily build complex scenes, control instances appearances, and adjust scene layout and content. In particular, we are considering scenes with complex layouts, where multiple instances have unique patterns and attributes, going well beyond compositional benchmarks such as T2I-Compbench (reference [16]). Our key idea is the use of pre-generated RGBA assets, and their composition through a multi-layer process. This allows precise attribute control, ensures that the blending process focuses on scene coherence, and substantially facilitates scene manipulation tasks. To achieve this, we introduce 1) a novel modelling and training approach for generation of transparent images, and 2) a novel multi-layer scene composition method, that blends pre-generated assets into a scene. Integrating instances one by one in the scene allows for smoother scenes where relative position of instances in 3D space (A in front of B) is clearly established and easily controlled. Our experiments have shown that our approach allows a degree of fine-grained control that competing methods cannot achieve. **LayerDiffusion:** While we focused our original manuscript on comparison with published, state of the art work, we agree that the recent LayerDiffusion is highly relevant work, and now provide comparisons with this related work in our rebuttal which will be added to our manuscript. We ran experiments using the official implementation and recommended parameters [3], we used the Juggernaut XL V6 SD XL model with 20 Steps, DPM++ 2M SDE sampler and a CFG scale of 5. For RGBA evaluation, instances were generated using the Foreground Model model with the same captions as for the other models. For the scene composition visualisation we have used the Background to blended image model where we iteratively add instances to the previously blended image, following the protocol in [1] ( Figure 9). Images were generated across multiple seeds, and the best results was selected. Results are available in the attached PDF document. Figure 1 shows visual comparisons of generated instances, showing that while LayerDiffusion is able to generate high quality instances, it does not necessarily respect prompt details (for example, the impressionist style). Quantitative results in Table 1 show that we outperform LayerDiffusion on all metrics. The lower IoU performance could notably be attributed to the implicit transparency modelling, which does not guarantee that accurate transparency layers are generated. Lastly scene composition experiments in Figure 2 show that the model struggles with iterative integration of instances, with a strong bias towards the centre of the image, and no ability to control where instances are added. We restate key differences between our work and LayerDiffusion: while both train a model capable of generating RGBA images, our RGBA modelling approach is entirely different, where we build a disentangled latent space and explicitly model interactions between RGB and Alpha channels. In contrast, LayerDiffusion encodes transparency within the null space of the VAE, so as to train a diffusion model following standard practice. We further highlight that our scene composition is substantially more flexible, as LayerDiffusion only generates instances conditioned on a background image, with no control over instance positioning. As shown in Figure 2 of our attached results, this can lead to poorly positioned instances, with a bias towards the centre of the image. **Baselines:** The field of controllable generation is very popular, and there exist many competing works aiming for similar objectives. Considering the interactive nature of our work, providing a reliable quantitative evaluation of our results is highly challenging and an open problem. For qualitative evaluations, we have made efforts to select the strongest open-source baselines, covering standard image generation (PixArt-alpha (reference [6])), layout based generation (GLIGEN (reference [23])), and multi-layer (multi-diffusion (reference [2])), and attempted to reproduce alternative closed source methods. We now additionally add LayerDiffusion as a highly relevant baseline. We will additionally enrich our related work to discuss contemporaneous, closed source, work LayerDiff[2]. This work focuses on predicting different image regions separately with dedicated prompts, and in contrast to ours, doesn’t predict occluded areas. The methodology, closer to Text2Layer (reference [55]), requires expensive dataset building through instance segmentation. [1] Zhang, Lvmin, and Maneesh Agrawala. "Transparent image layer diffusion using latent transparency." arXiv preprint arXiv:2402.17113 (2024). [2] Huang, Runhui, et al. "LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model." arXiv preprint arXiv:2403.11929 (2024). [3] https://github.com/lllyasviel/sd-forge-layerdiffuse Pdf: /pdf/767bbcf7b334e55263579c9ca5f71256fce42537.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Concept-Based Explainability Framework for Large Multimodal Models
Accept (poster)
Summary: This paper introduces an explainability approach for interpreting the internal representations of large multimodal models (LMMs). The authors train an image captioning model consisting of a pretrained image encoder and language model and a connector model. To extract interpretable representations, the authors use dictionary learning, decomposing representations into lower dimensional $U$ and $V$ matrices using semi non-negative matrix factorization as the optimisation objective. To interpret concepts in $U$ in the textual domain, the authors use the language model unembedding layer to extract the highest probability tokens associated with a given concept vector $u$. Likewise, to visually interpret a concept vector $u$, the authors find the set of images that maximally activate $u$. The authors provide quantitative evidence demonstrating that their method generates concepts which are well aligned with both the input image and the ground-truth captions. Additionally, they demonstrate their approach qualitatively produces well defined concepts with limited overlap between the tokens represented by different concepts. Strengths: 1. This paper introduces a promising novel approach to performing mechanistic interpretability on multimodal models. Given the increasing ubiquity of multimodal models there exists a clear need for interpretability approaches for this style of model. This work presents a viable dictionary-learning based approach I look forward to seeing other researchers building upon in the future. 2. The evaluation of the framework is comprehensive including both qualitative and quantitative results, both of which are essential for any interpretability method. Additionally, I was impressed to see the authors consider multiple approaches (e.g. PCA/KMeans) and sensible baseline models when evaluating their results, providing more confidence that the final approach taken was an appropriate method. 3. The paper is very clearly articulated, with the rationale well-defined, the approach clearly outlined and the results succinctly and clearly summarised. Weaknesses: 1. The authors provide good evidence that the concepts extracted by their method show minimal overlap with other concepts, however they do not address the alternative possibility, that their extracted concepts might represent more than one distinct concept. This phenomenon of feature “superposition” has been well documented in other model interpretability work (see [1][2][3]) and so it seems plausible that it may arise in the approach taken here. This seems especially likely given that the dimensionality of their concept dictionary is lower than the dimensionality of the internal representations. Though I do not think this should detract from the otherwise excellent contributions presented in this paper, I do think this at least warrants a brief discussion, and perhaps more qualitative analysis of the extracted concepts, to assess whether any evidence of feature superposition is observed. 2. Occasionally the authors make claims that go beyond the scope of this work. For example, in the abstract they state, “we present a novel framework for the interpretation of LMMs”. However, this claim seems too strong given that this approach is only really valid for image captioning models rather than large multimodal models more generally. Additionally, there are a few comments such as “we find the generalization of LLMs to multimodal inputs is an interesting phenomenon to understand” in 3.1 (under “Training”) and “the multimodal structure of internal token representations starts to appear at [later layers]” in 4.2 (under “Layer Ablation”). However these claims do not seem valid as the language model layers are frozen during training. It seems more appropriate to say that these extracted concepts represent language model concepts, and the connector model learns to transform the image representations such that they align with these language model concepts. [1] Elhage, N, et al. "Toy Models of Superposition." arXiv:2209.10652 (2022) [2] Arora, S et al. "Linear Algebraic Structure of Word Senses, with Applications to Polysemy." arXiv:1601.03764 (2016) [3] Elhage, N, et al. "Softmax Linear Units" Transformer Circuits Thread (2022) Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Why did the authors choose to use Opt-6.7B as the language model rather than more powerful similar sized models such as Llama-7B? 2. Why do the authors take the absolute activations of $V$ in (5)? Are these activations not guaranteed to be non-negative by the optimisation objective? 3. Did the authors consider trialling gradient-based feature visualisation approaches [1] in addition to taking images with the highest activations? This could be an alternative approach to build additional confidence that the extracted concepts do represent what qualitative analysis of the highest activating samples appears to suggest. I don’t expect the authors to perform analysis of this kind in this manuscript however it could be worth commenting on this as a future avenue of research, or alternatively raising any valid criticisms of this approach? 4. There is a typo in figure 1 (“Captionin”) [1] Olah, C, et al. "Feature Visualization", Distill, 2017. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The work presented here is only for a single, relatively small model, trained with a specific objective (image captioning). As such it is not clear that this approach will necessarily generalise to other multimodal models. The authors should touch on this limitation in the discussion or limitations section in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive and insightful comments. Please find our response pointwise below: **Qualitative analysis for feature superposition/specificity of concept vectors:** Thanks for the interesting suggestion. We conducted a preliminary qualitative study on some concept vectors in the dictionary learnt for token "Dog" to analyze if these concept vectors tend to activate strongly for a specific semantic concept (monosemantic) or multiple semantic concepts (polysemantic). In particular, we first manually annotated the 160 test samples for "Dog" for four semantic concepts, for which we knew we had concept vectors in our dictionary, namely "Hot dog" (Concept 2, row 1, column 2 in Fig. 7), "Black dog" (Concept 20, row 10, column 2 in Fig. 7), "Brown/orange dog" (Concept 6, row 3, column 2 in Fig. 7), and "Bull dog" (Concept 15, row 8, column 1 in Fig. 7). For a given semantic concept, we call this set $C_{true}$. Then, for its corresponding concept vector $u_k$ we find the set of test samples for which it activates greater than a threshold $\tau$. This threshold was set to half of its maximum activation over test samples. We call this set of samples $C_{top}$. To estimate specificity of the concept vector we compute how many samples in $C_{top}$ lie in the ground-truth set, i.e. $|C_{top} \cap C_{true}|/|C_{top}|$. We find Concept 2 ("Hot dog") to be most monosemantic with 100\% specificity. For Concept 20 ("Black dog") too, we found high specificity of 93.3\%. For concept 15 ("Bull dog") we observed the lowest specificity of 50\%. This concept also activated for test images with toy/stuffed dogs. Interestingly, the multimodal grounding of concept 15 already indicates this superposition with maximum activating samples also containing images of 'toy dogs'. Concept 6 ("Brown/orange dog") is somewhere in between, with 76\% specificity. This concept vector also activated sometimes for dark colored dogs, which wasn't apparent from its multimodal grounding. In summary, your expectation of superposition/polysemanticity is fair. Prominent or distinct semantic concepts seem to be captured by more specific/monosemantic concept vectors, while more sparsely present concepts seem at risk to be superposed resulting in a more polysemantic concept vector capturing them. We can add this discussion in appendix. **Qualifying certain claims:** We accept your concerns about the three claims/statements. We will modify them to qualify their extent and scope. **Powerful language model + Generalization to other LMMs:** We are pleased to share that we conducted new experiments LLaVA-v1.5. It uses Vicuna-7B (closely related to LLaMA) as the language model. We are able to extract meaningful multimodal concepts and achieve quantitatively consistent results as for DePALM. The global response presents more details regarding the same. **Absolute activations in Eq. (5):** We wrote Eq. (5) this way so that it remains applicable even for methods without the constraint for non-negativity of activations (eg. PCA). For Semi-NMF it indeed does not make a difference. **Trialling gradient-based feature visualisation:** This is a fair suggestion and a useful future direction to build upon. For now, we have explored generating saliency maps with LLaVA (not via gradients) for concept activations by computing the inner product of concept vector $u_k$ with all visual token representations from corresponding layer $L$, i.e. $u_k^T [h^1_{(L)}, ..., h^{N_V}_{(L)}]$. This vector of size $N_V =576$ is reshaped to 24 $\times$ 24 and resized to original input. This is feasible for LLaVA as all visual token representations fed to the language model preserve their patch based identity. Sample visualizations are available in 1-page PDF (Fig. 2) in global response. **Typo in Fig. 1:** Thanks for pointing it out. We'll correct it. --- Rebuttal Comment 1.1: Comment: Thank-you for addressing all my comments, the additional analyses are very interesting. I’m happy with all the responses to my queries and look forward to the final version of the manuscript. I have one additional comment I’ve made below: **Qualitative analysis for feature superposition/specificity of concept vectors** The polysemanticity analysis is very interesting. Thank-you for taking the time to conduct this analysis, I think it would be a useful discussion point in the appendix. The analysis you have presented considers the specificity of concepts related to a given token (“dog” in this instance). I would be interested in understanding the extent to which the concepts for one token are polysemantic for other tokens e.g. is the “Hot dog” concept active for tokens other than “dog”. I don’t expect you to conduct any additional analyses or to edit the manuscript but just wanted to raise this point as a potential future avenue of research. --- Reply to Comment 1.1.1: Comment: Thanks for the rebuttal acknowledgement! Happy to address all your questions! For future development, we'll certainly keep your point about polysemanticity of concept vectors to other tokens under consideration .
Summary: This paper proposes a new approach to understand multimodal concepts learned in LLMs with visual prefixes. To do so, the authors propose a dictionary learning-based approach that decomposes the representation of a word token in the product of two low-rank matrices via Semi-NMF, one representing the concepts and the other representing the activations for a given token. The authors evaluate the representation of the DePALM model on 8 common objects (e.g., dog, bus, cat) showing qualitatively good results, as well as better auto-eval metrics than with related baselines constructed by the authors. Strengths: Originality: The paper proposes a novel method to interpret concepts in a multimodal LLM by grounding them in both text and visual spaces. For a given concept, it creates a representation matrix from image–caption pairs, which is then decomposed into two low-rank matrices using Semi-NMF. Quality: The authors show that extracted concepts for a given token can be interpreted both visually and textually through qualitative examples. Quantitatively, the method outperforms other related baselines introduced by the authors. Clarity: The paper is relatively well written, although some parts (eg, Section 3.4, 4.0 and 4.1) feel very dense and require more time to be processed. Significance: The paper provides interesting insights in the representation of a given word token processed by a frozen LLM that is augmented with image understanding via visual prefix learning. Weaknesses: 1. My main concern is in studying the representation of only eight tokens. It would be interesting to study more words and see if there are any interesting takeaways. These could include the 80 COCO classes, and additional rarer tokens. 2. The use of Semi-NMF in L165-167 is unclear. Later on, we find out that it works better than other decompositions, but I was left wondering how the authors came up with Semi-NMF in Section 3.4. 3. It would have been useful to compare the proposed approach with some current interpretability methods that could apply here (or at least discuss why they might not be used here). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Are there any blockers to perform a larger-scale (wrt tokens) analysis? 2. Interpretability methods like ROME use the last token of a given word – why do you use the first token? 3. Does the method actually work for multi-token words (i.e., words that are split into multiple tokens by the tokenizer)? If so, are any of your 8 words in that category? 4. Please double-check your references, some of them appear more than once. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the interesting feedback and positive comments: **Computational scaling for large number of tokens:** Our experiments use single GPU. The BERTScore evaluation does not scale well. It can take upto 3-4 hours to evaluate all baselines on some target tokens (COCO). The representation extraction process scales fine with DePALM (upto 15 minutes for some tokens), but poorly with models like LLaVA (upto 3-5 hours for some tokens). The core of the method itself, dictionary learning and multimodal grounding, is computationally cheap. **Experiments with more tokens:** We didn't experiment earlier with more because of expensive BERTScore evaluation. However, we have now conducted experiments with 30 additional COCO classes (apart from 8 in the paper) for CLIPScore and Overlap , with identical hyperparameters. We considered these COCO classes based on the criteria that these are single token words with at least 40 predicted test samples by $f_{LM}$. Single token criterion is to keep our experimental setup consistent. Lower bound criterion on test samples is to ensure average test CLIPScore is reliable for each target token. We report the macro average of test CLIPScores and overlap over the 30 extra target tokens in tables below. We'll add the detailed results in appendix for all baselines. We obtain results consistent with those in main paper, with Semi-NMF extracting concepts with good balance between high CLIPScore and low overlap. | Metric (Macro Avg)| Rnd-Words | Noise-Imgs | Simple | Semi-NMF |GT-captions| |:---:|:--:|:---:|:---:|:---:|:---:| | test CLIPScore $(\uparrow)$ | 0.515 | 0.521 | 0.625 | 0.630 | 0.762 | | Metric (Macro Avg) | Simple | PCA | KMeans | Semi-NMF | |:--:|:--:|:---:|:---:|:---:| | Overlap $(\downarrow)$ | 0.443 | 0.050 | 0.451 | 0.155 | **Application to multi-token words:** It is an interesting question. None of our evaluated tokens were in this category. However, our approach can be easily adapted to this. In particular, we extract representation of last token from first prediction of the multi-token sequence. The other aspects of the method remain unchanged. While there can also be other viable strategies, the rationale behind this adaptation is that the last token of our sequence of interest can also attend/combine representations from previous tokens in the sequence. We add below results for such examples. Detailed results will be added in appendix): |Metric|Token |Rnd-Words|Noise-Imgs|Simple|Semi-NMF|GT-captions| |:---:|:---:|:--:|:--:|:--:|:--:|:--:| |test CLIPScore $(\uparrow)$|traffic light|0.516 ± 0.03|0.525 ± 0.03|0.664 ± 0.06|0.634 ± 0.05|0.744 ± 0.04| |test CLIPScore $(\uparrow)$|cell phone|0.542 ± 0.04|0.547 ± 0.03|0.598 ± 0.04|0.598 ± 0.05|0.765 ± 0.06| |test CLIPScore $(\uparrow)$|stop sign|0.533 ± 0.03|0.549 ± 0.03|0.617 ± 0.08|0.616 ± 0.05|0.775 ± 0.04| | Metric | Token | Simple | PCA | K-Means | Semi-NMF | |:--:|:--:|:---:|:---:|:---:|:---:| | Overlap $(\downarrow)$|traffic light|0.704|0.050|0.579|0.174| | Overlap $(\downarrow)$|cell phone|0.623|0.051|0.746|0.164| | Overlap $(\downarrow)$|stop sign|0.461|0.058|0.704|0.109| **Motivation to use Semi-NMF:** The idea to consider Semi-NMF was inspired from the effectiveness of NMF for various interpretability applications ([1-4]). Given the positive and negative values in $h^p_{(L)}$, it was not possible to effectively apply NMF. Thus, Semi-NMF arose as a natural option by relaxing non-negativity on $\mathbf{U}$. We expected the constraint to only positively combine the dictionary elements should still be useful from an interpretability perspective. Theoretical connections between Semi-NMF being generalized version of K-Means [5] also supported our confidence. We'll expand Sec 3.4 to include some of the motivation for more clarity. We'd also like to add that we did consider all three approaches (PCA, KMeans, Semi-NMF) initially and qualitatively found Semi-NMF dictionaries most balanced (diverse and multimodally meaningful). **Applicability of other current interpretability methods:** Related works (Sec. 2) discuss in detail the applicability of previous CAV-based approaches and previous approaches to understand VLMs/LMMs. Within general interpretability literature, the closest related methods to CAV based concept extraction are input autoencoder-based concept methods or concept bottleneck models. However these are used almost exclusively as by-design interpretable networks and thus out of scope for understanding pretrained models. We can add a brief discussion about them in Sec. 2. **Representations extraction + Relation to ROME:** We selected representation from first position where target token is predicted following Schwettmann et al. [6] who analyzed neurons for a given caption at the first predicted noun. In practice, what is essential is extracting representations where the target token is predicted. In this regard we are not in conflict with ROME but rather aligned. ROME analyzes factual associations of the form (subject, object, relation). The input prompt to the language model consists of the subject and relation. The model predicts the next token which is expected to be the object and thus ROME also analyzes representation at a position where their token of interest is predicted. **Repeated references:** Thanks. We'll correct them. [1] G. Trigeorgis et al. "A deep semi-nmf model for learning hidden representations." ICML 2014 [2] J. Parekh et al. "Listen to interpret: Post-hoc interpretability for audio networks with nmf." NeurIPS 2022 [3] T. Fel et al. "Craft: Concept recursive activation factorization for explainability." CVPR 2023 [4] YT. Guo et al. "The rise of nonnegative matrix factorization: algorithms and applications." Information Systems 2024. [5] CHQ. Ding et al. "Convex and semi-nonnegative matrix factorizations." IEEE TPAMI 2008 [6] S. Schwettmann et al. "Multimodal neurons in pretrained text-only transformers." ICCVW 2023 --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and clarifying my doubts. I will keep my positive evaluation of the paper, and follow on the discussion with the other reviewers. --- Reply to Comment 1.1.1: Comment: Thanks for the rebuttal acknowledgement! We're glad to address your doubts.
Summary: In this paper, the authors propose a framework for interpreting LMMs. Specifically, they introduce a dictionary learning-based approach applied to the representation of tokens. The elements of the learned dictionary correspond to the proposed concepts. These concepts are semantically well-grounded in both vision and text. Strengths: Using concepts to interpret large multimodal models is a promising idea. Weaknesses: 1. I'm sorry, I don't fully understand this field, so I will lower my confidence score. 2. The notation system is quite confusing. I suggest the authors reorganize it for better clarity. 3. The paper directly applies the concept activation vector (CAV) to Large Multimodal Models, but it does not explain the benefits of doing so or why this approach is valid. Technical Quality: 2 Clarity: 2 Questions for Authors: The experiments in the paper are relatively few. I suggest conducting further research on the interpretability of multimodal large models such as LLaVA and MiniGPT-4. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: yes, the author explains the limitations of their study and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We add our response for each point below: **Notation system:** We will try our best to reorganize some notations for clarity. We remain open to incorporate any particular suggestions. **Benefits of CAV-based approach for LMMs:** Concept based explainability approaches in general are preferred as they highlight what semantic representations a model extracts rather than which regions of input are important for a model [1]. In order to gain understanding about internal representations of an LMM, CAV-type approach were the only viable option for concept extraction as the other types of concept explainability methods (eg. Concept bottleneck models) focus on training interpretable networks by design. Furthermore, the effectiveness of recent works ([2]) in learning concepts dictionaries for CNNs also motivated us to develop a concept extraction approach that can be applied to LMMs. **New experiments:** We have conducted new experiments on LLaVA. We are able to extract meaningful multimodal concepts and achieve quantitatively consistent results as for DePALM. The global response and 1-page pdf present more details regarding the same. [1] J. Colin, T. Fel, R. Cadène, and T. Serre. "What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods." NeurIPS 2022. [2] T. Fel, V. Boutin, L. Béthune, R. Cadène, M. Moayeri, L. Andéol, M. Chalvidal, and T. Serre. "A holistic approach to unifying automatic concept extraction and concept importance estimation." NeurIPS 2023 --- Rebuttal Comment 1.1: Comment: Dear reviewer, We have incorporated your complete feedback in our rebuttal and sincerely hope it addresses all the concerns.
Summary: The authors propose using dictionary learning to extract concepts from multimodal models and simultaneously ground them in the text and image latent space. They draw on prior work on multimodal neurons and concept activation vectors. The authors provide quantitative results using CLIPScore and BERTScore to measure concept extraction, multimodal grounding, and concept overlap. Strengths: Appears to be mathematically correct and well-grounded in prior work. Well-written and well-illustrated. Weaknesses: Evaluates only a few tokens (dog, bus, train, cat) thoroughly, which seems like not enough when prior work (Goh et al.) studies thousands of concepts. Evaluates only one model, DePALM. Uses two automated metrics, CLIPScore and BERTScore, but no human evaluation. A little worrisome that DePALM uses a CLIP-ViT-L14 and CLIPScore is the primary evaluation for the method. Overlaps significantly with prior research (Goh et al, Kim et al); multimodal neurons have been described in work dating back several years using non-negative matrix factorization. Technical Quality: 3 Clarity: 3 Questions for Authors: In table 2, why is PCA by far the best method for minimizing overlap between learned concepts? Presumably because it’s PCA on only these five concepts, meaning that variance is maximized between only the five concepts and not between the dictionary more broadly? Do you observe any evidence of polysemanticity, as reported in prior work including Goh et al.? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I don’t see many negative societal implications of this research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address their concerns pointwise below: **Evaluates only on DePALM:** We have now also conducted experiments on LLaVA. We are able to extract meaningful multimodal concepts and achieve quantitatively consistent results as for DePALM. Further details regarding the same are available in the global response and 1-page PDF. **Uses two automated metrics, but no human evaluation:** Human evaluation would be a good way to further evaluate the performance of our approach. The current paper was rather focused on developing dictionary learning for LMMs, extracting multimodal concepts through dictionary learning, and studying various methods for this goal in a structured way. To the best of our knowledge, the feasibility of these aspects has not been explored before, and was one of the main concerns of the paper. Nevertheless, we would like to consider human evaluation in future developments. **Using CLIP as visual encoder and CLIPScore as metric:** We believe the meaningful grounding is influenced more by the language model. We conducted experiments with two non CLIP visual encoders to support this hypothesis. Please find further details in our global response. **Overlap/difference with prior research (Goh et al., Kim et al.) and multimodal neurons literature:** We discuss both Goh et al., and TCAV (Kim et al.) in Sec. 2 along with our differences with them. Yes, the notion of multimodal neurons has been introduced previously. However, we respectfully disagree with the assessment that there is a significant overlap with it. Our approach performs dictionary learning on internal representations of a LMM, to extract a set of concepts about a target token. This methodology is quite unrelated to work in (Goh et al.) that analyzes activations of individual neurons in CLIP ResNet encoder (final conv layer) for various input images, to determine what conceptual information in image they activate for. In general, dictionary learning based concept extraction approaches and individual neuron analysis approaches are methodologically different but complementary. They are complementary approaches to analyze internal representations of a network. However, dictionary learning methods discover directions in the representation space useful to decompose data representations, while neuron analysis approaches study activation patterns of individual neurons in detail. We list some more significant differences with Goh et al.: - The ‘multimodality’ discussed in Goh et al. is different from ‘multimodality’ here or notion of multimodal neurons in Schwettmann et al. In particular, Goh et al. analyze neuron activations only for images where certain conceptual information can be present in different forms/modalities (eg. celebrity as a face in the image, as caricatures in image, as text in the image etc.), while the multimodality here or in Schwettman et al. concerns with explicit grounding of a concept vector or neuron in vision and text. - The two papers analyze very different types of models (LMMs vs CLIP-ResNet encoder) with different architectures, training objectives and tasks they solve. **Low overlap for PCA:** PCA is best for minimizing overlap possibly because its concept vectors by design are orthogonal. Thus, when decoded, they typically yield different sets of grounded words. We would also like to state for clarity that PCA is not learnt for just five concepts, nor do the target tokens ('dog', 'bus', 'cat', 'train') represent concepts of our dictionary. We apply PCA (or any dictionary learning method) to learn a different dictionary of $K=20$ concept vectors for each target token separately. Thus the overlap for dictionary learning methods is reported separately for each token and calculated between 20 concepts learnt for it. **Evidence of polysemanticity:** Our preliminary qualitative analysis suggests that concept vectors for prominent and distinct semantic concepts seem to be more monosemantic (eg. 'hot dog', 'black dog'). We also found concept vectors which are more polysemantic in nature, and capture more sparsely present semantic concepts. Such concepts might be identifiable in some cases while extracting the multimodal grounding. For instance, row 8, left column in Fig. 7 illustrates a concept discovered for 'Dog' token that activates for 'bull-dogs', 'pugs' but also for 'toy/stuffed dogs'. More details about the analysis can be found in our comment to Reviewer 17ab on feature superposition. --- Rebuttal Comment 1.1: Comment: Dear reviewer, We have incorporated your complete feedback in our rebuttal and sincerely hope it addresses all the concerns.
Rebuttal 1: Rebuttal: We want to thank all the reviewers for their great interest and useful feedback. We address most reviewer comments individually. In this global post we would like to address two key concerns, each raised by at least two reviewers: 1. **New experiments on LLaVA (Reviewers Go37, XzMc, 17ab):** We have now conducted experiments on LLaVA-v1.5 model. The model uses a CLIP-ViT-L-336px visual encoder, a 2-layer linear mapper that outputs $N_V=576$ visual tokens, and a Vicuna-7B language model (32 layers). We use identical hyperparameters as for DePALM ($K=20, \lambda=1, L=31$). The **attached 1-page PDF** contains the figures/tables regarding this experiment. For now, we report (i) the test CLIPScore for top-1 activating concept, for Rnd-Words, Noise-Imgs, Simple and Semi-NMF, GT-captions (Tab. 1 in 1-page PDF) and (ii) Overlap score for non-random baselines (Tab. 2 in 1-page PDF). We also show qualitative examples of concepts extracted for token `Dog' (Fig. 1 in 1-page PDF). More detailed quantitative and qualitative results will be added in appendix. Quantitatively, we obtain **consistent results** to those observed for DePALM. Semi-NMF extracts most balanced concept dictionaries with high multimodal correspondence and low overlap. Qualitatively too, the method functions consistently and is able to extract concepts with meaningful multimodal grounding. 2. **CLIP visual encoder and CLIPScore as metric (Reviewers 8A1z, Go37):** The meaningful concept representations, we believe, are more due to their alignment with language model representations than the visual encoder. To support our hypothesis, we conducted experiments with 2 different DePALM models with frozen visual encoders different from CLIP, a frozen ViT-L encoder trained on ImageNet [1] and another frozen ViT-L trained as a masked autoencoder (MAE) [2]. Both LMMs use the same pretrained OPT-6.7B language model. Collectively, the three encoders (including CLIP) are pretrained for three different types of objectives. We use Semi-NMF to extract concept dictionaries, with all hyperparameters identical. The results are reported in tables below. 'Rnd-Words' and 'GT-captions' references are reported for each LMM separately, although they are very close to the ones in main paper. The "ViT-L (CLIP)" baseline denotes our system from the main paper that uses CLIP encoder. Importantly, we still obtain similar test CLIPScores as with CLIP visual encoder. The concept dictionaries still possess meaningful multimodal grounding. The various concepts also tend to be similar as for CLIP visual encoder. We'll add the qualitative visualizations and complete quantitative results for the non-CLIP encoders in appendix. | Token | Rnd-Words | ViT-L (ImageNet) | ViT-L (CLIP) | GT-captions | |:-------------:|:----------:|:---------------:|:--------------:|:-----------:| | Dog | 0.514 ± 0.05 | 0.611 ± 0.09 | 0.610 ± 0.09 | 0.783 ± 0.06 | | Bus | 0.498 ± 0.05 | 0.644 ± 0.07 | 0.634 ± 0.08 | 0.739 ± 0.05 | | Train | 0.494 ± 0.05 | 0.617 ± 0.07 | 0.646 ± 0.07 | 0.728 ± 0.05 | | Cat | 0.539 ± 0.05 | 0.628 ± 0.07 | 0.627 ± 0.06 | 0.794 ± 0.06 | | Token | Rnd-Words | ViT-L (MAE) | ViT-L (CLIP) | GT-captions | |:-------------:|:----------:|:--------------:|:------------:|:-----------:| | Dog | 0.515 ± 0.05 | 0.602 ± 0.07 | 0.610 ± 0.09 | 0.784 ± 0.06 | | Bus | 0.501 ± 0.05 | 0.627 ± 0.07 | 0.634 ± 0.08 | 0.737 ± 0.05 | | Train | 0.483 ± 0.06 | 0.618 ± 0.08 | 0.646 ± 0.07 | 0.726 ± 0.05 | | Cat | 0.541 ± 0.04 | 0.629 ± 0.09 | 0.627 ± 0.06 | 0.795 ± 0.06 | [1] A. Dosovitskiy et al. "An image is worth 16x16 words: Transformers for image recognition at scale." ICLR 2021 [2] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, R. Girshick. "Masked autoencoders are scalable vision learners." CVPR 2022 Pdf: /pdf/3c3bd7c9fb746657967944622b3ae29c7d9ed579.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: Authors propose to look at the representation of chosen concepts, in multimodal models. They test different automated methods to learn decompositions of a token representations (which they then linearise in a dictionary of concepts). They also provide quantitative and qualitative analysis of a few examples, showing that this method clusters different human-interpretable meaning or usages of a same concept, for both visual and textual modalities. Strengths: Great qualitative analysis Multiple representation decomposition algorithms are tested, including more well known PCA or very specific but better suited Semi NMF, which is an adaptation to practical realities. Evaluation of the method is quite extensive, using a wide range of relevant state of the art interpretability tools to verify their work theoretically and practically. Effort is made to put quantitative metrics on concepts and to evaluate very abstract semantic analysis. Provided examples are clear and strike curiosity. Weaknesses: I worry there is a circularity when evaluating with ClipScore a frozen model with a CLIP component. (and even with BERT scoring, where you evaluate an interpretability metric with a non interpretable system of somewhat similar complexity). (Minor) Very recent works are relevant to the interpretability of LLM / Multimodal models (Templeton, et al., "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet", Transformer Circuits Thread, 2024 AND Gao, Leo, et al. "Scaling and evaluating sparse autoencoders." arXiv preprint arXiv:2406.04093, 2024) and could be cited. Line 223 "the correspondance between a the image" is probably a typo and should be fixed Technical Quality: 4 Clarity: 4 Questions for Authors: Mostly a personal curiosity question: Are there any intuitions from this work allowing to disentangle the data effect of the apparition of the studied concepts from the model/training effect? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Sample experiments are done on very simple words and concepts, available at pre-training for the tested model. How well this method would adapt to more complex concepts and images is not studied, but is where a lot of interpretability becomes necessary. This is nonetheless a necessary first step in an exciting direction. Do you have an intuition on how well this method would scale to this? Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and intriguing questions. We respond to them pointwise below: **CLIPScore and LMM with frozen CLIP encoder:** We believe the meaningful grounding is influenced more by the language model. We conducted experiments with two non CLIP visual encoders to support this hypothesis. Please find further details in our global response. **Very recent works on LLM interpretability:** Thanks, we make a note of them. **Line 223 typo:** Thanks for pointing out the error. We’ll fix it. **Disentangling data effect/model effect:** Indeed, the question of the impact of the available data vs the model architecture/training, w.r.t the concepts, is interesting. Qualitatively, in our initial experiments we observed cases where the type of input data clearly affected the quality of extracted concepts. For instance, when extracting concepts for token 'light', we observed most concepts only contained images of 'traffic light', even though we expected more diversity. This was because most input images in our dataset with predicted token 'light' were of 'traffic light'. On a separate note, it could be worth tracking the evolution of the learnt concepts of our approach during training. This could possibly help to disentangle the training/optimization effect. But we leave that as a future research direction to pursue. **Expanding to more complex concepts:** Yes, this is indeed an important direction we are considering to expand in future. One possible way to extend to more complex or abstract words could be to associate the complex word to a set of target tokens. We could then consider decomposing representations for the all tokens in the set simultaneously. Our first instinct is to use a linear decomposition approach. Nevertheless, if this is not sufficient, non-linear decomposition methods could also be explored. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will maintain my score, as I continue to believe it is an interesting and well developed paper, and will follow discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for the rebuttal acknowledgement!
null
null
null
null
null
null
Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
Accept (poster)
Summary: This paper considers the problem of training neural operators to solve forward partial differential equation (PDE) problems in data-limited settings. The paper motivates this setting with the large data simulation cost incurred by these methods when generating a large training set of PDE solutions. The paper suggests a self-supervised pretraining method derived from self-supervised learning tasks popular in the computer vision community. This pretraining method only requires a dataset of initial conditions, PDE coefficients, and physical parameters; it does not require any PDE solutions. The paper shows the benefit of their method by showing their pretrained models outperform randomly-initialized models or models pretrained on general computer vision tasks; there are many evaluations across a range of PDE problems and real-world experimental data. In addition, the paper considers an "out-of-distribution" (OOD) task where PDE coefficient settings are changed at test time and suggests an "In-Context Learning" (ICL) strategy, which uses the trained model to adaptively combine examples of ground-truth solutions. Strengths: This paper addresses important challenges in the field of neural operators and provides well-motivated solutions to these problems. There are a large number of experiments to quantify the effect of these solutions. I will list individual strengths below: - The paper considers important problems in the field of neural operators: reducing the dependence on training data generated by PDE solvers, adapting trained models to out-of-distribution problem settings, and quantifying performance on complicated real-world datasets. - The experimentation in this paper is very thorough. The paper shows experiments on PDE problems across a broad range of physical parameters as well as real-world datasets ERA5 temperature, ScalarFlow, and Airfoil. - The paper's pretraining methods are relatively simple and can be applied to a broad class of problems. It is easy to imagine these methods being used by other members of the community. - The paper's proposed ICL method is a creative emulation of attention mechanisms that is applicable to general neural operator methods. Weaknesses: I have concerns about the experimental design and worry the experimental results are overstated in the text. The claims about reducing data simulation costs are particularly overstated, and I believe the paper could be improved by contracting its stated goals to focus on real-world problem settings where data is scarce, and simulation methods are unavailable. I will list individual weaknesses below: - The paper motivates the limited-data regime due to the computational burden of generating PDE solutions. The stated goal is to "reduce data simulation costs". However, the method suggests a pre-training procedure that trains full-sized models on $\geq 40,000$ samples for $500$ epochs. This also introduces a large computational cost, and this added cost of pre-training is never addressed. Without accounting for this added cost of pre-training, the paper's claims about reducing data simulation costs feel vacuous. - The paper fails to contextualize their results on the PDE benchmarks with meaningful baselines. Figures 3 and 5 only consider random initialization of neural network weights and pretrained on a general computer vision task as baselines. This ignores the simple baseline of generating extra samples with the same computational cost as pretraining. - Similarly, the paper fails to contextualize the results of the OOD experiment with meaningful baselines. The description of the OOD experiment indicates that the methods in citations [76, 77] are designed to solve the same problem. When reading the text, it seems like these methods could be used as baselines for comparison in Figure 6. If this is not the case, because [76, 77] are solving a weaker version of the problem or require significantly more data or compute resources, this is an advantage of the proposed method and should be highlighted in the paper. - The amount and type of unlabeled pretraining data in the ERA5 and ScalarFlow experiments are unusual for a self-supervised learning setting, which raises questions about the significance of this experiment. I have written specific questions about this below. - The results of the OOD experiment indicate the OOD task is still a difficult problem for neural operators, but the text does not discuss this difficulty. Without such a discussion, the significance of this experiment and the ICL method is unclear. Technical Quality: 2 Clarity: 3 Questions for Authors: General: - Line 132 states "initial snapshot $u_0(x)$ ... is sufficient for defining PDE systems." Could the authors explain or qualify this statement? The definition of a PDE system must also include a partial differential equation with parameters, a domain, and possibly boundary conditions like $u_0(x)$. - Line 185 states "Numerical solutions of PDEs are expected to exhibit invariance to filtering blur or different resolutions of inputs." Could the authors explain or qualify this statement? There are many examples where qualitative features of PDE solutions depend on the discretization level. One example is the Helmholtz problem considered with high wavenumber and forcing term with high-frequency oscillations. As the resolution of the input forcing term and solution increase, higher-frequency oscillations in the solution can be resolved. For the Poisson equation, one can show analytically that applying a blur to the forcing term will change the solution. - The experiment in Appendix E shows encouraging results that the general-purpose self-supervised learning method MoCo v2 does not perform well on the ERA5 task. I believe that if one were to combine Figure 8 and Figure 5a, the result would show that pretraining with MoCo v2 produces almost no improvement over random initialization. This seems like a nice result and wonder if the authors have intuition about why this is the case. Questions about PDE experiments: - What are the inputs and outputs of the model for each experiment? As I was reading this paper, I was constantly asking this question and spent time jumping around in the appendix looking for details. Would it be possible to collect all of this information in a table for ease of reference? - For each PDE experiment, what is the distribution of forcing functions (Poisson, Helmholtz) or initial conditions (Reaction-Diffusion, Navier-Stokes)? - How does the "diffusion" physics parameter in Table 1 relate to the definition of the Poisson equation in Equation 1? Are elements of K randomly drawn from this range? - How does the Reynolds number relate to the definition of the Navier-Stokes equations (Equation 5)? - Are there spatial boundary conditions for the Reaction-Diffusion equation? - What do the error bars in Figures 3 and 4 represent? Are they showing the min/max of the three independent random seeds? Questions about real-world data experiments: - The ERA5 and ScalarFlow datasets have temporal observations; the learning task is to predict the state of the system at time $t+16$ given a sequence of state observations at times $[t, t+15]$. What is the unlabeled data used in pretraining for these cases? Is it sequences of state observations at times $[t, t+15]$, for a set of different values $t$? - For these experiments, how does using snapshots at $t > 0$ agree with the definition of unlabeled PDE data for time-dependent systems in Section 3.1.1? - These experiments use a large number of snapshots to construct the pretraining dataset. In the ERA5 example, this is enough data to build $14,600 \times \frac{75}{100} \times \frac{1}{16} \approx 680$ labeled samples. In the ScalarFlow example, this is enough data to build $70,200 \times \frac{80}{100} \times \frac{1}{16} \approx 3500$ labeled samples. Could the snapshots used in the pretraining dataset be used in a supervised learning framework? If so, why doesn't the “random init” baseline have access to this data? - Are there examples of simulation or experimental settings where it is reasonable to assume we can easily gather snapshots of a system state, but these snapshots could not be used in a supervised neural operator framework? Mentioning such examples in the text would help readers understand the significance of this experiment. Questions about the ICL method and OOD experiments: - Do the authors have any intuition for why the Poisson and Helmholtz OOD performance is poor, and the Navier-Stokes performance is better? Providing information about the types of PDE problems amenable to in-context learning would be a valuable contribution and would increase the impact of this experiment. - Is there a connection between the expected smoothness of a particular PDE’s solution and the performance of the ICL method on that PDE? For example, we know that the solution of the Poisson equation should be relatively smooth, but the proposed ICL method outputs highly non-smooth estimates. - I see that ICL with more demos improves the scale of the prediction. Are there problem settings where the scale of the solution is of particular interest? Discussing such settings may better contextualize the results on these challenging OOD tasks. As they continue their work, I want to ensure the authors are aware of this preprint, which provides a fast method for generating synthetic data without a PDE solver, applicable to certain PDE problems. Fitting the neural operator to synthetic data could be a useful pretraining task if the PDE is known. I am NOT asking the authors to compare their method with this preprint: E. Hasani and R. A. Ward, “Generating synthetic data for neural operators.” arXiv, Jan. 04, 2024. Available: http://arxiv.org/abs/2401.02398 Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have addressed some limitations of their work. As mentioned above, I believe the computational expense of the pretraining stage is a limitation of this work which is not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank the time and effort of reviewer u3i6 in reviewing our paper! > **Q1**. Without accounting for this added cost of pre-training, the paper's claims about reducing data simulation costs feel vacuous. Thanks for this great question! 1. We further provide extra experiments on joint unsupervised pretraining on multiple PDEs in **Figure 1 of our attached PDF response**. We can see that joint pretraining can further improve the performance of fine-tuning on different PDEs. This means fine-tuning on different PDEs can share the same pretrained checkpoint, which reduces the total computational costs. We will include this figure in our camera-ready version. 2. Recent works, like [1], also only considered per-PDE pretraining, even though their work is claimed to target SciML foundation models. Moreover, previous unsupervised pretraining works in computer vision also focused on a single dataset (like ImageNet), and they did not compare the costs of pretraining versus collecting more downstream samples. 3. The most important contribution of our paper is to define and leverage **unsupervised data** in PDEs and design **unsupervised pretraining** on top of it. Although pretraining is studied in scientific machine learning, **unsupervised pretraining is largely underexplored in the community**. Our work is the first kind, and will inspire the community in this research direction. In sum, we totally agree that reducing the total computation costs (training + sample collection) is important, and our promising joint pretraining results (**Figure 1 in our attached PDF response**) encourage us to keep working on this direction, but this should not weaken the core contribution and target in this paper. > **Q2**. Baseline of generating extra samples with the same computational cost as pretraining Actually, since we studied different numbers of samples in Figure 3, we can compare our method with baselines trained with extra generated samples (as indicated by the red arrows showing the savings in the number of samples). > **Q3**. Contextualize the results of the OOD experiment with meaningful baselines, such as [76, 77] [76, 77] by Liu et al. are related works, but their works cannot be directly compared with ours because: 1) they did not study 2D time-dependent PDEs; 2) they only scaled up to 5 demos, which is fewer than ours. We will highlight this more in our camera-ready version. Meanwhile, we further provide another baseline in the OOD in **Figure 3 in our attached PDF response**. In our paper, our ICL leverages the output (prediction) of neural operators to find similar samples (lines 251–255). In this new baseline, we try to use features extracted by the backbone of the neural operator (high-dimensional features before the final output layer) to find similar samples. As we can see, in general, this baseline is worse than our original ICL method, indicating that the final output of the neural operator can more accurately indicate true similar samples. > **Q4**. The results of the OOD experiment indicate the OOD task is still a difficult problem for neural operators, but the text does not discuss this difficulty. Thanks for the suggestion! Based on our visualization in Figure 14 in Appendix K, we believe OOD is challenging for neural operators because of: 1) Significantly different patterns of solutions under different physical parameters. 2) Different value ranges of solutions under different physical parameters. We will include this discussion in our camera ready. > **Q5**. Line 132 states "initial snapshot u0(x) ... is sufficient for defining PDE systems." Could the authors explain or qualify this statement? For a specific PDE with fixed physical parameters and a spatial-temporal domain, if we are given the initial condition and boundary condition, the dynamics of the PDE will be determined since there is no stochasticity in the PDE. > **Q6**. Line 185 states "Numerical solutions of PDEs are expected to exhibit invariance to filtering blur or different resolutions of inputs." Could the authors explain or qualify this statement? Our superresolution (SR) objective shares the same motivation as recent works on enhancing scientific data resolutions [2], which is to train SciML models to preserve the inherent physical properties of scientific data while learning the SR process. Given a ***specific*** input distribution, after memorizing the input data and fitting the SR objective, solutions of neural operators are expected to preserve the inherent physical properties and exhibit invariance to filtering blur. Moreover, in the context of pretraining, both objectives – MAE and SR – aim to further extract meaningful representations of input samples from current data distributions and provide a better-adapted initialization of network weights. > **Q7**. The experiment in Appendix E shows encouraging results that the general-purpose self-supervised learning method MoCo v2 does not perform well on the ERA5 task. ... This seems like a nice result and wonder if the authors have intuition about why this is the case. The main reason MoCo v2 does not perform well is that it was originally designed for object-centric images, and the loss is applied at the instance level (whole image) rather than at the pixel level (per spatial location), making it suboptimal for solving PDEs. --- Rebuttal 2: Title: Continued Responses (1) Comment: > **Q8**. What are the inputs and outputs of the model for each experiment? As I was reading this paper, I was constantly asking this question and spent time jumping around in the appendix looking for details. Would it be possible to collect all of this information in a table for ease of reference? Thanks for this suggestion! Here we include another table of details (input/output shapes). Due to the page limit, we will try to squeeze this table into our camera ready. | | Input | Input Shape | Output | |:---:|:---:|:---:|:---:| | Poisson | source function (f), three diffusion coefficients | CxHxW (C = 4) | potential field (u) | | Helmholtz | source function (f), wavenumber | CxHxW (C = 2) | wave function (u) | | NS (FNO) | vorticity (w) | TxHxW (T=33) | vorticity (w) at T+1 | | NS (PDEBench) | velocity (Vx, Vy), vorticity (w) | TxCxHxW (T=15, C=3) | velocity (Vx, Vy), vorticity (w) at T+1 | | RD (PDEBench) | activator (u), inhibitor (v) | TxCxHxW (T=10, C=2) | activator (u), inhibitor (v) | > **Q9**. For each PDE experiment, what is the distribution of forcing functions (Poisson, Helmholtz) or initial conditions (Reaction-Diffusion, Navier-Stokes)? For generating forcing functions or initial conditions, we follow previous works [1, 3, 4] and adopt their numerical solvers. We further design the ranges of physical parameters as listed in Table 1. > **Q10**. How does the "diffusion" physics parameter in Table 1 relate to the definition of the Poisson equation in Equation 1? Are elements of K randomly drawn from this range? The "diffusion" physics parameter in Table 1 determines the eigenvalue of the diffusion coefficient tensor. For more details about the construction of diffusion coefficient tensors, please consider reading the paragraph “PDE coefficient sampling” in Section 3 of [1]. > **Q11**. How does the Reynolds number relate to the definition of the Navier-Stokes equations (Equation 5)? The Reynolds number is defined as $Re=\frac{\rho u L}{\nu}$, where $\nu$ is the viscosity of the fluid ($\rho$ is the density of the fluid, $u$ is the flow speed, $L$ is a constant of the fluid’s characteristic linear dimension). $\nu$ is used in Eq. 5. In other words, a smaller $\nu$ indicates a larger Reynolds number. > **Q12**. Are there spatial boundary conditions for the Reaction-Diffusion equation? Following PDEBench [3], we consider the Neumann boundary condition for Reaction-Diffusion. > **Q13**. What do the error bars in Figures 3 and 4 represent? Error bars in Figures 3 and 4 indicate standard deviations of three independent runs with different random seeds. > **Q14**. The ERA5 and ScalarFlow datasets have temporal observations; the learning task is to predict the state of the system at time t+16 given a sequence of state observations at times [t,t+15]. What is the unlabeled data used in pretraining for these cases? Is it sequences of state observations at times [t,t+15], for a set of different values t? Similar to time-dependent PDEs (reaction-diffusion, Navier-Stokes), which are explained in line 131 and in Appendix A, on ERA5 and ScalarFlow, models are trained on individual snapshots without temporal information. The pretraining, fine-tuning, and test sets are separate without overlap. We will provide clearer explanations in our camera-ready version. > **Q15**. For these experiments, how does using snapshots at t>0 agree with the definition of unlabeled PDE data for time-dependent systems in Section 3.1.1? In time-dependent PDEs, the simulation can start from any snapshot (i.e., the numerical simulation can take any snapshot as the initial condition and continue simulating the dynamics). In fact, in FNO, the actual simulation of Navier-Stokes starts after the fluid roughly exits the chaotic phases (please refer to the GitHub repo called `neuraloperator/physics_informed`, line 34 of `generate_data.py`). This setting is also similar to starting the collection of climate data at any timestep and continuing the collection. The key is that we do not involve any temporal dynamics during our pretraining. > **Q16**. Could the snapshots used in the pretraining dataset be used in a supervised learning framework? If so, why doesn't the “random init” baseline have access to this data? The “random init” baseline can access this data. However, as we consider these pretraining samples as individual snapshots (to simulate the practice where people cannot capture temporal dynamics and can only collect individual spatial climate/flow snapshots), we do not use any temporal information in these samples, which cannot be used for supervised forecasting. --- Rebuttal 3: Title: Continued Responses (2) Comment: > **Q17**. Are there examples of simulation or experimental settings where it is reasonable to assume we can easily gather snapshots of a system state, but these snapshots could not be used in a supervised neural operator framework? Collecting snapshots with temporal dynamics in large-scale scenes is significantly more challenging than collecting individual snapshots without temporal dynamics. We have the following examples, and we will try to include them in our camera ready: 1. Weather forecasting: Continuous monitoring of atmospheric parameters requires a network of weather stations, satellites (e.g. large amount of historical satellite data was leveraged in ERA5), and real-time data processing, whereas a single weather report is simpler and less resource-intensive. Long-term climate monitoring involves maintaining observation networks for years, compared to a one-time collection of current climate conditions. 2. Smoke dispersion studies: Tracking smoke spread over time requires multiple sensors (e.g. five cameras with carefully calibrated intrinsics and extrinsics in ScalarFlow) and continuous data processing, unlike a single air quality measurement at a peak pollution event. 3. Ocean currents: The monitoring demands continuous deployment of buoys and sensors, whereas a single measurement of water conditions can be done with minimal equipment like a handheld sensor or a small research boat. > **Q18**. Do the authors have any intuition for why the Poisson and Helmholtz OOD performance is poor, and the Navier-Stokes performance is better? Is there a connection between the expected smoothness of a particular PDE’s solution and the performance of the ICL method on that PDE? For example, we know that the solution of the Poisson equation should be relatively smooth, but the proposed ICL method outputs highly non-smooth estimates. To understand the results in the OOD setting, we provide a visualization in Figure 14 in Appendix K. We find that the Helmholtz equation in the OOD setting exhibits much more complicated unseen patterns than the Poisson and Navier-Stokes equations, making it challenging to learn. The non-smoothness of the ICL method outputs is mainly due to the uncertainty of neural operators on out-of-distribution (OOD) samples. As the sample-wise similarity is measured by the neural operators’ outputs, which are not accurate, the more severe the OOD setting, the less confident the model’s output and similarity measurement will be. > **Q19**. I see that ICL with more demos improves the scale of the prediction. Are there problem settings where the scale of the solution is of particular interest? Discussing such settings may better contextualize the results on these challenging OOD tasks. Thanks for this suggestion! In numerical simulations or predictions of PDEs, there are settings where the scale or magnitude of solutions is more important than the exact shape/pattern. We will try to include these examples in our camera ready. 1. Heat Transfer: In large-scale systems, the focus might be on overall temperature and extreme values. For instance, in evaluating a cooling system, the key concern might be the peak temperature rather than the detailed temperature distribution. 2. Fluid Dynamics: For applications like aerodynamics, the overall drag or lift force on an object is often more critical than capturing every detail of the flow pattern, such as in airfoil design. 3. Environmental Modeling: The concentration of pollutants at specific locations or total pollutant transport is often more crucial than the exact distribution, such as in groundwater flow studies. > **Q20**. Regarding related work: E. Hasani and R. A. Ward, “Generating synthetic data for neural operators.” arXiv, Jan. 04, 2024. Thanks for bringing this up! We noticed this work after our paper submission. Our method, which works on unlabeled PDE data, is orthogonal to their generation of synthetic PDE data, and the two approaches could potentially be combined into a semi-supervised learning strategy. [1] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023 [2] “SuperBench: A Super-Resolution Benchmark Dataset for Scientific Machine Learning” Ren et al. 2023. [3] “PDEBENCH: An Extensive Benchmark for Scientific Machine Learning“ Takamoto et al. 2022 [4] “Fourier Neural Operator for Parametric Partial Differential Equations” Li et al. 2020 --- Rebuttal 4: Comment: Many thanks to the authors for providing detailed responses to all parts of the review. I have specific comments and questions about a few parts of the response. **Q1 and Q9** I really appreciate the design of this extra experiment, and the results look encouraging. However, the lack of detail about the distribution of forcing functions and source functions still obfuscates the significance of these results. I have briefly read [1] to understand the distribution of forcing functions and source functions. To my understanding, the distribution of Helmholtz and Poisson source functions are the same. Is this correct? **Q2** In my opinion, Figure 3 and the relevant text do not give enough detail to compare the proposed method with the baseline of generating extra samples. This is because there is not enough information to evaluate how long it takes to pretrain the model (I could not find this in Section B3), and there is no information about how long it takes to generate samples for the Helmholtz or Poisson problems. **Q4** Thanks for this explanation. Do you plan to update any of the relevant text in the camera-ready version? **Q6** Thanks for this explanation. The explanation makes sense but is quite different from what appears in the original submission. Do you plan to update this in the camera-ready version? **Q8** Could you also add the shapes of the pretraining inputs/outputs to this table? Thank you for all of the other replies; again, I really appreciate the careful and thorough response. --- Rebuttal Comment 4.1: Title: Thank you very much for your reply! Comment: We truly thank reviewer u3i6 for reading our response and providing the prompt reply! Below are our responses to your further questions: > **Q1**. Distribution of Helmholtz and Poisson source functions. Yes, the source distributions are the same for Poisson and Helmholtz. However, they have different ranges of physical parameters in their input, and they have different numbers of input channels (4 for Poisson, 2 for Poisson). > **Q2**. Pretraining costs and simulation costs. 1. Pretraining costs. FNO: 20 GPU hours. Video-MAE: 18 GPU hours. (GPU: A100) 2. Simulation costs (8192 downstream samples for Figure 3): * Poisson: 0.04 hour * Helmholtz: 7.85 hour > **Q3**. Update new explanations into camera ready. Yes, we promise to update the explanations for both your previous Q4 and Q6. We thank you for this meaningful discussion! > **Q4**. Shapes of the pretraining inputs/outputs. Thanks for the suggestion! We put the shape of our input/output during pretraining into the table below. Since our pretraining is based on reconstruction, the input and output share the same shape. | PDE | Input/Output (Reconstruction) | Shape | |---|:---:|:---:| | Poisson | source function (f), three diffusion coefficients | CxHxW (C = 4) | | Helmholtz | source function (f), wavenumber | CxHxW (C = 2) | | NS (FNO) | vorticity (w) | HxW | | NS (PDEBench) | velocity (Vx, Vy), vorticity (w) | CxHxW (C=3) | | RD (PDEBench) | activator (u), inhibitor (v) | CxHxW (C=2) |
Summary: This paper proposes a pre training strategy for operator learning. They introduce unsupervised training in the case of physical data. 2 architectures are studied: transformers and FNO. Finally, some experiments on different PDEs are conducted to highlight the properties of the model. Strengths: The paper is well-written and explains step-by-step the algorithms and architecture. Several experiments highlight the performances of the model on both simulated and real-world datasets, which is a challenging setting. Moreover, statistics are provided for each experiment. Weaknesses: As mentioned in the paper, masked auto-encoding has already been proposed in CV applications. The contribution however is unclear for me. ICL and masked-auto-encoding are widely known techniques. Moreover, it is difficult to identify why is the unsupervised pretraining performs best than other baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: - For the unlabeled data for time-dependent PDE, how does the PDE ICs are sufficient to identify the PDE? How does the model have access to the PDE parameters? - Is your model trained on multiple physics? Or pretraining are proceeded on one PDE? - On figure3, why are the unsupervised training performing better than training from scratch? How is proceeded this “random init” training? Is it using data with supervised loss? - What is the comparison of other baselines in the OOD setting? - What is the computational cost of training your model compared to other baselines? How many parameters do the baselines contains? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Author have proposed a discussion on the limitations of their paper. However, they did not address societal impact and ecological considerations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank the time and effort of reviewer 6vSy in reviewing our paper! > **Q1**. Masked auto-encoding has already been proposed in CV applications. The contribution however is unclear for me. We would like to clarify and re-emphasize: The most important contribution of our paper is to define and leverage **unsupervised data** in PDEs and design **unsupervised pretraining** on top of it. This is mentioned in our first contribution bullet (lines 66–69) and also in line 7 of our abstract. There are previous works that try to pretrain neural operators on simulated solutions [1, 2, 3], but no previous works tried to reduce the simulation costs by defining and leveraging unlabeled PDE data. That means, although pretraining is studied in scientific machine learning, **unsupervised pretraining is largely underexplored in the community**. Our work is the first kind, and will inspire the community in this research direction. We do not claim that our MAE method is new, and we also provide clear motivation for why we introduce MAE in pretraining neural operators (lines 160–172). > **Q2**. Moreover, it is difficult to identify why is the unsupervised pretraining performs best than other baselines. > (Similar question) On figure3, why are the unsupervised training performing better than training from scratch? We discussed the benefits of our unsupervised pretraining in Figure 4 in Sec. 4.1. There are three reasons: 1. Reduced overfitting, which consistently leads to smaller generalization gaps across all PDEs we studied. 2. Faster convergence, which can accelerate model convergence more than both random initialization and vision-pretrained checkpoints. 3. Meaningful representations, which are extracted from pretrained Video-MAE and are helpful during fine-tuning. > **Q3**. For the unlabeled data for time-dependent PDE, how does the PDE ICs are sufficient to identify the PDE? How does the model have access to the PDE parameters? Our models do not have access to physical parameters. Given the initial condition and boundary condition, the dynamics of a PDE are determined since there is no stochasticity in the PDE. > **Q4**. Is your model trained on multiple physics? Or pretraining are proceeded on one PDE? Our experiments focus on pretraining on one single PDE. However, we also further studied joint pretraining. We provide extra experiments in **Figure 1 in our attached PDF response**. We can see that joint pretraining can further improve the performance of fine-tuning on different PDEs. We will include this figure in our camera-ready version. > **Q5**. How is proceeded this “random init” training? Is it using data with supervised loss? In Figure 3, the “random init” and “unsupervised” models share completely the same fine-tuning settings, and they both use the supervised loss (labeled PDE data) during fine-tuning. The only difference is that “random init” uses random initialization of model weights to start the fine-tuning, while “unsupervised” uses pretrained weights (from our unsupervised pretraining) to start the fine-tuning. > **Q6**. What is the comparison of other baselines in the OOD setting? Thanks for the suggestion! We further provide a baseline in the OOD setting, attached in **Figure 3 in our attached PDF response**. In our paper, our ICL method leverages the output (prediction) of neural operators to find similar samples (lines 251–255). In this new baseline, we try to use features extracted by the backbone of the neural operator (high-dimensional features before the final output layer) to find similar samples. As we can see, in general, this baseline is worse than our original ICL method, indicating that the final output of the neural operator can more accurately indicate true similar samples. > **Q7**. What is the computational cost of training your model compared to other baselines? How many parameters do the baselines contains? * Pretraining costs. FNO: 20 GPU hours. Video-MAE: 18 GPU hours. (GPU: A100) * Model parameters. FNO: 67.1M. Video-MAE: 23.4M. > **Q8**. Societal impact and ecological considerations Our paper improves the data efficiency of neural operators for solving PDEs, which can significantly reduce computational costs and energy consumption associated with high-fidelity numerical PDE simulations. This improvement can lead to more sustainable scientific research by minimizing the environmental footprint of extensive computational experiments. Moreover, by democratizing access to advanced PDE solutions through more efficient pretraining, our method has the potential to accelerate scientific and engineering advancements in the broader community, benefiting society at large. [1] “Lie Point Symmetry Data Augmentation for Neural PDE Solvers” Brandstetter et al. 2022 [2] “Self-supervised learning with lie symmetries for partial differential equations“ Mialon et al. 2023 [3] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023 --- Rebuttal Comment 1.1: Comment: I thank the reviewer for their complete answers to my questions. Based on their answers, I still have other questions/clarifications. - **Q2**: How do you prove these 3 points (experimental). I think fig. 3 and the OOD experiment answer point n1. But how faster is your method compared to others? Is it in term of computational time or number of steps? - **Q3**: I am still not sure to fully understand this point. If your model doesn't have access to the physics and the parameters, how does the model learn the dynamics? Given fixed ICs and/or BCs, several differential operator could be applied to evolve the trajectory. Moreover, you state that you model doesn't have access to the physical parameters. However, on the rebuttal pdf, the legend states that input dimension are different for the reasons of different physical parameters. - **Q4**: thanks for the additional experiment, which I believe is the core question for pre training. - **Q5** thanks for the clarification. This behavior is to be expected since the *pretrained model* have seen more examples than the *random init* one? Are the examples during pertaining the same as the one used during fine-tuning? - **Q8**: I agree with your statement, however, pre training on a single PDE is, I think, very costly, and I would be curious of how long would it take to have a real impact? --- Reply to Comment 1.1.1: Title: Thank you very much for your reply! Comment: We truly thank reviewer 6vSy for reading our response and providing the reply! Below are our responses to your further questions: > **Q2**. How do you prove these 3 points (experimental). We would appreciate it if reviewer 6vSy could kindly read our Figure 4 (main submission) and line 292-303. Three subplots in Figure 4 correspond to our three points. > **Q3**. If your model doesn't have access to the physics and the parameters, how does the model learn the dynamics? Moreover, you state that you model doesn't have access to the physical parameters. 1. In data-driven operator learning, neural operators do not necessarily need to access physical parameters (see [1, 2]). This is mainly because the training PDE data is mostly sampled from a fixed distribution of physical parameters, and the neural operator is trained as a numerical solver for this fixed distribution. This is also why neural operators could perform poorly on out-of-distribution samples. 2. To be more precise, on Reaction-Diffusion and Navier Stokes, FNO and VMAE do not have access to physical parameters. To make this more clear, here we include a table of detailed input/output shapes. Due to the page limit, we will try to squeeze this table into our camera ready. | | Input | Input Shape | Output | |:---:|:---:|:---:|:---:| | Poisson | source function (f), three diffusion coefficients | CxHxW (C = 4) | potential field (u) | | Helmholtz | source function (f), wavenumber | CxHxW (C = 2) | wave function (u) | | NS (FNO) | vorticity (w) | TxHxW (T=33) | vorticity (w) at T+1 | | NS (PDEBench) | velocity (Vx, Vy), vorticity (w) | TxCxHxW (T=15, C=3) | velocity (Vx, Vy), vorticity (w) at T+1 | | RD (PDEBench) | activator (u), inhibitor (v) | TxCxHxW (T=10, C=2) | activator (u), inhibitor (v) | > **Q5**. This behavior is to be expected since the pretrained model have seen more examples than the random init one? Are the examples during pertaining the same as the one used during fine-tuning? 1. This behavior (“random init” is worse than “unsupervised pretrained”) is mainly because of three benefits we discussed in Figure 4: reduced overfitting, faster convergence, and meaningful representations. 2. Examples during pretaining are different from one used during fine-tuning: no overlap. > **Q8**. Pre training on a single PDE is, I think, very costly, and I would be curious of how long would it take to have a real impact? Thanks for this question! Pretraining costs. FNO: 20 GPU hours. Video-MAE: 18 GPU hours. (GPU: A100) We are working with domain scientists to achieve real impacts on scientific problems as soon as possible (representative examples are climate/weather/airfoil in our Figure 5, where our method has clear improvements). Meanwhile, we would like to emphasize: 1. Our extra experiments in Figure 1 of our PDF response indicate that **joint unsupervised pretraining on multiple PDEs can reduce the total computational costs**, because fine-tuning on different PDEs can share the same pretrained checkpoint. 2. **Recent works, like [3], also only considered per-PDE pretraining**, even though their work is claimed to target SciML foundation models. Moreover, previous unsupervised pretraining works in computer vision also focused on a single dataset (like ImageNet), and they did not compare the costs of pretraining versus collecting more downstream samples. We hope the above responses have addressed your concerns. We are happy to discuss this further! [1] “Multiple Physics Pretraining for Physical Surrogate Models” McCabe et al. 2023 [2] “PDEBENCH: An Extensive Benchmark for Scientific Machine Learning“ Takamoto et al. 2022 [3] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023
Summary: This paper aims at improving the data efficiency of deep learning models for tackling Operator Learning. The paper focuses on two aspects: 1) They pretrain neural operators on data that do not assume labels, i.e. without the target function or the trajectory solution of states. To do so, they rely on a masked auto-encoding task or a super-resolution task, that both can be trained without labels. They show that this pretraining strategy can effectively improve the sample efficiency compared to a random initialization. 2) They propose an algorithm to do in-context learning at test time based on a number of demo examples and show results on out-of-domain PDEs. Strengths: * The main strength of the paper is that the pretraining strategy effectively reduces the number of samples to reach a certain target accuracy compared to a random initialization. * The method is tested against multiple equations from different levels of difficulty. * The comparison with pretrained vision transformers, though a little counterintuitive, is informative on the capacity of transferability of this kind of models to different tasks. * The ICL experiments seem to be consistent, the more demos the better the results. Weaknesses: * Overall the paper is quite difficult to read, there are many different aspects that are described within the method section and it can be hard to understand in the experiment section which is what. * If I am not mistaken there is no consistent comparison between the different tasks and architectures of the method (super-resolution vs masking x FNO vs transformer) for the different datasets. * The ICL method should be compared to at least one baseline. I suggest to do a simple kernel method but on the inputs rather than with the predictions of FNO. It can be a kNN or anything, but it would actually give a good understanding of how well the method works. Technical Quality: 3 Clarity: 2 Questions for Authors: * What is the best effective strategy for pretaining ? Is it the masking-damasking or super-resolution that yields the best results ? Or do you use both ? * Did you observe a difference between the video transformer and FNO for this setup ? Is there one that you would recommend better than the other ? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: There is a limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank the time and effort of reviewer oEEz in reviewing our paper! > **Q1**. Overall the paper is quite difficult to read Thanks for this suggestion! We will try to make our teaser figure (Figure 1) clearer and connect it more to the subsections in methods and experiments, so readers can follow easily. > **Q2**. Comparison between the different tasks and architectures of the method 1. To focus on pretraining (model initialization), we keep the network architecture the same in each experiment to ensure that our comparisons in each task are fair. 2. We study the contribution of masking vs. super-resolution in Appendix G.1. We find that when training with a low volume of data, we should use much stronger perturbations (high masking ratios and strong blur), whereas a high volume of data only requires mild perturbations. > **Q3**. The ICL method should be compared to at least one baseline. Thanks for the suggestion! We further provide a baseline for the ICL method in **Figure 3 in our attached PDF response**. In our paper, our ICL leverages the output (prediction) of neural operators to find similar samples (lines 251–255). In this new baseline, we try to use features extracted by the backbone of the neural operator (high-dimensional features before the final output layer) to find similar samples. As we can see, in general, this baseline is worse than our original ICL method, indicating that the final output of the neural operator can more accurately indicate true similar samples. > **Q4**. What is the best effective strategy for pretaining? Is it the masking-damasking or super-resolution that yields the best results? Or do you use both? We find that combined methods (MAE + superresolution) give the best performance. These results are in Appendix G.1, where we exhaustively studied the choices of hyperparameters for MAE and superresolution. > **Q5**. Did you observe a difference between the video transformer and FNO for this setup? We tend to avoid recommending specific model architectures, as our method is agnostic to the architecture choices. We have proven that our method can help both FNO and transformers. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response. I believe the pretraining strategy is interesting, especially with the notion of unlabelled data. On the other hand, I am not convinced by the results of the "in-context" learning experiments. While the loss effectively decreases with the number of demo samples, and while using FNO seems to produce better results than with the backbone features (on navier-stokes only), it appears that the gap on Poisson and Helmholtz datasets between in-context and classical experiments is significant (relative errors close to 1 for ICL). As a result I do not think that the ICL method presented here is effectively taking advantage from the demos. I will maintain my score and suggest the authors to only focus their contribution on the pretraining strategy. --- Reply to Comment 1.1.1: Title: Thank you very much for your reply! Comment: We truly thank reviewer oEEz for reading our response and providing the prompt reply! We understand your concern about our ICL experiments. Meanwhile, we would like to emphasize the following: 1. The **purpose of ICL** is to **continuously reduce the test error with more demos**. This is demonstrated in our figures, where the test error curve drops for Poisson, Helmholtz, and Navier-Stokes. Additionally, we have shown that our method can **scale up to a much larger number of demos** compared to [1]. - Furthermore, we demonstrate that our ICL method can further reduce the model's uncertainty (evidenced by shorter error bars) as more demos are used. This reduction in uncertainty was not shown in [1]. 2. The **value of error** is mainly determined by the **domain gap** between the training source data and the unseen testing data. Since we chose to test our ICL in an out-of-distribution (OOD) setting, this domain gap is very large (as indicated by the error when #demos = 0). - In fact, **our OOD performance is aligned with previous works**, and it is well-known that the performance of neural operators in OOD settings is poor and challenging. For reference, see **subplot (f) in "Figure B.2: Addressing (Q4)" of [2]** (in Appendix B.2, at the very bottom of the paper): Performance on Helmholtz (“SYS-3”) is poor (with errors close to 1 for both their model and ours), even when they fine-tuned with supervision on a few OOD samples. In sum, we are open to further improving the performance of our ICL method. Meanwhile, we have demonstrated that our ICL method advances beyond previous works [1]. [1] "In-context operator learning with data prompts for differential equation problems" Liu et al. 2023 [2] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023
Summary: This paper presents an unsupervised pretraining approach for PDE solvers based on Meta Autoencoding and Super Resolution. The authors show that after pretraining, the model can achieve better accuracy than training a solver from scratch. This paper also presents an "in-context learning" strategy for inference on out-of-distribution data. Strengths: - Designing pretraining techniques for SciML foundation model is an important research direction. - To the best of my knowledge, the proposed pretraining technique is new in the SciML context. Weaknesses: - **Some important experimental setups are missing**. In the main body of the paper, it's not even clear what the model size is, what the pretraining datasets are, and whether the model is pretrained on one class of PDEs or multiple classes of PDEs. - A follow-up concern is that according to Table 2 in Appendix B, it seems the authors pretrain different models for different PDEs. This significantly undermines the value of pretraining, because the ultimate goal of pretraining is to obtain a general purpose foundation model which can be easily transferred to different tasks (i.e., different PDEs in this context). If the method requires pretraining for each single class of PDEs, it's not efficient any more, and the comparison in Fig. 3 is not fair because **the proposed approach is still a problem-specific technique** and the pretrained models require additional training. - **Missing baselines.** For time-independent PDEs, the MAE for image should also be included as a baseline to support the claim of _outperforming conventional vision-pretrained models_. - I feel that **"in-context learning" in Sec. 3.2 is a misnomer**. First, there is no notion of "context" in the pretrained model. Second, the presented model is not an auto-regressive model. Third, unlike the self-attentive models, the presented model does not actively or flexibly "learn" from the examples; the way to aggregate information is fixed given Algorithm 1. The presented "ICL" algorithm deviates too much from the standard practice. I strongly encourage the authors to get rid of "ICL" and refers the proposed approach as a few-shot learning strategy in the paper to avoid misleading the community. - In Sec. 4.3, the few-shot learning accuracy looks poor, and the standard deviation of the error is not reported. - Some discussions in the papers are inaccurate or lack justification. - Line 95: BERT does not leverage next token prediction for pretraining. - Line 142: Multiple benefits of pretraining are listed, but no references are provided, and no relevant experiments are presented to justify those arguments (e.g., better generalization, faster training speed, etc). - Line 158: the authors cite the BERT paper and claim that "_these methods enable training in NLP and CV of generalizable models containing over one hundred billion parameters_". I don't think any variant of BERT is scaled to one hundred billion parameters. - Other issues. - Line 26: DNN $\to$ deep neural network - Line 32: has $\to$ have - Line 46: start $\to$ start with Technical Quality: 2 Clarity: 2 Questions for Authors: - Can you report the model sizes, pretraining cost, and finetuning cost? - Can the pretrained model be applied to other PDEs which are not seen in pretraining? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors discuss some of the limitations of this paper in Sec. 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank the time and effort of reviewer oUb1 in reviewing our paper! Before providing detailed responses, we would like to make a **clarification**: Since our paper focuses on **unsupervised pretraining** *but not* SciML foundation models (we *never* claimed our pretraining leads to such models), we hope reviewer oUb1 will not assume we are pursuing or comparing with the latest SciML foundation model papers. We believe it is *unfair* to ask us to match their experimental settings. As the benefits and strategies of pretraining for SciML are still under-explored, both supervised/unsupervised/per-PDE/joint pretraining are meaningful to this research direction. This is similar to what researchers in the computer vision community explored with single dataset/task pretraining [1, 2, 3] before moving on to foundation models. --- --- > **Q1**. Some important experimental setups are missing 1. Model sizes. FNO: 67.1M. Video-MAE: 23.4M. In each experiment, all compared models are of the same architecture and thus fairly compared. 2. Details about pretraining datasets are clearly documented in Appendix A, which is also clearly referred to in Line 134. 3. In each experiment, we pretrain our model only on one PDE. As we clarified above, our work does not pursue “Pretraining on multiple classes of PDEs”. > **Q2**. Pretrain different models for different PDEs 1. We provide extra experiments in **Figure 1 in our attached PDF response**. We can see that **joint pretraining can further improve the performance of fine-tuning on different PDEs**. We will include this figure in our camera-ready version. 2. Although joint pretraining can be further beneficial, experiments of **pretraining on a single PDE are still fundamental and necessary**. These should be studied before moving to joint pretraining and **cannot be ignored or underestimated**. These experiments help analyze the complexity or difficulty of learning each single PDE for neural operators. 3. **Recent works, like [4], also only adopted per-PDE pretraining, even though their work is claimed to be more related to SciML foundation models than ours**. Previous unsupervised pretraining works in computer vision also focused on a single dataset (like ImageNet). > **Q3**. Missing baselines On time-independent PDEs, we mainly focus on the FNO model. There are no publicly available checkpoints of FNO that are pretrained on natural images. Due to limited time during rebuttal, it is also challenging to pretrain an FNO ourselves on large-scale image datasets like ImageNet or JFT-300M. We will try to include this experiment in our camera-ready version. Meanwhile, on time-dependent PDEs, we studied Video-MAE because we followed previous work [5]. In [5], they also did not consider any image-pretrained MAE models for their claims. > **Q4**. "in-context learning" in Sec. 3.2 We are happy to rename our method. Meanwhile, we have further corresponding comments: 1) In recent literature about large language models (LLMs), “context” is mostly referred to in downstream tasks, not in pretrained models, and we did not claim that our context is in our pretrained models; 2) In-context learning is not directly related to auto-regressive models, see [6]; 3) Models with self-attention are also not learning anything during inference, since all model weights are fixed. Both self-attention and our models are extracting features during inference. > **Q5**. Accuracy and standard deviation of few-shot learning We report standard deviations in **Figure 3 in our attached PDF response**. The performance of neural operators in out-of-distribution (OOD) settings is known to be poor and challenging. See **Figure B.3 (f) of [4]**: performance on Helmholtz (“SYS-3”) is poor (both their errors and ours are close to 1), even with supervised fine-tuning on a few OOD samples. > **Q6**. Line 95: BERT does not leverage next token prediction for pretraining Thanks! We will update this sentence with a more precise description. > **Q7**. Line 142: Multiple benefits of pretraining are listed, but no references are provided As clearly mentioned in Line 148, these experiments about benefits of unsupervised pretraining are in Figure 4 in Sec. 4.1, which is in the main text and should not be ignored. > **Q8**. Line 158: I don't think any variant of BERT is scaled to one hundred billion parameters Thanks! We will cite more papers with models of billion-level parameters that share similar pretraining strategies as BERT. > **Q9**. Other issues. Thanks! We will fix these typos. > **Q10**. Can the pretrained model be applied to other PDEs which are not seen in pretraining? We provide experiments for fine-tuning on PDEs unseen during pretraining, shown in **Figure 2 in our attached PDF response**. Fine-tuning on unseen PDEs leads to worse performance, which is expected not only because they have different initial conditions but also due to their mismatched input dimensions. The input dimension of Poisson is four (forcing function and three diffusion coefficients), while the input dimension of Helmholtz is two (source function and the wavenumber). This mismatched input is documented in Section A. [1] “A Simple Framework for Contrastive Learning of Visual Representations” Chen et al. 2020 [2] “Improved Baselines with Momentum Contrastive Learning” Chen et al. 2020 [3] “Masked Autoencoders Are Scalable Vision Learners” He et al. 2021 [4] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023 [5] “Multiple Physics Pretraining for Physical Surrogate Models” McCabe et al. 2023 [6] “In-Context Operator Learning with Data Prompts for Differential Equation Problems” Liu et al. 2023 --- Rebuttal Comment 1.1: Title: Fix a typo in our response Comment: We would like to fix a typo in our response to your "**Q5**. Accuracy and standard deviation of few-shot learning". "Figure B.3 (f) of [4]" ==> "Figure **B.2** (f) of [4]". We are referring to subplot (f) in "Figure B.2: Addressing (Q4)" of [4] (in Appendix B.2, at the very bottom of the paper). [4] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023 --- Rebuttal 2: Title: Thank you very much for your reply! Comment: We truly thank reviewer oUb1 for reading our response and providing the reply! Below are our responses to your further questions: > **Q1**. Pretraining cost and finetuning cost. 1. We reported our pretraining cost in our response to reviewer 6vSy (https://openreview.net/forum?id=MuPlJ9fT4b&noteId=9dXGDW9BRK). We apologize for this omitted response! - Pretraining costs. FNO: 20 GPU hours. Video-MAE: 18 GPU hours. (GPU: A100) 2. For fine-tuning costs: FNO: 4 GPU hours. Video-MAE: 6 GPU hours. (GPU: A100). Meanwhile, we hope that reviewer oUb1 does not ignore our two other responses: 1) **Extra experimental results**: In Figure 1 in our PDF response. We can see that joint pretraining can further improve the performance of fine-tuning on different PDEs. We will include this figure in our camera-ready version. 2) **Previously published works also focused on single-dataset pretraining and did not compare the costs of pretraining/fine-tuning with those of data simulation/collection**: For example, [1] also only adopted per-PDE pretraining, even though their work is claimed to be more related to SciML foundation models than ours. Previous unsupervised pretraining works in computer vision also focused on a single dataset (like ImageNet). > **Q2**. “no relevant unsupervised learning baseline comparisons in the FNO experiments” Since we are the first to introduce unsupervised learning in SciML, there are indeed no previous related works (that also train FNO in an unsupervised manner) with which we can fairly compare. > **Q3**. “Multiple Physics Pretraining for Physical Surrogate Models, show much more comprehensive baseline comparisons in Table 1.” The reason McCabe et al. needed to compare with more baselines is that their work proposed both a new architecture and pretraining methods. They needed to demonstrate the benefits of both their architecture (by comparing with models with a comparable number of parameters) and their pretraining (by comparing with models trained on a single dataset). However, in our case, for each subplot in Figure 3: 1) If we compare with different models, we cannot draw meaningful conclusions because the architecture changes; 2) Since ours is the only work on unsupervised pretraining of neural operators, comparing with supervised pretrained models would be unfair. Instead, we chose the following approach: 1) To address different **architectures**, we studied both FNO and transformer models on different PDEs; 2) To address different **pretraining methods**, we included a vision-pretrained transformer. We believe these two aspects already cover multiple representative baselines that fulfill the purpose requested by reviewer oUb1. > **Q4**. In-context learning of transformers. We thank these references, all of them are well awared by we authors. We respect reviewer oUb1’s option. Our definition of “learning” is to update model parameters, and we are open to accepting other definitions. > **Q5**. Relevant model of billion-level parameters that share similar pretraining strategies as BERT. One example is the T5 model [2], which is of billion-level parameters and is pretrained with masked language modeling similar to BERT. Here we quote: * “we consider an analogous objective to BERT’s “masked language modeling” objective in” in their Sec. 3. * “we consider an objective inspired by the “masked language modeling” (MLM) objective used in BERT“ in their Sec. 3.3.1. We again thank reviewer oUb1's reply! We are happy to address any further concerns. [1] “Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior“ Subramanian et al. 2023 [2] “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer” Raffel et al. 2019 --- Rebuttal Comment 2.1: Title: Look forward to more discussions Comment: Dear Reviewer oUb1, As the author-reviewer discussion period is nearing its end, and since other reviewers have actively engaged in discussions, we would greatly appreciate it if you could review our responses to your comments at your earliest convenience. This will allow us to address any further questions or concerns you may have before the discussion period concludes. If our responses satisfactorily address your concerns, we kindly ask you to consider revising your rating of our work. Thank you very much for your time and effort! Sincerely, The Authors of Submission #11275
Rebuttal 1: Rebuttal: We deeply appreciate the feedback and suggestions from all four reviewers. We are pleased that **all four reviewers recognized** that our paper targets an **interesting and well-known challenge** in scientific machine learning (SciML). We thank **all four reviewers** for acknowledging our main contribution, which is to **advance the pretraining method for SciML** and **reduce the number of labeled training samples**. This is an important research direction. We also thank reviewers oEEz, 6vSy, and u3i6 for confirming our comprehensive experiments on diverse PDEs, broad ranges of physical parameters, and real-world problems. We address all questions and concerns in individual responses. Following the NeurIPS guidelines, we also attach a one-page PDF with figures of new experiments. Pdf: /pdf/f8d259babac1b7ae5349e40203d83f8163e3fa1c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Unifying Post-Processing Framework for Multi-Objective Learn-to-Defer Problems
Accept (poster)
Summary: While there has been extensive research on L2D, general methods for designing such systems under various constraints (e.g., algorithmic fairness, expert intervention budget, defer of anomaly, etc.) remain largely unexplored. This paper utilizes $d$-dimensional generalization to the fundamental lemma of Neyman and Pearson ($d$-GNP) to obtain Bayes optimal solutions for L2D under various constraint conditions and designs a generic algorithm to estimate these solutions. Strengths: 1. This paper proposes a novel general framework for addressing L2D under various constraint conditions. 2. Different constraint conditions can be simultaneously addressed. 3. The proposed method is a post-processing approach that can be easily extended to existing models. Weaknesses: 1. The general framework provided by the author can address various constraint problems, but it does not seem to be reflected in the experimental section (especially in cases involving multiple constraints), only considering simple single-constraint situations and basic comparisons. Expert Intervention Budget: Liu S, Cao Y, Zhang Q, et al. Mitigating Underfitting in Learning to Defer with Consistent Losses. ICML, 2024. Narasimhan H, Jitkrittum W, Menon A K, et al. Post-hoc estimators for learning to defer to an exper. NeurIPS, 2022. Long-Tail Classification: Narasimhan H, Menon A K, Jitkrittum W, et al. Learning to Reject Meets Long-tail Learning. ICLR, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Post-Processing Framework has many benefits, but it also faces challenges in confidence calibration, which is particularly evident in deep models. Inevitable overfitting makes the model's output probabilities unreliable, indirectly leading to biases in approaching the Bayes optimal solution through post-processing methods. Does the method proposed in this paper include mechanisms to mitigate these issues? 2. There are many studies similar to L2D, such as the dynamic classifier selection literature (DCS). What is the main difference between L2D and DCS? Cruz R M O, Sabourin R, Cavalcanti G D C. Dynamic classifier selection: Recent advances and perspectives[J]. Information Fusion, 2018, 41: 195-216. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedbacks. We further address their questions in the following. ## Questions **Q1** Dealing with overfitting is largely studied in the machine learning literature. In this work, we assume that the scores are estimated using the well-studied methods such as calibrated neural networks and random forests. This is an assumption that a large body of the field, from the first works of L2D (e.g., Mozannar et al. 2020) holds. If the posterior probabilities are not accurate, as the reviewer has mentioned, all the plug-in methods (including Mozannar et al., Verma et al., and Cao et al. for unconstrained L2D problem) would lead to incorrect solutions. Therefore, we believe the study of this issue is beyond the scope of this paper and is of its own interest. However, we should add that, as the reviewer has mentioned, the calibration of such probabilities would equip us with an interesting set of properties. In fact, if we know that the neural network $g(x)$ is calibrated, i.e., $P(Y=1|g(X)=p)=p$, then if the only information in our hand is that calibrated estimation, then the plug-in methods are reduced to the plug-in methods for $g(x)$, since in this case $\Pr(Y=1|g(x))=g(x)$, and therefore rules such as $\Pr(Y=1|g(x))\geq T$ is equivalent to $g(x)\geq T$. **Q2** These two fields are highly interrelated in the sense that in both of these fields, a competent decision-maker is selected for a particular input feature. However, the difference between these fields is that in L2D systems, due to the some reasons such as transparency and responsibility, the decision is made either by human or the classifier, while in DCS systems the solutions of the classifier can be fused with each other. Such extensions to L2D problems are also studied in the literature, e.g., Charusaie et al. 2024, and Steyvers et al. 2022. We make sure to include these references as well as DCS references in our manuscript. Charusaie, Mohammad-Amin, Amirmehdi Jafari Fesharaki, and Samira Samadi. "Defer-and-Fusion: Optimal Predictors that Incorporate Human Decisions." Steyvers, Mark, et al. "Bayesian modeling of human–AI complementarity." Proceedings of the National Academy of Sciences 119.11 (2022): e2111547119. ## Weaknesses **The general framework provided by the author can address various constraint problems, but it does not seem to be reflected in the experimental section (especially in cases involving multiple constraints), only considering simple single-constraint situations and basic comparisons.** We first should note that, as we mention in the main rebuttal section, the ACSIncome experiment controlls two constraints, namely, the disparity of true negative and false negative predictions for two demographic groups. In the following, we further introduce a new set of multi-constraint experiments in the Hate Speech experiment. We believe that the comments of the reviewer is remained unfinished, or otherwise we are not sure how the references on **Expert Intervention Budget** and **Long-Tail Classification** are related to their comments. We have indeed cited these references in the current version of the manuscript. ## New Experiment on Hatespeech Dataset We further experimented our algorithm on Hatespeech dataset (Davidson et al. 2017) that is a content-moderation task to flag hateful and offensive tweets. The results are plotted in the submitted PDF. We used a pre-trained model (Blodgett et al. 2016) to detect whether a tweet is written in an African-American (AA) or not (NAA). We then used our algorithm to control the (i) parity of predicting a tweet hateful, or (ii) offensive, and (iii) false negative probability difference for group AA and NAA. We can observe in Figure (d) that the disparity in predicting a tweet "Offensive" for our algorithm (violet bars) is reduced to the tolerance $0.1$ by deferral to the human for group AA (see Figure (b)), and otherwise randomization of the results for group NAA (Figure (c)). Similarly, the disparity in predicting "Hate Speech" is diminishing in our algorithm (brown bars of Fig. 2,3, and 4). Furthermore, we can reduce the disparity of false negative prediction (finding a tweet Hate Speech or Offensive, given it is neither) in Figure (e). Here we note that the high variance in Fig. (e) is due to small size of non-offensive tweets. We furtherran experiments on **multi-constraint** setting in which both demographic disparity and human intervention budget is controlled. The accuracy and the true constraints are plotted in Fig. (h), (i), and (j). Similarly, Type-II error and human budget is controlled together and plotted in Fig. (k), (l), and (m). Davidson, Thomas, et al. "Automated hate speech detection and the problem of offensive language." Proceedings of the international AAAI conference on web and social media. Vol. 11. No. 1. 2017. Blodgett, Su Lin, Lisa Green, and Brendan O'Connor. "Demographic dialectal variation in social media: A case study of African-American English." arXiv preprint arXiv:1608.08868 (2016). --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your elaborated response to my review. Please allow me a response to your response. add Q1/A1: I know that addressing the probability calibration issue caused by overfitting is challenging, so the author's plug-in method relies on excellent surrogate loss functions. However, we need to point out that in previous surrogate loss function, only comparisons between probabilities are required (e.g., comparisons may be accurate, but values may not be accurate, such as Mozannar et al. 2020), rather than direct use. In contrast, the author's method requires direct utilization of estimated probabilities that may be inaccurate for multiple operations, which may amplify errors. I know that calibrating probabilities is not the focus of this study, but I believe this is a potential risk of the plug-in method that warrants discussion. add Weaknesses: I apologize for not accurately expressing my confusion earlier. My main concern is that the authors claim their framework can solve various constraint problems, but in the experiment section, they don't compare their work with previous works that have already solved some of these constraint problems (such as expert intervention budget and long-tail classification). I think such comparative experiments are very important. In my opinion, the lack of comparative experiments is the main drawback of the work. --- Rebuttal 2: Comment: We first thank the reviewer for their clarifications and their response. Here, we mention a few notes. Q1: We are glad that the reviewer agrees with us on our choice of surrogate functions. We further agree with the reviewer that linearly combining the scores might propagate the error, in a controlled manner, and in the value of objective and not the constraints. This is precisely what we have discussed after Theorem 5.1 regarding the constraint generalization. There, we have discussed that for a given error in $\ell_{\infty}$ norm of the scores, the constraints can deter at most for that error, plus the sample complexity term. Furthermore, in the error upper-bound (10) that is provided in **Theorem 5.3**, we observe that given the linear combination coefficient being bounded by $K$, and given the sensitivity factor being bounded away from $0$, **the objective cannot be far from the true objective, as long as score errors $\delta_0$ and $\delta_1$ are small enough**. We note that, when the constraint is well within the feasibility set, the value $K$ is more tightly upper-bounded. Furthermore, as we discussed it in the response to Q6 of Reviewer 7rh1, and as we have shown as an example, in Figure (f) and (g) of the submitted PDF, the constraints that lie within the feasibility set improve the sensitivity factor, thereby reducing the risk of error propagation. ## Weaknesses We appreciate the reviewer's clarification of their comments. It is important to emphasize that our work does not aim to compete with other constrained L2D optimal methods but rather to unify these methods within a single framework. Our method should indeed provide a reasonable solution for simple constraints, and improve the empirical methods by obtaining the true optimal solution for cases in which **the true optimal is not known in the literature**, such as **L2D with fairness criteria (demographic parity, equalized odds, and equality of opportunity), or a combination of a set of constraints**. These claims are supported by the experiments within the manuscript and the rebuttal. However, we do not expect that our method outperforms methods that already achieve or approximate the true optimality, such as those dealing with expert intervention budgets or long-tail classification. We will further attempt to run the additional experiments suggested by the reviewer during the remaining discussion period. However, given the time constraints and the incompleteness of the reviewer's initial comments, it might not be feasible to complete them within this period. --- Rebuttal Comment 2.1: Title: New Comparative Experiments + Theoretical Equivalence Comment: Dear Reviewer, We further thank you for the suggestion of comparing our unifying methods with other methods in specific cases. In the time we had we could compare our method to the two of the mentioned work, Liu et al 2024, and Narasimhan et al. 2022. for the sanity check. We have indeed simulated the post-hoc formula of (11) in Narasimhan et al. 2022 and the modified equation (9) of Liu et al. with One-vs-All Losses. Since these works are not tailored to handle hard thresholds, we have fitted the parameters for each specific tolerance via validation data in Narasimhan et al. and via re-training in Liu et al. and searching within 1000 values of deferral cost between -1 and 1. The comparison shows a competetive accuracy in our method compared to these baselines. We note here that the high variance of the accuracy in COMPAS dataset is already reported in Mozannar et al. 2023, Figure 3 (b). ### Hatespeech | Tolerance | d-GNP Accuracy | d-GNP Constraint | Liu et al. 2024 Accuracy | Liu et al. 2024 Constraint | Narasimhan et al. 2022 Accuracy | Narasimhan et al. 2022 Constraint | |----|---|---|---|---|---|----| | 0.05 | **0.890 ± 0.004** | 0.049 ± 0.005 | 0.887 ± 0.004 | 0.043 ± 0.005 | 0.886 ± 0.005 | 0.047 ± 0.006 | | 0.10 | **0.902 ± 0.005** | 0.099 ± 0.007 | 0.899 ± 0.005 | 0.092 ± 0.005 | 0.898 ± 0.005 | 0.098 ± 0.007 | | 0.15 | **0.910 ± 0.004** | 0.147 ± 0.011 | 0.908 ± 0.005 | 0.137 ± 0.009 | 0.908 ± 0.005 | 0.148 ± 0.009 | | 0.20 | **0.916 ± 0.004** | 0.200 ± 0.011 | 0.914 ± 0.005 | 0.186 ± 0.010 | 0.914 ± 0.004 | 0.195 ± 0.012 | | 0.25 | **0.920 ± 0.004** | 0.244 ± 0.009 | 0.918 ± 0.004 | 0.233 ± 0.013 | 0.917 ± 0.005 | 0.240 ± 0.017 | | 0.30 | **0.921 ± 0.005** | 0.287 ± 0.016 | 0.920 ± 0.005 | 0.276 ± 0.025 | 0.919 ± 0.005 | 0.283 ± 0.015 | ### COMPAS | Tolerance | d-GNP Accuracy | d-GNP Constraint | Liu et al. 2024 Accuracy | Liu et al. 2024 Constraint | Narasimhan et al. 2022 Accuracy | Narasimhan et al. 2022 Constraint | |---|---|---|---|---|---|---| | 0.015 | 0.646 ± 0.022 | 0.014 ± 0.010 | **0.649 ± 0.031** | 0.003 ± 0.004 | **0.649 ± 0.031** | 0.016 ± 0.018 | | 0.075 | 0.647 ± 0.026 | 0.042 ± 0.033 | 0.648 ± 0.031 | 0.030 ± 0.022 | **0.650 ± 0.031** | 0.053 ± 0.032 | | 0.135 | 0.649 ± 0.026 | 0.072 ± 0.050 | **0.651 ± 0.026** | 0.065 ± 0.040 | 0.649 ± 0.031 | 0.082 ± 0.048 | | 0.195 | **0.656 ± 0.023** | 0.111 ± 0.074 | 0.654 ± 0.027 | 0.079 ± 0.054 | 0.652 ± 0.033 | 0.132 ± 0.069 | | 0.255 | **0.656 ± 0.029** | 0.122 ± 0.085 | 0.653 ± 0.027 | 0.108 ± 0.085 | 0.655 ± 0.033 | 0.165 ± 0.095 | We further discuss the theoretical equivalence between our method and other methods that tackle human intervention budget and long-tail classification by which we conclude that the closeness of the accuracies to these baselines are not surprising. For human intervention budget, since using Table 1 we know that the embedding functions are as $$\psi_0(x)=[\Pr(Y=0|X=x), \ldots, \Pr(Y=L|X=x), \Pr(Y=M|X=x)],$$ and $$\psi_1(x)=[0, \ldots, 0, 1],$$ therefore, the optimal predictor using Theorem 4.1 and the discussion after (4) is equal to $$h^*(x)=\arg\max_{i\in [1:L]} \psi_0(x)-k\psi_1(x) $$, and $r^*(x) = 1$ if $L+1=\arg\max_{i\in [1:L+1]} \psi_0(x)-k\psi_1(x)$. Therefore, using the definitions of embedding functions, we have $$h^*(x)= \arg\max_{1:L} [\Pr(Y=1|X=x), \ldots, \Pr(Y=L|X=x)],$$ and $r(x)=1$ if $\Pr(Y=M|X=x)-k>\max \{\Pr(Y=1|X=x), \ldots, \Pr(Y=L|X=x)\}$. This is equivalent to what is proposed in (11) in Narasimhan et al. 2022. For the long-tail classification problem using Table 1, we have $$\psi_0(x)= \Big[\frac{\Pr(Y=1|X=x)}{\alpha_{i_1}\Pr(Y\in G_{\alpha_{i_1}})}, \ldots, \frac{\Pr(Y=L|X=x)}{\alpha_{i_L}\Pr(Y\in G_{\alpha_{i_L}})}, 0\Big]$$ and $$ \psi_t(x) = \frac{\Pr(Y\in G_t|X=x)}{\Pr(Y\in G_t)}[1, \ldots, 1, 0] - \frac{\alpha_t}{K}$$ as embedding functions, where $i_j$ is the index of the group $G_{i_j}$ to which the label $j$ belongs. Now, since the first $L$ components of all $\psi_t$s are equal to each other, we have $$h^*(x)=\arg\max_{i\in [1:L]} \psi_0(x) - \sum_{t=1}^K k_t \psi_t(x) = \arg\max_{i\in [1:L]} \psi_0(x) = \arg\max \Big[\frac{\Pr(Y=1|X=x)}{\alpha_{i_1}\Pr(Y\in G_{\alpha_{i_1}})}, \ldots, \frac{\Pr(Y=L|X=x)}{\alpha_{i_L}\Pr(Y\in G_{\alpha_{i_L}})}\Big].$$ Furthermore, the tightness condition $\mathbb{E}\big[\langle \psi_t(x), f^*(x)\rangle\big]=0$ concludes that $\alpha_{i_j}\Pr(Y\in G_{\alpha_{i_j}}) = \alpha^*_{j}$ where $\alpha^*_{j}$ defined in (8) of Narasimhan et al. 2024. These two conclude in the optimal classifier (8) of Narasimhan et al. 2024. Similarly, we can show that the optimal deferral strategy using $d$-GNP is $r(x)=1$ if $\sum_{t}\frac{k_t \Pr(Y\in G_t|X=x)}{\Pr(Y\in G_t)}>\max\Big[\frac{\Pr(Y=1|X=x)}{\alpha_{1}^*}, \ldots, \frac{\Pr(Y=L|X=x)}{\alpha_{L}^*}\Big]$ which is a similar rule to that of (8) in Narasimhan et al. by setting $\mu^*\_t=1-k_t \alpha_{i_t}$. --- Rebuttal 3: Title: A Resoponse to Your Concerns Comment: We thank the reviewer for their response and comments our experimental results. As we have tried to show that via experimental results as well as theoretical results, our method provides the same theoretical post-processing method as of Narasimhan et al. 2022 for the case of human intervention budget. The differences of these methods are reflecting the way that the probabilities are estimated, i.e., the choice of loss function, and the encoding of deferral choice in the network. We have used a softmax loss function, while as mentioned, in our limited time, we implemented Section 4.1 of Narasimhan et al. that uses a form of one-vs-all loss. These two as we see have the difference of order of 0.001, while the standard deviation of accuracies is larger than this amount. As a result, we believe this is an evidence that the two methods lead to a similar result. Here, we want to reiterate the empirical results of our work: 1- We have obtained the L2D solution under demographic parity for COMPAS dataset and compared to Madras et al. 2018 and Mozannar et al. 2022. 2- We have obtained the L2D solution under equalized odds for ACS dataset. 3- In the rebuttal period, we have implemented our work to Hatespeech dataset and under demographic parity and equality of opportunity constraint. 4- To address the reviewer's concern, we have obtained the optimal solution under multiple constraints of demographic parity+human intervention budget and Type-II error+human intervention budget. 5- To address the reviewer's concern, we have compared our method with Narasimhan et al. 2022 and Liu et al. 2024 for Hatespeech dataset as well as COMPAS dataset.
Summary: The paper studies multi-objective learn-to-defer problems, where the objectives include minimizing deferral loss and satisfying several constraints. It demonstrates that these problems are generally NP-hard and can be reduced to functional linear programming. Additionally, it shows that the problem can be further reduced to a d-dimensional generalized Neyman-Pearson problem and characterizes its solution when there is only one constraint. The paper also presents a unifying post-processing algorithm with generalization bounds and provides numerical validations. Strengths: The paper offers several theoretical insights into multi-objective learn-to-defer problems, enhancing the understanding of these problems. It also introduces a unifying post-processing algorithm with provable performance guarantees. Weaknesses: 1.Inconsistent Notations and Typos: The paper is difficult to follow due to inconsistent notations and potential typos. For instance, $m$ can mean the expert decision (line 111) or the number of constraints (Equation 3). $r$ can mean the rejection function (line 110) or a distribution vector (Theorem 4.1). In line 7 of Algorithm 1, should it be $\hat{C}(k)$ instead of $\hat{C}(t)$? And line 208 only defines $\psi_{i}$ for $i=1,...,m+1$, then in line 209, what is $\psi_{0}$? 2.Limitations on Multiple Constraints: The paper acknowledges that the provided analyses for the algorithm (and possibly the algorithm itself) only apply to single constraint settings. More discussion on the challenges of extending these analyses (and potentially the algorithm) to multiple constraints settings would be helpful. Technical Quality: 3 Clarity: 1 Questions for Authors: 1.Why in Theorem 4.2, $f^*_{k,p}(x)=\tau(\psi_1(x)-k\psi_0(x))$, while in Algorithm 1, its estimation version is $\hat{f}_{k,p}(x)=\tau(\hat{\psi}_0(x)-k\hat{\psi}_1(x))$ (where the positions are swapped)? 2.Can Algorithm 1 handle multiple constraints settings? Figure 1 depicts a diagram of multiple constraints settings, but it seems Algorithm 1 is designed for single constraint settings. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the paper and their comments. We are respectively address the concerns of the reviewer. 1. **1.Why in Theorem 4.2, $f_{k, p}*(x)=\tau(\psi_1(x)-k\psi_0(x))$, while in Algorithm 1, its estimation version is $\hat{f}_{k,p}(x)=\tau(\hat{\psi}_0(x)-k\hat{\psi}_1(x))$ (where the positions are swapped)?** This is a typo that we addressed in the newer version of this manuscript. In this version $\psi_0$ corresponds to the objective, while $\psi_1, \ldots, \psi_{m+1}$ correspond to the constraints. 2. **Can Algorithm 1 handle multiple constraints settings? Figure 1 depicts a diagram of multiple constraints settings, but it seems Algorithm 1 is designed for single constraint settings.** This algorithm, as mentioned in the main rebuttal can handle many constraints, just by adding the line of finding the empirical solution of $k_1, \ldots, k_m$ for (3) and based on validation dataset. 3. **Inconsistent Notations and Typos**: We have extensively proofread and resolved the typos that are raised by the reviewer. We refer the reviewer to the main rebuttal section for further details. 4. **Limitations on Multiple Constraints: The paper acknowledges that the provided analyses for the algorithm (and possibly the algorithm itself) only apply to single constraint settings. More discussion on the challenges of extending these analyses (and potentially the algorithm) to multiple constraints settings would be helpful.** As we further explained in the main rebuttal section, the algorithm can handle many constraints. The main result of this paper also hold for many constraints. Further extensions of the sample complexities are obtained in the main rebuttal section. Here, we would be grateful if the respected reviewer would let us know why they think our paper is not fit to this conference, since we believe there is a disparity between the comments and the scores of the reviewer. The typos and notation issues can be minor issues that will not entail any major concerns in our work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It addresses some of my concerns and I have raised the score. However, I must emphasize that the typos and notation issues significantly hinder the understanding of the work. In my view, a high-quality paper should have minimal typos and consistent notations to ensure clarity and readability. --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your response and for raising the score. We are glad that we could address your concerns. Regarding the notation issues, we assure you that we will carefully proofread the next version of the manuscript to eliminate any typos.
Summary: The paper introduces a unifying post-processing framework for multi-objective learn-to-defer problems, allowing the system to defer tasks to an expert under specified constraints. By generalizing the Neyman-Pearson lemma, the paper derives the Bayes optimal solution for this framework and develops an algorithm to estimate it. The proposed algorithm is evaluated using the COMPAS and ACSIncome datasets. Strengths: 1. The paper introduces a general post-processing framework. 2. The paper is well-written and has excellent illustrations, for example, Figure 1. 3. The framework accounts for multiple objectives in L2D. 4. The method can potentially be extended beyond the L2D setting to apply to other constrained objectives. Weaknesses: 1. The proposed algorithm requires the estimation of scores and $\hat \psi$ (Lines 4 and 5 of Algorithm 1). However, accurately estimating these values can be potentially challenging. 2. The experiments could potentially be expanded further. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The multi-class Neyman-Pearson (NP) paradigm has been studied in [66]. Could the authors summarize the key differences between the framework proposed in this paper and the prior work? 2. What is the motivation for imposing specific constraints in learning to defer? Could this compromise the accuracy of the process? 3. How does the proposed algorithm empirically compare with the baselines on the ACSIncome datasets? Do you think the experiments could be expanded further? 4. It seems that the method could potentially be applicable to other constrained objectives beyond the L2D setting. Could the authors further elaborate on this? 5. Could the authors further comment on the challenges of extending the analysis and algorithm to multiple constraint cases? 6. How are the tolerances chosen during the experiments? Can the assumptions in Theorem 5.3 be verified in practice? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive comments on this paper. In the following, we respond their questions in order **Q1** A: This work that we have further referenced in our manuscript, as we mentioned after (6), has three main differences with our work. *(i)* This work only considers the Type-$k$ error and is designed to find the solution of the optimal classifier given that specific constraints. In contrary, our work can contain all types of constraints that are in form of loss function (e.g., fairness criteria, human intervention budget, long-tail classification). *(ii)* This work is based on strong duality theorem that as we discussed in Appendix E is not considered in our work, due to its limitations (see the counterexample of Appendix E). *(iii)* In our work, we not only find the optimal solution to the constrained optimization problem (3), but also show that all solutions to this problem are of the form that we have formulated in (8). **Q2** A: The L2D systems can be used in many applications, including applications that are sensitive in nature. As an instance, we can design an algorithm that either classifies a disease based on the data of a patient, or defer the decision to the doctor. This system in total can be unfair, meaning that the error that is induced by this system not detecting the disease for different demographic groups can be different. As another example, as we further explained our experiment for Reviwer 4QNG, this can be the case when we want to detect Hatespeech from a tweet automatically or using our agents. This decision could entail different errors for the tweets that are written with an African-American dialect and the ones that are not. Our results helps controlling this algorithmic unfairness. Since we add a constraint to our optimization problem, the corresponding search space reduces, and therefore the accuracy can be compromised, which is the case in many algorithmically fair methods. **Q3** A: Our experimental setting now have expanded and include content moderation experiments on Hate Speech Dataset. We refer the reviewer to the rebuttal section of Reviewer 4QNG for the details of these experiments. Note that there is no known basline to ensure a certain equalized odds constraint for an L2D system. Particularly in our setting in which the model is a random forest. However, in the next revision, we will compare our method with best fair classifier and an adapted version of Mozannar et al. 2020 for random forests in which we shift thresholds for different demographic group. **Q4** A very interesting application of $d$-GNP is its use in fair vanilla multi-class classification. This theorem can show that for controlling demographic parity or equality of opportunity, we need to first learn the scores, and then add or multiply a value to these scores based on the demographic identity. In fact, or an $L$-class classifier, if we aim to set constraints on demographic parity $\big|\Pr(\hat{Y}=0|A=0)-\Pr(\hat{Y}=0|A=1)\big|\leq \delta$ or equality of opportunity $\big|\Pr(\hat{Y}=0|Y=0, A=0)-\Pr(\hat{Y}=0|Y=0, A=1)\big|\leq \delta$ on Class $0$, then we can follow similar steps as in Appendix D to find the embedding functions as $$ \psi_{\mathrm{DP}} = s(A)\big[1, 0, \ldots, 0\big], $$ and $$ \psi_{\mathrm{EO}} = t(A, 0)\big[\Pr(Y=0|x), 0, \ldots, 0\big]. $$ As a result, since the accuracy embedding function is $\psi_0(x)=\big[\Pr(Y=0|x), \ldots, \Pr(Y=L|x)\big]$, then, by neglecting the effect of randomness, the optimal classifier under such constraints are as $$ h_{\mathrm{DP}}(x)=\arg\max\big\(\Pr(Y=0|x)-ks(A), \Pr(Y=1|x), \ldots, \Pr(Y=L|x)\big\), $$ and $$ h_{\mathrm{EO}}(x)=\arg\max\big\(\Pr(Y=0|x)\big(1-kt(A, 0)\big), \Pr(Y=1|x), \ldots, \Pr(Y=L|x)\big\). $$ Equivalently, for demographic parity, the optimal classifier includes a shift on the score of Class $0$ as a function of demographic group, and for equality of opportunity, the optimal classifier includes a multiplication of the score of Class $0$ with a value that is a function of demographic group. It is easy to show that under condition of positivity of the multiplied value, these classifiers both reduce to thresholding rules in binary setting. **Q5** A: We refer the reviewer to the main rebuttal for the response to this question. **Q6** A: The tolerances are chosen in a manner that capture the dynamics of the accuracy within that range. If we permit for a very large tolerance, the accuracy saturates to its Bayes optimal value. This is what occurs at the right part of ACSIncome figure. The main assumption of Theorem 5.3 is the sensitivity assumption of Definition 5.2. To test that assumption, we can set $\gamma=1$, and reduce this assumption to a lower-bound on the derivative of the constraint w.r.t the coefficient $k$. We have plotted this in Fig. (f) and (g) of the submitted PDF and we observe that for the cases that $k$ induce a constraint within the plausible constraints, this derivative is lower-bounded. In fact, Fig. (f) shows the accuracy of the estimator for two coefficients of corresponding constraints of demographic disparities of detecting a tweet Offensive (DP-O) or Hate-Speech (DP-HS). We observe that as long as there is dynamics in the accuracy in terms of this coefficients, then the derivative of the constraints in terms of this coefficient is bounded below. This is particularly observable in right-most part of Fig. (g) for DP-O and the left-most part for DP-HS. **Q7** We should mention that estimating the value of the scores are equivalent to estimating the value of a set of posterior probabilities that are used in obtaining these scores. This is a task for which machine learning literature has developed corresponding tools such as deep neural networks, random forests, etc. Therefore, we don't believe the estimation of our scores is of particular hardness compared to this literature. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I will keep my score as is.
Summary: This paper aims to provide a provably consistent unified post-processing framework of learning to defer with constraints. The problem of constrained L2D is first reduced to a linear programming problem. Then the linear programming problem is further tackled with a generalized version of Neymar-Pearson lemma, which lead to an efficient solver of this problem. Non-asymptotic analysis is conducted to further provide guarantee for the empirical version of the algorithm. Experimental results validate the efficiency of the proposed solver. Strengths: 1. A unified post-processing framework for constrained L2D is proposed in this paper, which allows training a randomized classifier-rejector tuple with small number of validation data combined with a trained model’s confidence output. 2. The hardness of directly solving the original problems and in-processing methods are thoroughly analyzed in this paper, which provides enough rationale for the proposed method. 3. The introduction of the generalized Neyman-Pearson lemma is novel to the field of L2D, which can provide insights for the future works of this field. Weaknesses: 1. The related works are moved to the appendices, which can be confusing to readers that are new to this field. I suggest the authors use a more compact type setting in the contribution part of Section 1 and reduce/integrate some of the contributions to make more spaces. The paragraph ‘Type of Constraints’ can also be reduced given the Table 1. 2. The notations in this paper need further proofread, e.g., the realization and random variables are used improperly in line 141, line 188; the quantity $a^{i}$ in the algorithm 1 seems to be unused in this algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It is mentioned that solving (2) is NP-hard. While the proof is quite clear, I wonder if such hardness is important. In my opinion, a common practice that can avoid directly solving this problem is using a surrogate loss instead of 0-1 loss and integrating the expert coverage constraint into the loss like that in the Selective-net. Can you further make some discussions on this point? 2. If we have rather accurate class probability estimates and expert accuracy, why not use it as the solver of this problem? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first thank the reviewer for their positive feedback to our submission. Here, we respond to their questions 1. **Q: It is mentioned that solving (2) is NP-hard. While the proof is quite clear, I wonder if such hardness is important. In my opinion, a common practice that can avoid directly solving this problem is using a surrogate loss instead of 0-1 loss and integrating the expert coverage constraint into the loss like that in the Selective-net. Can you further make some discussions on this point?** A: The point of the hardness theorem is to justify the use of randomness in multi-objective setting. We should note that the practical solutions to this problem such as Selective-Net that use surrogates of the loss functions do not enjoy the guarantees for optimality. This is as opposed to the unconstrained case, in which there are continuous surrogate functions that are Fisher consistent, and therefore their minimization is equivalent to the $0-1$ loss minimization. 2. **Q: If we have rather accurate class probability estimates and expert accuracy, why not use it as the solver of this problem?** A: The class probabilities as well the expert accuracy can help us find the optimal solution when we allow for randomness and using $d$-GNP. If, however, we aim to find the optimal deterministic solution that is not the case. In Appendix E we introduce an example that the human has perfect information of the label, while the input feature has no information about the label. In this case, it would be more efficient to defer the decision to the human. However, if we bound the budget of the human to $b$ proportion of samples, the optimal deterministic solution would be to defer on inputs that are arised with sum probability of $b$ and not to defer in the other $1-b$ proportion. Therefore, on the inputs with the same conditional accuracy and the same conditional expert accuracy, we have a different decision. This is the reason to the hardness theorem, i.e., finding that set of inputs that sum up to $b$ can be a complex task. 3. **Q: The related works are moved to the appendices, which can be confusing to readers that are new to this field. I suggest the authors use a more compact type setting in the contribution part of Section 1 and reduce/integrate some of the contributions to make more spaces. The paragraph ‘Type of Constraints’ can also be reduced given the Table 1.** A: This was a decision due to the lack of space. We will make make more space for the related works in the next version of the manuscript, as suggested by the reviewer. 4. **Q: The notations in this paper need further proofread, e.g., the realization and random variables are used improperly in line 141, line 188; the quantity $a_i$ in the algorithm 1 seems to be unused in this algorithm.** As mentioned in our main rebuttal, we have made an extensive effort to proofread the paper. We have addressed the typos mentioned by the reviewer and appreciate their thorough reading of the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Your discussions on the motivation of this work and the hardness analyses have solved my concerns. I’ll keep my decision of acceptance.
Rebuttal 1: Rebuttal: We first thank all reviewers for the time they have put in writing this set of constructive reviews. In particular, we are glad that the reviewers found our method "well-written", "thoroughly analyzed", "novel to the field of L2D", "enhancing the understanding of these problems", "can be extended beyond the L2D setting", and with results that "validate the efficiency of the proposed solver". In the following, we first reiterate the main results of our paper. Next, we address main issues raised by the reviewers. ## Main Results * We find a generalization of Neyman-Pearson lemma (Thm. 4.1) using which we can solve a variety of constrained learning problems, including multi-objective L2D problems. * We formulate the multi-objective L2D problem when the constraints are corresponded to expert intervention budget, OOD detection, long-tail classification, type-$k$ error, demographic parity, equality of opportunity, and equalized odds in terms of this generalization of N-P lemma. * We find the parameters of this solution in closed-form (Thm. 4.2) * We closely analyze the sample complexity of the prediction of this solution from empirical data (Thm. 5.1 and 5.3) * We find the hardness of constrained L2D problems in lack of randomness (Thm. 3.1) * We have an impossibility result of in-processing methods (Prop F.2) * We have experiments on controlling demographic parity and equalized odds for ACS and COMPAS datasets. (in the rebuttal, we introduce a new set of experiments of Hatespeech dataset and for multi-constraint setting) ### Typos and Proofreading We have put an extensive time in proofreading the manuscript and we have further considered all the typos that are raised by the esteemed reviewers. In the new version of manuscript, $\psi_0$ corresponds to the objective, and $\psi_1, \ldots, \psi_{m+1}$ corresponds to the constraints, $f$ is the vector prediction function, $L$ is the number of classes, and $n$ is the size of validation dataset. Moreover, the capital letters are reserved for random variables, and the small letters are reserved for the scalar values. Furthermore, Algorithm 1 is modified notation-wise, and to support multiple constraints. ### Multiple-Constraint (MC) Setting 1. We first note that the main result of this paper, **Theorem 4.1**, which formulated the **Bayes optimal** solution of the constrained optimization problem (3) is designed to handle **multiple constraints simultaneously**. 2. Theorem 4.2 is merely written to simplify the search for coefficient $k$ into finding the root of a monotone function. Although in MC setting, each constraint is still monotone in terms of each coefficient $k_i$ (constraints are marginally monotone), this does not reduce the complexity, since there might be more than one solution of $k_1, \ldots, k_m$ to achieve $\delta_1, \ldots, \delta_m$. Instead, we should use search methods to find the correct coefficients for our dataset. 3. In the **experiment** section, the ACS dataset is tested for equalized odds constraints, which keeps the differences between the true positive rate of two demographic groups as well as the true negative rate of the two groups controlled. **This experiment is a multi-constraint optimization** with the results reflected in the right-most subfigure in Figure 2 of the manuscript. Further experiments are introduced in Reviewer 4QNG rebuttal. 4. **The algorithm can support the MC setting**, by just adding one line in which we search to find the values of $k_1, \ldots, k_m$ that are empirical solutions to (3) for the validation dataset. 5. The **sample complexity analysis for the constraints in Theorem 5.1** are **easily generalizable** to MC setting. If we have $m$ constraints with the empirical value of them being bounded by $\delta_1, \ldots, \delta_m$, then by $m$ times using Theorem 5.1 and using union bound on probabilities, we can show that the true constraint values are bounded by $\delta_1+d_n(\epsilon/m), \ldots, \delta_m+d_n(\epsilon/m)$, respectively with probability $1-\sum_{i=1}^m \epsilon/m = 1-\epsilon$. 6. The MC extension of **sample complexity analysis for the objective of Theorem 5.3** although not readily but **is achievable by modification of the proof** and the assumption that **true scores are available**, and the optimization is done on the empirical dataset. In the proof of this theorem, in (77), we have offered a decomposition of the objective generalization Lagrangian generalization $D_{k}(f_1, f_2)$ and constraint generalization. We can define a similar Lagrangian generalization $D_{k_1, \ldots, k_m}(f_1, f_2)$. Then, following the same steps as in (80), and the equation after, and using Lipschitzness of $t_x(k)$, we can show that this value is bounded above by $2\sum_{i=1}^m |k_i^*-\hat k_i|$ . Furthermore, in case that each constraint achieves $\delta_i$, we can simply repeat the discussion above (78) and show that the constraints generalize. Therefore, it is remained to show that $k_i^*$ and $\hat k_i$ are closed to each other. This entails that Definition 5.2 must hold for all $m$ constraints, i.e., the condition is $\big|\mathbb{E}\big[\langle f_{k}(x)-f_{k'}(x), \psi_{i}(x)\rangle\big]\big|\geq C\delta^{\gamma}$ where $||k-k'||\geq \delta$. Next, due to the generalization of the constraints, we have $\mathbb{E}[\langle f_{\hat{k}}(x), \psi_i(x)\rangle]\in [\delta_i-d_n(\epsilon), \delta_i+d_n(\epsilon)]$ with probability at least $1-\epsilon$. Therefore, we can conclude that $d_n(\epsilon)\geq C\delta^{\gamma}$, or equivalently, $D_{\hat{k}_1, \ldots, \hat{k}_m}(f_1, f_2)$ is bounded above by $2m(d_n(\epsilon)/C)^{1/\gamma}$. This completes the proof. In multi-constraint setting and if we don't have the correct scores, we should use other proof techniques. The reason is that our proof is based on closeness of $\hat{k}$ and $k^*$ and in this case, we cannot assure their closeness with each other, although the objectives might still be close to each other. Pdf: /pdf/08e2006310fdde00c9a690e0b1f3fcf2119a9333.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
LRM-Zero: Training Large Reconstruction Models with Synthesized Data
Accept (poster)
Summary: This paper proposes Zeroverse, a new dataset that is entirely synthesized for training large feed-forward 3D reconstruction models. Based on Zeroverse, LRM-Zero is trained with the network structure of GS-LRM. LRM-Zero achieves comparable performance as GS-LRM, which is trained on Objaverse. The idea of using entirely synthesized data to train large reconstruction models is meaningful for the community. However, the performance of LRM-zero is usually worse than GS-LRM, e.g., OmniObject3D. Zeroverse also introduces stability issue for the training of LRM-Zero. It is unclear whether other methods need to carefully adjust the settings of Zeroverse for training stability. Strengths: The paper is overall clearly written. The idea of using entirely synthesized data to train large reconstruction models is interesting and meaningful for the community. The commonly used Objaverse requires lots of efforts to collected and crafted by humans, and is known to have many low-quality 3D models that need to be cleaned in many recent works. With Zeroverse, both the NeRF model (NeRF-LRM-Zero) and 3DGS model (LRM-Zero) can achieve comparable performance as the corresponding models trained on Objaverse. Ablation study shows the effectiveness of different augmentations. Weaknesses: On OmniObject3D, which is not used for training by both GS-LRM and LRM-zero, the performance of LRM-zero is worse than GS-LRM, especially in PSNR. As discussed in Sec. 5.2, Zeroverse may introduce stability issue for the training of 3D reconstruction models. The authors carefully tune the settings and hyperparameters to stably train LRM-Zero. But it is unclear whether other methods need to further adjust the settings for training stability. Technical Quality: 3 Clarity: 3 Questions for Authors: Related works section is not adequate. Before NeRF is used for 3D reconstruction, 3D reconstruction is studied for decades already and multi-view stereo is the main family for 3D reconstruction. These methods [1*-6*], including some recent deep-learning based methods, should be cited and discussed since they are robust, feed-forward, able to scale to very large outdoor scenes and can handle sparse-view settings. But they cannot deal with very sparse setting or single-view setting since they are based on local feature matching, instead of global semantics like LRM. L132: In public release version, seems the texture dataset is different from that is used in the paper. Will the performance become worse with the released texture dataset? Would it be interesting to finetune LRM-Zero on a small set of high-quality captured scenes to learn semantics and improve the performance? [1*] Schönberger et al. Pixelwise view selection for unstructured multi-view stereo. ECCV 2016. [2*] Yao et al. Mvsnet: Depth inference for unstructured multi-view stereo. ECCV 2018. [3*] Gu et al. Cascade cost volume for high-resolution multi-view stereo and stereo matching. CVPR 2020. [4*] Wang et al. PatchmatchNet: Learned Multi-View Patchmatch Stereo. CVPR 2021. [5*] Ding et al. TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers. CVPR 2022. [6*] Wang et al. IterMVS: Iterative Probability Estimation for Efficient Multi-View Stereo. CVPR 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in Sec. B of supplementary. Societal impacts are discussed in Sec. C of supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our writing clarity, the novelty and the value of our method, the generalization of Zeroverse for training both NeRF and 3DGS-based LRM models, and the effectiveness of our ablation study. We will respond to the reviewer’s comments below: 1. **Results on OmniObject3D**: As shown in Tab. 6, LRM-Zero has a 3.51 PSNR gap to GS-LRM on OmniObject3D. We believe that this is because OmniObject3D objects are more realistic (real captures) and have more diverse semantics than other testing datasets, e.g. GSO and ABO. Since LRM-Zero does not learn real-world object semantics from its training data Zeroverse, it thus performs worse on OmniObject3D than GS-LRM, which learns semantics from its training data Objaverse. 2. **Training instability**: We have identified that having too much boolean difference augmented objects in the training data is the main cause of our training instability issues. As shown in experiment 13 in Tab. 3 in the rebuttal pdf, when reducing the ratio of boolean difference augmentation, we do not need to change any training hyperparameters from GS-LRM, including the 0.5 perceptual loss weight. Thus, when extending LRM-Zero/Zeroverse to other tasks, if there is no substantial change in the Zeroverse data synthesis pipeline, there should not be training instability issues as long as the ratio of boolean difference augmentation is kept low. If not, our training stability experiences still provide valuable guidance: avoid making the synthetic object too complex, e.g. with boolean difference augmentation. 3. **Related works on multi-view stereo**: Thanks for curating the list of relevant works on MVS. We will include them in the revision. --- Rebuttal 2: Comment: Thanks for the authors' reply. Some of my questions are addressed. However, I think my following questions are not answered: (1) L132: In public release version, seems the texture dataset is different from that is used in the paper. Will the performance become worse with the released texture dataset? (2) Would it be interesting to finetune LRM-Zero on a small set of high-quality captured scenes to learn semantics and improve the performance? For question (1), authors can answer it if they have any results on the public texture dataset. For question (2), though authors do not answer explicitly, I think the third row in Tab.1 of the rebuttal pdf shows that such finetuning on the Objaverse dataset with semantics may not help to improve performance on object reconstruction. --- Rebuttal Comment 2.1: Comment: Thanks for the reply and below are our response: 1. **Texture dataset**: We have not tested with the open material dataset due to the limited time and computation cost (need to re-render all objects with new materials and run the training). In our observations, LRM relies more on low-level correspondence instead of high-level semantics for reconstruction, and therefore switching to the public material dataset is not expected to have a big impact on the quality. We will also release our original training shapes and images upon approval for better reproducibility. 2. **Finetune LRM-Zero on data with semantics**: Thanks for the suggestion. We did a preliminary experiment where we fine-tune the Zeroverse model with Objaverse data. We observe that the fine-tuning leads to better results (PSNR increased from 31.2 to 32.3 on GSO) than Zeroverse-only training but can not beat training on Objaverse data. It’s possible that a more careful filtering to better bridge the distribution gap would lead to better results, and a more careful tuning of the fine-tuning parameters such as learning rate would also further boost the results. We also want to clarify the results in rebuttal Table 1: we do Zeroverse training in row 1, Objaverse training in row 2, and Zeroverse+Objaverse joint training in row 3 (i.e., not fine-tuning). We found that join training (row 3\) can improve from Zeroverse-only training (row 1\) results. However, since joint-training merges two data distributions, which is more challenging than training on either Zeroverse or Objaverse, it requires tuning our model configurations (such as a larger network capacity) and training hyperparameters that we did not perform. This explains why joint training (row 3\) performs worse than training only on Objaverse (row 2). --- Rebuttal 3: Comment: Thanks for the response. It would be nice to add the results on public texture dataset in the final version to help the readers understand the performance difference.
Summary: This paper explores an unusual route of training a large-scale 3D reconstruction model using synthetic data. It demonstrates that high-quality reconstruction can be achieved solely with synthetic procedural data, bypassing the need for real, hand-crafted 3D models, which are challenging to collect. The paper, trains two reconstruction models, one with objaverse dataset and the other with synthetic dataset which the paper proposes (Zeroverse). Test results of these models on ABO and Google Scanned Dataset shows that competitative reconstruction quality can be achieved by just synthetic data. With this the paper showcases that, global semantics of an object are not crucial for reconstruction. Consequently, similar reconstruction quality can be attained using complex geometric synthetic data with rich textures, even if they lack global semantics. Strengths: - **Novelty** - The use of synthetic data for reconstruction task is novel. The synthetic data generated by the method in this paper can also be used for data augmentation in other tasks. - **Clarity** - The paper is well written with good attention to detail. - **Results** - The approach has been appropriately validated on different datasets (Google Scanned and ABO). - **Related Work** - Comprehensive releated works are covered in the paper. Weaknesses: - The paper lacks enough technical contributions. It seems that the entire paper is about ways to create synthetic data using predefined primitives and applying augmentations over different combinations of these primitives. After this once the data is prepared, an off-the-shelf gaussian splatting based reconstruction model is trained for performing experiments. - From the results in Table 7 in supplementary, it seems that reconstruction quality significantly suffers (~2-3.5 PSNR) when the input views are very sparse (4 views) as compared to results with 8 input views as shown in Table 1. What is the reason behind this performance dip? Is it the object semantics? - As this approach requires comparatively denser views (at least 8), I am concerned about its application in learning 3D priors, as generally 3D prior models work with extremely sparse views. - The paper compares results by training models with same quantity of data for both real and synthetic datasets. It would be interesting to see if the reconstruction quality of the model trained with synthetic data improves or achieves comparable results when the dataset size is increased. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Adequate limitations are discussed by the authors in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the novelty and potential value for future works, the clarify of our writing, the comprehensiveness of our experiments and related works. We respond to the questions of the reviewer below: 1. **Technical contributions**: Building upon the prior work Xu et al. \[1\], we have added boolean difference and wireframe augmentations, which are crucial to the performance of LRM-Zero, and scaled up the data synthesis pipeline to synthesize and render up to 8M objects. Using the GS-LRM \[2\] model architecture, we also iterated on data and model design together to resolve training instability issues and achieve competitive performance with GS-LRM. We believe that such experiences are major contributions in the large scale training era. For this reason, we also share all the details about our experiences in the paper, which is valuable to the community, as recognized by reviewer **NfXg**. 2. **Sparse-view instead of dense-view setting**: This is a good question. LRM-Zero indeed has a larger performance gap to GS-LRM under the sparse-view reconstruction setting, as shown in Tab. 1 and 7\. This is because LRM-Zero does not learn real-world object semantics from Zeroverse. As a result, LRM-Zero cannot hallucinate the appearance and shape of an object where there is occlusion, which is common in the sparse-view setting. The sparse-view reconstruction task requires generation ability and knowledge of object semantics from the reconstruction model, which in turn requires training on a large dataset of realistic objects, e.g. Objaverse. LRM-Zero is more suitable for the dense-view reconstruction task. See the discussion of this limitation in Sec. B. Semantics in the Appendix. 3. **Application of a dense-view reconstructor**: As discussed above in 2., Zeroverse and LRM-Zero are not suitable for tasks that require generation ability and object semantics. To compensate for this, one can combine LRM-Zero with a generative model that possesses rich object semantics, e.g. the text-to-multi-view diffusion model from Instant3D \[1\]. Also, when working with extremely sparse input views, we can use a video diffusion model to obtain dense input views, and then do the reconstruction with LRM-Zero. 4. **Larger data size**: We show LRM-Zero trained on more Zeroverse data (2x, 4x, 10x, and 20x) in Tab. 8 in the Appendix. As discussed in Sec. E Data size in the Appendix, we do not observe visible gains in increasing the data size, even up to 20x. Furthermore, when increasing the data size without also increasing the training steps, the model cannot converge well and performance worse than when using less data (experiments 2, 9, and 10 in Tab. 8). We also perform an experiment on GS-LRM, training it on a randomly sampled subset of 200K Objaverse objects. As shown in Tab. 2 in the rebuttal pdf, GS-LRM’s performance only drops by 0.1 PSNR on GSO. This is likely because for the single-object reconstruction task, the results start to saturate with about 200K realistic data. However, we believe that the advantages of Zeroverse, i.e. the data size, texture quality, controllability are more valuable when extended to other tasks, such as scene reconstruction and relighting, where data is scarse (see Sec. B Beyond object-level reconstructions in Appendix). References \[1\] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. arXiv preprint arXiv:2311.06214, 2023 \[2\] Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. Gs-lrm: Large reconstruction model for 3d gaussian splatting, 2024\. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. After reading the rebuttal and other comments by the reviewers I would like to keep my original rating.
Summary: This paper proposes a pure synthetic training dataset named Zeroverse which is composed of synthetic data generated by simple shape primitives and textures without any real-world semantics. With the Zeroverse dataset, the authors trained a GS-LRM 3D construction model called LRM-Zero and showed that LRM-Zero achieved comparable results to GS-LRM trained on real-world data. The extensive evaluations result to prove the feasibility of using pure synthetic data for 3D reconstruction as well as the effectiveness of the proposed data augmentation techniques. Strengths: - This paper is well-written and has a clear presentation, comprehensive evaluations, and insightful discussions. - The proposed dataset, Zeroverse, has been shown to deliver great 3D reconstruction results in a sim-to-real generalization manner. This is a very valuable contribution to the community since it proposes a novel aspect of generating synthetic data for 3D reconstruction: the dataset doesn't have to contain real-world semantic objects at all to achieve good 3D reconstruction results. Also, I agree with the argument that 3D reconstruction relies heavily on local cues. - The evaluation results are comprehensive and show that LRM-Zero achieves comparable results to GS-LRM on all the real-world dataset being evaluated. Weaknesses: - Although Zeroverse and LRM-Zero show very impressive results, it is unclear to me how to improve the result further. LRM-Zero still falls behind GS-LRM, though very closely, and the last bit may be on the semantics. I don't see a clear path to further improve Zeroverse, so the real-world applicability is questionable. - Although the idea of generating a pure synthetic dataset for 3D reconstruction is novel, the dataset generation method and rendering method themselves are mainly taken from the prior work, *Deep image-based relighting from optimal sparse samples*. So there is not much novelty in the dataset generation algorithm. - Training on Zeroverse seems to introduce more instability and one needs to spend more effort on the training tuning. Also, I don't quite understand why training on perfect synthetic data can introduce instability and it could mean that there is something did not work as expected on the data generation side. Technical Quality: 4 Clarity: 4 Questions for Authors: - It would be great if the authors could show a few representative examples where GS-LRM works much better than LRM-Zero, or vice versa. This way we can understand better where the gap is from. - Does it help to use Zeroverse as pretraining and then finetuning on Objectverse? Or how about mixing them during training? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have discussed the limitations in supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for appreciating our writing quality, the value of the contribution of our work, and the comprehensiveness of our experiment results. We will address the questions of the reviewer below: 1. **Further improving LRM-Zero**: This is a good question. We have made many attempts to reduce the gap between LRM-Zero and GS-LRM, including adjusting the ratios of augmentation, training hyperparameters, and scaling up the model size, data size, and training steps. However, our best result reported in the paper still has a gap to GS-LRM, as shown in Tab. 1\. Among our attempts, increasing the data size was the most promising one, since we can synthesize unlimited amount of Zeroverse data, but it did not provide visible benefits, even at 2x or 3x model sizes, as shown in Tab. 8\. This leads to a conclusion that LRM models trained on Zeroverse has a gap to Objaverse on the single object reconstruction task, where Objaverse is sufficient. However, we believe that the advantages of Zeroverse, i.e. the data size, texture quality, controllability are more valuable when extended to other tasks, such as scene reconstruction and relighting, where data is scarce (see Sec. B Beyond object-level reconstructions in Appendix). 2. **Novelty of our method**: Building upon the prior work Xu et al. \[1\], we have added boolean difference and wireframe augmentations, which are crucial to the performance of LRM-Zero, and scaled up the data synthesis pipeline to synthesize and render up to 8M objects. We also iterated on data and model design together to resolve training instability issues and achieve competitive performance with GS-LRM. We believe that such experiences are major contributions in the large scale training era. For this reason, we also share all the details about our experiences in the paper, which is valuable to the community, as recognized by reviewer **NfXg**. 3. **Training instability**: There are two reasons behind LRM-Zero’s training instability: 1. In order to make our synthesized Zeroverse dataset generalize to real 3D data, it needs to be a superset of real-world objects. To achieve this, we need to make Zeroverse harder than Objaverse. GS-LRM, which does not have structured outputs like NeRF-based LRM, is itself prone to training instability issues. The benefits of making Zeroverse harder than Objaverse might not be obvious for the single object reconstruction task, but can open opportunities for other tasks where data is scarce. 2. Having too much boolean difference augmented objects in the training data is the main cause of our training instability issues. As shown in experiment 13 in Tab. 3 in the rebuttal pdf, when reducing the ratio of data with boolean difference augmentation, we don’t have training instability issues with the default training hyperparameters from GS-LRM, including the 0.5 perceptual loss weight. With the above reasons, we think that the instability is explainable within the current presented framework. 4. **Qualitative comparison**: In Fig. 8 in our supplementary material, we have included 6 qualitative comparison results from LRM-Zero and GS-LRM on GSO and ABO. As suggested by the reviewer, we have included additional qualitative comparison results on our anonymous website (please check https://lrmzero2024.github.io/page_lrm_zero_vs_gs_lrm.html). The 4th to last row is where LRM-Zero outperforms GS-LRM on objects with detailed texture, since Zeroverse objects use a high-quality texture dataset. The last three rows are from Fig. 8 in our supplementary material showing where LRM-Zero performs worse than GS-LRM when there is invisible region in the input views (3rd to last row) and where they performs similarly when the input views have good coverage (last two rows). The remaining results are challenging samples that LRM-Zero and GS-LRM perform similarly or GS-LRM performs slightly better. 5. **Training on both Objaverse and Zeroverse**: As suggested by the reviewer, we conduct experiment 3 in Tab. 1 in the rebuttal pdf. It shows that training on both Objaverse and Zeroverse performs better than Zeroverse only, but worse than Objaverse only. This is likely due to the reasons discussed in 1\. above: the size of Objaverse is adequate for the single object reconstruction task, and the advantages of Zeroverse are not exploited in this task but has high potential in other scenarios. References \[1\] Zexiang Xu, Kalyan Sunkavalli, Sunil Hadap, and Ravi Ramamoorthi. Deep image-based relighting from optimal sparse samples. ACM Trans. Graph., 37(4), 2018\. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. After reading the responses, I decided to keep my rating as it would be insightful work for the community. --- Reply to Comment 1.1.1: Title: Webpage is removed Comment: Dear Reviewers, Per the request of the Area Chairs, we have taken town the webpage for GS-LRM and LRM-Zero comparisons. We will add these comparisons to our final version. Thanks, Submission49 Authors
Summary: The paper explores the feasibility of using procedurally synthesized datasets of 3D shapes to optimize existing large reconstruction models (LRMs), that procure 3D shapes in the form of NeRFs. The data is constructed by 1) sampling several shapes from a set of parametric primitives (cubes, spheres, tori, etc.) with random positions, scales, and rotations; 2) applying randomly sampled textures (from a preselected set of textures); 3) possible application of additional augmentations (changing the curvature of objects, and allowing concave and thin structures). Numerous experiments on existing Objaverse and proposed Zeroverse datasets show that given the correct degrees of the proposed augmentations, existing LRMs can be trained with the proposed synthetic data to produce comparable although slightly worse results, which suggests that existing scarcity of semantically rich 3D data might not be the main limiting factor for the development of reconstruction models since this data is not necessarily needed. Strengths: * Since the experiments are very computationally demanding, the presented results are valuable (reproducing them is computationally prohibitive for most academic labs). * A lot of ablation experiments explore the effects of the proposed augmentations suggesting the existence of an optimal amount of 3D shape augmentation for proper training of LRMs. * The paper is well-written and easy to follow. Weaknesses: * In terms of the metrics, models trained with the proposed synthetic data are worse but the authors claim that qualitatively it is hard to see the difference. At the same time, there are almost no qualitative comparisons of reconstructed objects from models trained on real/synthetic data in the main text, supplementary, or on the provided anonymous webpage. It would be better to show these and let the readers conclude if they are close. * One of the main ideas of the paper that the LRMs do not require semantically rich realistic data for training is somewhat successfully challenged in the same paper, given that the LRMs trained for a longer time improve even further, increasing the gap between the proposed synthetic and conventional data. * Another issue, mentioned by the authors is that the amount of the used augmentations for optimal performance may depend on the test dataset, so the domain gap between the synthetic data and considered test sets may differ, which limits, to some extent, the generalization ability of the proposed models. Technical Quality: 3 Clarity: 3 Questions for Authors: Have the authors tried to perform any similar experiments for significantly different data synthesis pipelines? For example, the space of primitives can be extended with existing non-primitive shapes or with shapes generated by existing shape generation methods. If the semantics do not matter, the primitive set should not be crucially important, but at the same time non-primitive shapes used as base shapes will contain some relevant data patterns needed to capture high-frequency details in other non-primitive shapes from the test sets. Maybe this way, the authors would not need to find a set of perfect parameters for the augmentations to make the method work. See limitations. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While I do not think any ethics review is necessary for this paper, I think it may make sense to disclose the full computational budget for this project (for example in GPU hours spent) to warn any potential readers of the costs of such works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the value of our work to the community, the comprehensiveness of our ablation experiments, and our writing clarity. We reply to the questions from the reviewer as follows: 1. **Qualitative comparison results**: In Fig. 8 in our supplementary material, we have included 6 qualitative comparison results from LRM-Zero and GS-LRM on GSO and ABO. As suggested by the reviewer, we have included additional qualitative comparison results on our anonymous wepbage (please check https://lrmzero2024.github.io/page_lrm_zero_vs_gs_lrm.html). The 4th to last row is where LRM-Zero outperforms GS-LRM on objects with detailed texture, since Zeroverse objects use a high-quality texture dataset. The last three rows are from Fig. 8 in our supplementary material showing where LRM-Zero performs worse than GS-LRM when there is invisible region in the input views (3rd to last row) and where they performs similarly when the input views have good coverage (last two rows). The remaining results are challenging samples that LRM-Zero and GS-LRM perform similarly or GS-LRM performs slightly better. 2. **LRM-Zero vs GS-LRM at longer training steps**: it is true that the performance gap between LRM-Zero and GS-LRM is enlarged as they are trained for a longer time, as shown in table 8 and 9\. As discussed in Appendix Sec. E, we believe that many of our early exploration on scaling experiments, including the larger performance gap at 2x training steps, suggest that the model converges slower on Zeroverse than on Objaverse. However, we believe that LRM-Zero's competitive performance is still impressive and that this does not undermine the potential of extending this work to other 3D tasks where data is more scarce (see Sec. B Beyond object-level reconstructions). 3. **Generalization of our augmentation configuration**: we did not optimize the Zeroverse augmentation configuration for the testing set metric numbers. Instead, Zeroverse aims to reduce the structural gap by introducing augmentations that allows for better coverage of common shape of real-world objects, e.g. boolean augmentation for concave shapes and wireframe augmentation for thin structure shapes, as detailed in Sec. 3.2. As shown in Table 4/5, some augmentations do not improve numerical metrics (e.g., PSNR) but we found that they are essential for human’s visual alignment. Given these philosophies behind, we think that our method should be generalizable and not just overfit the testing set (also evident in Table 9/Fig 5 with diverse testing data; and Table 5 with different model architectures). 4. **Extending the primitive set**: Thanks for the suggestion. This is an interesting idea that would ideally reduce the training cost on carefully ablating the shape augmentation design. However, since this requires lots of workload on both data synthesis and model training, we could not test this idea during the rebuttal period. This is a great idea for future work that we will consider. 5. **Full computation budget**: Experiments reported in Tab. 1, 2, 3, and 4 take 96 A100 days (we will correct Sec 4). The scaling experiments in Tab. 8 range from 96 to 384 A100 days. The total exploration, including 33 default-scale experiments and 31 scaling-up experiments, is approximately 9120 A100 days. The cost of Zeroverse data generation is approximately 3200 CPU node days. Note that the final model does not cost much, but the data exploration did. We recognized these hidden budgets. Thus, we decided to reveal all technical details regarding stability and release the data creation code. We hope that this can largely reduce the cost in reproducing or following our works. --- Rebuttal Comment 1.1: Title: Webpage is removed Comment: Dear Reviewers, Per the request of the Area Chairs, we have taken town the webpage for GS-LRM and LRM-Zero comparisons. We will add these comparisons to our final version. Thanks, Submission49 Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful feedback and their recognition of our clear writing, comprehensive experiments, and the novelty and value of our proposed method (i.e. using synthesized data to train a LRM). We will address the missing citations mentioned by **3hvP** in the revision. Please see our responses to the questions from each reviewer below. Pdf: /pdf/b40cf75a333160af686c37f04fdf36da5c22561e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning
Accept (spotlight)
Summary: The article introduces an extensive empirical analysis of plasticity loss in on-policy reinforcement learning (RL), focusing on Proximal Policy Optimization (PPO). The main findings include that plasticity loss is also present in on-policy RL and that “regenerative” methods that regularly grow network parameters work well in this setting. The empirical evaluation consists of ProcGen and ALE benchmarks. Additional environmental distribution shifts were introduced to study the plasticity loss root cause thoroughly. 8 previously introduced methods to counteract plasticity loss in supervised and off-policy RL are examined, namely L2 Norm, LayerNorm, CReLU activations, Regenerative Regularization, Resetting final layer, Shrink+Perturb, Plasticity Injection, and ReDo. The work presents and analyzes metrics, such as weight magnitude, number of dead neurons, and gradient norm, previously related to the loss of plasticity. Strengths: The strong part of the work is the extensive evaluation setup of on-policy RL. The claims are well supported by experiments. The paper also interestingly indicates that the problem of warm-start might be partially mitigated by “regenerative” methods that target weight magnitude growth. This has the downstream effect of minimizing the number of dead units in the network previously connected to the plasticity loss. Weaknesses: While the experimental setup is comprehensive, the findings could benefit from deeper analysis. The article does not present new insights into plasticity loss nor propose novel methods to mitigate it, primarily noting that the problem also exists in on-policy RL. A more thorough exploration of the interactions between methods and their impact on plasticity would significantly enhance the work. In particular, it would be beneficial to understand the differences in plasticity loss dynamics between off-policy and on-policy methods. Additionally, some of the figures are overly cluttered, making them difficult to interpret quickly. To improve clarity, consider summarizing the results from Figures 4 and 5 into a single scalar per method and moving the detailed figures to the appendix. I also suggest using the rliable [1] library for more effective results aggregation. [1] https://github.com/google-research/rliable Technical Quality: 2 Clarity: 3 Questions for Authors: Regarding deeper insights: 1. In off-policy methods, it has been shown that the critic network mainly loses plasticity. Can the authors comment on their on-policy experiments through this lens? Specifically, is resetting only the actor's or only the critic's head more or less beneficial? What role does the common backbone play in this problem? If you separate the actor and critic networks, which can overall impact performance [1], will the conclusions remain similar? 2. What happens if we increase the number of epochs? If we combine this with the examined methods, how will it impact the final performance? Does increasing the number of epochs hurt performance due to more and more outdated samples, or is it primarily due to plasticity loss? [1] Andrychowicz, M., Raichuk, A., Stańczyk, P., Orsini, M., Girgin, S., Marinier, R., ... & Bachem, O. (2020). What matters in on-policy reinforcement learning? A large-scale empirical study. arXiv preprint arXiv:2006.05990. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and provide helpful feedback. Our intention with this work was to focus on the on-policy setting, given that the majority of the work in the area of plasticity loss has focused exclusively on the off-policy setting. Our decision not to include off-policy experiments was based on the fact that we believed there was ample evidence characterizing the problem and various solutions already existing in the literature. We agree that a direct comparison utilizing a single codebase between the two may have presented additional value, but we chose to instead allocate resources towards extensive e experiments specifically in the on-policy setting. We agree that the information in Figures 4 and 5 could be presented in a more interpretable way. Our goal is to allow readers to see the entire trends for each method, but given the number of methods and task conditions, this presents a challenge to represent succinctly. Thank you for pointing out the rliable library. We have used it to generate a potential replacement for Figure 4, which you can find [here](https://ibb.co/HxJcVmh). If it seems like an improvement, we will also replace Figure 5 using the same layout, and move the current figures to the appendix. In our experiments which involved resetting the “final layer”, we reset both the critic and policy heads. Preliminary experiments (not reported in the paper) suggested that resetting both was superior to resetting either individually. We believe that this differs from the off-policy setting due to the critical role of the policy in the collection of samples. Whereas in off-policy learning there is a stable buffer of data to draw on, in the on-policy setting the data distribution used for training is much more sensitive to the policy. As such, resetting the policy head increases the overall policy entropy, thus ensuring that the data distribution used for training can remain diverse enough to prevent policy collapse. That said, the majority of methods considered here act on the entire network, rather than just the final layers of the network. As such, we believe that our results are largely robust to this choice. Relatedly, because we were considering discrete action policies, we utilized a shared encoder network, as this is the standard set-up in the PPO algorithm. The total number of epochs was chosen such that convergence would always be reached within a single round of the experiment. As such, running experiments longer did not result in measurably different behavior than what has been discussed in this paper, and we did not find policy collapse or degradation within a single round. We believe that this suggests that the effects we observe in our experiments are indeed the result of plasticity loss. We will state this clearly in the camera ready. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my comments and taking into account my suggestions regarding the charts. I believe that the new form of the chart facilitates comparison. Still, I also understand the authors' intentions regarding the original version of the chart, so I leave the decisions regarding the form of chart 5 to the authors. But if they decide to keep the original version, I suggest the new chart be available to the reader in the appendix. I am sure the article fills the gap in a very reliable way regarding the loss of flexibility in on-policy algorithms. Although I think it is very interesting to draw common conclusions for the on- and off-policy methods, I understand that it is beyond the scope of this article.
Summary: This work studies the loss of plasticity phenomenon in the on-policy continual deep RL setting, where previous work has focused on studying and identifying mitigation strategies for the off-policy RL or supervised learning settings. They conduct experiments over a variety of settings (gridworld, CoinRun, and Montezuma’s revenge) over different variants of environment distribution shift, demonstrating that loss of plasticity still occurs in the on-policy regime. They perform a further analysis of the correlations in both train and test performance with various quantities studied in previous works, across several mitigation strategies previously presented. Based on this, they provide several hypotheses for what properties are needed for successful intervention at addressing plasticity loss as well as ensuring good generalization performance. Strengths: - Paper is overall well-written and particularly introduces the loss of plasticity phenomenon/warm start problem well. Empirical investigations of loss of plasticity for continual RL are important for the community. - Experiments are comprehensive and presented clearly, including domains of varied complexity. The authors took care in implementing several intervention methods that have arose previously in the literature, and report correlation results comprehensively. - There are new insights about previously successful intervention methods not working in their setting, such as concatenated ReLUs and plasticity injection. From their correlation results, they provide connections between their most successful methods (regularization-based) to the greatest predictor of plasticity loss (weight magnitude, and surprisingly not gradient magnitude or magnitude of weight change). - There are interesting next directions exploring mechanisms of plasticity loss and using these metrics as an indicator for what is occurring in the optimization landscape for continual reinforcement learning. Weaknesses: - The graphs are difficult to read, given the number of interventions and the colors chosen. It’s difficult for me to see, eg. how much combining soft shrink + perturb with LN improves upon only doing one or the other. - In Appendix D, it might be useful to highlight the significantly improved intervention methods for each environment and shift condition, and some written insight about the table results. Minor typos: - Line 24: arrive -> arrives - Line 142: withing -> within Technical Quality: 3 Clarity: 4 Questions for Authors: - What is the reason for adding LN to soft shrink and perturb, versus original shrink and perturb? It seems that shrink and perturb seems to be doing better than the soft variant for most of your results, and I would guess that the procedure being applied after each step of gradient descent instead of longer intervals could hurt performance. - Do you have any insights why gradient magnitude/weight change magnitude did not have a significant correlation with plasticity or generalization in your setting, in contrast to previous work? - Your Montezuma's Revenge experiments only analyze two intervention methods. How should I interpret the reward numbers in Figure 6? Do you have similar quantities (weight magnitude, gradient norm, dead units, etc.) for these methods? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Authors acknowledge their correlation plots do not attribute causality. Experiments are comprehensive in testing intervention methods except Montezuma's Revenge, which the authors address. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper. Both you and another reviewer have brought up the readability of our graphs. We are working to improve this for the camera-ready version. Reviewer NjbE suggested utilizing the “rliable” library to generate plots. We took their suggestion and generated a potential new version of Figure 4 using the library, which you can see [here](https://ibb.co/HxJcVmh). Please let us know if it improves readability, and if so we will replace both Figure 4 and 5 with this new layout and move the current figures to the appendix. Thank you for pointing out the typographical errors. We have addressed them in our current draft of the manuscript. The issue with the shrink-perturb method is that it is performed only occasionally, rather than at every step of SGD. This is fine in the original context under which it was introduced, where the problem setting enjoys a clear indicator of when it should be applied. In non-contrived RL settings, however, the environment might not provide this information—an implementation of shrink-perturb would either require us to either set the intervention cadance as a hyperparameter or to use additional machinery to detect appropriate points for application. We refer to methods that are applied at every step of SGD (like soft shrink perturb) as continuous, and we argue that this property is desirable in realistic RL settings. It is worth clarifying our results concerning predictive measures for plasticity loss. We find that weight magnitude and dead unit count, when considered independently, are the two significantly predictive metrics of plasticity loss. Although we find that by itself gradient magnitude is not predictive, it becomes predictive as part of a Generalized Linear Model (GLM) which uses all the measures as predictors. This suggests that gradient magnitude is able to account for additional variance in plasticity loss which is not accounted for by weight magnitude, suggesting that it may contribute to some aspect of plasticity loss. In contrast, dead unit count is no longer predictive in the GLM due to its high correlation with weight magnitude, which is the more fundamental variable (because dead unit count is a function of weight magnitude). We believe our results deviate from previous findings largely because of the breadth of settings we considered. Rather than studying only a single task, type of distribution shift, and model architecture, which might provide only narrow insight into the plasticity loss phenomenon, we explore several. Of course, our study also deviates from prior work in that we also consider only on-policy agents, and the underlying dynamics of plasticity loss in the two settings may differ. We considered Montezuma's Revenge to be a more "natural" RL task, presented to show a use for what we have learned from environments studied earlier in the paper, which were chosen specifically to probe the plasticity loss pathology. As such, we unfortunately did not log the same diagnostic information for Montazuma's Revenge as we did for earlier tasks. Fitting these agents takes more time than is afforded in the NeurIPS discussion period, but we will collect this information for the camera ready copy. The reward on the y-axis of Figure 6 is the agent's score in the game, reflecting how many objects it has collected, new rooms it has entered, and enemies it has vanquished. Our code for this is only a small modification to a popular PyTorch RND implementation. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my comments and questions. I personally find the new figure easier to parse. The investigation across several tasks and distribution shifts is certainly interesting, and I would be curious to see if there's a connection between different shifts and the changing underlying loss landscape, as the authors have mentioned as future work.
Summary: This paper studies the problem of plasticity loss in on-policy deep RL. The study is quite wide as it covers many environments, types of non-stationarities, and solution methods. The first main result is the demonstration of plasticity loss in various settings. The second main result is the analysis of existing methods in the problems. The results show that regularization methods effectively mitigate plasticity loss in this setting. Strengths: This is the first study that extensively studies plasticity loss in on-policy RL. Although some prior work has shown plasticity loss in on-policy RL, this study is much more extensive. The paper is generally well-written. The paper claims to be an extensive study of plasticity loss in on-policy learning, and it supports that claim by evaluating a wide range of methods in a wide range of environments. There are some minor weaknesses, but the overall paper is good and would be a good contribution to the plasticity loss literature. I recommend accepting the paper. Weaknesses: The statistical significance of the results is unclear. How many runs were performed for all experiments? 5? And what do the shaded regions represent in the figures? All the figure captions should contain details about the number of runs and shaded regions. Technical Quality: 3 Clarity: 3 Questions for Authors: What values of $\beta$ were used for Adam? Dohare et al. found that equal $\beta$s in PPO (particularly when used with ReLUs) significantly improve the performance when evaluated over a long time horizon, as may be the case in your experiments. Dohare et al. Overcoming policy collapse in Deep RL. EWRL 2023. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate you taking the time to read and review our paper. We used five replicates per experiment unless indicated otherwise, and the shaded regions of the graphs correspond to standard error. We will revise all figure captions to make both of these points explicit. We used a learning rate of 5e-4 for all experiments, as displayed in Appendix C. We further use the default PyTorch values for Adam's $\beta_1$ and $\beta_2$, and we will update the section to reflect this detail. We agree that it is possible that using the “Non-stationary Adam” method from Dohare et al. would improve performance on the tasks considered here. That said, we do not find evidence of policy degradation within-rounds (see for example Figure 2 upper-left in contrast to Figure 1 of Dohare et al.). Note that we chose the number of training epochs to reflect the asymptotic behavior of the policy—longer-running experiments, which were used to select this parameter, did not show evidence of policy collapse when the task/environment distribution is held fixed. Regardless, we will include a discussion “Non-stationary Adam” in our related work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It answers all my questions. I'm happy to suggest accepting this paper.
null
null
Rebuttal 1: Rebuttal: To all reviewers: We thank the reviewers for their time and insights. Two reviewers have suggested clarity improvements to Figures 4 and 5. We have updated Figure 4 using the rliable library suggested by Reviewer NjbE (https://ibb.co/HxJcVmh). If reviewers agree that it is an improvement, we will also update Figure 5 and similar Appendix figures using the same layout. We respond to individual concerns below.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Beyond Slow Signs in High-fidelity Model Extraction
Accept (poster)
Summary: This paper proposes a unified approach to model parameter extraction by combining two prior works and integrating efficiency optimization. The authors find out that the neuron wiggle sign extraction method proposed in the previous work addresses the bottleneck identified by Calini's paper. Furthermore, the authors present methods to identify hard-to-extract neurons and parallelize the extraction process. Empirical evaluation of different benchmarks shows that the proposed unified model extraction method achieves speed up compared to prior works. Strengths: This paper has the following strengths: + The authors make an important observation that the neural wiggle method alleviates the sign extraction bottleneck in the prior work. + The authors show a unified model extraction method that combines sign extraction and model signature extraction built on top of two previous works. + The authors propose additional optimization methods to speed up the unified model extraction process. Weaknesses: This paper has the following weak points: - There is no explicit description of the attack model/threat model used in this work; - While the proposed unified model extraction method shows speedup compared to the prior works and the results are successful on standardly trained ML models, the benchmarks used in this paper are still small-scale. This makes the intellectual property value of the evaluated models questionable since they might be too simple and do not carry financial value. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weak points above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please see the weak points above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > There is no explicit description of the attack model/threat model used in this work Currently, we have discussed the threat model in the background section, in part because it is the same as in the related work, namely, Carlini et al, Jagielski et al. Rolnick and Körding and Canales-Martinez et al.. Following feedback, we will add an explicit section on it. The threat model is an adversary aiming to replicate a model’s predictive capabilities from a black box setting where only the input and output are known to the adversary. The goal for the adversary is high fidelity, in other words the extracted model has to be as close to the original as possible. As described in Section 2.2 in Carlini et al. we assume following knowledge from the attacker: architecture knowledge, complete outputs, precise computations, scalar outputs and RELU activations. As described in Section 3 of Jagielski et al. the confidentiality of a model is compromised with a model extraction attack. Adversaries target the model for intellectual property and their goal is to extract a model with equivalent capabilities to the target model. This threat model is applicable to many machine learning models that are currently query accessible as ML-as-a-service, e.g. in the medical sector. Understanding limits of model extraction here is important, since developing these models requires vast datasets, extensive computational resources and expert knowledge, making them very valuable. > While the proposed unified model extraction method shows speedup compared to the prior works and the results are successful on standardly trained ML models, the benchmarks used in this paper are still small-scale. This makes the intellectual property value of the evaluated models questionable since they might be too simple and do not carry financial value. It is true that the attack is limited to models with limited depth and number of neurons per layer. However, we want to highlight that our paper uses the largest models compared to prior work. Our work further highlights that to scale to larger models, adversaries would need to improve on weight extraction methodology, since previously identified bottleneck is no longer relevant. Also, there exist small models that are valuable in this scale. These types of fully connected DNNs are used in practice e.g. in biology, healthcare, physics. For example, Smith et al. use a Multi Layer Perceptron with one hidden layer of size 500 for the classification of malignant childhood cerebellar tumors [1]. In a recent breakthrough advancing the field of nuclear fusion for sustainable energy, Degrave et al. [2] use such a fully connected DNN with 3 hidden layers of size 256 for the control policy of the tokamak plasmas. This is close to the size of models we tested and our attack improvements make stealing such a model feasible within a reasonable amount of time. [1] K. S. Smith et al., “Unified rhombic lip origins of group 3 and group 4 medulloblastoma,” Nature, vol. 609, no. 7929, pp. 1012–1020, Sep. 2022 [2] J. Degrave, F. Felici, J. Buchli, M. Neunert, B. Tracey, F. Carpanese, T. Ewalds, R. Hafner, A. Abdolmaleki, D. de las Casas, C. Donner, L. Fritz, C. Galperti, A. Huber, J. Keeling, M. Tsimpoukelli, J. Kay, A. Merle, J.-M. Moret, S. Noury, F. Pesamosca, D. Pfau, O. Sauter, C. Sommariva, S. Coda, B. Duval, A. Fasoli, P. Kohli, K. Kavukcuoglu, D. Hassabis, and M. Riedmiller, "Magnetic control of tokamak plasmas through deep reinforcement learning," Nature, vol. 602, no. 7897, pp. 414–419, Feb. 2022
Summary: This paper explores advanced techniques for high-fidelity model extraction that go beyond simply observing "slow signs" like model outputs or gradients. The authors evaluate and enhance existing parameter extraction methods, particularly those developed by Carlini et al. and further improved by Canales-Martínez et al., applying them to models trained on standard benchmarks. Key contributions include: 1. A unified codebase integrating previous methods 2. Optimizations to improve efficiency in extracting weight signs 3. Identification of weight extraction, rather than weight sign extraction, as the critical bottleneck 4. Significant improvements in extraction speed (e.g., extracting a 16,721 parameter MNIST model in 98 minutes vs. 150+ minutes previously) 5. Proposals for more robust benchmarking of model extraction attacks Strengths: - Comprehensive analysis and improvement of existing techniques - Practical advancements in extraction efficiency - Important insights into the relative difficulty of extracting different model components - Contribution to standardizing evaluation methods in the field Weaknesses: - The scalability of the approach to larger, more complex models is not fully explored - Potential ethical implications of improving model extraction techniques are not thoroughly discussed - The paper could benefit from more discussion on potential defenses against these improved extraction methods Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Ethical Considerations: Given the potential misuse of advanced model extraction techniques, could you elaborate on the ethical implications of your work? What safeguards or guidelines do you propose to mitigate potential negative impacts? Defense Mechanisms: While your paper focuses on improving extraction techniques, have you explored any potential defenses against these advanced model extraction attacks? If so, what were your findings, and if not, what are your thoughts on possible defensive strategies? 2. Defense Mechanisms: While your paper focuses on improving extraction techniques, have you explored any potential defenses against these advanced model extraction attacks? If so, what were your findings, and if not, what are your thoughts on possible defensive strategies? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The experiments are only conducted the ViT-B/32 CLIP architecture. It should be at least mentioned that in future it makes sense to justify on more architectures. The authors lowered this limitation by doing evaluation on several datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Scalability of approach to larger, more complex models is not fully explored. With Table 2 we actually explore the largest models amongst all of the related work. Unfortunately the model extraction becomes very hard for deeper layers in for example an MNIST model with 8 hidden layers, so we had to stop attempting extraction for most of them. For models with more neurons per layer extraction also becomes increasingly harder and we were not able to finish extraction within 32 hours for layer size 128 MNIST models. In fact, our paper highlights that in order to make extraction work for even larger models we need further improvements to the weight extraction process. > Ethical Considerations: Given the potential misuse of advanced model extraction techniques, could you elaborate on the ethical implications of your work? What safeguards or guidelines do you propose to mitigate potential negative impacts? Ethically it is good for people to know about this vulnerability so they can (1) adjust their mental models of what adversaries are capable of, (2) adjust their machine learning models to not fit into the extraction criteria, and (3) to implement possible defenses. We will add a section to the paper that further goes into details on the above. > Defense Mechanisms: While your paper focuses on improving extraction techniques, have you explored any potential defenses against these advanced model extraction attacks? If so, what were your findings, and if not, what are your thoughts on possible defensive strategies? Current cryptanalytic model extraction only works for relatively small models with up to 3 hidden layers or so in practice, since extraction of deeper layers becomes harder since finding inputs that activate and deactivate a neuron in a later layer is harder due to deactivations of neurons in prior layers. So, larger models are currently not at risk. Furthermore, this extraction does not work with integer quantized weights since triggering an individual neuron is often not possible in this setting. Hence, smaller models quantized to integer weights can also not be extracted. This in no way provides any guarantees against other types of extraction as is mentioned in the background section e.g. with model distillation. **Query Accessibility:** Furthermore, for remaining models, query accessibility is needed. Millions of queries must be performed in order to extract a model. Companies could ratelimit the queries, so that extraction becomes harder. Even if adversaries were to be querying from a huge number of accounts, extraction would become a lot slower and easier to detect due to a huge increase in queries compared to usual operations, in line with the stateful detection work for example from Chen et al. [1]. **Attack Prevention with Noise:** Adding noise to the output to limit the accuracy of the output should significantly limit the accuracy of cryptanalytic model extraction. However, perhaps instead one could then query ten times for each input and then take the average as the value to try signature extraction with. Ultimately, the minimum magnitude of the noise that would prevent the attack would depend on what precision extraction still works. Currently, signature extraction works for weights of float32 precision but could perhaps be tuned to work for float16 precision. **Attack Prevention with Parameter Variance:** We have found that models with minimized parameter variance across a layer are more resilient against this type of extraction attack because it is harder to distinguish between neurons and hence harder to find each neuron’s distinctive features. Hence, one could scale down parameters of a model or confine parameters of each layer in a different area of the parameter space so that extraction becomes increasingly difficult. Hence, our paper currently discusses high variance in performance and highlights the importance of the hyperparameters involved for a fair evaluation. We will add a subsection discussing these mitigations. [1] S. Chen, N. Carlini, and D. Wagner, “Stateful Detection of Black-Box Adversarial Attacks,” Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence, pp. 30–39, Oct. 2020 > The experiments are only conducted the ViT-B/32 CLIP architecture. It should be at least mentioned that in future it makes sense to justify on more architectures. The authors lowered this limitation by doing evaluation on several datasets. We are not sure what this comment refers to. Our paper only looks at MLPs and ViTs are not mentioned in the manuscript at all. The datasets considered are standard for this branch of literature. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I will leave my review unchanged. Note: I understand that sharing code in this field is uncommon. However, I believe that having a unified codebase could advance this research toward more realistic models and real-world scenarios. --- Reply to Comment 1.1.1: Comment: Thank you very much! We also do believe sharing code is important, it all lives here: https://anonymous.4open.science/r/anonymized-cryptanalytical-extraction-main-9335/README.md
Summary: This paper proposes to perform model extraction attacks against deep neural networks. First, the authors combine two previous proposed methods into a uniformed code-base. Then, they optimize the sign extraction strategy to achieve a speed up in model extraction. Their proposed method can be used for larger models (e.g., a model with 16721 parameters and two hidden layers). Strengths: - An important problem. - The results show some efficiency of the proposed method. Weaknesses: - This paper is hard to understand. The main novelty and the insight behind it are unclear to me. - If Canales-Martinez’s work already eliminates the sign extraction bottleneck (Line 47), why the authors still propose to optimize this process? - What is the design motivation? Why the authors could do better? - Carlini’s work is not the only crypt-analytical work, why not compare to other works (e.g., reference 7)? - Too much background in the Methodology part. For example, the whole Sec. 3.1 is about some background. - Section 3.4 is hard to follow. - The cost of parallel computing is exponential. Technical Quality: 2 Clarity: 2 Questions for Authors: See above. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Too much background in methodology, i.e. whole of Sec. 3.1 Beginning from line 201 “Confidence in Practice” is our contribution. We will mark it more explicitly to differentiate clearer between prior contribution and our contribution. > Section 3.4 hard to follow We are very sorry to hear that the reviewer found Section 3.4 hard to follow, could you possibly clarify which bits were confusing? Reviewer yp6m scored the paper good for presentation and r46i even complimented the paper for distilling complex attacks in a readable way, and further called the paper very well written. > Cost of parallel computing is exponential Although the worst case complexity may be exponential, we find that in practice such worst case branching never happens, since detection for erroneous branching gets detected within the next layer in a handful of queries as we note in Section 3.2. > If Canales-Martinez’s work already eliminates the sign extraction bottleneck (Line 47), why the authors still propose to optimize this process? We did not know that Canales-Martinez et al.’s work already eliminated the sign extraction bottleneck because they did not test the whole extraction and the numbers from Carlini’s codebase and Canales-Martinez’ codebase were not comparable at first. Hence, the paper is written in a way to highlight that the previously identified bottleneck is no longer relevant and suggests that future improvements need to be made to the weight extraction process. What is more, it argues for a more thorough and detailed attack variance performance. > What is the design motivation? Why the authors could do better? We identified that some neurons are harder to extract than others and performing the neuron wiggle method on them for more iterations does not help extract more correct signs. So, overall we minimized the neuron wiggle extraction time for easy to extract neurons and performed sign extraction for hard to extract neurons later in the pipeline. In particular, we can identify hard to extract neurons and can identify through the signature and sign extraction in the next layer if the sign extraction for a neuron was wrong. In this way we also add robustness in extraction which was missing in Canales-Martinez et al.’s work. Additionally we identified that the precision improvement process was taking a long time especially for MNIST models. > Carlini’s work is not the only crypt-analytical work, why not compare to other works (e.g., reference 7)? We do not for example compare to reference 7, Rolnick and Körding (2020), because Carlini et al. already compare their work against them in their Table 1. Carlini et al. mentions that their work builds upon previous works including Milli et al. (2019), Jagielski et al. (2020), and Rolnick and Körding (2020). After developing their attack Carlini et al. compare against Rolnick and Körding (2020) to discover that Rolnick and Körding’s attack is significantly more query intensive, especially for extraction of more than the first two layers. Ultimately Carlini et al. claim that their method is 100+ times more accurate, query intense and can handle larger models than Rolnick and Körding (2020). --- Rebuttal Comment 1.1: Comment: Dear Reviewer PoZg, Thank you so much for your insightful feedback on our paper! We hope you found our responses useful. As the discussion period is coming to a close, please feel free to ask any remaining questions you may have. We're happy to provide further clarification! --- Rebuttal Comment 1.2: Title: Thanks for the response Comment: 1. Section 3.1's majority of contents are still backgrounds (about two pages, which occupies about half of the methodology section). 2. Section 3.4 is unclear about the acceleration improvements that have been made. This part seems to focus on engineering efforts rather than systematic methodologies. 3. According to the introduction and the methodology, the paper focuses most of its effort on sign extraction. Based on the rebuttal, I understand this paper tries to convince others that sign extraction is not a bottleneck. Is that correct? If so, why do the authors still spend effort on improving sign extraction (see abstract lines 12-14)? 4. The authors stated "Hence, the paper is written in a way to highlight that the previously identified bottleneck is no longer relevant and suggests that future improvements need to be made to the weight extraction process." **However, according to the rebuttal, the paper's main contribution is to optimize the sign extraction attack instead of the weight extraction process.** "We identified that some neurons are harder to extract than others and performing the neuron wiggle method on them for more iterations does not help extract more correct signs. So, overall we minimized the neuron wiggle extraction time for easy to extract neurons and performed sign extraction for hard to extract neurons later in the pipeline." This is contradictory. Thanks again for the authors's response. However, my questions are not well-addressed. I decide to keep my score. --- Reply to Comment 1.2.1: Title: Thank you for your response Comment: 1. The first part of Section 3.1 is indeed about one and a half pages of explaining previous works’ methodologies but they were necessary to make our contribution understandable. From reviewer r46i we received positive comments regarding this: “The paper is very well-written and does a great job of summarizing the fairly complex attacks that it builds on. Honestly this paper is one of the best I've ever seen for how well it values the reader's time and presents the relevant background and contributions.” 2. Section 3.4 is a mix of methodological improvements and engineering efforts. We could not go into detail about it in the paper due to limited space but have a full description of what these entail in Appendix A. Especially our insights on how precision improvement is not necessarily needed are not trivial, since this was not possible yet in Carlini et al.’s paper and Canales-Martinez et al. never connected signature and sign extraction. 3. It was not clear at first that sign extraction was not the bottleneck anymore until we unified all methodologies and metrics. In Figure 1 (a), we show with Sign CM original and unified compared to Signature Carlini that sign extraction was more inefficient before but after unifying codebases it is taking a little less time than the signature extraction. However, this does not mean sign extraction time became trivial just using Canales-Martinez et al.’s methods. Only with our improvements in sign extraction, the sign extraction actually becomes trivial compared to the signature extraction. Further, in the full pipeline our improvements in the sign extraction significantly improve the overall extraction time as shown in abstract lines 15-17. 4. We are not too sure what the reviewer means by contradictory, we followed prior work and improved over the sign extraction performance. By the time our improvements became significant we decided to evaluate the system, including evaluation of the system as a whole, not separately as Canales Martinez. This is where we realized that sign extraction time can become trivial compared to signature extraction, and future efforts should be directed towards signature extraction improvement. Note that everyone working in this field from reading all of the work assumed that sign extraction is the bottleneck and hence iterated over it.
Summary: This paper continues a line of work on cryptanalytically extracting network parameters from (input, logit) pairs. It includes a concise explanation of relevant prior work in the area, a codebase that unifies two key prior works in the area to enable standardized comparisons, and several improvements to these prior methods that enable faster extraction of weight signs and signatures. Strengths: - This is an interesting research area that deserves more attention from the broader community - This paper not only makes progress in this area, but it also enables future work with its codebase - The paper is very well-written and does a great job of summarizing the fairly complex attacks that it builds on. Honestly this paper is one of the best I've ever seen for how well it values the reader's time and presents the relevant background and contributions. - The performance improvements are substantial - The discussion includes some interesting analysis Weaknesses: Moderate issues: - The main reason I'm not giving this paper a higher score is that it seems to make solid incremental improvements to two prior techniques. It doesn't introduce fundamentally new methods as far as I can tell. However, I think it still adds considerable value to the area, passes the quality bar, and could foster useful discussion at NeurIPS. Minor issues / suggestions: - My first question coming into this paper (not having read the prior work) was, "How on earth do people extract the layers one by one when all they have are the inputs and logits?" It took some time to understand this, and I think this is possibly something that could be improved with a well-designed figure. - Line 85: "In a study where the adversary is assumed to have complete access to both the training data and hyperparameters, 93.4% was the maximum fidelity reached by the replicated model." What dataset is this on? Is 93.4% lower than expected? By how much? Typos: - Missing a space in the Figure 3 caption: "(a)The" Technical Quality: 3 Clarity: 3 Questions for Authors: Line 48 of the paper states, "Further improving on sign extraction we speed the process up by up to 14.8 times, so that sign extraction only takes up as little as 0.002% of the whole extraction time for larger models." A 14.8x speedup is nice, but doesn't this mean that sign extraction only took ~0.02% of the whole extraction time before? If so, then it was already negligible. Maybe this is part of your point about Canales-Martinez incorrectly estimated the difficulty of full extraction relative to sign extraction alone. Which of the analyses in Section 3.1 are novel, and which have been touched on before in prior work? Is there any chance that attacks like this could work for networks using smoother nonlinearities like GELU? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Line 85: "In a study where the adversary is assumed to have complete access to both the training data and hyperparameters, 93.4% was the maximum fidelity reached by the replicated model." What dataset is this on? Is 93.4% lower than expected? By how much? Jagielski et al. use the Fashion-MNIST dataset. They use this as an oracle labeled dataset and produce model f1 and change sources of determinism and run the learning-based attack to produce f2. They obtain 93.7% fidelity when training and initialization randomness are fixed and only GPU non-determinism remains. When no randomness is fixed, they obtain 93.4% fidelity. 93.4% fidelity is lower than expected given the oracle access to the labeled dataset. We will rephrase the sentence to make it more readable. It is also worth noting that there is now more recent work by Martinelli et al. [1] that suggests that high fidelity can actually be reached with learning based methods but it assumes a slightly different setting, only works for smaller networks, and appears to be more expensive. [1] F. Martinelli, B. Imşek, W. Gerstner, and J. Brea, “Expand-and-Cluster: Parameter Recovery of Neural Networks,” Arxiv, Apr. 2023. Accessed: Jul. 31, 2024. [Online]. > Line 48. Line 48 of the paper states, "Further improving on sign extraction we speed the process up by up to 14.8 times, so that sign extraction only takes up as little as 0.002% of the whole extraction time for larger models." A 14.8x speedup is nice, but doesn't this mean that sign extraction only took ~0.02% of the whole extraction time before? If so, then it was already negligible. Maybe this is part of your point about Canales-Martinez incorrectly estimated the difficulty of full extraction relative to sign extraction alone. The 14.8 times speed up value comes from Table 1 extraction of a random model 30-15x2-1. For this model the total speedup was 1.12 compared to Canales-Martines et al.The 0.002% of whole extraction time comes from an MNIST model in Table 2 MNIST784-64x2-1 (s1). This is more to show that since signature extraction time has no bound, if signature extraction due to randomness takes a lot longer than sign extraction, signature extraction time can become very insignificantly small. However, for other models where signature extraction time is relatively small depending on reasons mentioned in the Discussion, the sign extraction time will weigh in more. Indeed, it is true that Canales-Martinez incorrectly estimated the difficulty of full extraction relative to only the sign extraction, so to start off with we did not know how sign extraction time compares to signature extraction time hence started looking into further optimizing the process. > Section 3.1: Which of the analyses in Section 3.1 are novel, and which have been touched on before in prior work? The part beginning from line 201 “Confidence in Practice” is novel. The parts before have been touched upon in prior work by Carlini et al. and Canales-Martinez et al. > Is there any chance that attacks like this could work for networks using smoother nonlinearities like GELU? According to Rolnick and Körding “Other piecewise linear activation functions admit similar algorithms.” In their Section 5 they further discuss other architectures and how well this type of cryptanalytical attack generalizes for them. We did not investigate this further since it is not the commonly assumed setting for this literature, but it is an interesting question for future work. > Suggestion: My first question coming into this paper (not having read the prior work) was, "How on earth do people extract the layers one by one when all they have are the inputs and logits?" It took some time to understand this, and I think this is possibly something that could be improved with a well-designed figure. We will make sure to work on improving readability and will look into adding a figure for this as per your suggestion.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms
Accept (poster)
Summary: This paper provides upper and lower bound on the complexity of learning Markov games. The authors focus on the notion of "policy regret", which already exist in bandit and repeated games and adapt this notion to the general case of episodic Markov games. The first results of the authors are negative results: The authors shows that, in the general case, there are no algorithm achieveing a small regret (Theorem 1, 2 and 3 of the paper, for different variants of the problem). Given these negative results, in a second part, the authors introduce a notion of "consistent adversaries" (Definition 3) and dervive an algorithm that has a small regret against this weak adversary. Strengths: Markov games are notoriously hard to learn. This paper presents some results to show how hard they are to learn in some specific settings. The paper contains both negative and positive results on classes of problems that can be learned. Weaknesses: The paper is too dense, at the point that it hurts readability: - the proofs of all results are only in the appendix and no intuition is give in the paper - the algorithms are very hard to read because their presentation is too compact - some parts of the algorithms are only given in appendix As a result, the paper is essentially impossible to understand without looking at the appendix. The notion of "consistent adversary" deserves more discussion: given that a policy is supposed to exploit future, I do not understand why it is consistent for the response of the adversary should essentially only depend on the action in a given state (which is sort of what this assumption implies). The notion of policy regret is very strong, which illustrated by the fact that it is essentially impossible to have an algorithm with a low regret (Theorem 1, 2, 3) unless imposing very strong assumption on the learner (Theorem 4). Hence, this notion of regret feels a bit arbitrary to me. The impossibility results are very similar to the one of [37]. Technical Quality: 3 Clarity: 2 Questions for Authors: Please justify the regret notion and the notion of "constistent adversary". Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. --- > "The paper is too dense, ... without looking at the appendix." While two subroutines for Algorithm 3 were put in the appendix, these are fairly standard subroutines and a mere distraction. In fact, we moved it for the sake of better readability – regardless, if one is unfamiliar with them, they can easily find it in the appendix (we give simple embedded hyperlinks for ease of navigation). We do discuss at length high-level intuitions behind every key result, idea and algorithm in the main text. The intuition and the details for the technical proofs are also presented in the appendix, which are not required for understanding our main results. It is worth noting that other reviewers commended us on the clarity of writing and presentation. --- > The notion of "consistent adversary" To be clear, consistency is a sufficient condition for learnability. In fact, we can weaken it to be approximately consistent (see discussion with Reviewer 1MKG). As we say on Lines 255-261, consistency implies that, given any $(h,s,a)$, the response $f([\pi]^m)_h(\cdot|s)$ stays the same across all policies $\pi$ such that $\pi_h(s) = a$. Thus, given any $(h,s)$, there are $A$ possible ways the adversary can play in step h at state s. Here is what we already say in the paper, but can emphasize it all in one place, per your suggestion. - (a) Adversary being consistent does not imply that it is sub-optimal. On the contrary, we assume that the adversary is all knowledgeable and has infinite computational power to have figured out the optimal response to any strategy of the learner. So being consistent does not take away any power from the adversary. - (b) A consistent adversary does not always play the same strategy when the learner adopts the same strategy at (s,h). The adversary’s strategy does not depend only on the learner's strategy at (s,h), but also what the learner played in previous (m-1) episodes. - (c) We agree that consistency may be too strong to seem necessary for ensuring learnability. We believe that something weaker, e.g., approximate consistency (i.e., small deviations) should still be fine. That could be an avenue for future work. What we have currently is still a very interesting result as it gives an easy to understand notion which is sufficient for learnability. --- > "The notion of policy regret is very strong ... Hence, this notion of regret feels a bit arbitrary to me." Regarding “Policy Regret it Arbitrary”, We could not disagree more with the reviewer. We argue that if the environment is adaptive/reactive, i.e., the actions of the opponent depend on learner’s past actions then no other notion of regret even makes sense. This is what has already been argued and settled in numerous papers before us. This is what motivated policy regret in the first place (see the paper by Arora et al. ICML 2012.) So, not the notion of policy regret is not arbitrary. If you want to do counterfactual learning, that is the only notion of regret that even makes sense. Of course, if you are in a setting where the environment is oblivious to your actions (e.g., forecasting weather), then you do not need policy regret – but most real world applications are not like that. For instance, consider the following scenario: a big investing firm unveils its new trading software, which implements the strategy of switching all of the firm’s investments back and forth between Google and Microsoft on a daily basis. This strategy sets off huge fluctuations in the stock market (the reactive environment), and the market crashes. A post hoc regret analysis reveals that any competing strategy (for instance, buying and holding Apple shares) would have lost all the money, too (as the market crashed), thereby making the regret zero and perhaps concluding that there was nothing wrong with the strategy. It is completely ignored that the market crashed in reaction to the algorithm’s actions and would have reacted differently to a different sequence of actions. Clearly, the notion of regret is misleading in a reactive scenario. In other words, minimizing regret against adaptive adversaries may not lead to learning. Policy regret, on the other hand, is a counterfactual notion of regret, which evaluates a competing strategy on the sequence of events that would have been generated if the competing strategy were followed. This is why policy regret is preferred (and only thing that makes sense) when you are up against a strategic opponent. Regarding “Hardness of minimizing policy regret”. Yes, minimizing policy regret is harder than standard regret since the adversary is given more power. This is why we ask what restrictions can we impose on the adversary so that it is stronger than oblivious adversaries but not completely arbitrary against whom it is impossible to learn. That is also the point of the hardness results. So, Theorems 1-3 are actually very useful results from a theory perspective. Negative results are as valuable as positive results (if not more), as they tell us what is not even possible, so we can stop wasting our time thinking about the problem in that way. Hopefully, the reviewer sees our contributions differently in the light of our comments above and will reconsider their evaluation of our work. Thanks! --- > The impossibility results are also very similar to the one of [37]. Except in proof of Thm2 that benefits from a reduction of any latent MDP to a Markov game constructed by [37], our negative results are of completely different nature, and our paper and [37] also solve two completely different problems. Negative result in [37] is about external regret. Our negative results say that policy regret is linear in T or exponentially large when the adversary has unbounded memory or is non-stationary. Can you be more specific about how exactly similar our impossibility results are to the one of [37]? --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I was probably too picky on oyvfirdt evaluation given the other papers that I had to review. I updated my score. --- Reply to Comment 1.1.1: Comment: We thank you and appreciate your response to our rebuttal. We find the rating a bit harsh given that we have addressed your concerns — we explained why policy regret is important and why negative results are important in theoretical works. We wonder if you can provide a clear reasoning/justification for your current rating. The other three reviewers all seem positive about the paper. If you have further questions, we are happy to address them.
Summary: This paper studies learning in a dynamically evolving environment modeled as a Markov game (MG) and the adversarial is allowed to be adaptive. Authors focus on the policy regret rather than external regret commonly used by many existing work, and further investigate the fundamental limits on learning MG with different memory size. Finally, authors restrict adversary’s behavior and propose efficient algorithms achieving sublinear policy regret bounds. Strengths: - Studying how the power (w.r.t. memory and other behaviors characterized by stationarity and consistency) of adversary impacts the learnability is very interesting. - Show statistically hardness for unbounded memory adversary and further show that even if the adversary is stationary, the hardness cannot be alleviated. - Consistency of adversary is introduced and algorithms with sublinear policy regret are proposed for special case m=1 and general case m. - The connections between existing results and related ones are clearly presented. Weaknesses: No major weaknesses. Other minor points have been well discussed in the paper. Technical Quality: 4 Clarity: 3 Questions for Authors: Apart from imposing additional constraints on the adversary, I am curious whether the proposed algorithm can smoothly degrade with the increase of inconsistency. Specifically, in definition 3, given two sequence of policies $\pi$ and $v$ with the same policy mapping, if we do not assume $f_t(\pi)$ and $f_t(v)$ are exactly the same, but assume the difference is denoted as $D_t$ and let $D=\sum_{t=1}^T D_t$ be the inconsistency. I’d appreciate if authors can discuss this. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. --- > Question about "approximate consistency" $D$ We thank the reviewer for an interesting suggestion. Our approach estimates the version space of the response function of the adversary using the adversary’s actions. The MLE analysis (Lemma B.1 and B.2) readily enables us to incorporate an inconsistency measure of D into the radius of the version space. Thus, our final policy regret bound should scale linearly with D as a result. We will discuss this extension in more detail in our revision. But we are happy to provide a more detailed analysis here, if the reviewer is interested. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 1MKG Comment: Thanks for your response. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thanks for acknowledging our response!
Summary: This paper addresses the problem of designing optimal strategies in Markov Games against adaptive adversaries. Specifically, the paper proposes the notion of $\textit{policy regret}$ which admits the adversary's ability to adaptively change their policies according to the policies applied by the learner. This work demonstrates statistically hardness results for the learner to achieve no-policy-regret when the adversary has unbounded memory or is non-stationary w.r.t different episodes. Further more, under the $\textit{consistent adversary}$ assumption, this work designs $O(\sqrt{T})$-regret algorithms respectively to the scenario when the memory bound for the adversary is 1 or some constant $m$. The algorithms proposed are some optimistic variants of upper confidence bound value iteration. Strengths: 1. This paper move a step further from the previous results in [1] by introducing novel analysis w.r.t. policy regret. 2. Hardness results are shown when assumptions are not met, indicating the necessity of such assumptions. 3. Two proposed algorithms achieve $O(\sqrt{T})$ regret. 4. In general the paper is well-organized and theorems are well-supported by the proof. [1] Qinghua Liu, Yuanhao Wang, and Chi Jin. Learning markov games with adversarial opponents: Efficient algorithms and fundamental limits. In International Conference on Machine Learning, pages 14036–14053. PMLR, 2022 Weaknesses: My concerns and questions mainly focuses on the consistent adversary assumption. 1. While I understand the necessity of proposing some assumptions about the adversary in order for the learner to efficiently learn the adversarial strategy mapping function $f(\pi)$, this assumption doesn't not seem to be a favorable one. Firstly, looking from a game theoretical perspective, it makes sense even for a self-interested adversary to utilize previous information and play differently even when the learner adopts the same strategy at $(s, h)$. For example in a two-player zero-sum game, if the adversary observes that the learner behaves poorly at some certain state $s'$, then it is reasonable for the adversary to play certain strategy which leads to $(s', h+1)$ from $(s, h)$. 2. Furthermore, when the policy space for the learner includes all the deterministic strategies and when the adversary is consistent. The cardinality for the strategy space for the adversary is also finite, this is in sharp contrast with the results in [1] where only one of the two needs to be finite. 3. When the adversary is consistent, is it the case that the transition probability solely depends on the state and the learner? In other words, is this setting equivalent to the setting of single-controller Markov games where the controller is always the learner? 4. When the memory bound for the adversary is greater than 1, this paper proposed an algorithm which is a variant of [2] in order to achieve optimal global switching cost. However, under the consistent adversary assumption, would it be better to consider minimizing local switching cost instead? [1] Qinghua Liu, Yuanhao Wang, and Chi Jin. Learning markov games with adversarial opponents: Efficient algorithms and fundamental limits. In International Conference on Machine Learning, pages 14036–14053. PMLR, 2022 [2] Dan Qiao, Ming Yin, Ming Min, and Yu-Xiang Wang. Sample-efficient reinforcement learning with loglog (t) switching cost. In International Conference on Machine Learning, pages 18031–18061. PMLR, 2022 Technical Quality: 4 Clarity: 4 Questions for Authors: See weakness. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. --- > "Weakness 1." There are three points we would like to make. - (a) The assumption that the adversary’s behavior is consistent does not imply that it is sub-optimal. On the contrary, we assume that the adversary is all knowledgeable and has infinite computational power to have figured out the optimal response to any strategy of the learner. So being consistent does not take away any power from the adversary. - (b) A consistent adversary does not always play the same strategy when the learner adopts the same strategy at (s,h). The adversary has memory; that is, the adversary’s strategy does not only depend on the learner’s strategy at (s,h), but also what the learner has adopted in the previous (m-1) episodes. - (c) We agree with the reviewer that consistent behavior may be too strong to seem necessary for ensuring learnability. In fact, we firmly believe that something weaker, for e.g., approximate consistency (i.e., small deviations) should still be fine. That seems far from trivial and could be an avenue for future work. What we have currently is still a very interesting result as it gives an easy to understand notion which is sufficient for learnability. --- > "Weakness 2." Actually, the policy space for the learner can be infinite with appropriate assumptions on its bounded complexity and the response function, but dealing with that largely deviates from the main points we want to convey in this paper. We assumed the learner’s policy space of all deterministic strategies for the sake of simplicity of presentation. More importantly, our paper and [1] are solving two completely different problems (policy regret minimization vs external regret minimization), making the two results incomparable. To be clear, we discussed [1] in our paper only to contrast the differences of the two settings, not to compare the two results. --- > "Weakness 3." No. The transition probability still depends on the adversary’s actions. Even when the adversary is consistent, the adversary’s (*possibly non-memoryless*) response function f can still be arbitrary in any state. The learner does not know and cannot control the response function. --- > "Weakness 4." Thank you for the interesting question. However, we argue a low local switching cost algorithm would not be suitable for this case. The consistent behavior assumption has more to do with enabling an efficient estimation of the adversary’s response function, and less to do with the local switching cost behavior. That said, sample efficient algorithms could also be possible for a different structural assumption other than the consistent behavior assumption, as long as it is not necessary to visit all possible policies that the learner can take in order to learn about the adversary’s response behavior (as an extreme example, when $f$ maps all of the learner’s strategies to the same policy, which reduces the problem to single-agent MDP). We can, more formally, argue that low local switching cost cannot obtain low policy regret in general. Recall that the goal is to achieve low (policy) regret against the benchmark $ \max_{\pi} V_1^{\pi, f([\pi]^m)}$. A low local switching cost algorithm would only guarantee that a sequence of consecutive policies $\pi^1, …., \pi^{m-1}$ can be very similar “locally” (e.g., they agree in *many but all* states and steps) but they are not guaranteed to be identical “globally”. So, even in the episode $m$ that the learner happens to play the optimal policy $\pi^* = arg \max_{\pi} V_1^{\pi, f([\pi]^m)}$, the learner only gets to see the data for playing $\pi^*$ and $f(\pi^1, …, \pi^{m-1}, \pi^*)$. This data can be completely non informative for the purpose of estimating $V_1^{\pi^*, f([\pi^*]^m)}$ in the worst case, since $f(\pi^1, …, \pi^{m-1}, \pi^*)$ can be arbitrarily different from $f([\pi^*]^m)$, even when each $\pi^1, …, \pi^{m-1}$ might be similar to $\pi^*$ locally. For example, let’s say $m-1 \leq H$ and $\pi^i$ agrees with $\pi^*$ in all steps $h \neq i$. In this case, the consistent adversary places no restriction whatsoever in his response $f(\pi^1, …, \pi^{m-1}, \pi^*)$, i.e., $f(\pi^1, …, \pi^{m-1}, \pi^*)$ can literally be anything and disagree with $f([\pi^*]^m)$ in every single state and step. --- Rebuttal Comment 1.1: Comment: Thanks for making clarifications, I've adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: Thanks for acknowledging our response!
Summary: The paper studies the learning problem in a Markov game against the adaptive adversary. The adversary's policy can depend on all the learner's past strategies. The paper first shows that if the adversary can be fully adaptive, then sublinear policy regret cannot be obtained for the learner. The paper then characterizes the fundamental barriers for the learner to achieve the sublinear regret. The paper shows that if the adversary is $m$-memory bounded, i.e., the adversary's strategy depends at most on the $m$ past strategies of the learner, then the sublinear policy regret is achievable. The paper provides both the lower bound and the efficient algorithm that achieves a regret upper bound at the tight $O(\sqrt{T})$ order. Strengths: 1. Strong lower bounds are established to illustrate how hard it is to minimize policy regret against the adaptive adversary in the Markov game setting, which is fundamentally different from the bandit learning setting. 2. Efficient algorithms are presented in the paper with strong theoretical guarantees. Weaknesses: 1. For the general $m$-memory bounded adversary, the algorithm developed in the paper requires prior knowledge of $m$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. If the adversary is $m$-memory bounded, could we regard the system state as the combination of $(s_t, \pi_t, \dots, \pi_{t-m+1})$ and then reduce everything to the $0$-memory bounded case? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As discussed in the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. --- > If the adversary is $m$-memory bounded, could we regard the system state as the combination of $(s_t, \pi_t, \dots, \pi_{t-m+1})$ and then reduce everything to the $0$-memory bounded case? Thank you for your interesting suggestion. If we augment the past policies as states, the state space will be as large as $S |\Pi|^m$. Since our bound for 1-memory bounded adversary scales polynomially with the number of states, a simple reduction from general $m$ to $m=1$ will not be sample-efficient. --- > the algorithm developed in the paper requires prior knowledge of $m$ While our algorithms do require some knowledge of $m$, we would like to clarify that it does not need an exact knowledge of $m$. Any upper bound on $m$ is sufficient for our algorithms.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment
Accept (poster)
Summary: The paper tackles the issue of domain generalization for vision-language models like CLIP. The authors propose a new and simple method which is divided into multiple stages, to mitigate the performance gap for this problem setup. Their method achieves State of the art performance on the benchmarks evaluated. Strengths: 1. The paper is well-written, simple, and easy to follow, making it an enjoyable read. 2. The method's motivation is well-founded, aiming to exclude domain-sensitive and class-irrelevant visual features, which makes the approach straightforward to understand. 3. The authors conduct thorough ablations for each component of their proposed method and training pipeline, clarifying the importance of each element to the overall approach. Weaknesses: 1. In many Domain Generalization tasks, the community has evaluated ImageNet and its variants. One of the popular baselines, CoOP (compared by the authors), does this as well. I am curious why the authors did not present these comparisons, especially since they use the 80 prompt templates initially designed for ImageNet. 2. It appears that the improvements diminish once the model is fine-tuned, with the most significant gains observed when the backbones are kept frozen. 3. It would have been helpful to see the model's performance on other ViT backbones, as the authors primarily focus on the ViT-B/16 model. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. The objective of the Image-Text alignment appears similar to contrastive loss functions. According to Table 3, this component is the most crucial part of their method. Did the authors consider ablating this component with other contrastive loss objectives or alignment methods? 2. Line 292 states that using "multi-scale information alone can enhance performance compared to CLIP zero-shot," and Table 3 also mentions this observation. However, I am unclear on what the authors mean by "multi-scale information." Could they provide further clarification? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: After reading the paper, I realize that although the authors achieve state-of-the-art performance, it requires significant compute resources. Could the authors provide a comparison of their method against previous state-of-the-art methods, such as CoOP, in terms of computational efficiency? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed comments and offer our responses below. ### Weakness >**[W1]:** Present the comparisons on ImageNet Thank you for your insightful suggestion. We conducted experiments on ImageNet (w/ 1000 classes) using the same setting as CoOp. Since the setting is the **single source domain**. We modify our loss and remove the domain variance term in the $\mathcal{L}_{\rm{ref}}$. Similar to CoOp, we train our model on **few-shot** ImageNet (16 samples per class) and test the model on different variants of ImageNet datasets. **Table 5 in the rebuttal PDF** indicates that our CLIPCEIL outperforms both the "CLIP Zero-Shot" and "CoOp". >**[W2]:** It appears that the improvements diminish once the model is fine-tuned We appreciate the reviewer's insightful question and are eager to engage in further discussion. One possible explanation for the observed phenomenon is that when the backbone is frozen, we can only adjust the adapters and loss functions, meaning that all the improvements come from these adjustments. In contrast, fine-tuning the entire backbone allows us to leverage a large number of model parameters to enhance performance, which might limit the potential benefits of the adapters and loss functions. >**[W3]:** It would have been helpful to see the model's performance on other ViT backbones. Thank you for your insightful suggestion. We conducted experiments **using different ViT backbones, i.e., ViT-B/32 and ViT-L/14**, on the OfficeHome dataset. The performance results are presented in **Table 6 of the rebuttal PDF**. It shows that CLIPCEIL consistently outperforms the zero-shot prediction on these two ViT backbones, demonstrating its generalization ability on different architectures. ### Question >**[Q1]:** Did the authors consider ablating this component with other contrastive loss objectives or alignment methods? Thank you for your insightful suggestion. We try another contrastive loss \textit{i.e.,} SLIP, which combines CLIP loss and SimCLR. We conducted experiments by replacing $\mathcal{L}_{\rm{CE}}$ with $\mathcal{L}_{\rm{SLIP}}$. As shown in **Table 7 of the rebuttal PDF**, combining our proposed loss with $\mathcal{L}_{\rm{SLIP}}$ achieves the best performance on the OfficeHome dataset, indicating the effectiveness of our proposed loss. **[SLIP]** SLIP: Self-supervision meets Language-Image Pre-training, ECCV 2022. >**[Q2]:** What do the authors mean by "multi-scale information? Thank you for your clarifying question. "Multi-scale information" refers to the latent representations obtained from different transformer blocks at various levels within the CLIP visual encoder. While the term "multi-scale information" is typically associated with CNNs and might not be entirely accurate for ViTs, we use it to convey the concept that features are derived from multiple levels. ### Limitations >**[L1]:** Could the authors provide a comparison of their method against previous state-of-the-art methods, such as CoOP, in terms of computational efficiency? Thank you for your insightful question. We measured the GPU memory usage and average training time per step for various methods on the OfficeHome dataset using a single A100 GPU. The table below indicates that CoOp is the most memory-efficient and fastest model, while CLIPCEIL offers comparable computational efficiency to CoCoOp and ERM. | **Model** | **GPU Memory** | **Time/step** | | -------- | -------- | -------- | | ERM | $20.8$G | $1.56$s | | CoOp | $2.6$G | $0.53$s | | CoCoOp | $25.4$G | $1.73$s | | CLIPCEIL | $26.5$G | $1.87$s | --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and answering my questions. However, I would suggest rephrasing the "multi-scale information" writing, since by your admission it may not be best suited for ViTs. Finally, I am more certain of the work and have increased my confidence score to 4. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the time you spent reviewing our work, your thoughtful comments, and your recognition of our efforts. We are pleased to have addressed your concerns and increased your confidence in accepting our paper. To avoid confusion, we will replace the term "multi-scale." Do you think "multi-level" would be a better alternative?
Summary: The paper addresses the challenge of domain generalization for CLIP. To tackle this, the authors introduce CLIPCEIL, a method that enhances CLIP's performance on unseen test datasets with domain shifts, which employs Channel rEfinement and Image-text aLignment techniques to refine visual feature channels, ensuring they are domain-invariant and class-relevant. It aligns text embeddings with corresponding image embeddings and removes domain-specific features. Additionally, CLIPCEIL integrates multi-scale CLIP features using a self-attention fusion module implemented through a Transformer layer. Strengths: The paper is presented very clearly, is well-structured, and is generally easy to follow and understand. Weaknesses: - The adapter $g$ is a model-specific design, intended only for CLIP with ViT as the backbone. This prevents the proposed method from being used with CLIP which has ResNet as the backbone. - The lightweight adapter $g$ is implemented using a Transformer layer and an MLP projector. This lacks clarity on how the Transformer layer is necessary. - The problem setup (line 131) demonstrates that the goal is to train a model $f$ in the source domain and expect it to perform well in the target domain. This needs to be more specific, such as whether it involves training from scratch or fine-tuning a pre-trained model. - CLIPCEIL currently only applies to multi-source domain generalization. Technical Quality: 2 Clarity: 2 Questions for Authors: - How many data points were used in the experiment in Table 1? - CLIP pre-training essentially has no concept of domain. The proposed method, when fine-tuned on a specific dataset, such as Office Home, will essentially introduce dataset bias. In other words, the domain-invariant feature obtained by the Office Home fine-tuned model is only valid for the Office Home data and may not be valid for another dataset. Therefore, I am curious about how the model obtained by this method on Office Home performs on other datasets, such as ImageNet. - Why do Table 4 and Table 5 demonstrate different performances of models with only Multi-scale employed? - Can you experiment with $g$ as a relatively simple architecture, e.g., just a linear layer, and see how it differs from the Transformer layer? Also, how about $L\_{ref}$​ and $L\_{dir}$​ incorporating $g$ as such a relatively simple architecture—do they still boost performance? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: See previous responses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed comments and offer our responses below. ### Weakness >**[w1]:** The adapter is designed for ViT, and is hard to use for ResNet backbone. Thank you for your insightful comments. Our proposed method can also be **extended to the ResNet backbone**. The primary difference lies in the way we extract latent representations, while other components, such as the architecture of the adapter $g$, remain unchanged. For ResNet backbone, we use the latent feature map as the latent feature and apply Attention Pooling to convert the 2D feature map into a 1D vector. The vectors of different layers are then fed into the Transformer layer of adapter $g$. We conducted experiments using the ResNet-50 backbone, and our method **outperformed other ResNet based models**, as shown in **Table 2 in the rebuttal PDF**. >**[w2]:** How the Transformer layer is necessary. Thank you for your insightful comments. We conducted an ablation study to investigate the Transformer layer. We replaced the Transformer layer with Average Pooling or an MLP. As shown in the **orange block in Table 4 of the rebuttal PDF** , the Transformer layer outperformed the other fusion strategies, indicating its necessity. >**[w3]:** Training from scratch or fine-tuning a pre-trained model. > Thank you for your clarifying question. Our proposed method builds upon a pre-trained CLIP model, which we then fine-tune on the source domains. >**[w4]:** CLIPCEIL currently only applies to multi-source domain generalization. Thank you for your insightful comments. We also put this in the limitation section of the main text. However, CLIPCEIL model can be **adapted to single source domain generalization** by simply removing the domain variance term in the loss function. We conducted experiments on ImageNet (w/ 1000 classes) using the same few-shot single source domain generalization setting as CoOp. **Table 5 in the rebuttal PDF** indicates that CLIPCEIL outperforms "CoOp". ### Question >**[Q1]:** How many data points were used in the experiment in Table 1? Thank you for your clarifying question. Table 1 in the main text presents the results of a simple proof-of-concept experiment based on our observations. This experiment is **training-free**, with no data points used for training, and is tested on the OfficeHome test set. We first calculate the domain variance in text embedding channels across 80 prompt templates and the class variance across 65 classes. We then select the channels with larger class variance and smaller domain variance. Assuming effective alignment of visual-language features in CLIP, we use the same selected channels for visual embeddings. During inference, we use the inner product of the visual and text feature vectors, similar to the approach used in CLIP zero-shot. >**[Q2]:** How the model trained on OfficeHome performs on other datasets? Thank you for your insightful perspective. The domain generalization task aims to enhance the model generalizability, and as such, the domains in the benchmarks designed for domain generalization are already very diverse. Moreover, domain generalization assumes that the source and target domains **share the same label space** (i.e., the categories are consistent across different domains), which is not the case with different benchmark datasets. Therefore, a model trained on one benchmark dataset (e.g., OfficeHome) typically is very difficult to test on another one (say PACS). Nevertheless, we are intrigued by the reviewer's suggestion about evaluating the performance of a model trained on one benchmark dataset on another dataset. To investigate this, we identified a common category, "person", within the PACS and VLCS datasets. We then conducted an experiment by testing a model trained on the PACS dataset in the "Labelme" domain of the VLCS dataset. The results below shows the performance drop but **we would be happy to discuss this interesting result with the reviewer**. | **PACS (leave Sketch)**| **VLCS (leave Labelme)** | | :-----: | :----: | | $80.9\%$ |$92.7\%$| >**[Q3]:** Why do Table 4 and Table 5 demonstrate different performances? Thank you for your clarifying question. Since the content in Table 5 is irrelevant, we assume that the reviewer referred to Table 3 and 4. Table 3 in the main text presents the ablation studies for different loss terms, while Table 4 focuses on the ablation studies for various adapter architectures. This distinction may be somewhat confusing, and we will address this in the revised paper. The term "Multi-scale" in Table 3 indicates training CLIPCEIL using **only $\mathcal{L}_{\rm{CE}}$**. In contrast, "Multi-scale" without "bypass" in Table 4 signifies training CLIPCEIL **with all loss terms except for the bypass connection**. Thus, they have different performances. >**[Q4]:** Can you experiment with as a relatively simple architecture? Thank you for your insightful suggestion. We conduct the experiment with a simple adapter $g$, **without considering the multi-scale information**. $\textit{i.e.,}$ one-layer MLP projector, and evaluate the impact of our proposed loss. The **pink block in Table 4 of the rebuttal PDF** indicates that incorporating $\mathcal{L}_{\rm{ref}}$ and $\mathcal{L}_{\rm{dir}}$ with a simple adapter $g$ still improves the performance. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I find the experiment in Table 5 to be somewhat unfair because Coop was not designed for domain generalization. Subsequent works, like CoCoop, were developed specifically to address this issue. I personally agree the Reviewer cn9a's concerns about the domain generalization setting in CLIP. According to the original CLIP paper, CLIP doesn't inherently operate with a concept of domains, and due to the closed nature of the data, it's unclear whether a given dataset is truly out-of-domain. Additionally, the original CLIP paper emphasizes that its superior performance compared to ImageNet models might be due to CLIP’s ability to avoid easily learning dataset-specific biases. For these reasons, I’m not particularly in favor of imposing the domain concept on CLIP. Additionally, these experiments do not conclusively demonstrate that their generalization performance is superior to CLIP's. While improvements were observed in the datasets chosen by the authors, CLIP's generalization performance was validated using over 20 datasets. While I acknowledge the author’s contributions and agree that the author's fine-tuning method is effective with multi-domain data; however, I'm not convinced that this approach necessarily enhances CLIP's domain generalization ability. I would like to maintain my original score. --- Rebuttal 2: Comment: We greatly appreciate the time you've taken to review our rebuttal and for recognizing our contributions and the effectiveness of our proposed method on multi-domain datasets. We are pleased to address the reviewer's follow-up questions and discuss these intriguing points. >**Q1:** The experiment in Table 5 to be somewhat unfair because Coop was not designed for domain generalization. Subsequent works, like CoCoop, were developed specifically to address this issue. Thank you for your insightful comment. We have also compared our CLIPCEIL model with CoCoOp on the ImageNet under exactly the same **few-shot single domain** generalization setting, using the performance results for CoOp and CoCoOp as reported in the original CoCoOp paper. As shown in the table below, our proposed method **outperforms both CoOp and CoCoOp**, achieving the highest average target domain accuracy among the three models. It's important to clarify that these experiments on ImageNet are **not** part of the standard domain generalization benchmark, which typically involves a multi-source domain generalization setting. This is why we did not include them in the main text. Instead, the ImageNet experiments referenced in Table 5 in the rebuttal PDF focus on a few-shot single-source domain setting, which is **not the design focus of our proposed model either**. | Model | ImageNet | V2 | S | A | R | Avg | | ----- | -------- | ---- | --- | --- | --- | --- | | CLIP Zero-Shot | 66.7 | 60.8 | 46.1 | 47.8 | 74.0 | 57.2| | CoOp | 71.5 | 64.2 | 48.0 | 49.7 | 75.2 | 59.3| | CoCoOp | 71.0 | 64.0 | 48.8 | 50.6 | 76.2 | 59.9| | CLIPCEIL | **71.6** | **64.6** | **49.2** | 50.5 | **76.8** | **60.3**| >**Q2:** It's unclear whether a given dataset is truly out-of-domain. Thank you for this interesting point. As we mentioned in our response to Reviewer cn9a, the datasets used to train the CLIP model are indeed quite diverse, covering a wide range of image types such as digits images, human faces, traffic signs, remote sensing, self-driving, pathology, human actions, natural photographs/pictures, etc., but these images are typically **photographs of objects and scenes taken from the real world**. Some domains that are commonly featured in standard domain generalization benchmarks are not adequately represented in CLIP's pre-trained datasets. For example, domains like **quickdraw**, **infograph**, **clipart**, **sketch** in DomainNet, **clipart**, **art** in OfficeHome, **Cartoon**, **Sketch** in PACS are examples of underrepresented categories. Also, the unique image styles from the Terra Incognita dataset, which features camera trap images for monitoring animal populations, appear to be underrepresented. As a result, we believe that the data in the domain generalization benchmark datasets are not fully represented in CLIP’s training data, which may explain why the domain generalization performance, even with the CLIP model, remains relatively low (around 50%-60%) on the DomainNet and Terra Incognita datasets. >**Q3:** Additionally, the original CLIP paper emphasizes that its superior performance compared to ImageNet models might be due to CLIP’s ability to avoid easily learning dataset-specific biases. For these reasons, I’m not particularly in favor of imposing the domain concept on CLIP. Thank you for this interesting point and your insightful comments. We agree that CLIP does have better generalizability in many widely used datasets than ImageNet trained models, largely due to its extensive and diverse training datasets. However, as mentioned earlier, after carefully checking the datasets used to train the CLIP model, we found these images are typically **photographs/video frames of objects and scenes taken from the real world**. although CLIP was evaluated on 27 datasets as shown in Table 10 of the original paper, a close examination of these datasets reveals that they also consist of **photographs/video frames of objects and scenes taken from the real world**. From this perspective, the CLIP model also exhibits bias to images **taken from the real world**, and has not yet demonstrated sufficient generalizability to other domains featured in domain generalization benchmarking datasets, such as quickdraw, clipart, cartoons, and sketches, as indicated by the "CLIP zero-shot" results in Table 2 of the main text. Our model aims to narrow this gap. --- Rebuttal 3: Comment: >**Q4:** These experiments do not conclusively demonstrate that their generalization performance is superior to CLIP's. Thank you for your insightful comments. The CLIP model is indeed powerful and has found widespread application across various fields. Its alignment of visual and textual information enables impressive zero-shot capabilities for unseen categories. However, in this paper, our goal is to enhance the generalizability of vision-language models (e.g.CLIP) within the context of standard domain generalization settings, specifically on established domain generalization benchmark tests. --- Rebuttal Comment 3.1: Comment: Reviewer dJc5: Any further thoughts given the authors' further responses. Thanks. -AC
Summary: The paper addresses Domain Generalization (DG) by leveraging the superior generalization abilities of CLIP. While most prior works that utilize CLIP focus solely on the adaptation of CLIP for the given downstream task, this work investigates the domain-specific properties of CLIP. Specifically, the authors demonstrate that a meticulous channel selection on CLIP’s image embeddings to exclude the domain-specific channels can improve the zero-shot performance. Motivated by this observation, they propose CLIPCEIL - a simple approach that aims to enhance CLIP’s generalization properties by focusing on domain-invariant parameters through a transformer-based adapter. CLIPCEIL learns refined features through attention on multi-scale features from CLIP’s image encoder. These refined features are learned through channel refinement and image-text alignment on the downstream dataset. Experiments on datasets from the DomainBed benchmark demonstrate the effectiveness of the approach. Strengths: - **Presentation:** The paper has been presented well overall, making it easy to understand. The authors first motivate the core idea of the paper through a simple zero-shot experiment on the channels of CLIP’s image features, followed by a clear outline of the proposed method. The authors also present various visualizations demonstrating how the proposed approach improves the CLIP baseline. - **Motivation:** Rather than leveraging CLIP as is (often done by prior works), the authors investigate and improve its generalization properties through a simple and efficient channel refinement strategy. Weaknesses: ### (a) Proposed Method - **Novelty:** The idea and motivation behind the proposed approach are the same as DomainDrop [14]. The histogram analysis of the channels in a pre-trained model in the current paper is the same as in [14]. Thus, [14] is an important paper that needs to be discussed in more detail. Additionally, some of the ideas in CLIPCEIL are similar to the following works, [R8-R11]. The authors should discuss these works in the “Related Works” section and how CLIPCEIL differs from the ideas in these works. - **Channel refinement strategies:** Fig. 6 shows that the various channel refinement strategies are quite similar in performance on most datasets except PACS and TI. This gives rise to the following question - are both inter-task and inter-domain refinement required for effective DG? With sufficiently diverse domains, could the inter-domain refinement be sufficient? Especially in the case of DN which has several diverse domains, the performance gap is negligible. How would the refinement strategies scale with more source domains or with domains having more diversity? The paper does not address this aspect. *(An example can be the difference between the PACS and DomainNet datasets. Even if we consider the same number of source domains in these datasets, DomainNet has infograph, painting, quickdraw, etc which are more diverse than the domains in PACS. How would this affect the training?)* ### (b) Experiments - There seems to be a misunderstanding of how MIRO [5] works, according to the way Table 2 has been presented. MIRO trains the entire CLIP backbone and does not freeze it. However, Table 2 indicates that the backbone is frozen for MIRO. Thus, MIRO should be moved to the blue section in Table 2. - SAGM [15] and DomainDrop [14] have been shown on ResNet-50 while all the other results are presented on ViT-B/16. The authors should present these results on CLIP ViT-B/16 for a fair comparison. - The authors need to compare with additional prior works mentioned in the missing references (below). - There is no analysis on the weights for the loss terms in Eq. 7. Does equal weightage give the best performance across all datasets or is there a certain set of weights that work best? The authors should provide this analysis for a better understanding of the method. ### (c) Missing references - [R1] Zanella, Maxim, et.al “Low-Rank Few-Shot Adaptation of Vision-Language Models”, CVPR 2024. - [R2] Cha, Junbum, et al. "Swad: Domain generalization by seeking flat minima." NeurIPS 2021. - [R3] Bose, Shirsha, et.al, “STYLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization”, WACV 2024. - [R4] Wang, Zhengbo, et.al, “A Hard-To-Beat Baseline for Training-free CLIP-based Adaptation”, ICLR 2024. - [R5] Kan, Baoshuo, et.al, “Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models”, ICCV 2023. - [R6] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." CVPR 2023. - [R7] Addepalli, Sravanti, et.al, “Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification”, CVPR 2024. - [R8] Singha, Mainak, et.al, “AD-CLIP: Adapting Domains in Prompt Space Using CLIP”, ICCV 2023. - [R9] Chang, Chia-Yuan, et.al, “DISPEL: Domain Generalization via Domain-Specific Liberating”, CVPR 2023. - [R10] Yu, Ding, et.al, “Domain Generalization by Learning and Removing Domain-specific Features”, NeurIPS 2022. - [R11] Hu, Xuefeng, et.al, “ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation”, WACV 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Following the point about channel refinement strategies from the Weaknesses section, could there be an alternative strategy where inter-domain refinement is sufficient? Based on the results from Fig. 6, it appears that sufficiently diverse source domains (as in DN) enable this strategy. Thus, could this strategy be realized by using augmentations on the source domains? 2. Can the multi-scale mechanism in the adapter module also be extended to the text encoder? Would that enable better image-text alignment since the proposed approach would then align the refined image-text features rather than aligning the refined image embeddings with CLIP’s vanilla text embeddings? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed comments and offer our responses below. ### Weakness #### [Proposed Method] >**[W1]:** The idea and motivation are the same as DomainDrop [14]. Some of the ideas in CLIPCEIL are similar to the following works, [R8-R11]. Thanks for your insightful comments. The technique used in the DomainDrop [14] paper differs from CLIPCEIL. DomainDrop **explicitly drops the domain-specific feature channels** identified by a domain discriminator. However, each feature channel may include **both domain-specific and domain-invariant information**. Thus, directly dropping entire feature channels can lead to the **loss of class-relevant information** for downstream tasks and may not be optimal. In contrast, CLIPCEIL implicitly drops the domain-sensitive information by **minimizing the inter-domain variance and maintaining class-relevant information** as much as possible by maximizing the inter-class variance. Moreover, CLIPCEIL also proposed the Image-text alignment that **specifically designs to the vision-language model**, compared to DomainDrop only utilizes visual features. The histograms in Table 1 are the **visualization tool** adapted from DomainDrop [14]. Our observations align with those in the DomainDrop paper; features extracted from pretrained models without specific adaptation for the domain generalization task exhibit large variances across domains. Moreover, we also plotted the channel histogram across classes, and noticed that certain channels are insensitive to class variations, motivating our loss function to maximize inter-class variance. We will include [R8-R11] in our revision, but want to emphasize the **difference between CLIPCEIL and these models**. [R8] is a prompt learning-based method for domain adaptation by learning domain-agnostic tokens from multi-scale visual styles, whereas CLIPCEIL is an adapter-based method. [R9] learns a mask to explicitly filter out the domain-specific feature elements, which can be problematic as one element can contain both shared and specific information, similar to DomainDrop. [R10] uses multiple domain-specific classifier heads to remove domain-specific features. [R11] removes class-agnostic visual information by projecting all the visual embeddings onto the span of text embedding, focusing on source-free domain adaptation. In contrast, our method removes the domain-specific and class-agnostic information by minimizing the domain variance and maximizing the class variance, inspired by our channel histogram observation. >**[W2] & [Q1]:** Channel refinement strategies, esp. domain refinement. Thank you for your insightful perspective. We would be happy to discuss the channel refinement strategies with the reviewer. We conducted a simple experiment to show how domain variance dynamically changes during training in **Figure 1 in the rebuttal PDF**. Diverse datasets (e.g. TI and DN) start with large variances, which are reduced to reasonable levels using our domain variance loss term. Figure 1 in the main text also demonstrates this. Of course, there are other options, such as adversarial learning, information theory based, etc, which are the main focus in the domain generalization area., we welcome the opportunity to discuss these further with the reviewer. #### [Experiments] >**[W1]:** Misunderstanding of how MIRO [5] works. > Thanks for pointing it out. We will move it to the blue section. >**[W2]:** Fair comparison with SAGM and DomainDrop. Thanks for the insightful suggestion. Our original purpose in listing SAGM and DomainDrop was to demonstrate that the CLIP ViT backbone can outperform the best ResNet backbone models and that CLIPCEIL is built upon this superior backbone. To fairly compare with the ResNet backbone models, we conducted experiments on **CLIPCEIL using the CLIP ResNet-50 backbone**. **Table 2 in the rebuttal PDF** demonstrates that CLIPCEIL outperforms SAGM and DomainDrop with the ResNet-50 backbone on OfficeHome datasets. >**[w3]:** Comparison with additional prior works. Thank you for your suggestions. References [R1, R4, R5] pertain to **few-shot learning**, while [R8, R11] address **domain adaptation** problems. We did not include these as comparing models focused on other tasks within the domain generalization setting, which is the primary goal of our paper, is challenging. We have included references [R3,R6] (which are already included in main text), and [R7] with the ViT-B/16 backbone (**see Table 1 in the rebuttal PDF**), as well as references [R2,R9] with the ResNet-50 backbone (**see Table 2 in the rebuttal PDF**). Our CLIPCEIL outperforms the prior works with different backbones. >**[w4]:** Weights analysis for the loss terms. Thank you for your suggestions. To avoid hyper-parameters ($\alpha$ for $L_{ref}$, $\beta$ for $L_{dir}$) searching for different datasets, we set $\alpha=1$ and $\beta=1$ as default. Following the Reviewer's suggestion, we investigated hyper-parameter sensitivity by tuning one parameter at a time while keeping the other at 1. **Figure 2 in the rebuttal PDF** demonstrates that $\alpha=1$ and $\beta=1$ yields the best accuracy and that CLIPCEIL achieves stable performance with $\alpha\in[0.6,1.2]$ and $\beta\in[0.6,1.2]$. ### Questions >**[Q1]:** See [ Proposed method] [W2] >**[Q2]:** Can multi-scale adapter be extended to the text encoder? Thank you for your insightful suggestion. We conducted experiments to incorporate a multi-scale adapter into text encoder. As shown in **Table 3 of the rebuttal PDF**, using both visual and text adapters did not perform as well as only using the visual adapter. This may be due to the increased complexity of optimizing both adapters simultaneously. It also suggests that focusing on image feature adaptation is more crucial for domain generalization tasks, since the semantic gap between visual features in pretrained and custom datasets is larger than that of text features. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer k7YB Comment: I sincerely appreciate the effort that the authors have put in for the rebuttal. The authors have addressed most of my concerns through the rebuttal. After reviewing the rebuttal, the comments from the other reviewers as well as the authors’ responses to their concerns, I have a few follow-up points: - Fig. 1 in the rebuttal PDF highlights the observation from point 1 under the Questions section of my previous review. The standard deviation across the domains for TI and DN are notably higher than that of the other datasets. However, the authors have not addressed my question about the presence of diverse domains or more domains (as in the case of TI and DN). If there are more domains or if there are few but sufficiently diverse domains, would it be sufficient to train with the inter-class variance loss alone? This does seem to be the case for DN. *Note - I misphrased my first point in the Questions section of my original review. As mentioned above, the right question is - If there are more number of domains or fewer but diverse domains, would the inter-class variance loss alone be sufficient?* - A lot of the reviewers have raised concerns about the DG setting. Specifically, reviewers dJc5 and cn9a are concerned that the datasets used in DG may not be truly OOD for CLIP, because CLIP has been pre-trained on a large dataset of 400M image-text pairs. I would like to refer them to the following paper - ***Xu, Hu, et al. "Demystifying CLIP Data."ICLR 2024***. This paper uncovers the pre-training strategies of CLIP and constructs the 400M dataset that CLIP was pre-trained on. Given that CLIP was pre-trained on a strictly balanced dataset, one can expect poor performance on datasets such as DomainNet that are inherently long-tailed. Moreover, the performance of CLIP on TerraIncognita and DomainNet highlights the fact that several domains are underrepresented in CLIP’s pre-training data (e.g. Infograph and Quickdraw in DomainNet, camera trap images for animals in TerraIncognita). Additionally, as pointed out by the authors, CLIP possesses an inherent bias towards realistic images as opposed to stylized images (e.g. Clipart, Paintings, etc) owing to the pre-training dataset. Given the detailed nature of the authors’ responses to my concerns as well as that of the other reviewers, I raise my score to **Weak Accept**. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you so much for your reply, willingness to reconsider your rating, and providing the reference that related to the restriction of the CLIP training datasets. Your question -- If there are more number of domains or fewer but diverse domains, would the inter-class variance loss alone be sufficient? -- is very interesting and constructive! While we don't have a definitive answer at the moment, here's our current thinking: We think the question could be essentially translate to (correct us if we're wrong): whether the domain invariant feature is crucial when there are either many source domains or fewer, but more diverse, source domains? One possible reason why domain-invariant features might seem less essential in these scenarios is that a larger number of source domains or more diverse source domains **may span a broader distribution space**. This increases the chance that **at least one of the source domains will have a distribution similar to the target domain**, which will significantly enhance accuracy in target domain. While, of course, the more source domains we have, the greater the chance that one will closely resemble the target domain, in practice, we can't guarantee that at least one source domain will be similar to the target domain. In these scenarios, the **domain-invariant information is still important**. These are our thoughts, but we haven't had the opportunity to conduct experiments to verify (or disprove) them yet. We will do so in the near future, and this could lead to new perspectives on domain generalization. We are also happy to discuss further if you have any additional concerns or questions. --- Rebuttal 2: Comment: Dear Reviewer, I would like to follow up on my previous response to your review. As the discussion period is nearing its end, I am eager to address any remaining concerns you might have. Your feedback is invaluable, and I would greatly appreciate the opportunity to discuss any aspects of the review further. Please let me know if there are any specific points you'd like to revisit or if you have any additional questions.
Summary: The paper introduces a novel method called CLIPCEIL, designed to improve the performance of CLIP on unseen test datasets with domain shifts. The approach refines visual feature channels to maintain domain-invariant and class-relevant features using a lightweight adapter. This involves minimizing inter-domain variance and maximizing inter-class variance. Additionally, CLIPCEIL ensures image-text alignment by matching text embeddings of class descriptions with corresponding image embeddings while eliminating domain-specific features. The model also integrates multi-scale CLIP features using a self-attention fusion module implemented through a Transformer layer. Extensive experiments on five benchmark datasets show that CLIPCEIL outperforms current state-of-the-art methods. Strengths: 1. The paper is well-structured, good written, and easy to follow. 2. Analysis is thorough and extensive and could bring some insight to readers. Weaknesses: 1. This paper lacks novelty overall. CLIP adapters for few-shot transfer learning have been well-studied these years. This paper might get accepted if submitted 2 years ago. The novelty of this work compared with other adapters' work is a little bit incremental. 2. Some famous adapters for CLIP are not cited, like Vt-CLIP (Qiu L, Zhang R, Guo Z, et al. Vt-clip: Enhancing vision-language models with visual-guided texts[J]. arXiv preprint arXiv:2112.02399, 2021.) or cited but not compared, like Tip-Adapter (Zhang R, Fang R, Zhang W, et al. Tip-adapter: Training-free clip-adapter for better vision-language modeling[J]. arXiv preprint arXiv:2111.03930, 2021. ) 3. Performance gain compared with other methods is not significant enough to verify the effectiveness of the design. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Why do authors list the venue in the table? It's quite strange I have to say. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The author addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful comments and provide our responses below. ### Weakness >**[W1]:** Overall novelty. Thanks so much for your comments. First, we would like to emphasize the difference between **few-shot learning** and **domain generalization**. Although they are related, they have distinct differences. In domain generalization, the key assumption is that there are no target domain examples available during training. This means the model must learn to generalize to entirely unseen domains based solely on the information from the source domains. In contrast, few-shot learning typically focuses on task adaptation, where there are, although only a small number, target domain examples available (actually not only available but also labeled ) during training. This allows the model to fine-tune its knowledge and adapt to the new task with minimal data. This paper specifically targets domain generalization task. Second, we want to mention that there are **two main directions** in fine-tuning pre-trained large vision-language models (e.g., CLIP): (text and/or visual) **prompt learning** and **adapter techniques**, and thus the adapter itself is not our contribution. Our model, along with CLIP-Adapter, Tip-Adapter, etc. all fall under the adapter technique. We want to emphasize that our contributions lie in **proposing novel loss functions tailored specifically for the DG task**, compared to other adapter based models that target few-shot learning or other tasks. >**[W2]:** Some famous adapters for CLIP are not cited, like Vt-CLIP, or cited but not compared, like Tip Adapter. > Thanks so much for your suggestions. We will add Vt-CLIP to our related work section. Since Vt-CLIP and TiP-Adapter are both for the **few-shot learning** setting, comparing them in the domain generalization setting is difficult. >**[W3]:** Performance gain compared with other methods is not significant. > Thanks so much for your comments. Our CLIPCEIL achieves the best average performance on five widely used domain generalization datasets with both the frozen encoder and fine-tuning the entire visual encoder settings. Considering the frozen encoder is a more realistic setting in practice, CLIPCEIL exceeds second-best by **$2.3$%** on average. ### Question >**[Q1]:** Why do authors list the venue in the table? > Thanks for your question. Our original purpose is to make it easier for readers to identify when the comparison methods were published (highlighting the latest progress in this area) and where they were published (demonstrating that they are state-of-the-art models by referencing the top conferences or journals). --- Rebuttal Comment 1.1: Title: Concerns about domain generalization setting Comment: I've thoroughly reviewed the authors' responses and appreciate their thoughtful engagement. However, I have some concerns about the domain generalization setting using CLIP. The authors state, "In domain generalization, **no target domain examples are available during training**, requiring the model to generalize based on source domains alone." Given the extensive pre-training data used by CLIP, it is questionable whether the target domain is truly unseen by CLIP. Additionally, I believe the technique for domain generalization could be more accurately classified as zero-shot or few-shot generalization. It appears that this paper is leaning towards few-shot generalization, which is closely related to few-shot learning. The authors should discuss related work on few-shot learning and clarify the differences between few-shot learning and domain generalization. Nevertheless, I acknowledge the paper's contribution regarding the "novel loss functions.", and would like to increase my rating later. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the time you’ve taken to review our paper, recognizing our contribution regarding the "novel loss functions", and your willingness to reconsider the rating. We are particularly thankful for your insightful question: Is there any domain that CLIP has not encountered during its extensive pre-training, given the vast amount of data it was trained on? Driven by intellectual curiosity, we carefully checked the datasets used to train CLIP models in the original paper (https://arxiv.org/pdf/2103.00020, section "A.1. Datasets"). While the datasets are quite diverse, including digits image (MNIST), human face image (Facial Expression Recognition 2013 dataset), traffic sign (GTSRB), remote sensing (EuroSAT, NWPURESISC45), self-driving (KITTI), pathology (PatchCamelyon), human action (UCF101, Kinetics), natural photographs/pictures (ImageNet-1k, STL-10), etc., some domains that are commonly featured in domain generalization benchmarks are missing. For example, **quickdraw, infograph, clipart, sketch** in DomainNet (https://arxiv.org/pdf/1812.01754), **clipart, art** in OfficeHome (https://paperswithcode.com/dataset/office-home), **Cartoon, Sketch** in PACS (https://paperswithcode.com/dataset/pacs), etc. Also, the **unique image styles** from the Terra Incognita dataset, which features camera trap images for monitoring animal populations, appear to be underrepresented. Therefore, we think the data in the **domain generalization benchmark datasets are not fully disclosure to the CLIP model**, and this is why the domain generalization performance, even utilizing the CLIP model, is still relatively low (around 50%-60%), on DomainNet and Terra Incognita datasets, and we see room for improvement and believe that further efforts in this area could enhance CLIP's generalization capabilities. We agree with the reviewer that domain generalization and few-shot learning are closely related. However, we also see the unique use cases of the domain generalization, particularly due to its "ready-to-use" nature. Few-shot learning **requires the preparation of labeled data on the custom dataset and training a machine learning model**, even with a small number of data. While this process might seem straightforward to us as computer scientists, it can be **challenging for end users in other fields** who lack machine learning expertise, such as doctors, materials scientists, and biologists. Our collaborators often request models that are ready to use on their new data, which may differ from the training data and may not have been encountered during training. In such scenarios, the domain generalization approach proves to be valuable. We shared our insights here and welcome further discussions with the reviewers.
Rebuttal 1: Rebuttal: ## General Reply We would like to express our sincere appreciation for all reviewers' invaluable feedback and comments. Below are the general replies to the common concerns and the summary of the additional conducted experiments. First, we would like to clarify the difference between few-shot learning and domain generalization. In domain generalization, no target domain examples are available during training, requiring the model to generalize based on source domains alone. In contrast, few-shot learning involves a small number of labeled target domain examples for task adaptation. **This paper specifically targets domain generalization.** Additionally, there are **two main directions** to fine-tuning pre-trained vision-language models like CLIP: **prompt learning and adapter techniques**. Our model falls under adapter techniques, similar to CLIP-Adapter and Tip-Adapter. Our key contribution is **proposing novel loss functions specifically for the domain generalization task**, unlike other adapter-based models that focus on few-shot learning or other tasks. ### Additional experiments (order of the response to reviewers) In response to the reviewer's comments, we conducted thorough additional experiments, enhancing the paper from the following aspects: * Fair comparison with SAGM and DomainDrop by using ResNet-50 backbone. * Weights analysis for the loss terms. * Apply multi-scale mechanism to text encoder. * Ablation Study for the Transformer layer in Aadpter $g$. * Evaluate the CLIPCEIL on the ImageNet datasets. * Performance on other ViT backbones. * Ablation Study on other alignment methods. * Computational resources comparison. Pdf: /pdf/8d4c3abee64991128192c95cea6c8cbd67cb860b.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection
Accept (poster)
Summary: The paper focuses on the Low-light Object Detection task from the perspective of feature learning. In detail, the paper proposes an Illumination-Invariant Module to extract illumination-invariant features and a learning illumination-invariant paradigm. Experiments verify the effectiveness of the proposed method. Strengths: 1. The writing is good with beautiful illustrations. 2. The idea is novel and interesting. The paper leverages illumination-invariant features to detect objects in low light, which is novel for the low-light object detection task. The proposed method is easy to follow and achieves good performance. Weaknesses: More related methods need to be compared in Table 1 and Table 2. [1] Ziteng Cui, Kunchang Li, Lin Gu, Shenghan Su, Peng Gao, Zhengkai Jiang, Yu Qiao, and Tatsuya Harada. You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction. In BMVC, page 238, 2022. [2] Shangquan Sun, Wenqi Ren, Tao Wang, and Xiaochun Cao. Rethinking image restoration for object detection. Advances in Neural Information Processing Systems, 35:4461–4474, 2022. [3] Wenyu Liu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jianke Zhu, and Lei Zhang. Image-adaptive yolo for object detection in adverse weather conditions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 1792–1800, 2022. [4] Sanket Kalwar, Dhruv Patel, Aakash Aanegola, Krishna Reddy Konda, Sourav Garg, and K Madhava Krishna. Gdip: Gated differentiable image processing for object detection in adverse conditions. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 7083–7089. IEEE, 2023. [5] Qingpao Qin, Kan Chang, Mengyuan Huang, and Guiqing Li. Denet: Detection-driven enhancement network for object detection under adverse weather conditions. In Proceedings of the Asian Conference on Computer Vision, pages 2813–2829, 2022. [6] Xiangchen Yin, Zhenda Yu, Zetao Fei, Wenjun Lv, and Xin Gao. Pe-yolo: Pyramid enhancement network for dark object detection. In International Conference on Artificial Neural Networks, pages 163–174. Springer, 2023. [7] Khurram Azeem Hashmi, Goutham Kallempudi, Didier Stricker, and Muhammad Zeshan Afzal. Featenhancer: Enhancing hierarchical features for object detection and beyond under low-light vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6725–6735, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: How does the method perform in comparison to recent related methods [1-7]? The proposed method assumes neighboring pixels exhibit high similarity of illumination. However, how does it hold true when meeting uneven light at night? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The proposed method assumes neighboring pixels exhibit high similarity of illumination. However, how does it hold true when meeting uneven light at night? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We sincerely thank you for your insightful and positive comments.** --- >**How does the method perform in comparison to recent related methods?** We complement more detailed comparison experiments as presented in Table 1, which includes runtime, model size, and performance. Note that our YOLA still achieves the best performance and speed among the evaluated methods. Besides, ReForDe[7] didn’t release their detailed training data, so we can not implement it. However, compared to the improvements reported by ReForDe in their paper ($\leq$0.2 mAP on ExDark using YOLOv3), our YOLA demonstrates a more substantial enhancement, achieving improvements of 1.7 mAP and 2.7 mAP on YOLOv3 and TOOD, respectively. Additionally, FeatEnHancer[1] didn't release their code, we thus follow the FeatEnHancer’sexperimental setting to implement the RetinaNet-based detectors as shown in Table 2. We can see that even though our baseline implementation on the ExDark dataset is inferior to FeatEnHancer’s, the integration of YOLA enables our method to achieve the best performance (1.9 mAP significant improvement compared to baseline). For DarkFace dataset, FeatEnHancer decreases the baseline performance by 0.1 mAP, which is attributed to hierarchical features that failed to be captured by RetinaNet, as claimed in [1]. In contrast, our YOLA, triggered from the physics-based model perspective without elaborate design, surpasses the baseline with a remarkable improvement of 2.5 mAP. It strongly suggests the generalizability and effectiveness of YOLA. --- >**The proposed method assumes neighboring pixels exhibit high similarity of illumination. However, how does it hold true when meeting uneven light at night?** In the main text, we assume that the illumination values of neighboring pixels are equal, which allows us to eliminate the influence of illumination when extracting features. However, in cases of uneven illumination, this assumption's constraint is weakened but still helps reduce the impact of illumination, as shown in teaser (b). Our method is detection-driven and can further mitigate the influence of such uneven illumination during the learning process, as illustrated in teaser (d). Additionally, please refer to the appendix. When the actual distance between neighboring pixels is too large, the assumption may not hold. Therefore, we propose IIloss to constrain the extraction of illumination-invariant features to as close a region as possible, mitigating the impact of uneven lighting. --- | Detector | mAP | Size(M) | FPS | |------------------------------------|------|---------|------| | Baseline | 72.5 | 32.044 | 57.7 | | IAT[2] | 73.0 | 32.135 | 50.9 | | IAYOLO[3] | 65.0 | 32.209 | 52.5 | | GDIP[4] | 72.8 | 167.00 | 54.0 | | DENet[5] | 73.5 | 32.089 | 55.7 | | PEYOLO[6] | 67.8 | 32.135 | 38.8 | | Ours | 75.2 | 32.052 | 56.6 | **Table 1:** Quantitative comparisons of the ExDark dataset based on TOOD detectors --- | Dataset | Method | mAP$_{50}$ | |------------|----------------------------------------|----------------------| | Exdark | Baseline | 72.1 | | | w/ FeatEnHancer | 72.6 (+0.5) | | |-------------------------|---------------------------------| | | Baseline$^{\dagger}$ | 70.9 | | | w/ YOLA | **72.8 (+1.9)** | |----------|-------------------------|---------------------------------| | DarkFace | Baseline | 47.3 | | | w/ FeatEnHancer | 47.2 (-0.1) | | |-------------------------|---------------------------------| | | Baseline$^{\dagger}$ | 50.2 | | | w/ YOLA | **52.7 (+2.5)** | **Table 2:** Quantitative comparisons (YOLA vs. FeatEnHancer), $\dagger$ indicates our implemented baseline. --- **Reference**: >[1] Khurram et al. Featenhancer: Enhancing hierarchical features for object detection and beyond under low-light vision, ICCV 2023. >[2] Cui et al. You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction. BMVC 2022. >[3] Liu et al. Image-adaptive yolo for object detection in adverse weather conditions. AAAI 2022. >[4] Sanket et al. Gdip: Gated differentiable image processing for object detection in adverse conditions. ICRA 2023. >[5] Qin et al. Denet: Detection-driven enhancement network for object detection under adverse weather conditions. ACCV 2020. >[6] Yin et al. Pe-yolo: Pyramid enhancement network for dark object detection. ICANN 2023. >[7] Sun et al. Rethinking image restoration for object detection. NeurIPS 2022. >[8] Cui et al. Multitask aet with oerthogonal tangent regularity for dark object detection, ICCV 2021. --- Rebuttal Comment 1.1: Title: More detailed experimental results Comment: **We sincerely thank you for your time and effort in reviewing our manuscript.** We have provided detailed information on the performance of quantitative experiments across different detectors, as shown in Tables 1 and 2. For a fair comparison, we reimplemented these methods using the MMdetection toolbox based on their open-source code, ensuring consistent training hyperparameters. We can see that our YOLA still achieves the best performance across different detectors. Additionally, we will include these experiments in the revised version. Thank you once again for your valuable suggestions. | Detector |Exdark mAP|DarkFace mAP| |----------|----------|----------| | Baseline| 71.0 | 60.0 | | IAT | 72.6 | 59.8 | | IAYOLO | 68.1 | 59.9 | | GDIP| 67.5| 60.4| | DENet| 71.3 | 60.0 | | PEYOLO| 68.8 | 53.9 | | Our | **72.7** | **61.5** | **Table 1**: Quantitative comparisons based on YOLOv3 detector. | Detector |Exdark mAP|DarkFace mAP| |----------|----------|----------| | Baseline| 72.5 | 62.1 | | IAT | 73.0 | 62.0 | | IAYOLO | 65.0 | 55.5 | | GDIP| 72.8| 62.9| | DENet| 73.5 | 66.2 | | PEYOLO| 67.8 | 61.1 | | Our | **75.2** | **67.4** | **Table 2**: Quantitative comparisons based on TOOD detector. --- Rebuttal Comment 1.2: Comment: Thanks to the author's detailed answer, which has resolved my doubts. I am raising my score to 6: Weak Accept. --- Reply to Comment 1.2.1: Comment: We are grateful for the feedback and thanks very much for the approval.
Summary: This paper proposes YOLA, a framework for object detection in low-light conditions by leveraging illumination-invariant features. A novel Illumination-Invariant Module to extract illumination-invariant features for low-light image enhancement. Strengths: Figures are helpful for understanding. The proposed method gives better performance than others. Weaknesses: RG/RB/GB-chromaticity is widely used in computer vision; learning-based methods for intrinsic images or inverse rendering also adopt the constraint. It deals with light intensity but does not try to solve the image noises in low light conditions. Does this paper consider noise reduction in the pipeline? Illumination invariant features based on chromaticity have ambiguities between colors with the same chromaticity, e.g. dark red or light red. Such limitations are not discussed. In experiments, ``LLIE methods fail to achieve satisfactory performance due to inconsistency between human visual and machine perception. The enhancement methodologies prioritize human preferences. However, it is important to note that optimizing for enhanced visual appeal may not align with optimized object detection performance``. It is hard to understand without visual examples. In the pipeline, the work is basically low-light enhancement + detector. In evaluations, only object detection is evaluated, why not evaluate on low-light enhancement datasets/benchmarks? The proposed features are claimed to fit many tasks. More tasks other than detection can be demonstrated. Technical Quality: 2 Clarity: 2 Questions for Authors: See above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Overall I think the idea of illumination invariant feature is simple, so the technical contribution is limited. The results of ``ours`` in Figure 3-4 are strange, the images are very cloudy. It is very different from the example in the pipeline. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We sincerely thank you for your constructive comments.** --- >**In the pipeline, the work is basically low-light enhancement + detector. In evaluations, only object detection is evaluated, why not evaluate on low-light enhancement datasets/benchmarks?** ` First and foremost, we need to emphasize that our framework is designed to enhance the performance of object detectors in low-light scenarios, rather than for visual brightening or denoising` . Human visual alignment can improve detection performance, but it is not the only solution. From a machine vision perspective, detection does not depend on achieving image quality that aligns with human vision. Despite the presence of noise and blurriness in low-light images, this does not hinder our method from improving performance in detection tasks. Essentially, our method is object detection rather than image enhancement, which means comparing our results based on image enhancement benchmarks is unfair. More specifically, our framework does not employ any extra low-lit and normal-lit image pairs or other image restoration-related loss to enhance the input images. We only leverage detection loss to guide IIM in producing task-specific illumination invariant features. Therefore, improving visual quality is not our goal. Interestingly, we found that the subsequent enhanced image yielded by FuseNet tends to display increased brightness. --- >**In experiments, LLIE methods fail to achieve satisfactory performance due to inconsistency between human visual and machine perception. The enhancement methodologies prioritize human preferences. However, it is important to note that optimizing for enhanced visual appeal may not align with optimized object detection performance. It is hard to understand without visual examples.** As we have shown, some of the images we present are cloudy, which exemplifies the discrepancy between machine vision and human vision. In LLIE (Low-Light Image Enhancement) methods, many loss functions are designed based on human prior assumptions, such as illumination smoothness and color consistency in Zero-DCE, or TV loss in denoising. These losses often lead to the loss of many details while preserving details preferred by human perception. This difference may result in LLIE methods performing poorly in detection tasks, as discussed in related works[1, 2, 3]. --- >**RG/RB/GB-chromaticity is widely used in computer vision; learning-based methods for intrinsic images or inverse rendering also adopt the constraint. It deals with light intensity but does not try to solve the image noises in low light conditions. Does this paper consider noise reduction in the pipeline?** >**Illumination invariant features based on chromaticity have ambiguities between colors with the same chromaticity, e.g. dark red or light red. Such limitations are not discussed.** Enhancing image quality, such as reducing noise and improving chromaticity distinction, is a practical strategy, but it is not the only one for improving detection performance. In this work, We are committed to finding an end-to-end approach to enhance downstream detection tasks, rather than focusing on image restoration. Besides, improving both image quality and detection performance typically requires paired annotated datasets, which presents significant challenges for practical applications. Therefore, we strongly believe that developing simpler and more efficient end-to-end methods to reduce the burden of data annotation will greatly benefit the community. --- >**The proposed features are claimed to fit many tasks. More tasks other than detection can be demonstrated.** We present our evaluation of YOLA on the instance segmentation task in Table 1. We report the quantitative comparisons of several advanced LLIE and low-light object methods using Mask R-CNN on the low-light instance segmentation (LIS[10]) dataset. We can see that our YOLA achieves the best performance across all metrics, indicating that YOLA not only facilitates low-light object detection but also low-light instance segmentation. For more visual comparison, please refer to our appendix. --- | Method | AP$^{seg}$ | AP$^{box}$ | |----------------------------------|------------|------------| | Baseline | 34.2 | 41.3 | | DENet[3] | 38.6 | 46.4 | | PENet[4] | 36.1 | 43.6 | | Zero-DCE[5] | 38.7 | 46.4 | | EnlightenGAN[6] | 38.4 | 45.8 | | RUAS[7] | 36.1 | 43.8 | | SCI[8] | 36.5 | 44.3 | | NeRCo[9] | 36.7 | 44.6 | | **Ours** | **39.8** | **47.5** | **Table 1:** Quantitative Comparisons. **Reference** >[1] Cui et al. Multitask aet with oerthogonal tangent regularity for dark object detection, ICCV 2021. >[2] Khurram et al. Featenhancer: Enhancing hierarchical features for object detection and beyond under low-light vision, ICCV 2023. >[3] Qin et al. Denet: Detection-driven enhancement network for object detection under adverse weather conditions. ACCV 2020. >[4] Yin et al. Pe-yolo: Pyramid enhancement network for dark object detection. ICANN 2023. >[5] Guo et al. Zero-reference deep curve estimation for low-light image enhancement. CVPR 2020. >[6] Jiang et al. Enlightengan: Deep light enhancement without paired supervision. TIP 2021. >[7] Liu et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. CVPR 2021. >[8] Ma et al. Toward fast, flexible, and robust low-light image enhancement. CVPR 2022. >[9] Yang et al. Implicit neural representation for cooperative low-light image enhancement. CVPR 2023. >[10] Chen et al. Instance segmentation in the dark. IJCV 2023. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We would like to extend our sincere appreciation for the time and effort you have dedicated to reviewing our manuscript. Your valuable comments and insights are greatly appreciated. We eagerly await your feedback on the points we have addressed in our rebuttal. If you have any concerns or require further clarification, please do not hesitate to let us know. Thank you once again for your commitment to the review process. Sincerely, Authors
Summary: In this paper, the authors propose an object detection method in low-light scenarios that is based on illumination-invariant feature learning. Additionally, the extraction of illumination-invariant features from low-light images, which can be easily integrated into existing object detection frameworks, The results reveal significant improvements in low-light object detection tasks as well as promising results in both well-lit and over-lit scenarios. Strengths: 1.This paper introduces an new object detection in low-light conditions by leveraging illumination-invariant features. 2. The writing is well done and well organized. 3. The Illumination-Invariant Module seems to be a plug and play module that is very useful. Weaknesses: 1. As a plug and play module, I think the lighting invariant module should be integrated into more detectors to prove its effectiveness. 2. The authors claim that they learned light invariant features, and then models trained in low light conditions should be able to generalize directly to normal light conditions. And vice versa. The authors should provide more experiments to demonstrate the generalization ability of their light invariant features. 3.The authors should provide evaluation content such as runtime and memory used by the run, it seems that their approach is more lightweight. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We sincerely thank you for your insightful and constructive comments.** >**As a plug and play module, I think the lighting invariant module should be integrated into more detectors to prove its effectiveness.** We report more detectors as shown in Table 1. It contains the **Anchor-based** detector (Faster-RCNN[1], RetinaNet[2], YOLOV3[3]), and the **Anchor-free** one (TOOD[4], Sparse-RCNN[5]). By integrating YOLA, the performances of all detectors are significantly improved, demonstrating the generalization capability of YOLA. Besides, please refer to the appendix, we also evaluate our YOLA on the instance segmentation task, demonstrating YOLA’s effectiveness on Mask R-CNN detector. --- >**The authors claim that they learned light invariant features, and then models trained in low light conditions should be able to generalize directly to normal light conditions. And vice versa. The authors should provide more experiments to demonstrate the generalization ability of their light invariant features.** We complement the more generalization experiments as presented in Table 2, where $GAP$ represents mAP$_{50}$ gap between in-domain and cross-domain. We trained detectors separately in low-light (ExDark) and normal-light(Pascal VOC 2012[6]) conditions and then tested them in the opposite conditions. We can see that our YOLA achieves the best performance across different domains and shows a smaller performance drop when tested on cross-domain. --- >**The authors should provide evaluation content such as runtime and memory used by the run, it seems that their approach is more lightweight.** We have included a comparison of runtime, model size, and performance in Table 3. The inference speeds, evaluated on an RTX 2080Ti using the MMdetection toolbox, indicate that YOLA introduces the fewest additional parameters (0.008M) while achieving the highest FPS and outstanding performance among the evaluated detectors, excluding the baseline. --- | Detector | mAP$_{50}$(ExDark) | mAP$_{50}$(DarkFace) | Types | |--------------|--------------------|----------------------|---------------| | YOLOv3 | 71.0 | 60.0 | **Single Stage** | | +YOLA | **72.7** | **61.5** | | |--------------|--------------------|----------------------|---------------| | RetinaNet | 70.9 | 50.2 | **Single Stage** | | +YOLA | **72.8** | **52.7** | | |--------------|--------------------|----------------------|---------------| | FasterRCNN | 71.4 | 43.0 | **Two Stage** | | +YOLA | **72.5** | **44.6** | | |--------------|--------------------|----------------------|---------------| | SparseRCNN | 63.5 | 43.5 | **Two Stage** | | +YOLA | **68.7** | **52.8** | | |--------------|--------------------|----------------------|---------------| | TOOD | 72.5 | 62.1 | **Single Stage** | | +YOLA | **75.2** | **67.4** | | |--------------|--------------------|----------------------|---------------| **Table 1:** YOLA with more detectors. --- | Detector | Training Set | Testing Set | mAP$_{50}$ ↑| Testing Set | mAP$_{50}$↑ | GAP ↓ | |----------|----------------|--------------|------------|--------------|------------|------| | YOLOv3 | ExDark_train | ExDark_test | 71.0 | VOC_val | 57.6 | 13.4 | | +YOLA | ExDark_train | ExDark_test | **72.7** | VOC_val | **60.5** | **12.2**| |----------|----------------|--------------|------------|--------------|------------|------| | YOLOv3 | VOC_train | VOC_val | 78.8 | ExDark_test | 57.4 | 21.4 | | +YOLA | VOC_train | VOC_val | **78.9** | ExDark_test | **58.5** | **20.4** | **Table 2:** Generalization comparison. --- | Detector | mAP | Size(M) | FPS | |------------------------------------|------|---------|------| | Baseline | 72.5 | 32.044 | 57.7 | | IAT[7] | 73.0 | 32.135 | 50.9 | | IAYOLO[8] | 65.0 | 32.209 | 52.5 | | GDIP[9] | 72.8 | 167.00 | 54.0 | | DENet[10] | 73.5 | 32.089 | 55.7 | | PEYOLO[11] | 67.8 | 32.135 | 38.8 | | Ours |75.2 | 32.052 | 56.6 | **Table 3:** Quantitative comparisons of the ExDark dataset based on TOOD detectors --- **Reference**: >[1] Ren et al. Faster r-cnn: Towards real-time object detection with region proposal networks. NeurIP2015. >[2] Lin et al. Focal loss for dense object detection. ICCV 2017. >[3] Redmon et al. Yolov3: An incremental improvement. ArXiv. >[4] Feng et al. Tood: Task-aligned one-stage object detection. ICCV 2021. >[5] Sun et al. Sparse r-cnn: End-to-end object detection with learnable proposals. CVPR 2021. >[6] Everingham et al. The {PASCAL} {V}isual {O}bject {C}lasses {C}hallenge 2012 {(VOC2012) >[7] Cui et al. You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction. BMVC 2022. >[8] Liu et al. Image-adaptive yolo for object detection in adverse weather conditions. AAAI 2022. >[9] Sanket et al. Gdip: Gated differentiable image processing for object detection in adverse conditions. ICRA 2023. >[10] Qin et al. Denet: Detection-driven enhancement network for object detection under adverse weather conditions. ACCV 2020. >[11] Yin et al. Pe-yolo: Pyramid enhancement network for dark object detection. ICANN 2023
Summary: This paper proposes a plug-and-play module for extracting illumination-invariant features from low-light images. By integrating a zero-mean constraint within the module, a diverse set of kernels is effectively learned. These kernels excel at extracting illumination-invariant features, thereby enhancing detection accuracy. Experiments on object detection and semantic segmentation tasks demonstrate the effectiveness of the module. Strengths: 1. The authors design a Illumination-Invariant Module to extract illumination-invariant features without requiring additional paired datasets, and can be seamlessly integrated into existing object detection methods. 2. Lambertian assumption is introduced, which enhances the interpretability of the model. 3. The authors claim that the proposed method achieves state-of-the-art performance on several benchmark datasets for object detection. Weaknesses: 1. The authors assume uniform illumination between neighboring pixels to eliminate the influence of the positional term 𝑚 in Equation 1. However, images captured in real-world scenes often exhibit uneven lighting. I question the validity of this assumption. 2. The paper does not provide definitions for the symbols used in Equation 2. 3. What does "Baseline" refer to in Tables 1 and 2, and why is the object detection performance based on low-light enhancement methods worse than the Baseline? 4. Many end-to-end low-light face detection algorithms have been proposed. The authors should compare their method not only with general object detection algorithms but also with these specialized low-light face detection algorithms on the DarkFace dataset. Technical Quality: 4 Clarity: 3 Questions for Authors: There are some questions regarding the training loss. The paper does not provide the training loss of the model. Are the components for learning illumination-invariant features and the object detection network trained jointly, or is the illumination-invariant feature learning component trained separately? If trained separately, considering that the ExDark and DarkFace datasets consist only of annotated low-light images without corresponding normal-light images, is this training unsupervised? The authors should provide detailed information about the training process in the paper. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to thank the reviewer for carefully reading our submission and providing many insightful comments.** > **The authors assume uniform illumination between neighboring pixels to eliminate the influence of the positional term $m$ in Equation 1. However, images captured in real-world scenes often exhibit uneven lighting. I question the validity of this assumption.** In the main text, we assume that the illumination values of neighboring pixels are equal, which allows us to eliminate the influence of illumination when extracting features. However, in cases of uneven illumination, this assumption's constraint is weakened but still helps reduce the impact of illumination, as shown in teaser (b). Our method is detection-driven and can further mitigate the influence of such uneven illumination during the learning process, as illustrated in teaser (d). Additionally, please refer to the appendix. When the actual distance between neighboring pixels is too large, the assumption may not hold. Therefore, we propose IIloss to constrain the extraction of illumination-invariant features to as close a region as possible, mitigating the impact of uneven lighting. --- > **The paper does not provide definitions for the symbols used in Equation 2.** We apologize for the oversight. In the revised version, we will include the corresponding explanations. $B$ denotes the blue channel, whereas $R$ represents the red channel. --- > **What does "Baseline" refer to in Tables 1 and 2, and why is the object detection performance based on low-light enhancement methods worse than the Baseline?.** 'Baseline' refers to the detector that our method is built upon. The difference between 'Baseline' and 'Ours' is that we utilized the IIM (Illumination Invariant Module). Lowlight image enhancement methods may fail to achieve satisfactory performance due to the task of primarily satisfying human vision instead of machine perception. This phenomenon is also claimed by many related works[1,2,3]. --- > **Many end-to-end low-light face detection algorithms have been proposed. The authors should compare their method not only with general object detection algorithms but also with these specialized low-light face detection algorithms on the DarkFace dataset.** We explored several specialized low-light face detection algorithms [4,5,6]. However, most of these algorithms are evaluated on the DarkFace test set that does not provide Ground Truth, or lacks open-source code. Consequently, we selected some representative face detection algorithms adopted by these methods as our baselines, including DFSD[7] and PyramidBox[8]. We implemented our YOLA using their default settings for a fair comparison, as shown in Table 1. Our YOLA outperforms the baselines, achieving 0.6 and 0.8 higher mAP on PyramidBox and DFSD, respectively, demonstrating its generalization ability in the face detection task. --- > **There are some questions regarding the training loss. The paper does not provide the training loss of the model. Are the components for learning illumination-invariant features and the object detection network trained jointly, or is the illumination-invariant feature learning component trained separately? If trained separately, considering that the ExDark and DarkFace datasets consist only of annotated low-light images without corresponding normal-light images, is this training unsupervised? The authors should provide detailed information about the training process in the paper.** Our YOLA model is trained without any additional image pair annotations using an end-to-end joint training fashion. Specifically, the features produced by the IIM are supposed to be inherently illumination invariant at initialization, based on the Lambertian assumption. Thus, we do not require any additional normal light images or other loss such as Brightness loss to guide its learning. We only employ detection loss to guide the IIM in producing task-specific illumination invariant features for downstream tasks. --- | Detector | Baseline | Ours | |--------------------------------------|----------|--------------| | PyramidBox| 47.7 | **48.3** | | DFSD | 44.9 | **45.7** | **Table 1:** Face detection algorithms on DarkFace Dataset --- > **Reference**: > [1] Cui et al. Multitask aet with oerthogonal tangent regularity for dark object detection, ICCV 2021. > [2] Khurram et al. Featenhancer: Enhancing hierarchical features for object detection and beyond under low-light vision, ICCV 2023. > [3] Qin et al. Denet: Detection-driven enhancement network for object detection under adverse weather conditions. ACCV 2020. > [4] Wang et al. Unsupervised face detection in the dark. T-PAMI 2022. > [5] Wang et al. Hla-face: Joint high-low adaptation for low light face detection. CVPR 2021. > [6] Yu et al. Single-stage face detection under extremely low-light conditions. ICCVW 2021. > [7] Li et al. Dsfd: dual shot face detector. CVPR 2019. > [8] Tang et al. Pyramidbox: A context-assisted single shot face detector. ECCV 2018. --- Rebuttal Comment 1.1: Comment: Thanks to the author's detailed response, I choose to rise my score to 7. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback! We appreciate your support!
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unsupervised Discovery of Formulas for Mathematical Constants
Accept (poster)
Summary: The authors propose an algorithm to filter, cluster and label polynomial continued fractions. It is based on several features linked to the asymptotic behaviour of their rational approximations. The authors detail how these are computed. They apply this algorithm to a large set of formulae ; they discuss the properties of the obtained clusters and highlight some novel PCFs linked to known mathematical constants. Strengths: This work seems to be an interesting application of unsupervised data analysis to fundamental science. It proposes a way to build relevant metrics from sequences of numbers, and to numerically estimate them. More generally, the structures revealed on fig. 3 and 4 are intriguing and the article is well written. Weaknesses: The claim that "we connect the challenge of formula creation to modern approaches in AI for Science" seems a bit bold. To my understanding this work does not involve modern AI in the sense that it requires manual and careful feature extraction and that nothing is learnt. Technical Quality: 3 Clarity: 3 Questions for Authors: - Part 3.4 was a bit confusing. The Blind-δ Algorithm is presented as a proxy for delta eq. 4. Is it not rather to estimate the approximation error epsilon ? - There is an inconsistency : part 3.2 it is n^beta vs table 1 P(n) and no beta ; is it because beta is hard to estimate ? is it usefull for clustering ? - Could the authors comment on the fact that the predicted delta eq. 5 can be quite far from the actual delta (fig. 2a, fig. 2b) ? Yet it seems it provides relevant information for clustering ; what does it capture ? - More generally, did the authors try to extract more features, other metrics, not necessary having a mathematical interpretation, in an automated way ? l. 100 "equval" l. 152 "but how is it related to the actual series delta?" is not clear l. 197 "We’ll start" informal Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **“The claim that "we connect the challenge of formula creation to modern approaches in AI for Science" seems a bit bold. To my understanding this work does not involve modern AI in the sense that it requires manual and careful feature extraction and that nothing is learnt.”** In the revised manuscript, we softened this sentence in l. 234 to express a more specific claim. For context, there is no previous work that succeeded applying modern AI to conjecture generation in number theory. Our work is the first in this field to succeed with clustering algorithms, and this resulted in finding novel formulas on a large scale. *For further discussion regarding the connection of our work to modern AI for Science, please see the joint rebuttal section.* Regarding the feature extraction itself, the full set of metrics was chosen manually, because the mathematics of rational approximant series limits the possibilities of additional metrics which give independent information about the formula. During the conjecture generation process (Fig. 1), the subset of metrics used was selected automatically. In the revised manuscript, we now discuss the broader set of possible metrics in section 3.5, describing the relative value of including additional metrics and the rationale for the metric selection. Regarding the use of the term learning, please see Fig.5 in the attached PDF. There we show that the clusters trained on our original dataset successfully classify (based on the same dynamical metrics) new formulas outside the original dataset. The new formulas are based on higher-order polynomials than the ones used for building the clusters - showing the trained classifier can work even on higher complexity formula structures. * **“Part 3.4 was a bit confusing. The Blind-δ Algorithm is presented as a proxy for delta eq. 4. Is it not rather to estimate the approximation error epsilon ?”** This is a good point. “Blind-$\delta$” is the name we gave the entire algorithm, although this name arises from eq. 4, where indeed it is the approximation error $\epsilon$ that is being calculated without prior knowledge about the limit. It is just a terminology issue, but we should have explained it better. The ability to estimate $\delta$ without prior knowledge provides us with a unique and powerful dynamical metric, which is why we consider it a key to the algorithm. Following this comment, we clarify the use of terminology. We thank the referee for raising this point and helping to improve our work. * **“There is an inconsistency : part 3.2 it is n^beta vs table 1 P(n) and no beta ; is it because beta is hard to estimate ? is it useful for clustering ?”** We thank the referee for raising this point. The vast majority of our dataset consists of PCFs with an exponential or factorial error decay rate - in which cases the polynomial coefficient is noisy and less useful for characterizing the PCF. So yes, $\beta$ is hard to measure accurately for most PCFs. In fact, only 534 PCFs were measured to have both $\left| \gamma \right| < 0.1$ and $\left| \eta \right| < 0.1$ (as defined in Table 1). In those cases we believe the polynomial coefficient to be potentially useful, but a larger subset is required to test the automated process. The definitions in section 3.3 now follow the same form as Table 1, not mentioning $\beta$ in the manuscript any more. * **“Could the authors comment on the fact that the predicted delta eq. 5 can be quite far from the actual delta (fig. 2a, fig. 2b) ? Yet it seems it provides relevant information for clustering ; what does it capture ?”** We thank the referee for this remark. Figures 2a and 2b intentionally show the possible discrepancy between the direct calculation of $\delta_n$ and the value of $\delta_{\mathrm{predicted}}$ (eq. 5) for $n < 1000$. The prediction formula is accurate in the limit as $n \rightarrow \infty$. In practice, we set the numerical limit at $n = 10^7$, making $\delta_{\mathrm{predicted}}$ much closer to the actual $\delta$ value. This is now explicitly stated in the caption of fig. 2. $\delta_{\mathrm{predicted}}$ provides significant validation for the numerical $\delta$ in clustering. We used several sanity checks to identify anomalies or execution issues. One of the flags was a large discrepancy between the numerical and the predicted $\delta$. In the future, we plan to use the prediction formula in a gradient-descent-based approach to search for high-$\delta$ formulas, which we will explore in further research. The $\delta$ prediction formula holds additional importance in our research. The extension of the basic formula in Elimelech et al. (2023) to the general case was motivated and conjectured based on the results of the numerical $\delta$ measurements. In a sense, it is the first conjecture that arose from the dataset and was later proven analytically. * **“More generally, did the authors try to extract more features, other metrics, not necessarily having a mathematical interpretation, in an automated way ?”** The dynamical features are extracted automatically for the 1,543,926 formulas, but as the referee suggests, the choice of which metrics is predetermined. We have tried several other dynamic metrics during this research (like measuring $p_n$ and $q_n$ modulo primes, tracking their sign etc.) - which gave no apparent value. The idea to automate the choice of dynamic features is intriguing. Such an idea was never attempted in number theory. We thank the referee for this suggestion. It is now mentioned in Section 5 (“Discussion and Outlook”), and we leave it for future research. * **“l. 100 "equval"** * **l. 152 "but how is it related to the actual series delta?" is not clear** * **l. 197 "We’ll start" informal”** We thank the referee for the careful review and for spotting these issues. L. 100 is now corrected. L.152 is now removed. In l. 197 the first sentence is now removed. --- Rebuttal Comment 1.1: Comment: I thank the reviewers for their replies, the precisions and the additional results they give.
Summary: The paper presents a classification of 1.5 million polynomial continued fractions (PCF), continued fractions having as coefficients the integer values of two polynomials, $A(n)$ and $B(n)$, with $A$ $B$ of degree two, with integer coefficients in $[-5,5]$. PCF are classified according to the asymptotic properties of the sequence of approximation errors (difference between the convergents and the limit), the asymptotic properties of the denominator of the convergents (in simplest terms), and the measure of irrationality, which is the difference between the logs of the two previous metrics. Along these metrics, the authors observe that groups of PCF with the same limit tend to cluster together, and that by "anchoring" these clusters on known formulas, one can discover new PCF decompositions of mathematical constants. Strengths: The paper is well written, and interesting to read. It is an original work, and both theoretical and experimental results are adequately supported. The results are intriguing, and seem to point to scientific discovery. Weaknesses: The link with AI or machine learning is not completely clear to me. Whereas the clustering techniques used in the paper, and the t-SNE projections used to represent the results, are commonly used by ML practitionners, most of the analyses conducted in this paper amount to descriptive statistical analysis of a specific mathematical dataset. To demonstrate a possible link with Machine Learning, it would be useful that the authors discuss (and perhaps demonstrate) how their approach scales to large sets of PCF, by letting the coefficients and degrees of $A$ and $B$ grow larger. I lean towards acceptance because of the potential interest of such approaches in AI for Science. Technical Quality: 3 Clarity: 3 Questions for Authors: * Figure 1 caption is very long, in the interest of clarity, it might be worth describing your methodology in a specific section (3.2?) * l.96: couldn't we assume that the convergents $p_n/q_n$ are always in simplest terms? this would avoid having to introduce of $\tilde q_n$. Besides, I believe it is assumed in the usual definition of the measure of irrationality. * in section 3.1, you explain that the irrationality measure is either 0 or larger than 1. You then claim that the blind-$\delta$ method provides a good estimator of $\delta$, and figure 2.a seems to support this, yet in figure 2.b most estimates of $\delta$ are below $1$, and some are even negative. What happens? Doesn't this compromise the use of blind-$\delta$ as an estimator of the irrationality measure? * Section 3.5, can you elaborate on "representation power", what it is? why is it important? * Table 1: The exponential factor of the growth coefficient seems useless, why is it? Also, could the larger value of the Davies Boulding Index the irrationality measure be a sign of the problem with its estimation? * Figure 4: can you provide a description of the axes in the tSNE graph? (maybe switch to PCA for more explainability) * l.29: shouldn't Lambert's original paper be quoted, instead of a modern compilation? * l. 100 "equal" (typo) * l.135: the error rate $\epsilon$ is used before it is defined (in section 3.4) * Figure 4, constant $C1$ is not defined anywhere, lemniscate? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **“To demonstrate a possible link with Machine Learning, it would be useful that the authors discuss (and perhaps demonstrate) how their approach scales to large sets of PCF, by letting the coefficients and degrees of A and B grow larger.”** We thank the referee for this suggestion. We successfully tested our hypothesis regarding the value of identified clusters for broader forms of formulas. Specifically, we showed how the clustering applies to 3rd and 4th degree polynomial continued fractions (PCFs), showing that they belong to the same clusters created by 2nd degree PCFs, and importantly, using the same dynamical metrics. The 2nd degree PCFs in our initial dataset served as training data for a classifier (based on the automatically identified clusters) that was then tested on higher degree PCFs. These results are now part of a new appendix in the paper (please see the figure in the joint rebuttal PDF). * **“Figure 1 caption is very long, in the interest of clarity, it might be worth describing your methodology in a specific section (3.2?)”** We thank the referee for this suggestion. The content of Figure 1 caption is now split between the caption and the main text. * **“l.96: couldn't we assume that the convergents 𝑝𝑛/𝑞𝑛 are always in simplest terms? this would avoid having to introduce of 𝑞~𝑛.”** We thank the referee for this comment. $q̃_n$ is no longer introduced. * **“In section 3.1, you explain that the irrationality measure is either 0 or larger than 1. You then claim that the blind-𝛿 method provides a good estimator of 𝛿, and figure 2.a seems to support this, yet in figure 2.b most estimates of 𝛿 are below 1, and some are even negative. What happens? Doesn't this compromise the use of blind-𝛿 as an estimator of the irrationality measure?”** This is a delicate point in the concept of irrationality measure. Given a converging series of rational approximations $p_n / q_n$, $\delta$ is defined as the irrationality measure of that _series_ - and it can be any number $\geq$-1 (Eq.4). The irrationality measure of a _number_ is defined as the supremum of all possible $\delta$’s (Eq.3). Any converging PCF produces a $\delta$ measure - which is almost never exactly 0 or 1. But the _true_ irrationality measure of their limit will always be 0 or $\geq$1. The challenge in proving the irrationality of a mathematical constant is mostly finding a series that produces a positive $\delta$, as they are not known in advance and are notoriously hard to construct. We thank the referee for bringing this to our attention. We now stress this point in section 3.1 and reverse the order in which these two concepts are introduced. * **“Section 3.5, can you elaborate on "representation power", what it is? why is it important?”** “Representation power” of a metric can be thought of as maximal conditional information (conditioned on the metrics already included). We don’t measure information - we aim for best clustering and identification - so the added value is measured by the Davies-Bouldin Index. The core idea of step (e.2) in figure 1 is to gradually choose the metrics that give the most value for the resulting clustering of unidentified PCFs - choosing the one with the most representation power each time we need additional granularity / quality of clustering. * **“Table 1: The exponential factor of the growth coefficient seems useless, why is it?“** We believe the reason is that for the majority of PCFs the dominant convergence factor is factorial. Only when the error factorial coefficient ($\eta$) is $\approx0$, then the exponential coefficient has true meaning. The dataset contains only 72,610 PCFs with $|\eta|<0.1$. When measuring the DB Index for this subset we get: Exponential factor of the growth coefficient = 0.479275 Factorial factor of the growth coefficient = 2.87425 The factorial rate is now worse, as expected, but the exponential growth rate metric becomes valuable for clustering, supporting the hypothesis. * **“Also, could the larger value of the Davies Boulding Index the irrationality measure be a sign of the problem with its estimation?”** Yes, a situation where the underlying structure has good clusters but the measurement of some of the metrics is noisy can indeed produce high Davies Bouldin Index values. We believe that is not the case here: 1. The "Blind-$\delta$" algorithm was validated on multiple examples of PCFs with known, analytically proven, $\delta$'s. 2. The $\delta$ DB Index remains consistent between different random samplings of the dataset (as described in section 3.5). If the $\delta$ estimation noise was substantial enough to mask the underlying clustering, it would also create substantial variance in the clustering quality assessment. * **“Figure 4: can you provide a description of the axes in the tSNE graph? (maybe switch to PCA for more explainability)”** In this case there is a tradeoff between visualization and explainability. We opted to go for a better visualization. Due to the nature of tSNE, we cannot provide a simple description of the axes for the existing clustering graph, but following the referee’s comment we are now creating a new visualization based on non linear axis scales - to better combine explainability and graphical fidelity. * **“l.29: shouldn't Lambert's original paper be quoted, instead of a modern compilation?”** We thank the referee for the remark. Lambert’s work is now cited in addition to Berggren’s. * **“l. 100 "equal" (typo)** * **l.135: the error rate 𝜖 is used before it is defined (in section 3.4)** * **Figure 4, constant 𝐶1 is not defined anywhere, lemniscate?”** We thank the referee for the careful review and for spotting these issues. L. 100 is now corrected. $\epsilon$ is now defined in section 3.3. $C1$ in this context is the Continued Fraction Constant. It is now renamed to $C_{\mathrm{cf}}$ and referenced in the caption. --- Rebuttal Comment 1.1: Comment: Thank you very much for your replies, which clarify a number of my questions. I will keep my rating, and believe this paper is a good fit for NeurIPS.
Summary: They generate continued fraction formulas and test if they evaluate to mathematical constants. They introduce a distance metric to compare formulas. They discover novel formulas for known constants. Strengths: Mathematical constants are always used, so it is important to have formulas to calculate them well-written throughout experiments Weaknesses: It generates formula hypotheses, so we do not always know if the formulas are actually correct Rather limited structure of the formulas Technical Quality: 2 Clarity: 3 Questions for Authors: all formulas are continued fraction formula of quadratic polynomials? is that not rather limited? in (1) you give the example tan(x), but in the definition of a/b (p4 118), there is only n and no x. So you cannot actually get a formula for tan(x), can you? is there a list of all 1,543,926 formulas? how many are equal to pi or e? is there pi^pi or e^e among them? or ln(pi)? do you know how many are irrational? or even how many are transcendent? how many are proven to be correct and how many remain hypotheses? have you consulted mathematicians if they are going to use these formulas for anything? >(4) is it necessary to introduce q̃_n and not use q_n without tilde there? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **“It generates formula hypotheses, so we do not always know if the formulas are actually correct”** * **“how many are proven to be correct and how many remain hypotheses?”** We thank the referee for the constructive feedback. *Please see the joint rebuttal.* * **“Rather limited structure of the formulas”** * **“All formulas are continued fraction formulas of quadratic polynomials? is that not rather limited?”** First, please note the surprising generality of polynomial continued fractions (PCFs). Many useful sums, Taylor series of ubiquitous functions, and common families of integrals, are all equivalent to continued fractions via Euler’s continued fraction formula. This way, studying PCFs covers relations to trigonometric, hyperbolic, Bessel and other important functions. This fact is now stressed further in the revised manuscript. For more complex formulas, our dynamical metrics clustering approach can be directly extended, as it does not depend on the specific structure of the formula. Our work directly applies to a wide range of parametric families of functions: including ones whose evaluation is iterative / recursive / an infinite sum / any process producing rational approximants. Any mathematical structure of these types can be measured, clustered, and identified using the proposed method - the underlying generating functions can be thought of as a black box. To exemplify this universal concept, we also specifically looked into higher depth recursion relations, which are a promising research direction because little is known about their global structure, yet they are involved in several important conjectures. For example, the best rational approximation formula known for Euler’s gamma constant is constructed via such a recursion relation [Aptekarov et al., Trans. Moscow Math. Soc. 70, 237 (2009)]. This family of formulas is broader than continued fractions, yet our analysis showed that it is **described by the same metrics** we originally discovered for PCFs. Another type of mathematical formula that we analyzed in the revised manuscript are hypergeometric functions, which are given as infinite sums and can be analyzed by the exact same metrics that we originally developed for PCFs. Hypergeometric functions show the applicability of our approach to an even bigger family of functions for measurement, clustering, and conjecture generation. This can be useful in a wide variety of contexts, e.g., in investigations of integral formulas (e.g., Beukers-type integrals [Beukers, B. Lond. Math. Soc. 11, 268 (1979); Dougherty-Bliss et al., The Ramanujan J. 58, 973 (2022); Brown et al., arXiv:2210.03391 (2022)]). These prospects and additional potential generalizations are now expanded on in Section 5, with the above references included therein. * **“in (1) you give the example tan(x), but in the definition of a/b (p4 118), there is only n and no x. So you cannot actually get a formula for tan(x), can you?”** Yes, we can get formulas for functions of $x$ like $\tan(x)$. For this to be possible, we need $A_n,B_n$ to be functions of $n$ and of $x$. In eq. (1) the polynomials are: $A_n = 2n - 1$ $B_n = -x^2$ and the PCF converges to $x*\tan(x) - 1$. In general, given any rational $x$, there are polynomials $A_n,B_n$ with integer coefficients that produce a PCF that converges to $\tan(x)$. But the parametric family applies to any value $x$. In fig.2a, we show an example of a PCF that we found and converges to $1/\tan(1)$. Generating an $x$-dependent conjecture thus requires an additional step, but it is just as automatable as the rest. For example, once a large-enough dataset is measured and automatically identified, the family of formulas converging to $\tan(1),\tan(2)$ etc. can be extrapolated and we get a formula for $\tan(x)$. * **“Is there a list of all 1,543,926 formulas? how many are equal to pi or e? is there pi^pi or e^e among them? or ln(pi)? do you know how many are irrational? or even how many are transcendent?”** Due to its size the full dataset cannot be explicitly shown inside the article, but the code used to generate and measure it is attached to the original paper submission. Specifically: $\pi =$ 39 previously known + 116 new conjectures $e =$ 44 previously known + 80 new conjectures $e^2 =$ 28 previously known + 178 new conjectures Positive $\delta$ (proving irrationality) = 913,056 These numbers are now stated explicitly in the manuscript. We thank the referee for this constructive comment. There are no $\ln(\pi)$ formulas in the dataset. There were no $\pi^2$ formulas in the original dataset, but we have now discovered and proved 2 new high degree PCFs for $\pi^2$ (see the PDF). Regarding the question of transcendence, there were many conjectures found for known transcendental constants (like $\pi$ and $e$) and known non-transcendental constants (like the Golden Ratio). We do not know whether the unidentified PCFs are transcendental or not. * **“have you consulted mathematicians if they are going to use these formulas for anything?”** Yes, we are in contact with experts in several fields, like Doron Zeilberger (Rutgers), Jeffrey Lagarias (University of Michigan), Uri Bader (Weizmann), Dzmitry Badziahin (University of Sydney) and others. We also have researchers with PhDs in mathematics as part of the team (and co-authors). One common usage is for irrationality proofs, which require a series of rational approximations with $\delta>0$. Out of the 1,543,926 converging formulas, 913,056 have $\delta>0$ and are thus irrationality proving formulas. In the next stage, we will explore how many of these constants were not known to be irrational. Such discoveries can be of substantial importance as new irrationality proofs are scarce and far between. * **“(4) is it necessary to introduce q̃_n and not use q_n without tilde there?”** We thank the referee for the suggestion. $q̃_n$ is no longer introduced.
Summary: The paper addresses a long-standing challenge in number theory by proposing a new methodology for the categorization, characterization, and pattern identification of mathematical formulas, specifically Polynomial Continued Fraction (PCF) formulas. The authors introduce metrics based on the convergence dynamics of these formulas, enabling the first automated clustering of mathematical formulas. The methodology is demonstrated on a dataset of 1.7M PCF formulas, leading to the identification of both known and previously unknown formulas for significant mathematical constants such as π, ln(2), Gauss, and Lemniscate constants. The uncovered patterns allow for the generalization of individual formulas to infinite families, revealing rich mathematical structures. This work sets the stage for a generative model capable of creating continued fractions with specified mathematical properties, potentially accelerating the discovery of useful formulas. Strengths: The work introduces a novel methodology for the automated investigation of mathematical formulas, specifically focusing on Polynomial Continued Fractions. This approach is new in its use of convergence dynamics as metrics for clustering and categorization. The methodology is rigorously tested on a large dataset, resulting in the discovery of both known and previously unknown formulas for important mathematical constants. This demonstrates the robustness and potential of the approach. The paper clearly outlines the problem, the new methodology proposed, and the important findings. The inclusion of detailed explanations and relevant figures helps in understanding the approach and its implications. By automating the discovery of mathematical formulas, this work has the potential to impact the field of number theory and mathematical discovery. The ability to generalize formulas into infinite families could lead to new insights and advancements in mathematics. Weaknesses: The paper has some weaknesses that should be addressed to improve its quality: While the addressed task and the methodology is unique, it may not be the best fit for a machine learning conference like NeurIPS. The focus on mathematical discovery might be better suited for a specialized conference or journal in mathematics or computational mathematics. The paper’s writing can be further improved for clarity and conciseness. The abstract could be more concise, and the figures are currently blurry and not well-organized, detracting from the overall presentation. The study is based on a limited-size dataset and a small set of metrics. Expanding the dataset and incorporating a broader range of metrics could enhance the robustness and applicability of the findings. The newly identified formulas for significant constants need to be further verified. Ensuring their correctness and utility is crucial for the validity of the contributions. Technical Quality: 2 Clarity: 2 Questions for Authors: How do the authors plan to further verify the newly discovered formulas for significant constants such as π and ln(2)? Can the authors provide more details on how their methodology can be scaled to larger datasets and more complex formulas? How can the approach be adapted or extended to other types of mathematical formulas beyond Polynomial Continued Fractions? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The paper could benefit from a more detailed discussion on the impact of the limited dataset size and the potential benefits of using larger and more diverse datasets. The choice of metrics is crucial for the clustering and characterization of formulas. Providing a rationale for the selected metrics and discussing potential additional metrics would strengthen the paper. Further elaboration on the methods for verifying the newly discovered formulas would enhance the credibility of the findings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **“While the addressed task and the methodology is unique, it may not be the best fit for a machine learning conference like NeurIPS. The focus on mathematical discovery might be better suited for a specialized conference or journal in mathematics”** *Please see the joint rebuttal about this important point.* * **“The paper’s writing can be further improved for clarity and conciseness. The abstract could be more concise, and the figures are currently blurry and not well-organized, detracting from the overall presentation.”** We thank the referee for this constructive feedback. The abstract has been shortened and made more concise, and the figures' quality and readability improved. * **“The _choice of metrics_ is crucial for the clustering and characterization of formulas. Providing a _rationale for the selected metrics_ and discussing _potential additional metrics_ would strengthen the paper.”** * **“...and a _small set of metrics_. Expanding the dataset and _incorporating a broader range of metrics_ could enhance the robustness and applicability of the findings.”** Indeed, the choice of metrics is crucial. We have tried several other metrics during this research (like measuring $p_n$ and $q_n$ modulo primes), which gave no apparent value. In practice, our selection is automated (Fig. 1), selecting the metrics that provide the largest immediate improvement in clustering. In the revised manuscript, we now discuss the broader set of possible metrics in Section 3.5, describing the relative value of including additional metrics and the rationale for the metric selection. We thank the referee for this constructive and useful comment. * **“The paper could benefit from a more detailed discussion on the impact of the _limited dataset size_ and the potential benefits of _using larger_ and more diverse _datasets_.”** * **“Can the authors provide more details on how their methodology can be _scaled to larger datasets_…?”** * **“The study is based on a _limited-size dataset_…“** Our methodology can be scaled to larger datasets in multiple ways. For example, extend the range of PCF coefficients from [-5,5] to [-10,10], increasing the size of the dataset ~50X, or going up to 3rd degree $A_n$ and 4th degree $B_n$ with coefficients in [-5,5], increasing the dataset ~1333X. To combat this rapid growth in required compute, we propose a 2-fold approach: * Make the measurement depth dynamically chosen (instead of constant for all) during the evaluation - aiming at a fixed precision for all PCFs instead of a fixed fraction depth. * The heaviest part of the computation - calculating the metrics for each formula - is “embarrassingly parallelizable”. We recently adapted our algorithm to the Berkeley Open Infrastructure for Network Computing (BOINC), enabling parallel experiments on thousands of volunteer computers. Assuming a typical contribution of 1000 BOINC volunteer cores, we expect the above dataset to require about ~1 month of compute. These estimations and advances are now added to Appendix A. * **“Can the authors provide more details on how their methodology can be scaled to … _more complex formulas_?”** * **“How can the approach be adapted or _extended to other types of mathematical formulas_ beyond Polynomial Continued Fractions?”** First, please note the surprising generality of PCFs. Many useful sums, Taylor series and families of integrals, are all equivalent to continued fractions via Euler’s formula (see PDF). This way, studying PCFs covers relations to trigonometric, hyperbolic, Bessel and other important functions. This fact is now stressed further in the revised manuscript. Our dynamical metrics clustering methodology can be scaled to more complex formulas as it does not depend on the specific structure of the underlying function - including ones whose evaluation is iterative / recursive / an infinite sum / any process producing rational approximants. Any such mathematical structure can be measured, clustered, and identified using the proposed method - treating the generating functions as a black box. To exemplify this universal concept, we also specifically looked into higher depth recursion relations, which are a promising research direction because little is known about their global structure, yet they are involved in several important conjectures. For example, the best rational approximation formula known for Euler’s gamma constant is constructed via such a recursion relation [Aptekarov et al., Trans. Moscow Math. Soc. 70, 237 (2009)]. This family of formulas is broader than continued fractions, yet our analysis showed that it is **described by the same metrics** we originally discovered for PCFs. Another type of mathematical formula that we analyzed in the revised manuscript are hypergeometric functions, which are given as infinite sums and can be analyzed by the exact same metrics that we originally developed for PCFs. Hypergeometric functions show the applicability of our approach to an even bigger family of functions for measurement, clustering, and conjecture generation. This can be useful in a wide variety of contexts, e.g., in investigations of integral formulas (e.g., Beukers-type integrals [Beukers, B. Lond. Math. Soc. 11, 268 (1979); Dougherty-Bliss et al., The Ramanujan J. 58, 973 (2022); Brown et al., arXiv:2210.03391 (2022)]). These prospects and additional potential generalizations are now expanded on in Section 5, with the above references included therein. * **“The newly identified formulas for significant constants need to be further verified. Ensuring their correctness and utility is crucial for the validity of the contributions.”** * **“How do the authors plan to further verify the newly discovered formulas for significant constants such as π and ln(2)?”** * **“Further elaboration on the methods for verifying the newly discovered formulas would enhance the credibility of the findings.”** *Please see the joint rebuttal about this important point.*
Rebuttal 1: Rebuttal: We would like to summarize and address the most important comments brought by more than one referee. Regarding the link of our work to ML --- * **“While the addressed task and the methodology is unique, it may not be the best fit for a machine learning conference like NeurIPS.”** * **“The link with AI or machine learning is not completely clear to me.”** Until now, leading methods in ML were not successful in problems of conjecture generation in number theory. This field has been harder to penetrate compared to other areas, such as theorem proving. Our work is the first to successfully apply a basic learning method to conjecture generation in number theory. This is also the first example of successful clustering for conjecture generation in any area of mathematics. Our clustering methods are well-known in ML, but the underlying dynamical metrics we found and used are completely new and they open a path for many other ML applications in this field. For context, our work offers advances in automated conjecture generation (ACG), a subfield of AI for Science. Notably, current generation LLMs do not succeed in generating relevant new conjectures in number theory. The reason is attributed to the lack of metrics, needed to provide a measure of being “closer to correct”. In number theory, such metrics have not existed until now - our work is the first to suggest successful metrics, and to use them. In the time that passed since our initial submission, we improved our manuscript especially in these aspects. We successfully tested our clustering method on broader forms of formulas and found clusters applicable to formulas in ranges of parameters outside those used for training: 3rd and 4th degree polynomial continued fractions (PCFs) are captured by the same clusters trained on (constructed by) 2nd degree PCFs, and importantly, using the same dynamical metrics. These results are now part of the new appendix D (see the revised Fig. 2b in the attached PDF). Our systematic approach for formula generation, with these new validations, provided the first example of automated learning for conjecture generation in number theory, hopefully inspiring other efforts for ML applications in this field. Regarding the correctness of the automatically generated formulas --- * **“The newly identified formulas for significant constants need to be further verified. Ensuring their correctness and utility is crucial for the validity of the contributions”.** * **“How do the authors plan to further verify the newly discovered formulas for significant constants such as π and ln(2)?”** * **“Further elaboration on the methods for verifying the newly discovered formulas would enhance the credibility of the findings.”** * **“It generates formula hypotheses, so we do not always know if the formulas are actually correct”** * **“how many are proven to be correct and how many remain hypotheses?”** Following these remarks, we have taken two important verification steps: 1. The novel formulas are now evaluated to a higher depth: 4 million steps or more, producing between 13 digits to thousand of digits of precision, depending on the convergence rate. These additional digits reduce the chance of an incorrect accidental identification by a measure independent of our clustering (elaborated in the revised Appendix A). Even with this additional verification, none of the previously identified constants was found erroneous, further supporting the robustness of our approach. 2. We worked to prove selected formulas in cases of especially slow convergence (see the attached PDF). 47 of the automatically generated conjectures are now analytically proven. These proofs help validate our approach and show that the results can be relied on in future research efforts. We are working in parallel with mathematicians on general mathematical approaches for proofs that can be applied in scale, and automatically, which is necessary to cope with the large number of newly discovered formulas. In addition, we emphasize that the value of unproven formulas (or generally conjectures) cannot be understated, as it is usually such a conjecture that acts as the first step that eventually leads to a discovery of a new theory. For example, consider Srinivasa Ramanujan’s contributions to mathematics, many of which were initially unproven formulas, yet his impact on the mathematical world is undeniable. In a similar way, our formula generation algorithm provides new leads for mathematical research that can have long-term impact. Pdf: /pdf/d366bc863db7e41141384a51bbf15d77031ff2dd.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling
Accept (poster)
Summary: This paper studies the robustness of threshold algorithms when involving strategic manipulations in the classic prophet inequality problem. Specifically, the paper considers the scenario when each random reward of $N$ variables is associated with a strategic player, who can commit to a signaling scheme before the search, and the searcher can only see the signal rather than the actual reward during the whole search. Each player wants to maximize the probability of being chosen by the searcher and, thus, would optimally choose the signaling scheme against the searcher's thresholding algorithm. The authors first characterize any player's optimal scheme, which has a quite simple form. Based on the result, the authors show that threshold $T_\mathsf{KW}$ is a $(1 - 1/e)/2$-approximation in this setting and is tight. For i.i.d. distributions, $T^*$ gives a $1/2$-approximation, and is tight. For log-concave distributions, a spectrum of thresholds also gives a $1/2$-approximation. Strengths: Personally, I like the idea of involving incentives in the classic online decision-making problems, and this work successfully gives such a trial. This work is well-motivated in the real-world scenarios for the prophet inequality problem, where the searched agents have incentives to manipulate their true rewards by giving signals. Consequently, this paper transfers the decision-making problem into a game. Also, thresholding algorithms are concise and known to be optimal for the original problem, and it is interesting to see that they can also work well with strategic players under the result that the SPE for these players is also concise. I like this paper's motivation and results, and they can be set as a basis for future work. Weaknesses: This paper leaves some major problems unanswered. For example, can the authors provide some preliminary thoughts if we only require a $\beta < 1/2$ approximation in the original world and search for the Pareto frontier of $(\alpha, \beta)$? That being said, I think the authors have already provided enough. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see the weaknesses above. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback! Please find below our response to your comment. $(\alpha, \beta)$ **Pareto frontier** We thank the reviewer for this very interesting comment. Characterizing the Pareto frontier of the approximation ratios requires substantial additional effort. So far we do not have a clear picture on how this Pareto frontier would look like, and we agree with the reviewer this is an interesting open question to explore as the future work. --- Rebuttal Comment 1.1: Comment: I appreciate your response!
Summary: The paper studies a variant of the classical prophet inequalities problem, where "boxes" are now strategic agents who can pool their realized value into bins and only reveal to the decision maker which bin it falls into. The expected value in that bin then plays the role of the realized value. The authors study best-of-both-worlds threshold policies, which (1) achieve the tight 1/2 approximation ratio in the non-strategic setting, and (2) achieves a ratio of alpha in the strategic model. The main results are (conditional) optimal policies in the general setting, the IID setting, and the log-concave priors setting. Strengths: The model appears novel and sensible, and the results are relatively complete. It is nice to see best-of-both-worlds guarantees are possible in this model, which wasn't clear to me before the fact. Weaknesses: Parts of the model can be better justified (see detailed comments). I'd also appreciate the results better if there were unconditional / tighter impossibility results. Technical Quality: 4 Clarity: 3 Questions for Authors: (also including detailed comments) Line 70, threshold policies: is this without loss of generality? Timeline paragraph: I feel this is a bit ambiguous. In particular, since the searcher has first-order commitment power, essentially the model assumes the searcher only commits to policies that put a threshold on the posterior expected value (otherwise the searcher would do something like "if you don't reveal the full information, you are out" and we are back in the classical model). I feel this part of the model could be better motivated. Characterization of players' response: this is reminiscent of the response of value maximizing players in [19], which has a very similar structure. I wonder if a formal connection can be made between the two models. Remark 4.1: there is an upper bound (i.e., impossibility result) strictly smaller than 0.5 in [19], which might imply the impossibility of 1/2-robustness in your model, given the intuitive connection between the two models. This would make your results a bit more complete. Around line 368, dynamic threshold: I think you can get 1/2-approximation in the strategic setting? Take any 1/2-approximation policy in the non-strategic setting. Construct a new policy such that each player's response cutoff under the new policy is the same as the threshold of the old policy. Then you can couple the two policies in the two worlds and argue they have exactly the same performance? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: No concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback! Please find below our response to each of your comments. **The generality of threshold policies** The reason that we restrict our attention to the threshold policies is that they are the policies that achieve the classic prophet inequality in the non-strategic setting. Given that our main focus in this work is to understand the robustness of the classic prophet inequality, it is indeed without loss to focus on the threshold policies. We will make this point more clear in the revision. **Threatening policy** This is a great point and we also thought about it previously. However, we felt that the threatening policy is not natural because it is not a subgame perfect Nash, which is the standard solution concept in such sequential interactions. Our model can be viewed as an approximate version of subgame perfect Nash, where the first player (i.e. the searcher) plays an approximately optimal strategy from a simple policy space (the exact optimal stopping policy is likely very complex). The motivation for adopting such approximately optimal yet simple policy naturally echoes the entire literature on prophet inequality. **Connections to [19]** We thank the reviewer for pointing out the connection to the [19]. It is indeed interesting to see that the player’s optimal information revealing strategy has the similar structure to the response of the value maximizing buyers studied in [19], especially given that the player in our setting and the buyer in [19] have two very different optimization problems. We will add a discussion on this connection in the revision. After going through the hard example provided in [19], we can conclude that the same example can also be used in our setting to show the best approximation ratio that one can achieve for a static threshold policy is strictly smaller than 0.5 under a strategic setting, implying that a $1/2$-robustness is impossible for static threshold policy. We appreciate the reviewer for bringing out this impossibility result in [19] to our attention, and we will for sure add this result! **½-approximation policy in strategic settings** The reviewer is correct that there always exists an ½ approximation policy for the strategic settings. Indeed, the reviewer’s proposed method for constructing such a policy is essentially the dynamic threshold stopping policy (using different thresholds for different players) that we mentioned in Line 368 about our preliminary result. --- Rebuttal Comment 1.1: Comment: We sincerely thank the reviewer for the valuable comments/suggestions and hope that our response has effectively addressed your main questions/comments. If there are any lingering questions, we would be more than happy to address them.
Summary: The paper studies a variant of the prophet inequality modeled as a game between the reward holders (Players $1, \ldots, n$) and the searcher. In this model, each reward $X_i$ is sampled from a distribution $H_i$ that is known to the searcher. However, each player $i$ can choose not to reveal $X_i$, and instead only reveal a signal $\Phi(X_i)$ that provides partial information about $X_i$. The objective of each player $i$ is to maximize the probability of being selected by the searcher. On the other hand, the searcher's goal is to maximize the expected value that it selects, while maintaining a competitive ratio of $1/2$ in the case where all players fully reveal their prices, i.e. in the standard prophet inequality. The authors first characterize the optimal information-revealing strategy for the players, then present an algorithm that achieves a competitive ratio of $(1-1/e)/2$ in this strategic setting and $1/2$ in the standard setting. They also prove that this competitive ratio is the best possible. Finally, they give improved competitive ratios in the strategic setting with IID and with Log-concave Heterogeneous Distributions, also maintaining a competitive ratio of $1/2$ in the standard prophet inequality. Strengths: * The setting is motivated * The paper is well-written and presents a good balance between technical proofs and high level intuitions * The paper presents some interesting results. Weaknesses: * The optimal revealing strategy is only characterized when the searcher follows a threshold policy * The proposed algorithm is only optimal among threshold policies with thresholds in the spectrum given in Definition 2.2, and not among all possible algorithms. * The paper requires the algorithms to maintain a competitive ratio of $1/2$ if all the players reveal their reward values. While this is a perfectly reasonable assumption in the general setting, it makes much less sense in the case of IID rewards. The optimal competitive ratio in the prophet inequality with IID random variables is $0.745...$. Even if we restrict ourselves to threshold policies, the optimal competitive ratio is $1-1/e$. The reasonable constraint in the IID case would then be that the algorithm maintains a competitive ratio of $1-1/e$ if all the players reveal their rewards. * The problem assumes that the searcher does not observe the values $(X_i)_i$ but only signals $(\Phi(X_i))_i$. The threshold policy should then be defined as selecting the first reward $X_i$ such that $(\Phi(X_i) \geq T)$ and not $(X_i \geq T)$. Minor weaknesses: * Line 138: be more precise in the definition of the competitive ratio: it is the largest constant $\alpha$ satisfying $E[X^{(q)}] \geq \alpha \text{OPT}$ * Maybe Definition 2.2 should be a Lemma or Proposition instead * Remark 3.1 should mention that Proposition A.1 also addresses the case where $X_i < T$ almost surely, not only the case of distributions with point masses, otherwise $t_i$ does not look well-defined. Technical Quality: 3 Clarity: 3 Questions for Authors: * See weaknesses * The game timeline (line 170) is confusing to me: the searcher selects a threshold without knowing the types of signals they will observe, and then the players select revealing schemes. How does the searcher choose its stopping strategy without even knowing the type of information they will observe, which can be for example binary ($\Phi(X_i) = (X_i > a)$), negative $\Phi(X_i) = -X_i$, constant $\Phi(X_i) = 1)$, or others ? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The assumptions for all the are clearly stated for all the claims made in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback and valuable comments! We notice that there may exist some conceptual misunderstandings in the reviewer's comments about the searcher’s stopping policy and the reason for us to study threshold stopping policies. So we would like to first clarify these potential misunderstandings and then provide our response to the other comments. **Clarification of the searcher’s stopping policy** We kindly recall that $\phi_i(\cdot \mid X_i)$ denotes the probability distribution over the signal space when the player $i$’s realized reward is $X_i$. Here, the signal space could be an arbitrary measurable space. Thus, directly comparing the realized signal with the stopping threshold is not feasible.. Indeed, the stopping threshold $T$ is used for the searcher to decide whether she should accept the realized reward or move to the next player. Yet in our setting, even though the searcher is not able to directly observe the realized reward, she can observe the realized signal. With the observed signal, the searcher is able to derive a Bayesian posterior belief over the underlying realized reward $X_i$. Then the stopping threshold $T$ is used for the searcher to decide whether she should accept the current player by looking at whether the mean of this posterior belief is larger or smaller than the stopping threshold $T$. For example, if the reward distribution is uniform over $[0, 1]$, and the signaling scheme is $\phi(X_i) = \Bbb{1}_{X_i > a}$ (i.e., the searcher can observe whether the realized reward is smaller or larger than $a$), and if the searcher observes a signal that tells \indicator{X_i > a} = 1, then the searcher is able to update her Bayesian posterior and know that the expected realized reward should equal to (1+a)/2. Consequently, the searcher’s decision should be based on comparing $(1+a)/2$ and the stopping threshold $T$. **Benchmarking within the threshold stopping policies** We would like to clarify that the primary goal of this work, as also apparent from the paper's title, is to understand whether the well-studied prophet inequality (which is achieved by the static threshold stopping policies with the thresholds defined in Definition 2.2) is robust to the players’ strategical reward signaling. Our motivation is to study how these threshold stopping policies would perform if the players are strategically disclosing reward information (also known as the “best-of-both-worlds” style of results). Characterizing the approximation ratio for general stopping policies is also an interesting question (with a different motivation), but is beyond the scope of this paper. **Optimal revealing strategy when the searcher follows a threshold policy** We would like to clarify that our characterized optimal information revealing strategy actually holds as long as the researcher uses a stopping policy that only depends on the reward distributions $ (H_i)_{i\in[N]}$. So it is beyond just "following a threshold policy" as the reviewer suggested. On the other hand, as we mentioned in Remark 3.2, when the searcher’s stopping policy is more generally allowed to even depend on the realized signal or the information strategies, the subgame Perfect Nash equilibrium among players’ game would adopt significantly more complex structures and it requires complex backward induction analysis. While analyzing this challenging situation could also be a technically interesting question, it deviates from our current paper’s motivation of analyzing classic prophet inequality’s robustness to strategic rewards, and could be suitable for a future work focusing on developing new prophet inequalities under strategically revealed reward signals. **1-1/e competitive ratio for iid case** The 0.745 competitive ratio for the iid case in the non-strategic setting is achieved by an **adaptive** threshold policy where the threshold for each player can depend on the previous reward realizations. This policy, however, is beyond the scope of this work as we focus on understanding how the classic static threshold policy performs (in both non-strategic and strategic settings). This is not to suggest that a dynamic threshold is uninteresting, but we aimed to maintain coherence in our results and avoid overwhelming readers with too many different settings. This is especially important as this study is the first of its kind, and the static threshold setting, while basic, is already quite intriguing. The 1-1/e competitive ratio, to our best knowledge, can be indeed achieved by a static threshold policy but requires some careful probabilistic tie-breaking rule when the searcher faces a tie between the realized reward and the stopping threshold (e.g., https://arxiv.org/pdf/2108.12893). While in our paper, we focus on a deterministic tie-breaking rule where the searcher always accepts the player if the expected realized reward is no smaller than the stopping threshold. The rationale behind this focus is that: (1) this class of static threshold policy with this simple tie-breaking rule are indeed effective policies for the classic prophet inequality, and we are studying their robustness in this paper; (2) they are more straightforward to implement in real-life cases. Lastly, we also kindly remark that in Corollary 5.3, we show there exists a static threshold policy that is 1-1/e approximation under the strategic setting (and this ratio is tight), however such policy has a bad performance in the non-strategic setting. Notice that our Proposition 5.2 rules out the possibility for all possible static threshold stopping policies (being within the spectrum in Definition 2.2 or beyond) to achieve $\alpha$-robustness with $\alpha>1/2$. --- Rebuttal 2: Comment: I thank the authors for their response. * **Clarification of the searcher’s stopping policy.** I thank them for the clarification. * **Benchmarking within the threshold stopping policies** I agree that studying algorithms with adaptive thresholds is highly challenging. However, in many variants of prophet inequalities, when the observation order is fixed, as in the current paper, static threshold policies are enough to achieve an optimal competitive ratio. I am curious to know if this is the case also in the current problem or if adaptive thresholds can yield better competitive ratios. My question is not about studying general threshold policies, but instead on proving the optimality, or not, of the proposed static threshold policy among all algorithms, or at least among all algorithms with static thresholds (beyond the limited spectrum of Definition 2.2). * **1-1/e competitive ratio for iid case** The competitive ratio of $1-1/e$ in the iid case is achieved by a simple static threshold algorithm, which does not seem to require a random tie-breaking rule, see the proof of Theorem 3.1 in https://arxiv.org/pdf/2205.05519. I still believe that in the iid case, it makes much more sense to require a competitive ratio of 1-1/e in the non-strategic setting instead of 1/2. The authors' response addressed some of my concerns, though not all. However, in light of the other reviews and all the authors' responses, I have slightly raised my score to lean towards acceptance. --- Rebuttal Comment 2.1: Title: Response to Reviewer's Comments Comment: We sincerely thank the reviewer for engaging in our discussions, and thanks for the insightful comments. We believe the following responses should help to address most of the reviewer’s concerns raised in Item 2 and 3 of the follow-up. **Benchmarking within the threshold stopping policies**: We agree with the reviewer that understanding optimality of our threshold policy compared to best **dynamic policy** is a very interesting future direction. Our current paper did not study this question. However, when compared with optimal **static threshold** (not necessarily within the $1/2$-approximation threshold spectrum as in Definition 2.2), our $1/2$-robustness in Theorem 5.1 for IID case is indeed optimal. This is shown by our Proposition 5.2. (please also refer to its refined proof in our additional response to the Reviewer **mXL5**). It constructs examples to show that, even in IID case, **any** static threshold (not necessarily within Def 2.2’s spectrum) achieving at least $1/2$-competitive ratio for non-strategic setting cannot be $(1/2+\epsilon)$-robust for any $\epsilon > 0$. **1-1/e competitive ratio for i.i.d. case**: 1. The referred paper requires distributions to have NO point mass (see page 5, the third line after the **Query** paragraph, for the assumption statement; essentially they need the threshold T such that $\text{CDF}(T) = 1-1/n$ to exist). In our setting, we strived to make least assumptions on the reward distributions to understand its robustness against the strategic reward signaling, thus we do not make any assumptions on the reward distributions and allow them to have point masses. 2. Take a step back, even for the IID continuous distributions, it is not difficult to identify examples to show that the above threshold T with $\text{CDF}(T) = 1-1/n$ — which to the best of our knowledge is the only known threshold to achieve the $(1-1/e)$ competitive ratio for non-strategic setting — can become arbitrarily bad in strategic setting (i.e., cannot guarantee $\epsilon$-robustness for any $\epsilon > 0$). Reviewer **mXL5** happened to also ask this question, hence please refer to our last response to Reviewer **mXL5** for a construction of such an example. 3. It is an interesting question to study whether there are other thresholds, other than the $T = F^{-1}(1-1/N)$ (here F denotes the CDF) as in the referred paper, that guarantees $(1-1/e)$-CR in both non-strategic and strategic settings. However, this question is beyond the scope of this paper since it is not even known whether there is another threshold other than $T = F^{-1}(1-1/N)$ that can have $(1-1/e)$-CR even just for the non-strategic setting. --- Reply to Comment 2.1.1: Comment: We sincerely thank the reviewer again for engaging in our discussions and hope that our response has effectively addressed your further questions. If there are any lingering questions, we would be more than happy to address them. --- Rebuttal 3: Comment: I thank the authors for all the clarifications. I strongly encourage them to include the additional discussion of the IID case, and state more explicitly the limitations of their work indicated during the rebuttal in the revised version of the paper. The authors have addressed most of my concerns. I raised my score to reflect my satisfaction with their responses. --- Rebuttal Comment 3.1: Comment: We greatly thank the reviewer for the feedback! We will make sure to make our results for the IID case more complete, and state the limitations of our work more explicitly in the revision.
Summary: This paper considers a bayesian persuasion variant of the classical prophet inequality problem: an online decision-maker will face a sequence of independent positive random variables $(X_1,\dots,X_n)$, $X_i \sim F_i$ known, and must decide when to stop in order to maximize the expectation of the selected item. Contrarily to the original problem, each individual $i$ is strategic and chooses a signaling schemes aiming to maximize its chance of being selected (such as $\mathbb{1}_{X_i \geq t}$): the decision maker will only observe the signal from individual $i$, and not necessarily the $X_i$ directly. The process is in $3$ steps, the decision maker selects a stopping threshold $T$, the individuals select their signaling schemes, and finally the sequence of signals is observed. They first show that the usual rule of $\mathbb{E}[\max_i X_i]/2$ which without strategic agents achieves a $1/2$ competitive ratio, achieves a $(1-1/e)/2$ competitive ratio with strategic agents. In addition this is tight over a range of thresholds. Additional settings when the distributions are i.i.d. or are log-concave are considered. Strengths: 1) This is a novel model which incorporates strategic behavior into prophet inequalities, adding to the growing literature of algorithms in strategic environments. 2) The paper is clear and well written. 3) The new results are significant, and non-trivial. Upper and lower bounds on the competitive ratio in the strategic setting are provided. This work also considers multiple sub-settings, giving more insight and leaving specific interesting open questions for this new problem. Weaknesses: 1) It remains unknown whether $(1-1/e)/2$ is the best competitive ratio achievable in the strategic setting. 2) Some of the additional assumptions made when considering log-concave distributions are strong and too specific. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions and remarks : 1) In the i.i.d. setting, the negative results do not seem to preclude the existence of a threshold with a $1-1/e$ CR in the non-strategic setting and a $1/2$ CR in the strategic setting. Is it possible to achieve? For instance, what is the performance of the quantile rule $T=F^{-1}(1-1/n)$ which achieves $1-1/e$ in the non-strategic iid setting? 2) Related to Corollary 5.3, do the authors know if better results are achievable in the non-iid strategic setting if we give up the performance in the non-strategic setting ? 3) Do the authors have any insights about the prophet secretary case? 4) I think it would be useful to give more details with respect to [29] as it seems to directly pertain to the topic at hand, and this could provide some insights in terms of what is gained in a centralized vs decentralized setting. 5) Regarding the payoffs obtained by the individuals when their item is selected, have other types of payoff structures been considered? For instance, player $i$ receives reward $X_i$ if item $i$ is selected, and $0$ otherwise. How does it affect the strategizing? 6) A brief comment on the existence of $T^{\dagger}$ and $T^*$ that satisfy their relevant equations should be added. 7) Are there examples of log-concave distributions which does not respect the additional slow decrease condition and which achieves a CR smaller than $1/2$ with the suggested range of thresholds? More broadly, are the additional assumptions (other than log-concave) truly necessary? line 145: the \nicefrac{} command for $\mathbb{E}[\max_i X_i]/2$ makes it too small. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some limitations regarding strong assumptions and tightness of results have been accordingly addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback and valuable comments/suggestions! Please find our response to each of your questions below. **Q1: Results for i.i.d case** We would like to first clarify that, to the best of our knowledge, the existence of a static threshold policy that achieves $1-1/e$ CR in non-strategic IID setting requires that the searcher carefully breaks the ties when she faces a tie between the realized reward and the stopping threshold (i.e. it involves some probabilistic tie-breaking, especially when facing distributions with masses). While in our setting, we consider a static threshold policy where the searcher always accepts the player when the expected realized reward is no smaller than the stopping threshold (see Definition 2.1). Thus, our negative result in Proposition 5.2 indeed excludes the case mentioned by the reviewer. However, we note that the reviewer might be asking that if there exists a static threshold that achieves an ½ CR in non-strategic setting and an 1-1/e CR in strategic setting. The answer to this question is also no: consider the example of $5$ iid distributions with binary support $\{4.2,s\}$ where $s$ is sufficiently large and the mean is $5$. One can compute in this case that $\lim_{s \to \infty} \text{OPT} = \lim_{s \to \infty} (\frac{s-5}{s-4.2})^5 \cdot 4.2 + (1 - (\frac{s-5}{s-4.2})^5) \cdot s = 8.2$. We first assume such a threshold $T$ exists. For the non-strategic setting, if $T > 4.2$, then the searcher only accepts realized reward $s$, so the searcher’s payoff is $\lim_{s \to \infty} (1 - (\frac{s-5}{s-4.2})^5) \cdot s = 4$. Clearly $4/8.2 < ½$ so we must have $T \le 4.2$. Then in the strategic setting, if $T \le 4.2$, then the searcher would always accept the first player, so her payoff is $5$ with clearly $\frac{5}{8.2} < 1-1/e$, showing that such a $T$ ceases to exist. This example also shows that $T=F^{-1}(1-1/n)$ does not achieve $½$ CR in a non-strategic setting. In fact, if we consider the same example of $5$ iid distributions on binary support $\{t,s\}$ but letting $t \to 5$ instead of $t = 4.2$, we can see that since $T > t$, the searcher only accepts reward $s$ in the non-strategic setting. If $t$ becomes close enough to $5$ and $s$ is large enough we see that the non-strategic CR for such $T$ can actually approach zero. However, we note that extending our results to more general stopping policies would be definitely an interesting future direction. We will add corresponding discussions in the revision. **Q2: Better results in strategic setting while ignoring performance in non-strategic setting** When ignoring the performance in non-strategic setting, in the paper we show that the best approximation ratio that a static threshold stopping policy can achieve is $(1-1/e)/2$. So far it is unclear to us if this ratio could be improved for the static threshold stopping policy. Indeed, Reviewer 1242 pointed out a hard example provided in [19], under the same example we can conclude that the best approximation ratio of a static threshold policy one can achieve is no more than 0.4823 when players are strategic. However, whether $(1-1/e)/2$ is the optimal or if it can be improved for static threshold stopping policies still remains an open question, and we consider it as an intriguing future work. **Q3: Prophet secretary** Thanks for bringing out this interesting extension. We think the results we establish in Section 3 (i.e. characterizing the player’s optimal information revealing strategy when the searcher adopts a threshold stopping policy) still hold. Characterizing strategic-robust stopping policy may require significant effort for this problem and we leave it as another interesting future direction. **Q4: Comparisons to [29]** We thank the reviewer for this suggestion. We agree with the reviewer and will add more discussions about the comparison to the work [29]. **Q5: Different payoff structure** Our results can be easily extended to the setting where each player $i$ has a payoff $v_i \ge 0$ if his reward is accepted and 0 otherwise (please see our footnote 6 for this discussion). Using the terminology in information design, here the player’s payoff is essentially state-independent (i.e., it only depends on the searcher’s action and not on the realized reward $X_i$). It would be interesting to explore how the results would change if the player’s payoff is state-dependent. **Q6: Existence of** $T^\dagger$ **and** $T^*$ We thank the reviewer for this suggestion. We will add a comment about the existence of $T^\dagger$ and $T^*$. The proofs in the main text are for the continuous distributions, thus $T^\dagger$ and $T^*$ always exists. The definitions of $T^\dagger$ and $T^*$ are slightly different for the distributions with atoms, and they also always exist. **Q7: Additional assumptions for log-concave distributions** We acknowledge that, after extensive simulations, we did not find an instance with log-concave distributions that has strictly smaller than ½-robustness when this instance does not satisfy the given conditions. These assumptions are meant to serve as smoothing parameters to imitate nice behavior of endpoints on [0,1]. We believe relaxing these assumptions may require significant effort and we leave it as an intriguing future work. --- Rebuttal Comment 1.1: Comment: We thank the authors for their responses and clarifications. I still have a few questions left, Q1. I agree that when the distributions are discrete a tie-break is required to achieve $1-1/e$. However much of the paper’s main section presentation assumes continuous distributions, in which case the $1-1/e$ competitive ratio is achieved. I would expect that under continuous distributions the natural definition for i.i.d. $\alpha$-robustness would be to guarantee the non-strategic competitive ratio of $1-1/e$. Even without the continuous assumption, does allowing for random tie-breaks make any of the proofs catastrophically fail? If this is the case this should be pointed out, as allowing for tie-break is quite standard in the literature, in which case looking at thresholds without tie breaks might not be the right policy set to consider. I am also now confused about the current proof of Proposition 5.2 in the paper. Is it correct that what the counter example proves, is that any fixed threshold (without tie breaking) in the non strategic setting will bet at most $½$? Maybe I misunderstood something, but this does not prove that for any $\alpha$-robust policy, $\alpha \leq ½ $, as we could still have $½$ in the non strategic setting and more than $½$ in the strategic one. However your new counter example does show that if $½$ is achieved in the non strategic setting, then the competitive ratio is at most $5 / 8.2$ in the strategic setting, but this is still not $½$. Please feel free to correct me. Regarding $Q7$, I think this limitation should therefore be included next to the result for transparency. --- Reply to Comment 1.1.1: Title: Response to Reviewer's Comments Comment: We appreciate the reviewer’s engagement and very interesting questions, which also help us to further deepen our results. Our response below is a bit long, but it should resolve the reviewer’s three major questions we believe. In a nutshell, 1. We identify an example to show that, even for continuous distributions, the known threshold to achieve the (1-1/e) competitive ratio (CR) in IID case for non-strategic setting can become arbitrarily bad in strategic setting (i.e., cannot guarantee $\epsilon$-robustness for any $\epsilon > 0$). While this is a very interesting question, it is unclear whether one could obtain interesting $\alpha$-robustness results for IID setting, subject to $(1-1/e)$-CR in non-strategic situations. Answering this question will also require significant advances over the state-of-the-art results for standard (non-strategic) prophet inequality for IID cases, as the currently known threshold would not work. 2. We thank the reviewer for pointing out the incompleteness of current proof of Proposition 5.2 to our attention. We acknowledge that the current proof of Proposition 5.2 indeed does not fully prove the statement, though it has the right intuition and ideas (and the statement of Prop 5.2 is also correct). A slight modification of the constructed example by changing $N-1$ in the construction to $N-\alpha_1$ and then letting $\alpha_1 \to 1$ suffices to prove the proposition. We also include a complete argument below in case the reviewer would like to take a look. 3. Lastly, we'd also like to note that tiebreaks do not affect our proofs. In particular, allowing tiebreaks does not help to get a better CR in a strategic setting. This is followed by a simple observation of the structure of the players’ optimal information strategies (see Proposition 3.1), where each player’s optimal information strategy is a two-point-mass distribution with larger mass being the same $T$. Thus, the searcher would never be better off by randomly rejecting any player if he is realizing a mass $T$ since that is the maximum possible value sent by all the players. Needless to say, we will incorporate all the above discussions in the next draft and also follow the reviewer’s suggestion and add our discussions about $Q_7$. **The threshold with $1-1/e$ competitive ratio in IID case is NOT $\epsilon$-robust for any constant $\epsilon>0$**: Under **continuous** iid distributions, one can always achieve $1-1/e$ competitive ratio (CR) in the non-strategic setting using the threshold $T$ such that $T = F^{-1}(1-1/N)$, where $F$ is the CDF. An interesting question is whether this $T$ is $(1-1/e)$-robust in the sense that it also achieves $1-1/e$ CR in the strategic setting. Unfortunately, the answer to this question is no --- in fact, this threshold $T$ cannot guarantee $\alpha$-robustness for any positive $\alpha$. We provide an intuitive counterexample below, but happy to convert it to a rigorous construction (which should require no new ideas but just tedious math constructions) if the reviewer is interested to see. Notably, it is an interesting question to study whether there are other thresholds, other than the threshold $T = F^{-1}(1-1/N)$, that guarantees $(1-1/e)$-CR in both non-strategic and strategic settings. However, this question is far beyond the scope of this paper since, to our knowledge, it is even not known whether there is another threshold other than $T = F^{-1}(1-1/N)$ that can have $(1-1/e)$-CR even just for the non-strategic setting. The counterexample considers $N$ players with continuous iid distributions that almost have a binary support in $\{0,1\}$, with probability $ \frac{1}{2N}$ to be $1$ hence mean $1/(2N)$ (rigorous construction only needs to ``smooth'' this discrete distribution to be a continuous one that mostly concentrates its mass on $0, 1$). These continuous distributions can be made to satisfy $F^{-1}(1-1/N) < \frac{1}{2N}$ (i.e., the threshold is very close to $0$ but just a little bit larger). Note under these distributions, the first player can use the no-information strategy and reveal $\delta(\frac{1}{2N})$ and will always be picked by the searcher. Hence the searcher’s payoff is $u^s(T) = \frac{1}{2N}$. Since we can approximate continuous distributions close to the binary support one, for any $\epsilon > 0$ we can always find a continuous distribution such that its $\text{OPT} = (1 - (1-\frac{1}{2N})^N) - \epsilon$, as $(1 - (1-\frac{1}{2N})^N)$ is the $\text{OPT}$ of the binary support one. Then the searcher’s CR is $\frac{u^S(T)}{\text{OPT}} = \frac{1}{N \cdot (1 - (1-\frac{1}{2N})^N - \epsilon)}$, which goes to $0$ as $N \to \infty$. Hence the searcher’s CR in the strategic setting can get very close to $0$, showing that $T = F^{-1}(1-1/N)$ is not robust to strategic reward signaling.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Implicit Regularization of Decentralized Gradient Descent for Sparse Regression
Accept (poster)
Summary: This paper proves the convergence of the decentralized gradient descent (DGD) algorithm under RIP condition when the initialization scale is small. The authors also propose a truncated version of the algorithm with a cheaper cost but comparable performance (in certain situations) to the original DGD. Strengths: The theory seems sound and nontrivial. The authors prove the convergence for DGD, while previously it seems that only centralized GD results exist. Weaknesses: This paper is a little bit technical. Technical Quality: 4 Clarity: 4 Questions for Authors: There are already many technical explanations and comparisons in the paper. However, it would be better if more intuitive understanding of the convergence of DGD, and the comparison with centralized GD can be added. Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. We have addressed your comments and questions as detailed below.\ $\textbf{High-level explanation of promoting sparsity for centralized GD.}$ For gradient descent (GD), the diagonal linear reparameterization turns the additive updates into multiplicative updates as $w_+^{t+1} = w_+^t \odot \left( \boldsymbol 1_d - 4\eta \left( \boldsymbol s^t - \boldsymbol w^\star + \boldsymbol p^t + \boldsymbol b^t\right) \right)^2$ and $w_-^{t+1} = w_-^t \odot \left( \boldsymbol 1_d + 4\eta \left( \boldsymbol s^t - \boldsymbol w^\star + \boldsymbol p^t + \boldsymbol b^t\right) \right)^2$ based on signal and error decomposition, where $\boldsymbol s^t$ denotes the signal part and $\boldsymbol p^t$ and $\boldsymbol b^t$ are bounded error parts. For the element $i \in \mathcal S$, the small initialization can make $|1 \pm 4\eta( s^t_i - w^\star_i + p^t_i + b^t_i )|>1$ roughly, which can amplify the $s^{t}_i$ to $s^{t+1}_i$ and let $s^t_i$ tend to converge to $w^{\star}_i$. On the contrary, if $i \in \mathcal S^c$, we can select the appropriate step size to bound the cumulative product $\prod^t_j { |1 \pm 4\eta(s^j_i + p^j_i + b^j_i)| }$ at the $t$-th iteration before early stopping to keep $s^t_i$ is still small enough. The difference dynamics on elements in support set $\mathcal S$ and non-support set $\mathcal S^c$ result in sparsity. Sparsity is induced with reparameterization together with small initialization size (one without the other doesn’t work).\ $\textbf{High-level explanation of the success of DGD and comparison.}$ The consensus errors induced from decentralized network complicate the multiplicative updates, which becomes inexact multiplicative updates as $\overline{\boldsymbol u}^{t+1} = \overline{\boldsymbol u}^t \odot \left(1 - 4\eta \left( \overline{\boldsymbol u}^t \odot \overline{\boldsymbol u}^t - \boldsymbol w^\star + \hat{\boldsymbol p}^t + \hat{\boldsymbol b}^t \right)\right) + \boldsymbol e^t$. Compared with the exact multiplicative updates, the challenge is that the extra error term $\boldsymbol e^t$ outside of the multiplication prevent applying the centralized analysis trivially. In addition, the perturbation error terms $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t$ within the multiplication are much more complicated than the ${\boldsymbol p}^t, {\boldsymbol b}^t$ in the centralized setting due to additionally multiple consensus errors and loss of global RIP condition for each agent. This requires carefully bounding the consensus error terms, which can control the complicated perturbation errors $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t, \boldsymbol e^t$ not to be large. Specially, we can utilize the signal and error decomposition and diagonal linear reparameterization to reparameterize $\boldsymbol e^t$ into the formula as $\boldsymbol e^t= \overline{\boldsymbol{u}}^t \odot 4\eta \boldsymbol f^t$, where the $\boldsymbol f^t$ is the bounded perturbation error that can be merged into $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t$. Now we can rewrite the inexact multiplicative updates $\overline{\boldsymbol u}^{t+1} = \overline{\boldsymbol u}^t \odot \left(1 - 4\eta \left( \overline{\boldsymbol u}^t \odot \overline{\boldsymbol u}^t - \boldsymbol w^\star + \hat{\boldsymbol p}^t + \hat{\boldsymbol b}^t + \boldsymbol f^t \right)\right)$. Trivially applying existing consensus error analysis would give crude bounds for $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t, \boldsymbol f^t$, which hinder achieving statistical optimal recovery. Thus, we propose a fine-grained analysis for consensus errors to bound $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t, \boldsymbol f^t$ carefully. In addition, the existence of $\boldsymbol e^t$ makes extending the proof of the simplified non-negative $\boldsymbol w^\star$ case to the general $\boldsymbol w^\star$ setting in the decentralized framework non-trivial compared to the centralized setting, which only needs to conduct induction hypothesis for non-negative $\boldsymbol w^\star$ case. Therefore, we conduct a comprehensive induction process to both $\boldsymbol u$ and $\boldsymbol v$ simultaneously for general $\boldsymbol w^\star$. Under our integrated induction hypothesis, we can use network connectivity to control the fine-grained consensus errors to bound these three perturbation errors $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t, \boldsymbol f^t$ small enough to make the distance between two trajectories obtained by inexact and exact multiplicative updates within statistical accuracy, which can promote sparsity in the decentralized setting. The detailed theoretical mechanism of promoting sparsity has been demystified in Proposition 3. Please refer to the Appendix on pages 28-29.\ $\textbf{Empirical Results Comparison between GD and DGD.}$ The comparison results are shown in Figure 4 of $\textbf{supply.pdf}$, which indicates that the trajectory of DGD mimics that of GD. DGD can achieve optimal statistical estimation as GD. The delayed convergence of DGD is due to the decentralized network. These experimental results corroborate our high-level explanation of the success of DGD. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation. It helps but still seems a little technical to me. It would be better to present higher-level ideas instead of technical illustrations. Please consider adding them to the paper. I will maintain my score. --- Reply to Comment 1.1.1: Title: Authors' feedback Comment: Thank you for the feedback. We will add higher-level explanations to make the ideas clearer and more accessible.
Summary: This manuscript studies a decentralized optimization method for training linear sparse models using a network of agents that collect linear measurements. Unlike decentralized methods relying on L1 regularization, this approach leverages implicit regularization inherited in the gradient descent process. The authors propose a communication-efficient variant referred to as Truncated Decentralized Gradient Descent T-DGD. The authors analyze their decentralized version of gradient descent applied to a non-convex least squares formulation. The manuscript concludes with numerical results that validate the effectiveness of both DGD and T-DGD for sparse learning tasks. Strengths: The manuscript is well-written and provides an interesting perspective on decentralized optimization of a linear least squares system. Weaknesses: It remains unclear to me how the approach promotes sparsity. This has neither been discussed nor numerically explored. Here, I would expect a comparison to l1-type regularization approaches promoting sparsity in w* but such comparison and sparsity levels are not provided. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for highlighting the areas that need further clarification.\ $\textbf{High-level explanation of promoting sparsity for centralized GD.}$ For gradient descent (GD), the diagonal linear reparameterization turns the additive updates into multiplicative updates as $w_+^{t+1} = w_+^t \odot \left( \boldsymbol 1_d - 4\eta \left( \boldsymbol s^t - \boldsymbol w^\star + \boldsymbol p^t + \boldsymbol b^t\right) \right)^2$ and $w_-^{t+1} = w_-^t \odot \left( \boldsymbol 1_d + 4\eta \left( \boldsymbol s^t - \boldsymbol w^\star + \boldsymbol p^t + \boldsymbol b^t\right) \right)^2$ based on signal and error decomposition, where $\boldsymbol s^t$ denotes the signal part and $\boldsymbol p^t$ and $\boldsymbol b^t$ are bounded error parts. For the element $i \in \mathcal S$, the small initialization can make $|1 \pm 4\eta( s^t_i - w^\star_i + p^t_i + b^t_i )|>1$ roughly, which can amplify the $s^{t}_i$ to $s^{t+1}_i$ and let $s^t_i$ tend to converge to $w^{\star}_i$. On the contrary, if $i \in \mathcal S^c$, we can select the appropriate step size to bound the cumulative product $\prod^t_j { |1 \pm 4\eta(s^j_i + p^j_i + b^j_i)| }$ at the $t$-th iteration before early stopping to keep $s^t_i$ is still small enough. The difference dynamics on elements in support set $\mathcal S$ and non-support set $\mathcal S^c$ result in sparsity. Sparsity is induced with reparameterization together with small initialization size (one without the other doesn’t work).\ $\textbf{High-level explanation of promoting sparsity for DGD.}$ The consensus errors induced from decentralized network complicate the multiplicative updates, which becomes inexact multiplicative updates as $\overline{\boldsymbol u}^{t+1} = \overline{\boldsymbol u}^t \odot \left(1 - 4\eta \left( \overline{\boldsymbol u}^t \odot \overline{\boldsymbol u}^t - \boldsymbol w^\star + \hat{\boldsymbol p}^t + \hat{\boldsymbol b}^t \right)\right) + \boldsymbol e^t$. Compared with the exact multiplicative updates, the challenge is that the extra error term $\boldsymbol e^t$ outside of the multiplication prevent applying the centralized analysis trivially. In addition, the perturbation error terms $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t$ within the multiplication are much complicated than the ${\boldsymbol p}^t, {\boldsymbol b}^t$ in the centralized setting due to additionally multiple consensus errors. This requires carefully bounding the consensus error terms, which can control the complicated perturbation errors $\hat{\boldsymbol p}^t, \hat{\boldsymbol b}^t, \boldsymbol e^t$ not to be large. Thus, we can use network connectivity to control the consensus errors to bound these three perturbation errors small enough to make the distance between two trajectories obtained by inexact and exact multiplicative updates within statistical accuracy, which can promote sparsity in the decentralized setting. The detailed theoretical mechanism of promoting sparsity has been demystified in Proposition 3. Please refer to the Appendix on pages 28-29.\ $\textbf{ Validating promoting sparsity numerically.}$ We direct the reviewer to Section 6.1 of our main paper. The simulations presented in Figure 1 provide empirical evidence supporting Proposition 3's theoretical mechanism for promoting sparsity. These results show that DGD effectively distinguishes between non-zero and zero support elements, aligning with our theoretical findings. Reviewer can also refer to Figure 4 in $\textbf{supply.pdf}$, which also demystifies the promoting sparsity for GD and DGD.\ $\textbf{(4)Comparison with decentralized sparse solvers.}$ \ $\textbf{(1) Vanilla Comparison.}$ \ We have compared with existing decentralized sparse solvers, namely: CTA-DGD (LASSO) [2], ATC-DGD (LASSO)[3], and DGT (NetLASSO) [4]. These methods are all based on the LASSO formulation with explicit regularization. The results are presented in Figure 1 of $\textbf{supply.pdf}$. For each method, we tuned the step size to achieve the best performance. Our proposed method demonstrated the best recovery performance in all network settings with minimum iterations.\ $\textbf{ (2) Truncated version comparison.}$ \ We further compared T-DGD with truncated versions of existing methods: Trun-CTA-DGD (LASSO), Trun-ATC-DGD (LASSO), and Trun-DGT (NetLASSO) which use the same Top-$s$ truncation operator. As shown in Figure 2.(a) of $\textbf{supply.pdf}$, our proposed method is the only one to achieve successful recovery, while all other methods failed. This demonstrates that naively combining sparsification with decentralized algorithms is not granted to converge. This is precisely one of the motivations of this work: to provide communication-efficient algorithms with both provably statistical and computational guarantees. \ [1] Yuan, K., Ling, Q., & Yin, W. (2016). On the convergence of decentralized gradient descent. SIAM Journal on Optimization, 26(3), 1835-1854. [2] Ji, Yao, et al. "Distributed sparse regression via penalization." Journal of Machine Learning Research 24.272 (2023): 1-62.\ [3] Ji, Yao, et al. "Distributed (ATC) gradient descent for high dimension sparse regression." IEEE Transactions on Information Theory 69.8 (2023): 5253-5276.\ [4] Sun, Ying, et al. "High-dimensional inference over networks: Linear convergence and statistical guarantees." arXiv preprint arXiv:2201.08507 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for your response to my concerns. I have slightly increased my ratings. --- Reply to Comment 1.1.1: Title: Authors' feedback Comment: Thank you for highlighting the weaknesses and for considering a slight increase in the rating. --- Reply to Comment 1.1.2: Title: Increased ratings Comment: Dear Reviewer, We apologize for the inconvenience, but we would greatly appreciate it if you could remind us where you have increased the rating in your review. Upon reviewing the feedback, we realized that we no longer have the details of the previous ratings and would like to ensure we have an accurate understanding. Thank you for your time and assistance. Bests, Authors
Summary: This paper focuses on deriving the implicit regularization effects of decentralized gradient descent (DGD) for minimizing an objective function over undirected mesh networks. In particular, this paper establishes the fact that the solution returned by DGD with early stopping is statistically optimal under certain conditions. In addition, this paper also proposes a new method, the T-DGD, that can have better performance than vanilla DGD. Strengths: 1. The studying of the DGD setting is novel, as most works in the direction of implicit regularization of algorithms typically do not pay attention to DGD. The main contributions are clearly reflected in Theorem 1, showing the statistical guarantee and computational complexity of DGD. 2. The proposed TDGD can be seen as an interesting application of the theoretical observation, which additionally makes the theoretical claims inspiring. Weaknesses: 1. The simulations are only for diagonal linear networks, which limits the generalizability of the proposed methods and theoretical conclusions. 2. Though results for the setting of this paper are novel, the motivation for studying such a setting is unclear. It would be better to specify clearly why and how the setting could be useful in practice. Currently it is hard for to see how the theoretical results could be important. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Could you demonstrate your theoretical conclusions in more general architectures to further elaborate the implications of the theoretical claims? 2. Previous works shows a transition from kernel regime to rich regime by varying the initialization scale, does such phenomenon exist in the DGD setting? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. The motivation and implications of the theoretical claims are not clear. 2. Numerical experiments are only conducted in simple architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful comments. We appreciate the opportunity to address your concerns.\ $\textbf{Experimental results for general architectures}.$\ We add two experiments to validate the implicit regularization of DGD on general overparameterized neural network architecture, the details of two experiments is in the caption of Figure 3 of $\textbf{supply.pdf}$.\ (1) The first is that we use vanilla decentralized SGD(DSGD) to train a depth-2, 5000 hidden ReLU network with the cross-entropy loss on the MNIST dataset. We plot the average test error (which is defined as the summation of the test error of each agent's model, then divided by the number of agents) vs. $\alpha$ in Figure 3.(a), which shows a visible phase transition for generalization ($\approx 98\%$ for $\alpha\leq 6$, and $\approx 96.6\%$ error for $\alpha \geq 100$). Figure 3.(a) shows that the transition from the kernel regime to the rich regime by varying the initialization scale may also exist in complex fully connected neural networks.\ (2) The second is we use vanilla DSGD to train the VGG11-like deep convectional neural network on CIFAR10. The Figure 3.(b) plots the average test accuracy vs. $\alpha$. In addition, we adopt the sparse feature learning measure [1] to monitor the sparsity of learned features in all agents along the epoch in Figure 3.(c). From two figures, we can observe that the implicit regularization of DGD under small initialization prefers choosing solutions with better generalization performance and makes DSGD implicitly learn the models that can induce sparser features.\ $\textbf{Motivation of the study.}$\ The purpose of this work is twofold. \ $\textbf {1).}$ The sparse recovery problem (1) itself has important applications in sensing, signal processing, and learning where data is limited and distributed into different locations [2]. Our algorithm provides decentralized solutions to these problems and outperforms state-of-the-art methods, as shown by Figure 1 and Figure 2.(a) in $\textbf{supply.pdf}$. Furthermore, Problem (2) does not require choosing the regularization parameter before executing the algorithm. Rather, sparsity regularization is controlled through early termination. In practice, it is much easier to implement since one can stop the algorithm by monitoring the test error on the validation dataset.\ $\textbf {2).}$ The over-parameterized formulation (2) can also be regarded as training a two-layer diagonal linear neural network and thus connects to the literature of deep learning theory. Existing works in this area primarily focus on studying the implicit bias of gradient methods in the centralized setting, with the decentralized regime less explored. When the data is split over multiple agents, a new component, the interaction of the agents through the network, comes into play and will affect the algorithms' trajectory. For a gradient algorithm that is biased toward benign solutions in the centralized setting, it is not clear if the new perturbation introduced by the network will drive it towards somewhere else. This work provides an understanding of the problem and shows how initialization, step size, and network connectivity should be coordinated to retain a small error. \ $\textbf{Formal study of more general NN architecture.}$ Analyzing the implicit regularization for NNs, especially in the rich regime, relies heavily on the specific problem structures. As such, existing works, even in the centralized setting, are largely case-by-case studies. The analysis in this paper for DGD could be potentially generalized to other cases, but providing formal studies is beyond the scope of the current work. We also mention there are some recent efforts trying to provide a unified perspective on the inductive bias of gradient descent [3,4]. These results characterize where the algorithm converges to, but not how fast it reaches such a solution. The technique along this line is less close to that developed in this paper. Yet it would be interesting to investigate the decentralized/FL counterpart of it. The results of Figure 3 of $\textbf{supply.pdf}$ on fully connected and deep CNN can demonstrate that our current theorem 1 (DGD with smaller initialization implicitly chooses the sparser and better generalization solutions in solving overparameterized models) is not limited to the diagonal linear network, but might be valid for general architecture.\ $\textbf{Kernel to rich regime transition.}$\ Previous works show a transition from the kernel regime to the rich regime by varying the initialization scale; this also happens for DGD. Experimental results showing the phenomenon are shown in Figure 2.(b) of $\textbf{supply.pdf}$. We observe that when we increase the initialization scale $\alpha$ gradually, DGD would converge to the minimal $\ell_2$ norm solution $\boldsymbol w^\star_{\ell_2}$. On the contrary, when we decrease the $\alpha$, the DGD would converge to the minimal ${\ell_1}$ norm solution (sparse solution) $\boldsymbol w^\star_{\ell_1}$. Thus, Figure 2.(b) demonstrates the existence of phase transition from kernel to rich regime for DGD when decreasing initialization $\alpha$. Since we focus on sparse recovery, small initialization would achieve this aim with better generalization.\ [1] Andriushchenko, Maksym, et al. "Sgd with large step sizes learns sparse features." International Conference on Machine Learning. PMLR, 202.\ [2] Mateos, Gonzalo, Juan Andrés Bazerque, and Georgios B. Giannakis. "Distributed sparse linear regression." IEEE Transactions on Signal Processing 58.10 (2010): 5262-5276.\ [3] Azulay, Shahar, et al. "On the implicit bias of initialization shape: Beyond infinitesimal mirror descent." International Conference on Machine Learning. PMLR, 2021.\ [4] Moroshko, Edward, et al. "Implicit bias in deep linear classification: Initialization scale vs training accuracy." Advances in neural information processing systems 33 (2020): 22182-22193.
Summary: The paper shows that the implicit regularization enjoyed by a well-known reparameterizion of least squares extends to the decentralized setting. Convergence guarantees are provided, and it is also shown that communication can be limited by thresholding vectors before they are communicated to neighbors. Strengths: The convergence guarantess end up having the form that one would hope. Weaknesses: I don't quite understand how this paper is contributing to our understanding of machine learning. That running gradient descent on the nonlinear least-squares problem in (2) gives you sparse solutions that are statistically near-oprimal is interesting, but is of course known and written about a lot already. That you can make gradient descent for non-convex problems decentralized using consesus averaging has also been written about a lot already. That communications can be reduced using truncation when we are solving sparse regression problems is also an idea that has received plenty of attention. This paper puts all of these things together, but we don't seem to learn anything new about implicit regularization or decentralized optimization. The algorithm is not compared against existing decentralized sparse solvers. If it happened to perform better even on stylized examples, that would at least be something new. Technical Quality: 4 Clarity: 3 Questions for Authors: none Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank Reviewer E5yh for your valuable comments. We appreciate the opportunity to clarify the contributions and address the concerns in your review. $\textbf{(1) Contribution and significance}$ \ While implicit bias has been studied for centralized gradient methods for various models, the decentralized setting is relatively less with several key questions largely under-explored: $\textbf{1)}:$ Will decentralized algorithms also experience implicit bias as in the centralized setting? $\textbf{2)}:$ How will the initialization and the interaction of the agents through the network affect the implicit regularization? $\textbf{3)}:$ How to choose the algorithm and set hyper-parameters to obtain solutions with small errors? \ Even under the overparameterized sparse model, answering these questions is non-trivial due to the additional error terms induced by the decentralized network, which can hinder DGD from inducing sparse regularization. For example, DGD in poorly connected large-scale networks may not induce optimal regularization, as validated in Figure. 3(a) of the paper. Our main contributions are as follows:\ $\textbf{Statistical and computational guarantee of DGD}.$ We provided sufficient conditions—namely, if the global design matrix satisfies the RIP condition, the initialization is sufficiently small, and the network is sufficiently connected—the early-stopped DGD can implicitly obtain statistically optimal solutions. These results, formally stated in Theorem 1, provide a potential answer to the general questions listed above.\ $\textbf{New communication-efficient algorithm by truncation.}$ The analysis of DGD shows that the magnitude of the elements on the support will progressively increase while those off support remain small as initialization, which motivates the development of the T-DGD algorithm that keeps the top $s$ largest elements and truncates others. Proposition 1 shows in the high SNR regime, T-DGD enjoys the same iteration complexity as DGD, but the communication cost is significantly reduced from $\mathcal O(d)$ to $\mathcal O(s)$. There is currently no work to prove that the truncated explicitly decentralized LASSO can achieve the $\mathcal O(s)$ communication complexity. Our experiments in Figure 2.(a) of $\textbf{supply.pdf}$ show that truncated explicitly decentralized LASSO solvers do not work. This indicates that the implicit regularization induced by DGD in solving the overparameterized model can be better than existing explicitly decentralized LASSO solvers. This expands the new understanding of the benefits of optimization-induced implicit regularizations in overparameterized learning models. $\textbf{(2)Simple combination of existing results will not work.}$ \ We fully agree that the implicit bias of GD on problem (2) has been analyzed in centralized setting. However, existing understandings of DGD indicate that it cannot converge to exact minimizers or stationary points if the problem is nonconvex [1]. As problem (2) is nonconvex, none of these results could easily predict that DGD will compute statistically optimal solutions.\ T-DGD reduces the transmitted elements from $d$ to $s$ without affecting the iteration complexity, which is achieved by carefully exploiting the problem's sparsity structure induced by implicit regularization of DGD, which is new to our knowledge. \ $\textbf{(3) Insights of broader interest.}$\ $\textbf{ Algorithm selection}.$ In decentralized optimization literature, methods with gradient correction are often preferred due to heterogeneity of local loss functions. However, our results suggest that for certain machine learning models, the simpler DGD suffices to achieve satisfactory or even optimal outcomes.\ $\textbf{ Generalization to more complex machine learning models}.$ The study potentially initiates further investigations into the implicit regularization induced by decentralized methods for more complex models. Our fine-grained analysis of the interaction between the optimization dynamics and network effect can be developed and generalized to explore the implicit regularization of other decentralized methods in more complex learning models, such as DNN.\ $\textbf{(4)Comparison with decentralized sparse solvers.}$ \ $\textbf{Vanilla Comparison.}$ \ We have compared with existing decentralized sparse solvers, namely: CTA-DGD (LASSO) [2], ATC-DGD (LASSO)[3], and DGT (NetLASSO) [4]. These methods are all based on the LASSO formulation with explicit regularization. The results are presented in Figure 1 of $\textbf{supply.pdf}$. For each method, we tuned the step size to achieve the best performance. Our proposed method demonstrated the best recovery performance in all network settings with minimum iterations.\ $\textbf{ Truncated version comparison.}$ \ We further compared T-DGD with truncated versions of existing methods: Trun-CTA-DGD (LASSO), Trun-ATC-DGD (LASSO), and Trun-DGT (NetLASSO) which use the same Top-$s$ truncation operator. Figure 2.(a) of $\textbf{supply.pdf}$ shows our method is the only one to achieve successful recovery, while all other methods failed. This demonstrates that naively combining sparsification with decentralized algorithms is not granted to converge. This is precisely one of the motivations of this work: to provide communication-efficient algorithms with both provably statistical and computational guarantees. \ [1] Yuan, K., Ling, Q., & Yin, W. (2016). On the convergence of decentralized gradient descent. SIAM Journal on Optimization, 26(3), 1835-1854. [2] Ji, Yao, et al. "Distributed sparse regression via penalization." Journal of Machine Learning Research 24.272 (2023): 1-62.\ [3] Ji, Yao, et al. "Distributed (ATC) gradient descent for high dimension sparse regression." IEEE Transactions on Information Theory 69.8 (2023): 5253-5276.\ [4] Sun, Ying, et al. "High-dimensional inference over networks: Linear convergence and statistical guarantees." arXiv preprint arXiv:2201.08507 (2022). --- Rebuttal Comment 1.1: Comment: Thanks for this detailed response and for putting your numerical experiments in more context. I can bump my score up given this. --- Reply to Comment 1.1.1: Title: Authors feedback Comment: We sincerely appreciate your thoughtful review and raising your score. Thank you for providing us the opportunity to clarify our contributions. We will ensure that the numerical comparison results are included in the revised version of the paper.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their careful review and insightful comments. We have addressed each of the questions raised by all reviewers in a point-by-point manner as detailed below. We hope our responses can address the concerns raised. Additionally, we have included supplementary experimental results in the attached file $\textbf{supply.pdf}$. We invite the reviewers to read these results and welcome any further suggestions they might have. Pdf: /pdf/dc83491d4d806f30cacb2dbc4c58d99a482ea281.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null