# Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving

Paper Decision: Accept (poster)

## Review 1

Summary: This paper presents an evaluation framework inspired by Bloom's Taxonomy, integrating multiple tasks to reflect diverse cognitive levels. The authors evaluate popular general and medical LLMs, observing a significant decline in performance as cognitive complexity increases.
Their findings highlight the importance of scaling up models' parameter sizes to more effectively tackle clinical challenges.
## update after rebuttal
In the rebuttal stage, the authors provided details about the human evaluation of the cognitive levels. This evidence addressed my concerns regarding the experiment design. The paper proposes a valuable evaluation pipeline for analyzing the clinical utility of LLMs.
Therefore, I update my score to 3 (weak accept).
However, I still think the main finding of this paper, that larger LLMs perform better on more difficult tasks, is not surprising. I hope the authors could provide more in-depth analysis regarding these results.
Claims And Evidence: Problematic claim:
1. The author claims that question-answering (QA) is a task with low cognitive complexity, as large language models (LLMs) primarily need to memorize medical information. However, some clinical questions in MedQA [1] require LLMs to analyze symptoms and engage in intricate reasoning to arrive at the correct answer.
2. The cognitive complexity of Mid-Level tasks (such as statement validation questions) is influenced by the question's complexity. An example from the MedMCQA[2] dataset illustrates this:
"Q: Which vitamin is provided solely by animal sources?" represents a straightforward knowledge question. Even if reformulated into a statement validation question such as: "Does Vitamin C come exclusively from animal sources?", it remains a simple task, requiring the LLM simply to memorize relevant information, without applying it to clinical scenarios.
In summary, the author classifies cognitive levels based on tasks with different input and output formats. However, in my opinion, a more suitable way to classify cognitive levels is based on the complexity of the questions.
[1] Jin, Di, et al. "What disease does this patient have? a large-scale open domain question answering dataset from medical exams." Applied Sciences 11.14 (2021): 6421.
[2] Pal, Ankit, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. "Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering." Conference on health, inference, and learning. PMLR, 2022.
Methods And Evaluation Criteria: This paper adopts accuracy and full-path diagnosis accuracy as evaluation metrics. I think this part is reasonable.
Theoretical Claims: This paper does not contain theoretical claims.
Experimental Designs Or Analyses: 1. The evaluation of different LLMs on the proposed benchmark ignores state-of-the-art reasoning LLMs (such as OpenAI o1, o3-mini, and DeepSeek-R1).
2. This paper ignores the evaluation of LLMs along the (1) post-training and (2) inference-time scaling dimensions. How different post-training and inference-time strategies can enhance the capability of LLMs at different cognitive levels is not explored.
Supplementary Material: The supplementary materials provide the (1) details of tasks at different levels, and (2) detailed evaluation metrics.
Relation To Broader Scientific Literature: The findings within this paper are aligned with the scaling law[3].
[3] Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020).
Essential References Not Discussed: The key contribution of this paper is evaluating LLMs from different cognitive levels. However, previous works [4][5] have thoroughly evaluated LLMs across a variety of tasks and dimensions. It is important for the authors to discuss these studies.
[4] Wu, Chaoyi, et al. "Towards evaluating and building versatile large language models for medicine." npj Digital Medicine 8.1 (2025): 58.
[5] Johri, Shreya, et al. "An evaluation framework for clinical use of large language models in patient interaction tasks." Nature Medicine (2025): 1-10.
Other Strengths And Weaknesses: Strength: this paper comprehensively evaluates a wide range of popular LLMs across different tasks, which is valuable for the community to better understand the clinical capability of LLMs.
Weakness: the classification of different cognitive levels is not convincing enough for me. Furthermore, this paper offers limited insights regarding how to enhance the performance of medical LLMs on high cognitive level tasks beyond merely increasing parameter sizes.
Other Comments Or Suggestions: I will increase my overall score if the authors can address my concerns mentioned above.
Questions For Authors: Please refer to previous 'Claims and Evidence', 'Method and Evaluation Criteria', 'Experimental Designs or Analyses', and 'Other Strengths and Weaknesses' parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful and constructive feedback. Below, we address each of the concerns you raised.
1. **Benchmark design issue**: We highly appreciate your kind comments. Indeed, the cognitive difficulty of a question is affected by multiple factors, including task type and question complexity. However, the difficulty of evaluation questions is typically constrained by the task format. For example, while the MedQA dataset is designed to evaluate LLMs' ability to analyze clinical cases, its multiple-choice format somewhat limits its ability to fully reflect how well LLMs request and integrate diagnostic information in real clinical settings. In fact, the task formats in our benchmark inherently reflect variations in cognitive difficulty: Mid-level tasks reflect the impact of increasing reasoning steps and an expanding decision space on performance, while high-level tasks further evaluate LLMs' ability to request and integrate critical information for decision-making in clinical scenarios. While there is indeed some variation in difficulty among questions within the same task format, the constructed benchmark generally meets our design expectations.
We also conducted a clinician evaluation to validate the proposed benchmark. Specifically, we first randomly sampled 20 questions from each task to form a subset of 100 questions. Then, we recruited four licensed clinicians with 3 to 8 years of experience to evaluate this subset from two perspectives: (1) their accuracy in answering the questions and (2) their subjective difficulty ratings for the questions (1=Easy, 10=Hard). The evaluation results are as follows:
|Cognitive Level|Clinician Accuracy|Clinician Subjective Difficulty|
|-|:-:|:-:|
|Low|68.8|Mean:5.0 Median: 5.5|
|Mid|54.2|Mean:6.0 Median: 5.8|
|High|23.5|Mean:7.5 Median: 7.5|
The experimental results show that clinicians’ accuracy decreases as the cognitive level increases, aligning with their subjective difficulty rating. This indicates that the designed tasks successfully achieved the intended difficulty levels and can effectively reflect how far LLMs are from effectively solving real clinical problems. Thank you again for your kind suggestions, and we will add this clinician evaluation into the revised paper.
2. **Evaluation of reasoning LLMs**: Thank you for your constructive suggestions. Indeed, since our paper submission, reasoning LLMs have been receiving increasing attention. Following your kind suggestions, we evaluated two typical reasoning LLMs, DeepSeek-R1 and o3-mini, and compared them with SOTA chat LLMs at corresponding parameter scales. The evaluation results are as follows:
|Model|Low-Level|Mid-Level|High-Level|
|-|:-:|:-:|:-:|
|DeepSeek-V3|78.3|44.8|19.4|
|**DeepSeek-R1**|**89.3**|**73.1**|**25.9**|
|GPT-4o-mini|62.7|43.0|13.8|
|**o3-mini**|**88.1**|**75.2**|**15.5**|
Results show that reasoning LLMs outperform chat LLMs across all cognitive levels, though the performance gap narrows on high-level tasks due to their significantly increased difficulty. Note that we have not evaluated other reasoning LLMs due to time and cost constraints (as the price of o1 is quite high). We plan to include these and additional evaluations in our revised paper.
3. **Insights of medical LLM development**: Thank you for your thoughtful comments. This work provides two key insights into the development of medical LLMs:
(1) Although the performance of smaller LLMs (<10B) on medical benchmarks is gradually approaching that of larger LLMs, our study indicates that increasing parameter size remains crucial for tackling tasks in higher cognitive levels.
(2) While medical post-training (used in current medical LLMs) and inference-time scaling (applied in reasoning LLMs) strategies work on low- and mid-level tasks, effectively solving high-level tasks demand further advancements in clinical reasoning abilities—particularly in the retrieval and integration of key information for decision-making in real-world scenarios.
Again, we appreciate your comments and will further highlight our insights in the revised paper.
4. **Discussion with more benchmarks**: We are sincerely grateful for your kind suggestions. Wu et al. investigate LLMs' ability to handle diverse medical tasks by constructing MedS-Bench, a large-scale benchmark covering 11 types of clinical tasks. Johri et al. study LLMs' ability to obtain the patient information necessary for diagnosis through dialogue by proposing CRAFT-MD, an evaluation framework that simulates multi-turn doctor-patient interactions using a multi-agent approach. Meanwhile, our work focuses on evaluating how close LLMs are to effectively solving real-world clinical tasks by designing evaluations with progressively increasing cognitive difficulty and systematically analyzing existing LLMs. We will further enhance the discussion of other medical LLM benchmarks in our revised paper.
---
Rebuttal Comment 1.1:
Comment: The author's rebuttal addressed some of my concerns. But I still insist that solely modifying the MCQs does not result in tasks with different "cognitive levels".
Although the authors demonstrate in the rebuttal that modifying the MCQs changes the difficulty of the question, the conclusion that, on more difficult tasks, smaller LLMs fail to scale up as well as larger LLMs is not a surprising one. Therefore, I will keep my score at this stage.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your timely response.
1. **About "some of concerns" unresolved**: We truly appreciate your feedback, but frankly speaking, we are currently unsure which specific concerns you feel remain unaddressed. In our initial response, we provided clear clarifications for each of your concerns regarding (1) task settings, (2) lack of evaluation of reasoning LLMs, (3) insights into medical LLM development, and (4) comparisons with more benchmarks. Specifically, we (1) added a clinician evaluation and restated our motivation in designing tasks of different levels, (2) included evaluations of reasoning LLMs, (3) restated several key insights on how to improve the performance of medical LLMs, and (4) discussed the relations between our benchmark and other existing medical benchmarks. It is worth noting that Reviewer z2s4 has also carefully read and expressed agreement with our response to your concerns.
2. **About "solely modifying MCQs"**: We are sorry that our rebuttal did not fully explain our task settings. In fact, (1) the Mid-Level tasks are not merely modified versions of MCQs; they are carefully designed to reflect the challenges faced in real-world clinical scenarios, such as limited information and a broader decision space. (2) It is worth noting that the High-Level task is **not** derived from MCQs. Instead, it is constructed directly from electronic medical record data, aiming to assess LLMs' ability to actively plan, request key diagnostic information, and complete the diagnosis through reasoning.
3. **Further clarification regarding the impact of task settings on cognitive difficulty:** We are sorry that our response did not fully convey how the task settings impact cognitive difficulty. Below, we further explain this from several aspects: (1) We primarily constructed the Mid-Level tasks through task reformulation to keep the evaluated knowledge points unchanged (as Reviewer z2s4 mentioned, "the key knowledge point may be the same"), enhancing comparability across tasks of different cognitive difficulties. (2) Given that Low- and Mid-Level tasks typically provide all key information at once, we further designed the High-Level task to evaluate LLMs' ability to actively plan and request key information based on limited patient information to complete a diagnosis.
Furthermore, for a given clinical case, providing complete patient information and limited options can largely reduce the reasoning difficulty. For example, for the following Low-Level question, LLMs can directly match the patient’s symptoms and examination results with each candidate option to easily arrive at the correct answer:
> Question: A 30-year-old woman presents to the physician because of ongoing diarrhea ... She denies any recent travel history ... Clinical examination shows mild tenderness ... Findings on colonoscopy include patchy erythema ... Mucosal biopsy shows colonic crypts ... What is the most likely diagnosis?
>
> A: Ulcerative colitis B: Crohn's Disease C: Acute infective colitis D: Pseudomembranous colitis E: Irritable bowel syndrome.
In contrast, for our High-Level tasks, where only the patient history is provided initially, LLMs are required to actively plan the next steps for examination and integrate multiple test results to reach a final diagnosis, significantly increasing the cognitive difficulty:
>A 30-year-old woman presents to the physician because of ongoing diarrhea ... She denies any recent travel history ...
>
>Your ultimate goal is to diagnose the patient's condition. You can order additional examinations for more information. Output the final diagnosis when you are confident.
4. **About LLM scaling effect**: Thank you for your comments. Our primary goal is to provide necessary insights for developing real-world usable medical LLMs through an in-depth evaluation of existing general and medical-specific LLMs. Our key findings include:
+ LLMs smaller than 10B are unsuitable for higher cognitive level tasks, a conclusion with significant practical implications for real-world clinical applications. If developing a MedLLM for real-world clinical use, our findings can help guide the selection of an appropriately sized backbone model for post-training. Choosing a model that is too small may hinder achieving desired performance. In fact, our team encountered a similar challenge during the development of usable medical LLMs.
+ Existing medical LLMs do not achieve significant improvements on High-Level tasks, highlighting the need to enhance LLMs' ability to actively plan, request key information, and perform reasoning based on obtained information.
Once again, we sincerely appreciate your review and the concerns you raised, which led us to include the clinician evaluation and evaluations of reasoning LLMs, enhancing the completeness of our work. We also hope this response further addresses any remaining concerns.

---

## Review 2

Summary: This paper assesses large language models (LLMs) across multiple cognitive levels, based on Bloom's taxonomy, which proposes six cognitive objectives/levels in ascending order of complexity. In particular, tasks pertaining to three cognitive levels (preliminary knowledge grasp, comprehensive knowledge application, scenario-based problem solving) were defined and attempted with five state-of-the-art LLMs. It was found that LLM performance declined with increased task cognitive complexity, and that larger LLMs performed better when higher cognitive complexity was required.
## update after rebuttal
The authors' response is appreciated and improved our assessment of the study.
Claims And Evidence: The claims are based on comprehensive empirical evaluation on 29 separate general-domain LLMs from five main families (Llama, Qwen, Gemma, Phi3, GPT), as well as eight medical-domain-specific LLMs (Tables 2 & 6). The main claims on performance declining with increasing task complexity as well as increasing with model size appear generally true, within each LLM model family.
Methods And Evaluation Criteria: While the proposed evaluation framework (Figure 2) is reasonable, whether the (accuracy) results are directly comparable can be contested, especially between low/mid-level and high-level tasks. For example, the low-level "preliminary knowledge grasp" tasks involve MCQs with four options, the mid-level tasks involve multiple steps but fewer options (equivalence justified in Section B), and the high-level tasks appear to be free-form (Section A.3). Empirical evaluation of task difficulty by clinicians would help to calibrate/justify their appropriateness.
Theoretical Claims: No theoretical claims are presented.
Experimental Designs Or Analyses: The experimental design appears largely sound.
Supplementary Material: All of the supplementary material was reviewed.
Relation To Broader Scientific Literature: While the experiments are comprehensive, the main findings that higher-level cognitive tasks are more challenging, and that larger models fare better on such tasks, are largely unsurprising.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: A potential weakness would be the lack of prompt engineering and chain-of-thought query techniques, which can significantly affect LLM performance.
Other Comments Or Suggestions: 1. Some preliminary empirical justification for the selected model temperature (and other) parameters (as discussed in Section D) would be appropriate.
2. Some examples of LLM answers (especially for a high-level case) would be welcome, in the appendix.
Questions For Authors: 1. From Section D, it is stated that for low and mid-level tasks, the variance in reported performance metrics is due to "conduct[ing] five repeated experiments by randomly selecting five training samples from the rest of dataset". However, it is not immediately clear as to whether any additional "training" is performed for the LLMs, for purposes of this study. This statement might thus be clarified.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful and constructive feedback. Below, we address each of the concerns you raised.
1. **Comparability between different cognitive levels**: Thank you for your thoughtful concern. Indeed, comparing LLMs’ performance across different cognitive levels is crucial for analyzing their medical capabilities. Considering this, we have implemented a metric alignment strategy in the "Performance Metrics" section (see Equation 4 on page 5) to eliminate interference caused by random guessing and ensure the comparability of tasks across different cognitive levels.
In fact, we have carefully considered the cognitive difficulties when designing the tasks in our benchmark. Compared to MCQs, mid-level tasks increase difficulty by removing clues, increasing reasoning steps, and expanding the decision space. High-level tasks go further by requiring LLMs to request necessary information instead of receiving it directly from the question. To ensure that the constructed benchmark meets our expectations, we followed your kind suggestions and conducted a clinician evaluation. Specifically, we randomly sampled 20 questions from each task, resulting in a clinician evaluation subset of 100 questions. Four licensed clinicians with 3 to 8 years of experience were recruited to assess the benchmark's difficulty from two perspectives: (1) their accuracy in answering the questions and (2) their subjective difficulty ratings for the questions on a scale from 1 (Easy) to 10 (Hard). The evaluation results are as follows:
|Cognitive Level|Clinician Accuracy|Clinician Subjective Difficulty|
|-|:-:|:-:|
|Low|68.8|Mean:5.0 Median: 5.5|
|Mid|54.2|Mean:6.0 Median: 5.8|
|High|23.5|Mean:7.5 Median: 7.5|
We found that clinicians’ accuracy also decreases as the cognitive level increases across three levels. Moreover, their subjective difficulty ratings align with this trend, further demonstrating the validity of our benchmark. Once again, we appreciate your kind suggestions and will incorporate this clinician evaluation into the revised paper.
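As an aside on the metric alignment mentioned above: Equation 4 itself is not reproduced in this thread, but a generic chance-correction for tasks with differing option counts can be sketched as follows (the function and its exact form are illustrative assumptions, not the paper's actual Equation 4):

```python
def chance_corrected(acc: float, n_options: int) -> float:
    """Rescale raw accuracy so 0 corresponds to uniform random guessing
    and 1 to perfect accuracy (a hypothetical stand-in for Equation 4)."""
    baseline = 1.0 / n_options  # expected accuracy of a uniform random guesser
    return max(0.0, (acc - baseline) / (1.0 - baseline))

# A 4-option MCQ scored at 62.5% and a 2-option task scored at 75% both
# map to 0.5 once the guessing floor is removed, making them comparable.
print(chance_corrected(0.625, 4))  # 0.5
print(chance_corrected(0.75, 2))   # 0.5
```

Under a correction of this shape, scores across task formats with different numbers of options share a common zero point, which is the comparability property the rebuttal appeals to.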
2. **LLM scale effect issue**: Thank you for your thoughtful comments. Indeed, the scaling law indicates that larger models generally perform better, but it lacks a detailed analysis of the impact of parameter size on tasks across different cognitive levels (difficulties). Considering this, we conducted a systematic evaluation to investigate parameter size effects at varying cognitive levels and found that larger LLMs significantly outperform smaller ones on harder medical tasks, demonstrating the critical role of parameter scale in real-world clinical problem-solving. Moreover, our analysis of medical-specific LLMs reveals that existing post-training methods fail to enhance high-level task performance, offering key insights for the development of medical LLMs.
3. **Evaluation setting & parameter issues**: We sincerely appreciate your thoughtful comments.
(1) Evaluation setting: For low- and mid-level tasks, we adopted few-shot learning (widely used in LLM benchmarks), as it introduces minimal subjective bias by only guiding LLMs with examples, while prompt engineering and CoT techniques may introduce subjective biases through prompt design, potentially affecting fairness across LLMs. For high-level tasks, given their reasoning difficulty, we adopted an agent-based setting [1], guiding the model to generate a rationale before producing the corresponding action.
(2) Decoding parameters: We chose the decoding parameters to align with task characteristics. For low- and mid-level tasks, we set temperature = 0 (greedy decoding) as these tasks have well-defined answer formats, and greedy decoding selects the most confident answer. For high-level tasks, we set the temperature=0.8 to balance the reasoning diversity and output stability of LLM responses. We also verified through preliminary experiments that LLMs produce stable outputs with this parameter setting.
[1] Hager P et al. Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nature medicine, 2024.
4. **Examples of LLM answer**: Thank you for your constructive advice. We will include more examples of LLM answers in the appendix to better illustrate our evaluation process.
5. **Clarification of the Ambiguous Statement**: Thank you for your careful reading. We sincerely apologize for the ambiguity—here, "training samples" refer to the demonstrative examples used in few-shot in-context learning. Regarding few-shot learning, we have briefly introduced the process in Section 4.1 ("Evaluation Setting") and illustrated the corresponding input format in Figure 9. Notably, our evaluation does not involve any model training. Once again, we appreciate your careful reading. We will conduct thorough proofreading to avoid similar ambiguities in the revised manuscript.

---

## Review 3

Summary: The paper proposes a novel medical LLM evaluation benchmark inspired by Bloom's taxonomy. Unlike existing benchmarks that only evaluate LLMs on a single style of QA task, the proposed benchmark constructs a multi-cognitive-level evaluation framework and provides more informative results. The proposed method reveals that existing LLMs perform well only on low-cognitive-level tasks that mainly involve knowledge memorization; even the most advanced state-of-the-art LLMs can fail in high-level clinical diagnosis tasks. It also reveals the important relationship between model size and ability in high-level, complex scenarios.
Claims And Evidence: The proposed medical LLM benchmark is inspired by Bloom's taxonomy, which is very intuitive and convincing. The proposed task construction protocol is reasonable and thoughtfully evaluated by human experts. The evaluation results provide intuitive but still inspiring results. Also, the QA example provided in the supplementary helps to better understand each task.
Methods And Evaluation Criteria: The proposed benchmark dataset seems very promising and significant to the reviewer. Concerns about LLMs' performance in medical applications are a long-standing problem. However, current benchmarks mainly focus on simple knowledge-based QA tasks, ignoring the complexity of real-world diagnostic procedures. The proposed benchmark dataset will be helpful for better evaluating the capability of existing LLMs and will provide more informative results to guide the development of the field.
Theoretical Claims: N/A. There is no new theoretical claims proposed in the paper.
Experimental Designs Or Analyses: The experiments in the paper are very convincing and thorough. They cover more than 20 different general LLMs and multiple medical-specific LLMs fine-tuned from these baseline LLMs. The baselines also cover different parameter sizes, from 2B to 70B. The task-specific results in Tables 3 and 4 further help to understand the behavior of the medical LLMs. Overall, the experiments are convincing and reasonable. The evaluation of the medical-specific LLMs is very interesting, since it helps illustrate a potential problem within these LLMs, where post-training may have damaged the reasoning capability of the original model, resulting in worse high-level task performance.
Supplementary Material: The supplementary material provides detailed examples of QA pairs for each level of tasks, which illustrate the complexity of the proposed benchmark dataset.
Relation To Broader Scientific Literature: The paper has properly discussed related literature and the status quo of current medical LLM evaluation. It can potentially serve as the standard evaluation protocol for future medical LLM evaluation, providing a more reasonable and realistic evaluation.
Essential References Not Discussed: While the paper has a detailed discussion of existing works, it would be great if it could provide some more discussion of other recently released medical LLM benchmarks, such as [a]. But this does not harm the contribution of the paper.
[a] Zhou, Yuxuan, et al. "Reliable and diverse evaluation of LLM medical knowledge mastery." arXiv preprint arXiv:2409.14302 (2024).
Other Strengths And Weaknesses: Overall, the paper looks pretty solid and convincing. The proposed benchmark provides a more detailed understanding of existing LLMs. It can serve as an important step towards LLM's application in real-world diagnosis.
Other Comments Or Suggestions: N/A
Questions For Authors: One small thing: I am wondering if the proposed benchmark has a name or not. It will be much easier for others to refer to in the future.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We sincerely appreciate your kind feedback as well as your recognition of our work. Below are our responses to each of the concerns you raised.
1. **Discussion with other LLM benchmark**: We sincerely appreciate your kind suggestions. Zhou et al. [1] proposed a medical evaluation framework to generate reliable and diverse test samples based on knowledge bases, addressing the issues of low reliability and diversity in automatic test sample generation. Meanwhile, our work aims to explore the limitations of LLMs in solving real-world medical problems by constructing a benchmark with medical tasks of varying cognitive levels and systematically evaluating existing LLMs on this benchmark, offering insights for developing LLMs suited to real-world medical applications. Once again, thank you for your kind suggestions. We will strengthen discussions with other medical LLM benchmarks in the revised paper.
[1] Zhou, Yuxuan, et al. "Reliable and diverse evaluation of LLM medical knowledge mastery." arXiv preprint arXiv:2409.14302 (2024).
2. **Name of proposed benchmark**: Thank you for your kind suggestions. Indeed, a good benchmark name could facilitate reference and discussion. We plan to name the benchmark MulCogEval (Multiple-Cognitive-Level Evaluation) and will include relevant annotations in the paper.
---
Rebuttal Comment 1.1:
Comment: I also appreciate the effort of the authors during the rebuttal period. It is very impressive to see the additional results of human experts on the same evaluation, which can serve as an important baseline for future purposes. The new results for DeepSeek-R1 and o3-mini are also impressive. The performance of DeepSeek-R1 is very promising, and I would like to see more discussion about the capability of reasoning LLMs and intuition for why they work so well.
The concerns about QA settings do not concern me too much, since the high-level idea is intuitive to me. While the key knowledge point may be the same, the changed form of the question still increases the difficulty of the task via different paths, e.g. an extended answering space, hidden information in the description, and reasoning requests.
Overall, the reviewer acknowledges the contribution and significance of this work and would like to maintain my recommendation of acceptance. A more complex and high-level medical benchmark is important to the current development of MedLLMs. It is also a necessary path to approaching real-world usable MedLLMs.
---
Reply to Comment 1.1.1:
Comment: We would like to express our sincere appreciation for your kind and insightful feedback.
1. **About expert evaluation**: Thank you for recognizing our efforts in conducting the expert evaluation during the rebuttal phase. This evaluation not only helps validate the difficulty distinctions across cognitive levels in our benchmark, but also can serve as a meaningful baseline for future research.
2. **Discussion about reasoning LLMs**: We are grateful for your constructive suggestions. Following your kind suggestions, we further explored the possible reasons behind the strong performance of reasoning LLMs, based on the evaluation of DeepSeek-V3 and DeepSeek-R1. The evaluation results are presented below (*Relative Improvement* is calculated by dividing the absolute gain by the performance of DeepSeek-V3):
|Model| Low-Level|Mid-Level|High-Level|
|-|:-:|:-:|:-:|
|DeepSeek-V3|78.3|44.8|19.4|
|DeepSeek-R1|89.3|73.1|25.9|
|Relative Improvement|+14.0%|+63.2%|+33.5%|
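The *Relative Improvement* row follows directly from the two score rows by the stated formula (absolute gain divided by the DeepSeek-V3 score); a quick arithmetic check, using the table's numbers:

```python
def relative_improvement(new: float, base: float) -> float:
    """Absolute gain divided by the baseline score, in percent."""
    return 100.0 * (new - base) / base

# (DeepSeek-V3, DeepSeek-R1) scores from the table above.
scores = {"Low": (78.3, 89.3), "Mid": (44.8, 73.1), "High": (19.4, 25.9)}
for level, (v3, r1) in scores.items():
    print(f"{level}-Level: +{relative_improvement(r1, v3):.1f}%")
# Low-Level: +14.0%, Mid-Level: +63.2%, High-Level: +33.5%
```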
We observe that although DeepSeek-R1 improves across all three levels, the gains are larger on Mid-Level tasks compared to Low-Level ones. This may be because, while both levels evaluate the same knowledge points, Mid-Level tasks are more difficult due to reduced additional information and a broader decision space. Meanwhile, reasoning LLMs can tackle these challenges by analyzing task requirements and breaking down the problem into manageable steps. For example, consider the following Answer Existence Judgment Question, where the model must determine whether a correct answer is present among the given options:
> Question: A 30-year-old woman presents to the physician because of ongoing diarrhea ... What is the possible diagnosis?
>
> A: Ulcerative colitis B: Chronic diverticulitis C: Acute infective colitis D: Pseudomembranous colitis E: Irritable bowel syndrome.
In this case, the correct diagnosis "Crohn’s disease" is not included in the options. We observe that DeepSeek-V3 incorrectly answers "yes", whereas DeepSeek-R1 arrives at the correct answer by first analyzing the patient condition, then evaluating each option based on the analysis, and finally integrating the evaluation results:
> DeepSeek-R1: Okay, so I need to figure out the diagnosis for this patient. Let's start by reviewing the case details…
>
> Then, let's list the options: … I need to check if any of these fit. UC (option A) typically presents with ... However, UC's biopsy usually ... Chronic diverticulitis (option B) often presents with ... the biopsy findings don't align with ...
>
> If the options are only the given ones, and none fit perfectly, then the correct answer is not listed. Therefore, the answer would be 'no'.
Moreover, although Low-Level tasks are relatively simple and both chat and reasoning LLMs perform well at this level, we observe that reasoning LLMs can further enhance performance by generating rationales that more thoroughly link the problem to the learned knowledge.
Furthermore, compared to tasks at the other two levels, High-Level tasks require LLMs to actively obtain patient information through examinations and ultimately complete the diagnosis based on the gathered information, making them significantly more difficult. Notably, reasoning LLMs achieved a 33.5% improvement on this type of task. We further analyzed DeepSeek-R1’s fine-grained performance on this task, and the results are as follows:
|Model|Examination Recall|End-Point Diagnostic Acc|Full-Path Diagnostic Acc|
|-|:-:|:-:|:-:|
|DeepSeek-V3|30.0|53.6|19.4|
| **DeepSeek-R1** |**43.7**|**56.0**|**25.9**|
We observed that DeepSeek-R1 demonstrates a stronger ability (higher Exam Recall) to actively request key diagnostic information compared to DeepSeek-V3, resulting in higher full-path diagnosis accuracy on the High-Level task.
Additionally, while reasoning LLMs performed notably well on our benchmark, our error analysis revealed that a significant portion of the errors stem from insufficient mastery of medical knowledge. Therefore, we suggest that current LLMs should further integrate more medical knowledge and combine it with their reasoning capabilities to more effectively address real-world medical problems.
3. **About QA settings**: Thank you for your deep understanding of our benchmark settings. Indeed, while Low- and Mid-Level tasks evaluate the same knowledge points, task format changes can further increase cognitive difficulty from different perspectives (e.g., reducing information, expanding decision space). Moreover, compared to lower-level tasks where LLMs passively receive information, High-Level tasks increase cognitive difficulty by evaluating LLMs’ ability to actively plan and request key diagnostic information. Once again, we appreciate your thorough understanding and recognition of our approach, and are puzzled that Reviewer VJ98 did not grasp the purpose behind our design.
Strong and Weak Identifiability of Optimization-based Causal Discovery in Non-linear Additive Noise Models | Accept (poster) | Summary: The manuscript introduces a criterion for strong vs. weak identifiability in causal graphs and explores the implications for optimization based structure discovery algorithms. Specifically, the authors propose a gradient-based approach whose objective combines a standard goodness of fit measure ($R^2$) with a residual independence test to score candidate orderings. Experiments demonstrate strong performance compared to additive noise model causal discovery algorithms on a range of synthetic and real-world benchmarks.
## update after rebuttal
I thank the authors for their rebuttal. After reading the discussion with other reviewers, I am inclined to agree that this manuscript is somewhat under-developed at present and could benefit from further experiments and/or theoretical analysis. I will be revising my score downward for consensus but encourage the authors to revise and resubmit in the near future. This paper is nearly there and will find a good home soon!
Claims And Evidence: The main theoretical claim is the purported distinction between "strong" and "weak" identifiability of causal structures in additive noise models (ANMs). If I understand correctly, the structural equation for variable $V_i$ is "strongly identifiable" if we can uniquely solve for each of its parents by fixing the value of $V_i$ and all other parents (including an exogenous noise variable). So far so good. But I'm a bit confused about if/how these notions map onto classical distinctions between identifiable vs. partially identifiable structures. If an ANM is only "weakly" identifiable, does this mean that there is no unique solution (at least without further assumptions)? It's not obvious to me that this follows. If there is no unique solution, can we at least characterize the space of possible solutions (e.g., something akin to a Markov equivalence class in constraint-based approaches?) Also, the text appears to suggest that strong and weak identifiability form a partition on the space of ANMs. But surely some ANMs are simply _unidentifiable_?
The empirical results are impressive. The idea of adding a residual independence testing component to the standard objective for optimization-based causal discovery makes good sense and appears quite effective. I am unaware of any previous proposals along this line.
Methods And Evaluation Criteria: The experiments are convincing and well-designed. Synthetic and real-world results tell a similar story.
Theoretical Claims: The proofs appear sound, though I did not check them closely. I have some questions about how to interpret Lemma 3.2. Why do the constants $m, M$ have to be positive? This seems like a very restrictive assumption. If I understand correctly, it means that $F$ is strictly monotone in each of its $n+1$ arguments, at least within the range $E$? Also, is $E$ meant to be the full support or just a subset? If the former, then the lemma should probably say so. If the latter, then what happens outside this range?
It's not entirely clear to me what to make of Thm. 3.3 on its own – do we not have any sufficient conditions for strong identifiability? Or for that matter unidentifiability?
Thm. 4.3 is a nifty result. Always nice when greedy methods are globally optimal!
Experimental Designs Or Analyses: As noted above, the experiments are clear and compelling.
Supplementary Material: I looked through the appendix, though not in close detail.
Relation To Broader Scientific Literature: The topic of optimization-based causal discovery is of great interest to the ICML community, and has broad scientific application. Leveraging neural networks and gradient-based learning for this difficult task, which has traditionally been formulated with discrete reasoning, is a promising direction. Early works in this area faced some challenges (var-sortability, etc.) but I believe there will continue to be more interesting developments in this space. The present work makes a small but meaningful contribution to this discourse.
Essential References Not Discussed: I am not aware of any essential references that were not discussed, but I confess I am not an expert in this domain.
Other Strengths And Weaknesses: The idea is clear. The results are compelling. I feel I might be missing some link between the theoretical and empirical results. Greater elaboration on the former could help readers better appreciate the latter.
Other Comments Or Suggestions: - The $R^2$ formula is missing a $1-$ before the ratio.
- Is 2 hours a reasonable time limit for causal discovery algorithms? Seems a little conservative.
- I understand the motivation for the residual independence test, but I was a bit baffled by the choice to implement this via a $\chi^2$ test. Surely we lose information by discretizing? This also introduces hyperparameters that may influence results (how many bins to use?). There are plenty of nonparametric independence tests for continuous data that could be used instead, e.g. Spearman $\rho$ or HSIC.
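On the $R^2$ point above: with the missing $1-$ restored, the formula reads $R^2 = 1 - \sum_i (y_i - \hat{y}_i)^2 / \sum_i (y_i - \bar{y})^2$. A minimal sketch of the corrected formula (an illustration, not the paper's code):

```python
# Coefficient of determination with the leading "1 -" that the review
# notes is missing from the paper's formula.
def r_squared(y, y_hat):
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))          # perfect fit -> 1.0
print(r_squared([1, 2, 3, 4], [2.5, 2.5, 2.5, 2.5]))  # predicting the mean -> 0.0
```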
Questions For Authors: My main questions are:
(1) What exactly does "weak identifiability" amount to? Is it just that we need more samples to get the right answer, or that even in the infinite limit we will not converge on a unique answer?
(2) We have a sufficient condition for weak identifiability, but none for unidentifiability (or for that matter strong identifiability). Any idea what these might look like? How about necessary conditions for any of these?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and insightful suggestions. Below, we address each point raised in the review.
### **Claims and Evidence**
1. **confusion about the strength of identifiability**
Thank you for raising this important point. Within the ANM framework—which is inherently identifiable—our distinction between strong and weak identifiability refines the classical notion of identifiability into subclasses based on practical difficulty:
- Strong Identifiability: The causal direction is uniquely recoverable via regression alone (for examples, see Fig. 1), requiring no additional criteria.
- Weak Identifiability: The causal direction is still uniquely identifiable but requires residual independence tests to resolve ambiguity.
Both classes are fully identifiable under ANM assumptions, differing only in the difficulty of identification. Weak cases do not imply partial identifiability but instead demand stricter criteria to isolate the true graph. This distinction guides algorithm design: GENE adaptively combines criteria to address both regimes, ensuring robustness. We will clarify these nuances in the revision.
### **Theoretical Claims**
1. **Questions about Lemma 3.2**
Thank you for your careful reading. The constants m and M must be positive because strict positivity is used in constructing the contraction inequality in the proof: it ensures strict monotonicity of F, guaranteeing a unique implicit function φ(x) over the domain E. Regarding the support range of E, it should be understood in conjunction with the role of this lemma and Theorem 3.3. The purpose of the lemma is to move from a purely theoretical definition of strong and weak identifiability to a practically operational way of detecting the strength of identifiability of SEMs (although in causal discovery, this is rarely done). It bridges theoretical definitions to actionable insights: if F satisfies the lemma's criteria within any plausible domain E (e.g., observed data ranges), the SEM is deemed weakly identifiable. This aligns with causal discovery's practical focus, where identifiability is assessed within empirically relevant regimes. We will clarify this intent in the revision.
2. **Confusion of Thm. 3.3**
Theorem 3.3 focuses on weak identifiability because it represents the more challenging and nuanced regime where classical methods fail. Strong identifiability, by definition, arises when no implicit functions exist (Def. 3.1), making it straightforward to identify causal directions via regression alone. Thus, strong identifiability does not require separate conditions—it is the default when Theorem 3.3's criteria are unmet. Unidentifiable SEMs fall outside our scope, as ANMs inherently assume identifiability. We will clarify this hierarchy in the revision.
### **Other Comments Or Suggestions**
1. **Issue of the R^2 Formula**
Thank you for catching this important technical detail. You are absolutely correct that a $1-$ is missing from the formulation. We sincerely appreciate your careful reading, and we will make sure to correct this in the revised manuscript.
2. **Time Limit**
In our simulation experiments, for each setting (node_num, density, function form) we generate 10 graphs, which is equivalent to having 10 problems. Each algorithm is then repeated 10 times on each of these problems, meaning each algorithm has 10 * 10 = 100 runs for each setting. Given this volume, we consider a 2-hour time limit quite reasonable.
3. **Choice of Independence Tests**
We appreciate this suggestion. While nonparametric tests (e.g., HSIC, Spearman) avoid discretization, they introduce computational bottlenecks—HSIC scales as O(n^2) per test, and Spearman requires rank calculations across O(d^2) variable pairs. The chi^2 test balances efficiency (O(n) per test) with empirical reliability. We validated binning choices (m=10) across synthetic datasets, observing stable performance. That said, we agree that discretization loses information and will explore hybrid strategies (e.g., kernel-based tests for critical nodes) in future work.
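To illustrate the efficiency side of this trade-off: once residuals are binned into an m×m contingency table, the Pearson $\chi^2$ statistic is a single pass over the table. A toy sketch of the statistic (an illustrative implementation; the binning scheme and bin count are assumptions, not the authors' exact code):

```python
# Pearson chi^2 statistic for a binned contingency table, as one might use
# to test residual independence. The expected count for cell (i, j) is
# row_sum[i] * col_sum[j] / n under the independence hypothesis.
def chi2_statistic(table):
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_sums[i] * col_sums[j] / n
            if expected > 0:
                stat += (obs - expected) ** 2 / expected
    return stat

# Independent counts: observed equals expected in every cell -> statistic 0.
print(chi2_statistic([[10, 10], [10, 10]]))  # -> 0.0
# Perfectly dependent counts: the statistic equals the sample size n.
print(chi2_statistic([[20, 0], [0, 20]]))    # -> 40.0
```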
### **Questions For Authors**
1. **Meaning of Weak Identifiability**
Weak identifiability does not imply unidentifiability. Under ANM assumptions, even weakly identifiable SEMs are uniquely identifiable in the infinite-sample limit. The distinction lies in the practical requirements: weakly identifiable cases demand residual independence tests to resolve directionality ambiguities that persist with finite data (e.g., symmetric regression fits). While strong identifiability allows causal discovery via regression alone, weak identifiability requires additional criteria—but both guarantee convergence to the true graph asymptotically.
2. **Condition for Unidentifiability or Strong Identifiability**
Please refer to the above response regarding the Confusion of Thm. 3.3.
Claims And Evidence: 1. On page 3, following Definition 3.1, the authors claim that if an implicit function does not exist, the SEM can be identified by simple regression - however, this is never shown formally. This is important, as the validity of the distinction between weak and strongly identifiable ANMs hinges on whether such a simplification can be made.
2. In Section 4.3, the authors introduce "least-pruning" as a way to prune spurious edges, given a correct topological ordering. They suggest that Group LASSO is inappropriate given that the ANM may not have an additive contribution from each parent. However, it is unclear how their Least Pruning approach overcomes such a limitation, or in general, what the advantage is of this approach, obscuring the paper's contribution.
3. On page 4, paragraph 4, the authors claim that "For strongly identifiable SEMs, consider the goodness of fit in an order-based manner is sufficient for causal discovery". However, this claim lacks substantiation, which casts doubt on whether the objective function proposed by the authors is well-motivated.
Methods And Evaluation Criteria: The evaluation criteria (F1 and SHD) as well as the synthetic benchmarks used make sense for the application of causal discovery. However, there are significant additional experiments to run in order to fully evaluate the proposed approach (see 'Questions for Authors' for more details).
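For readers unfamiliar with the metrics: SHD counts the edge additions, deletions, and reversals needed to turn the estimated graph into the true one, with a reversal counted once under a common convention. A minimal sketch of that convention (an illustration, not tied to the paper's evaluation code):

```python
# Structural Hamming Distance between two DAG adjacency matrices
# (adj[i][j] == 1 means an edge i -> j). Each missing, extra, or
# reversed edge counts as a single error.
def shd(true_adj, pred_adj):
    d = len(true_adj)
    errors = 0
    for i in range(d):
        for j in range(i + 1, d):
            # Compare the edge status of the pair (i, j): none, i->j, or j->i.
            true_edge = (true_adj[i][j], true_adj[j][i])
            pred_edge = (pred_adj[i][j], pred_adj[j][i])
            if true_edge != pred_edge:
                errors += 1
    return errors

truth = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # 0 -> 1 -> 2
guess = [[0, 0, 0], [1, 0, 1], [0, 0, 0]]   # 1 -> 0, 1 -> 2
print(shd(truth, guess))  # -> 1 (edge 0->1 is reversed; 1->2 matches)
```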
Theoretical Claims: I checked the proof of Theorem 4.3 and found no issues.
Experimental Designs Or Analyses: The experimental designs presented are both sound and valid. However, there are significant additional experiments to run in order to fully evaluate the proposed approach (see 'Questions for Authors' for more details).
Supplementary Material: I reviewed all of the supplementary material.
Relation To Broader Scientific Literature: Prior work ([1,2]) has highlighted how ANMs may be generated with different characteristics (var-sortability, R^2-sortability), and propose heuristic algorithms (Var-Sort, R^2 Sort) that can exploit these characteristics to recover the underlying DAG. This paper outlines a novel characteristic of ANMs, strong or weak identifiability, and constructs a heuristic algorithm to exploit recovery in these contexts.
[1] Reisach, A. G., Seiler, C., & Weichwald, S. Beware of the simulated DAG! Causal discovery benchmarks may be easy to game. *Proceedings of Machine Learning Research*, vol TBD:1–24, 2021.
[2] Reisach, A. G., Tami, M., Seiler, C., Chambaz, A., & Weichwald, S. A scale-invariant sorting criterion to find a causal order in additive noise models. *Proceedings of Machine Learning Research*, vol TBD:1–24, 2023.
Essential References Not Discussed: 1. In the ordering stage of GENE, the authors propose to use a combination of R^2 and residual independence to define the fitness function, with greedy search as an optimization algorithm; however, this appears to be a fusion of the R^2-sortability approach discussed in [1], as well as the independence-based score approach discussed in Section 4.2 and 4.2.2 of [2] - how does the proposed approach differ from these concepts?
2. In the edge pruning stage of GENE, the suggested least pruning algorithm is extremely similar to the pruning algorithm suggested by RESIT [2] (Section 4.1), with the main difference being that the R^2 measure replaces the residual independence measure. However, this remains undiscussed in the paper, and thus the novelty of this approach appears low.
[1] Reisach, A. G., Tami, M., Seiler, C., Chambaz, A., & Weichwald, S. A scale-invariant sorting criterion to find a causal order in additive noise models. *Proceedings of Machine Learning Research*, vol TBD:1–24, 2023.
[2] Peters et. al, Causal Discovery with Continuous Additive Noise Models, (2014).
Other Strengths And Weaknesses: Weakness:
1. The paper's novel contribution can be considered relatively minor. The algorithm GENE simply combines existing ideas from RESIT, GES, and R^2-Sort to enable both its ordering and pruning algorithms. Although the identification of strong/weak identifiable ANMs is novel and interesting, it is underexplored in this work, with little formal reasoning about how different algorithms may succeed or fail in different types of ANMs.
Other Comments Or Suggestions: NA
Questions For Authors: 1. What is the intuition behind what makes strongly or weakly identifiable ANMs easier or harder to discover? Why is it that "simple regression" suffices in the strongly identifiable cases, whereas it does not in the weakly identifiable? Further, is it possible to prove how various classic methods might fail in each scenario? Without extensive explicit and formal characterization of the importance of identifiability type, the contribution of the paper remains unclear.
1. To this end, perhaps there is a connection between invertibility and strong/weak identifiability?
2. Although the authors provide a correctness result for their ordering algorithm, there is no such correctness result for their edge pruning algorithm (least pruning). In fact, there is little discussion of the assumptions under which least pruning would be expected to be accurate, and it seems as if the approach simply leverages the R^2 heuristic. Without guarantees of correctness, it is unclear when we might expect least pruning to succeed, or outperform other approaches.
3. The experimental results can be considered an incomplete evaluation, and raise a few questions that cast doubt on the validity of the author's approach:
1. To properly evaluate the effectiveness of the proposed edge pruning "least pruning", an ablation study must be conducted. In particular, given a correct topological ordering, the authors should compare the accuracy of the adjacency matrix yielded against other baseline methods (Lasso, CAM-pruning, Edge Discovery; see [1] for details on such an experimental setup).
2. To further understand the performance increase over baselines, GENE should be compared against classic baselines such as DirectLiNGAM [2], as well as newer state-of-the-art approaches such as NHTS, SCORE, NoGAM, CaPS, and NHTS [1, 3, 4, 5]. Additionally, experiments should be conducted with non-gaussian noise (laplacian, uniform) to demonstrate the robustness of GENE.
3. The causal mechanisms considered by the author contain mechanisms that either all have implicit functions, or all do not - this may not be representative of real-life scenarios, where one is likely to encounter a range of mechanisms for which an implicit function may or may not exist for each parent. The authors should design an experiment to test whether GENE still outperforms when some causal mechanisms have implicit functions, and some do not, in order to ensure their experimental results do not overstate the performance of GENE.
4. R2-Sort performs poorly for both MIM and GP, which are both strongly identifiable ANMs - however, the authors claim in Section 4 that for strongly identifiable ANMs, considering the goodness-of-fit measure yielded from $R^2$ is sufficient for accurate discovery. This seems contradictory and casts doubt on the authors' explanation for the overperformance of GENE.
[1] Hiremath et. al, Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models, NeurIPS (2024).
[2] Shimizu et. al, DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model, (2011).
[3] Montagna et. al, Scalable Causal Discovery with Score Matching, CLeaR 2023.
[4] Montagna et. al Causal Discovery with Score Matching on Additive Noise Models with Arbitrary Noise. PMLR 2023.
[5] Xu et. al, Ordering-Based Causal Discovery for Linear and Nonlinear Relations, NeurIPS 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and insightful suggestions. Below, we address each point raised in the review.
### **Claims and Evidence**
1 and 3 **Claims about Simple Regression's Sufficiency for Strongly Identifiable Problems**
The experiments in Fig. 4 show that continuous optimization-based methods like NOTEARS-MLP and GraN-DAG, which rely on simple regression, suffer more significant performance degradation after standardization on weakly identifiable problems compared to strongly identifiable ones (e.g., MIM, GP). Additionally, removing the independence penalty in GENE (Fig. 5) harms performance more on weakly identifiable tasks while maintaining similar results on strongly identifiable ones.
2. **Issues of Least Pruning**
We acknowledge that the "least pruning" strategy is somewhat heuristic, but our focus is on the strength of identifiability in ANMs, which is orthogonal to variable selection. Besides, existing methods assume additive SEMs (e.g., Group LASSO, CAM-pruning) or specific functional forms (e.g., kernel lasso), limiting their generality. In contrast, least pruning iteratively removes edges with minimal impact on $R^2$, aligning with ANM's non-additive nature while remaining intuitive and empirically effective.
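The strategy described above — iteratively removing the edge whose removal costs the least $R^2$ — can be sketched on a toy linear example. This is an illustrative sketch only: the OLS fitting, the tolerance, and the stopping rule are assumptions, not the paper's implementation.

```python
import numpy as np

# Toy sketch of "least pruning": starting from all candidate parents,
# repeatedly drop the parent whose removal hurts R^2 the least, as long
# as the best refit stays within a tolerance of the full fit.
def r2_of_fit(X, y):
    X1 = np.column_stack([X, np.ones(len(y))])        # add an intercept column
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def least_prune(data, target, parents, tol=1e-6):
    parents = list(parents)
    while len(parents) > 1:
        full = r2_of_fit(np.column_stack([data[p] for p in parents]), data[target])
        # R^2 after removing each candidate parent in turn
        trials = {p: r2_of_fit(np.column_stack([data[q] for q in parents if q != p]),
                               data[target]) for p in parents}
        best = max(trials, key=trials.get)
        if trials[best] < full - tol:                  # every removal hurts: stop
            break
        parents.remove(best)
    return parents

data = {
    "x1": np.array([0., 1., 2., 3., 4., 5.]),
    "x2": np.array([1., 0., 2., 1., 3., 2.]),
    "x3": np.array([5., 1., 4., 2., 0., 3.]),          # spurious candidate parent
}
data["y"] = 2 * data["x1"] + 3 * data["x2"]            # x3 plays no role
print(least_prune(data, "y", ["x1", "x2", "x3"]))      # -> ['x1', 'x2']
```

Here the spurious parent `x3` is pruned because removing it leaves $R^2$ unchanged, while removing `x1` or `x2` causes a clear drop.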
### **Essential References Not Discussed:**
1. **Difference to Existing Methods**
GENE differs fundamentally from R2-Sort [1] and RESIT [2]. While R2-Sort relies on an empirical observation (monotonicity of $R^2$ along causal orderings) without theoretical guarantees, and RESIT depends solely on residual independence tests (sensitive to test accuracy), GENE is guided by the strong and weak identifiability theory, which explicitly unifies $R^2$ and residual independence into a single framework, ensuring robustness across both identifiable regimes. As for pruning strategies, we acknowledge that they often share structural similarities (e.g., iteratively removing potential parents and refitting models). GENE's least pruning prioritizes $R^2$-based impact over residual independence, directly targeting ANM's goal of preserving predictive fidelity while promoting sparsity. We include comparative experiments below to demonstrate the performance of different pruning strategies.
### **Other Strengths And Weaknesses:**
1. **Contribution**
While GENE integrates elements from existing methods, its core novelty lies in formalizing strong and weak identifiability for ANMs—a theoretical advancement that explains why prior methods succeed or fail under varying functional complexities (Section 3). This framework directly guides GENE’s design, unifying $R^2$-based and independence-based criteria adaptively, rather than as a heuristic combination. Experiments validate its necessity: GENE uniquely addresses weakly identifiable problems, where existing methods collapse. We agree that broader analysis of algorithm-class relationships is valuable, but our focus here is establishing the identifiability theory and its algorithmic implications.
### **Questions For Authors:**
1. **Intuition**
Intuitively, the distinction between strongly and weakly identifiable ANMs arises from the existence of implicit functions. In strongly identifiable cases, causal directions cause asymmetric fitting, with good regression fits only in the correct direction. Weakly identifiable ANMs allow near-perfect fits in both directions, necessitating residual independence tests. Classic methods fail because they rely solely on regression or independence. This aligns with invertibility in the two-variable case.
2. **Correctness of Pruning**
See the "Issues of Least Pruning" part above. We also add relevant experiments below.
3. **Experiments**
(1) We add experiments for GENE ordering with different pruning strategies: Lasso, CAM-pruning, and Edge Discovery, on problems with d=20, density=2, function=MLP, reporting mean±std over 10 repetitions.
|Function|Metric|GENE|GENE+Lasso|GENE+CAM-pruning|GENE+Edge Discovery|
|-|-|-|-|-|-|
|MLP|F1|0.76±0.11|0.48±0.13|0.69±0.08|0.55±0.11|
|MLP|SHD|18.8±5.6|47.2±12.3|26.7±9.5|33.9±10.1|
(2) We add experiments for the 2 mentioned SOTAs ([1] and [5]) on problems with d=20, density=2, reporting mean±std over 10 repetitions.
|Function|Metric|GENE|NHTS|CaPS|
|-|-|-|-|-|
|MLP|F1|0.76±0.11|0.34±0.09|0.42±0.10|
|MLP|SHD|18.8±5.6|52.3±17.6|41.3±14.0|
|MIM|F1|0.88±0.09|0.40±0.07|0.71±0.12|
|MIM|SHD|7.7±2.1|31.6±5.4|15.9±7.1|
|GP|F1|0.90±0.09|0.38±0.10|0.73±0.07|
|GP|SHD|9.1±3.3|39.9±9.9|14.5±4.3|
(3) We appreciate this insightful observation. However, Def. 3.1 explicitly states that any structural equation with an implicit function renders the entire SEM weakly identifiable. Thus, real-world scenarios with mixed mechanisms inherently fall under the weakly identifiable category.
(4) As mentioned earlier in the discussion of R2-Sort, it relies on an empirical observation but does not provide sufficient theoretical justification. Of course, it is also possible that their method requires more careful tuning.
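The regression asymmetry invoked in the "Intuition" response above can be seen in a toy bivariate example: with $y = x^2$ on a symmetric support, regression fits perfectly in the causal direction, while the best reverse regression predicts $E[x \mid y] = 0$ and explains nothing. This is an illustrative sketch only — the specific function, noiseless data, and polynomial fitting are assumptions, not the paper's setup.

```python
import numpy as np

# Bivariate illustration of the regression asymmetry behind strong
# identifiability: y = x^2 (no noise) on a symmetric support. The forward
# regression y ~ x is exact; the best degree-2 polynomial fit of x on y
# collapses to the zero polynomial, explaining none of the variance.
def r2(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

x = np.array([-3., -2., -1., 0., 1., 2., 3.])
y = x ** 2

forward = r2(y, np.polyval(np.polyfit(x, y, 2), x))   # fit y as a polynomial in x
backward = r2(x, np.polyval(np.polyfit(y, x, 2), y))  # fit x as a polynomial in y
print(f"forward R^2 = {forward:.3f}, backward R^2 = {backward:.3f}")
```

Forward $R^2$ is (numerically) 1, backward $R^2$ is (numerically) 0, matching the claim that in strongly identifiable cases only the causal direction admits a good fit.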
---
Rebuttal Comment 1.1:
Comment: I appreciate the author’s response, and have given my replies below. Generally, I find that the paper lacks sufficient theoretical justification to make its contribution of strong/weakly identifiability compelling, and the experimental section limited and containing some results that contradict the author’s claims.
**Claims and Evidence**
While it is nice that the experimental results show that GENE’s performance on a weakly identifiable DGM is harmed by removing the independence penalty, this does not substitute for a rigorous proof, or even a tentative theoretical analysis. In order to claim that the strong/weak distinction is truly important, more justification is needed.
If the focus of this paper is on the strong/weak identifiability, it is not clear why the least pruning strategy is introduced. What is the motivation to introduce a heuristic strategy that is probably provably not consistent? Other methods are at least consistent under reasonable assumptions on the DGM (linear or additive causal models), while least-pruning doesn’t offer such guarantees. Strong empirical performance on only a few DGPs does not necessarily mean that least pruning is a good strategy - it could be that least-pruning requires high overall R^2-sortability for effective performance.
**Other Strengths and Weaknesses**
Again, more theoretical discussion is needed to claim that the core contribution of this paper lies in formalizing strong and weak identifiability. In general, one can always arbitrarily divide ANMs into different types - it is not enough to compare empirical results on a few hand-selected DGPs to determine whether the classification scheme is meaningful.
**Questions for Authors**
If this aligns with invertibility for the 2-variable case, can this be proved? If so, please present a proof sketch, and add the corresponding result to the paper. Can you argue that this argument would generalize to higher-dimensional cases?
I appreciate the additional experimental results, but I have a few other issues:
1) The experimental setup for these results is not reported - what is the distribution of the noise used? Was the data standardized? If not, then the results should be repeated with the data standardized for a fair comparison - many existing algorithms, such as NHTS, NoGAM, SCORE, etc., perform better when data is standardized, and standardize the data in the experimental procedure of their own papers. This problem extends to the main results in the paper as well - the runtime of the algorithms is not comprehensively reported.
2) Baselines such as DirectLiNGAM, SCORE, and NoGAM (and potentially more) should be added. Without theoretical results supporting the claim that traditional methods perform particularly well only on strongly identifiable problems, the paper’s contribution can only really be empirically supported. Therefore, the experimental results should be comprehensive, and show that GENE outperforms all recently released baselines. It is not enough to have limited comparison to a few algorithms.
3) Experiments including DGPs with mixed mechanisms (some implicit functions/weak identifiability and some non-implicit functions/strong identifiability) should be added. Although the authors touch on this in part (3) of their response, noting correctly that any SEM with even one implicit function is definitionally weakly identifiable, this does not address whether the empirical performance of traditional methods depends on how many implicit functions are present in the SEM. This is especially important because the existence of an implicit function is intuitively somewhat rare, as it puts a particular constraint on the functional relationships. If the performance of traditional methods is independent of the number of implicit functions unless many of the functions are implicit, then the contribution of GENE may be limited.
4) I find the fact that R2-Sort performs poorly for both strongly identifiable DGPs to be extremely troubling - it directly contradicts the authors' claim that strongly identifiable problems can be solved with simple regression. Further, it contradicts the author’s motivation to use R^2 value as part of the fitness function, as it was claimed that the R^2 value is sensitive to the direction of regression in strongly identifiable problems. Without further clarification/investigation, this empirical result casts doubt on the validity of the strong/weak distinction, and its relationship to the R^2 score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful comments and valuable feedback on our work. Below, we address your remaining concerns:
### **Claims and Evidence**
**Least Pruning**
Thank you for your thoughtful question regarding the least pruning strategy. In order-based causal discovery, the causal order inherently produces a fully connected DAG, necessitating a pruning step to recover the sparse true structure. Existing pruning methods, such as CAM pruning or kernel Lasso, rely on assumptions like additive parent effects or specific nonlinear functional forms, which may not fully align with general ANM settings. This motivated us to design the least pruning strategy.
### **Other Strengths and Weaknesses**
**Contribution**
Thank you for your constructive feedback. Our classification is not arbitrary but rooted in the mathematical properties of structural equations under the ANM framework. Specifically, the existence of implicit functions directly determines the difficulty of identification. In strongly identifiable cases, causal directions can be identified through regression asymmetry alone, as incorrect directions yield poor fits. In weakly identifiable cases, near-perfect regression fits occur in both directions, necessitating residual independence tests to resolve ambiguity. This theoretical insight not only explains the limitations of existing methods but also guides the design of GENE, which integrates both criteria in a unified framework.
Regarding experiments, the three function classes (MIM, GP, MLP) were chosen to represent canonical ANM scenarios. Moreover, each class involves randomized parameters, ensuring diversity in function forms (see Eq. (9) in Appendix F). For instance, MLPs generate diverse implicit functions via randomized weighting matrices, while GPs cover low-to-high-frequency nonlinearities. This design ensures generalizability beyond handpicked examples.
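The regression-asymmetry point above can be illustrated with a toy bivariate check. This is a minimal sketch with an assumed non-invertible mechanism $y = x^2 + n$ (a strongly identifiable case), not the paper's code; function names are ours:

```python
import numpy as np

def r2_of_polyfit(x, y, deg=2):
    """R^2 of a least-squares polynomial fit of y on x."""
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

# Toy strongly identifiable ANM: y = x^2 + n, a non-invertible mechanism.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = x**2 + 0.1 * rng.standard_normal(2000)

r2_causal = r2_of_polyfit(x, y)      # correct direction: near-perfect fit
r2_anticausal = r2_of_polyfit(y, x)  # reverse direction: E[x | y] ~ 0, poor fit
```

Here the causal direction fits almost perfectly while the anticausal regression explains essentially no variance, so goodness of fit alone resolves the direction; with an invertible (weakly identifiable) mechanism, both fits would be good and residual independence tests would be needed instead.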
### **Question for Authors**
**Alignment with invertibility in 2-variable case**
**Proof Sketch**: In the 2-variable case, an ANM takes the form $Y=f(X)+N$; the corresponding implicit equation is $F(X,Y,N)=Y-f(X)-N=0$. To formalize its connection with invertibility, we can leverage Lemma 3.2 and Theorem 3.3 from the paper. For 2-variable cases, this condition directly relates to the monotonicity and, therefore, the invertibility of the function $f$, i.e., if $f$ is invertible, then $0 \leq F_x(X, Y)$ (the case where $F_x(X,Y) \leq 0$ is symmetric). Apart from the case where $F_x=0$, which we have already discussed in the sufficiency of Theorem 3.3, this condition is essentially equivalent to the condition $m \leq F_y(x_1,x_2,\ldots,y) \leq M$ in Lemma 3.2.
1. **Experimental Setup**
Thank you for raising these critical points. We sincerely apologize for the lack of clarity in our previous rebuttal due to limited space. Here, we provide full details:
- **Noise distribution**: All experiments use Gaussian noise with zero mean and unit variance.
- **Standardization**: All variables are standardized before applying any method.
- **Runtime**: The runtime details of the main experiments are presented in Appendix Fig. 7. For the added experiments, average wall-clock times were **GENE: 1352s**, **NHTS: 1923s**, and **CaPS: 307s**.
2. **More Baselines**
Thank you for your suggestion. We have incorporated DirectLiNGAM and NoGAM into our experiments under the same settings as in the earlier comparisons with CaPS and NHTS.
|Function|Metric|GENE|DirectLiNGAM|NoGAM|
|-|-|-|-|-|
|MLP|F1|0.76±0.11|0.15±0.05|0.31±0.07|
||SHD|18.8±5.6|40.8±7.1|64.7±15.3|
|MIM|F1|0.88±0.09|0.16±0.07|0.60±0.05|
||SHD|7.7±2.1|35.7±6.1|25.0±2.2|
|GP|F1|0.90±0.09|0.06±0.03|0.70±0.02|
||SHD|9.1±3.3|39.0±5.3|19.7±0.9|
Runtime: **DirectLiNGAM: 4s**, and **NoGAM: 2428s**.
3. **Mixed Mechanisms**
We have added experiments on mixed-mechanism SEMs. Specifically, each structural equation in the SEM is randomly selected from three types of nonlinear mechanisms, and we compare against the previously best-performing baseline, CAM. All other experimental settings remain consistent with those used in our previous added experiments with CaPS and NHTS.
|Function|Metric|GENE|CAM|
|-|-|-|-|
|Mixed|F1|0.82±0.08|0.70±0.11|
|Mixed|SHD|14.3±5.4|21.2±7.9|
4. **Issues of R2-Sort**
Although R2-Sort performed poorly in our experiments overall, a closer examination reveals that its performance on MLP is significantly worse than on GP and MIM (Fig. 3). To illustrate this more clearly, we removed the pruning stage and compared only the performance of order discovery (metric: REV(↓) as defined in Definition 4.2 of the paper). All other experimental settings remain consistent with our previous CaPS and NHTS experiments.
|Method|Metric|MLP|MIM|GP|
|-|-|-|-|-|
|R2-Sort|REV|23.0±5.8|16.2±5.9|18.3±6.1|
From the results, it is evident that R2-Sort indeed performs significantly worse on the MLP order learning task compared to MIM and GP. This observation also aligns with our claim regarding strong/weak identifiability. | Summary: The paper considers nonlinear ANMs for observational data, providing new structural identifiability results based on "implicit functions". Using these results, it proposes a learning algorithm that provably learns the correct order (in the large sample limit) and then heuristically prunes down to a sparse graph. The first phase of the algorithm does greedy score-based search over causal orderings, using independence tests to penalize the score---the unpenalized part of the score measures goodness of fit and is sufficient in the "strongly identifiable" case (the easy case), while the penalization suffices for the "weakly identifiable" (hard) case; the second phase starts at the complete graph with the learned causal order and heuristically prunes away edges.
Claims And Evidence: Generally, the claims seem reasonable and the proofs look correct. However, I found some aspects of the title, abstract, and intro to be somewhat vague/confusing/misleading:
- I normally see "identifiability" described as a property of a model, while the corresponding property of an algorithm is rather called "consistency" or "validity"; While it doesn't make sense to talk about validity of an algorithm without assuming some identifiable underlying model, these are nevertheless distinct concepts/claims and require different kinds of proofs. This becomes clear starting in Section 2 (and in Thm 3.3 vs 4.3), but it's a bit confused before that.
- "Causal Discovery" in the title sounds quite general, but it seems the results are only for nonlinear ANMs (this is certainly less restrictive than other assumptions common in the field, but still not as general as the title implies).
- I think the paper has nice theoretical identifiability results, and that the algorithm and experiments are a fine "proof of concept"; however the presentation leading up to the algorithm made me expect stronger experimental results as well as additional theoretical results about the pruning phase, both of which are lacking.
Methods And Evaluation Criteria: Both the methods compared against and the real dataset used are potentially lacking:
- as far as I've seen, the Sachs et al. data is _only_ interventional; can the authors clarify exactly how they obtained observational data?
- other SOTA methods should be compared against (e.g., O-MCMC and GRaSP, referenced later in the review), even if it requires comparing CPDAGs instead of fully oriented DAGs.
Theoretical Claims: I checked all of the proofs. They look good.
Experimental Designs Or Analyses: I looked through the experimental design and analysis, and found this shortcoming:
- limited simulated models: only up to 20 nodes and somewhat sparse; should go at least up to 100 nodes (especially considering claims about the high-dimensional scalability of optimization-based methods and the claims about pareto optimality of efficiency vs effectiveness of the method, which I suspect doesn't hold as $d$ increases).
Supplementary Material: I went through all of the supplementary material, including the code.
Relation To Broader Scientific Literature: The paper makes use of some classic analysis results for its identifiability results. I found this simple but quite interesting.
Essential References Not Discussed: There should be discussion of SOTA order-based methods, including:
- O-MCMC: Kuipers, J., Suter, P., & Moffa, G. (2022). Efficient sampling and structure learning of Bayesian networks. Journal of Computational and Graphical Statistics, 31(3), 639-650.
- GRaSP: Lam, W. Y., Andrews, B., & Ramsey, J. (2022, August). Greedy relaxations of the sparsest permutation algorithm. In Uncertainty in Artificial Intelligence (pp. 1052-1062). PMLR.
I'm not as familiar with the ANM literature, but I'm a bit surprised these identifiability results are new (which I think works very much in the paper's favor, if they are new); I'd like to see more discussion of ANM identifiability results to support this claim of novelty.
Other Strengths And Weaknesses: The paper would benefit from some polishing/editing, but I found it generally quite easy to understand while still being insightful.
I think the identifiability results have more potential for impact than the algorithm (which isn't differentiable, doesn't seem to scale, and lacks theoretical guarantees for the pruning phase).
Other Comments Or Suggestions: - generally, use \mathrm{} where appropriate, e.g., $\mathrm{pa}$ vs $pa$ and $\mathrm{MSE}$ vs $MSE$, as well as many others like $Var$, $IT$, $fit$, $OP$, $Rev$, ...
- the first paragraph of the intro calls RCTs the gold standard for causal discovery (which is defined in the first sentence of the abstract to be from observational data), but RCTs use interventional data, and they're rather more for causal inference than causal discovery (in a general graphical setting, rather than the bivariate case).
- many figures are too small; a good rule of thumb I've seen is that the smallest text in the figs should still be at least footnote size---in any case, check the ICML formatting instructions
- first sentence after Def 2.1 is a run-on sentence
- next sentence, it should be "assumptions", since it's two distinct ones; "..recover _a_ graph..."
- Line 119: doubled "Hoyer et al"
- Line 122: "_Specifically_..."; and rephrase last sentence of that paragraph to be less confusing
- Line 132 and on: confusing using "forward" and "backward" in that way; maybe try something like "preceding" and "subsequent"
- clarify $F_i$ in (3) and $g$ later in Defn 3.1
- nice explanation on Line 200
- Line 218: fix "Next We..."
- after (4), should be "..._preceding_..."
- a few sentences later, should be "_In other words_..."
- I disagree with the sufficient vs necessary contrast in the last paragraph on page 4. Goodness of fit is sufficient for strongly identifiable models but _insufficient_ for weakly indentifiable ones, while leveraging residual independence _is sufficient_ for weakly identifable ones.
- Line 294: "mild assumptions" here also includes infinite data? The method requires lots of independence tests, so even with a lot of data and a very low significance level, there will still be a nonnegligible probability of the IT making a mistake somewhere.
__Questions not important enough to be in the next section__:
1. In abstract and intro, the "dimensionality" of the effect variable is described as "restricted". What does this mean for a continuous random variable?
2. What does the $pa(i)_k = ...$ mean in Def 3.1? The left side just the index of some parent of $i$, right? Why is the right side equal to this index? I understood the idea based on the description after and the Rudin reference, but I still don't understand the notation here in the Def.
3. Why even do experiments on non-standard data?
Questions For Authors: These questions are all related to the algorithm/performance (which is the weakest part, in my opinion), so depending on the answers I would increase my overall score.
_Complexity_
1. The operation in Definition 4.1 looks to me like Cycle Sort. Can the authors clarify and if so, add a citation?
2. Line 220: why use this test instead of something that can handle continuous data, like Chaterjee's coefficient or distance covariance?
3. What about using the statistic or p-value directly in the penalty instead of thresholding it?
4. Whats the complexity of the proposed algorithm? Scoring a single causal order seems to require $O(d^2)$ tests (and these have some complexity in terms of sample size $n$), and then the Cycle Sort used has a multiplicative $O(d^2)$, and then there's still the pruning phase.
_Theory_
5. Any results/assumptions ensuring a $\theta$ exists that makes the pruning phase valid?
_Experiments_
6. Are there any results over larger and/or denser graphs (preferably with density given as a proportion between 0 and 1)? Or against more standard SOTA methods that learn a CPDAG?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and insightful suggestions. Below, we address each point raised in the review.
### **Claims and Evidence**
1. **Identifiability vs. Consistency**
Thank you for highlighting this distinction. We agree that identifiability and consistency are distinct concepts. In Section 2 and Theorems 3.3/4.3, we clarified these notions. We will further revise the Introduction to explicitly differentiate them and ensure consistency in terminology.
2. **Title Specificity**
We appreciate this observation. The title will be revised to:
*"Strong and Weak Identifiability of Optimization-based Causal Discovery under Nonlinear Additive Noise Models"*
This better reflects the scope of our work.
3. **Theoretical and Experimental Extensions for Pruning**
Please refer to the discussion on pruning in the response to Reviewer XUn3 below.
### **Methods and Evaluation Criteria**
1. **Observational Data in Sachs**
The Sachs dataset is continuously updated and has different versions. The version we used contains a total of 7466 samples, including 853 observational samples. Our usage aligns with prior works like *Causal Discovery with Reinforcement Learning* (ICLR 2020) and *Ordering-Based Causal Discovery with Reinforcement Learning* (IJCAI 2021).
2. **Comparison to Other SOTA Methods**
- **O-MCMC**: Designed for discrete Bayesian networks
- **GRaSP**: Focuses on linear models
Neither method falls within the scope of the ANM studied in this paper. Additionally, CPDAG-based methods are a compromise for unidentifiable models, and their metrics are difficult to align with those of fully oriented causal graphs. Therefore, we did not include these comparisons in our study.
### **Experimental Designs and Analyses**
1. **Graph Size and Density**
For ordering-based methods, especially in the context of nonlinear problems, 20 nodes is already a significant number. In *Generalized Score Functions for Causal Discovery* (KDD 18) and *Ordering-Based Causal Discovery for Linear and Nonlinear Relations* (NeurIPS 24), simulation experiments were conducted with a maximum of only 10 nodes.
### **Essential References**
1. **Novelty of Identifiability Results**
Previous work has focused on identifiability in causal discovery, which originated from Bayesian network structure learning and was limited to distinguishing structures only up to the MEC via CPDAGs. However, Shimizu et al. (2006) showed that unique DAG identification is possible under linear causal relationships with non-Gaussian noise, leading to LiNGAM. Hoyer et al. (2008) extended this with ANM, which is identifiable under certain conditions. However, function properties in ANM affect practical identifiability, prompting our introduction of identifiability strength to guide metric usage in causal discovery practice.
### **Other Comments and Suggestions**
1. **Writing Improvements**
Thank you for your thorough comments and pointing out the areas where our writing was not up to standard. We will carefully revise our manuscript according to your suggestions.
2. **Minor Questions**
- **Dimensionality Restriction**: The dimensionality restriction is described in Eq. (5). Dividing the MSE by the variance of the effect variable eliminates the influence of dimensionality (e.g., the measurement units of the effect variable) on fitness.
- **Def 3.1 Notation**: $pa(i)_k$ denotes the index of the $k$-th parent of $V_i$.
- **Non-Standardized Data**: While Reisach et al. (2021) point out the significance of standardization, most benchmarks still use raw data. We compare both settings to validate our claims of scale variance/invariance.
### **Questions for Authors**
1. **Operation in Definition 4.1**
Our operation differs from Cycle Sort's cycle-based swaps.
2. **Independence Test Choice**
The $\chi^2$-test was chosen for simplicity and computational efficiency. While distance covariance or Chatterjee's coefficient could also be applied, they would add computational complexity. We will explore these in future work.
3. **Using p-Values Directly**
Thresholding stabilized performance in preliminary experiments. Soft penalties (e.g., weighted p-values) may improve results but require careful tuning.
4. **Algorithm Complexity**
The order-search stage involves $O(d^2)$ fitness-function evaluations, each taking $O(nd)$ time, resulting in a total complexity of $O(nd^3)$. The pruning phase requires $O(d^2)$ independence tests.
5. **Pruning Assumptions**
Please refer to the discussion on pruning in the response to Reviewer XUn3 below.
6. **Additional Experimental Results**
It is worth noting that for problems with d=10, density=4 corresponds to a density proportion of 0.89, which is already quite dense. Unfortunately, due to the limited time available for rebuttal, we were unable to conduct additional experiments in this area. We sincerely apologize for any inconvenience this may cause.
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough rebuttal! While some improvements are clear (like identifiability vs consistency, title change, clarifying novelty of identifiability results), I find other important technical parts of the rebuttal unsatisfying.
In particular, I would insist:
1. Either (i) more experimental and/or theoretical results about runtime/complexity/scalability should be given or (ii) the claims around these in the paper should be reined in and the limitations made more explicit:
- Running experiments on d>20 should be a matter of modifying one line of code and rerunning (say just twice, with d=50 and d=100). While density (actually degree) = 4 for d=10 results in a density proportion of 0.89, the proportion drops to 0.42 for d=20 and down to 0.16 and 0.08 in case experiments are run on d=50 and d=100 as requested. Expected degree should be increased with d (or density proportion should be used as the parameter and can be held fixed as d increases) for robust experimental results.
- If the complexity results described in the rebuttal are clear and rigorous, they should be added to the paper (which would help alleviate the need for more experiments).
- Looking at plot that is used to claim pareto optimality and extrapolating for what it might look like with d>20 makes me doubt that the pareto optimality claim will continue to hold. This claim should be removed (or more clearly restricted in scope to d<=20) unless more experimental evidence is given.
- The abstract seems to me to imply that GENE is applicable in the high-dimensional setting. This should be amended if more support (experiments or complexity results) is not provided.
- To be clear, I think the identifiability results are a good enough contribution, so removing the unsupported claims about scalability of GENE doesn't detract from the paper in my opinion---it just makes sure all included claims are well-supported.
2. The specific version of the Sachs data used (including a source or detailed processing instructions) needs to be included in the paper so that the experiments are reproducible.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your constructive feedback and for recognizing the core contributions of our work. We deeply appreciate your guidance in strengthening the paper's rigor. Below, we address your remaining concerns:
### **1. Scalability Claims and Experimental Validation**
We fully agree that the claims require careful support. To address this:
- **New Experiments**: We have added experiments for **d=20** under **function=MLP** with **density proportions={0.4, 0.6, 0.8}** (standardized data, 10 trials on one randomly sampled dataset), compared with the previously well-performing baseline CAM. While larger graphs (d=50/100) remain computationally prohibitive for GENE's current implementation, we will explicitly discuss this limitation.
|Method|Metric|dense=0.4|dense=0.6|dense=0.8|
|--|--|--|--|---|
| GENE | F1|0.85±0.11 |0.79±0.08|0.75±0.05|
| | SHD |20.8±9.6 |42.3±14.7|59.3±20.4|
| CAM|F1| 0.66±0.09 |0.68±0.06|0.60±0.04|
| | SHD |52.3±12.1 |71.6±19.4|110.9±25.8|
- **Complexity Analysis**: A detailed complexity breakdown (as outlined in the rebuttal) will be added to the paper.
- **Claim Adjustments**:
- The Pareto optimality claim will be restricted to **d≤20** to reflect empirical validation.
- The sentence in the abstract that may have caused confusion was intended to say that certain optimization-based methods (namely, continuous-optimization methods) have attracted extensive attention due to their scalability. We also point out that these methods face serious challenges, namely a limited scope of application and scale-variance, which we believe is explained clearly in the introduction. The abstract will be revised to remove unintended implications about high-dimensional scalability, and we will clarify that GENE's main focus is the strength of identifiability, not necessarily the high-dimensional setting.
### **2. Sachs Dataset Reproducibility**
We will specify the exact version of the Sachs dataset used, along with preprocessing steps. Code and data loading instructions will be included in the supplementary material.
Thank you for your patience for helping us improve this work. All revisions will reflect your invaluable feedback to ensure clarity, reproducibility, and rigor. | Summary: This paper proposes to further divide the structure identifiability of ANM into strong one and weak one. The authors also proposes GENE, a generic method for causal discovery that works for both cases. The method is validated by both synthetic and real life data experiments.
## update after rebuttal
Thank you for the author's response. After reading all the review comments I decide to keep my rating unchanged.
Claims And Evidence: Yes. The claims are well-supported by theory and experimental results.
Methods And Evaluation Criteria: The method makes sense to me, though I am not sure whether the way the p-value is used in Eq. 6 is optimal.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes. The experimental setting including the ablation study looks plausible to me.
Supplementary Material: Yes, I went through the Appendix A.
Relation To Broader Scientific Literature: The key contributions of the paper looks novel compared to previous findings.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
1. The paper is well-written and the idea is clearly presented.
2. The claims made are supported by theory and empirical study.
Weaknesses
1. The design of Eq. 6 seems suboptimal to me. It could probably be improved, e.g., by family-wise error control.
Other Comments Or Suggestions: A typo in line 217.
Questions For Authors: 1. Is it possible to extend to post-nonlinear setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and insightful suggestions. Below, we address each point raised in the review.
### **Claims and Evidence**
1. **Optimality of Eq. 6**
Thank you for this constructive suggestion. We agree that Equation (6) could benefit from refinements like family-wise error control, especially when conducting multiple independence tests across variable pairs. Our current design prioritizes simplicity and computational efficiency, as greedy search over orders inherently involves numerous hypothesis tests. While thresholding via p-values (with Bonferroni-like penalties) helps mitigate false positives, we acknowledge that stricter error control (e.g., hierarchical testing or weighted penalties) might improve robustness. We will explore these enhancements in future work, balancing statistical rigor with scalability. Your insight aligns with our long-term goal of refining GENE’s theoretical grounding, and we appreciate your guidance on this critical aspect.
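The Bonferroni-style family-wise control mentioned above can be sketched generically. The function below is illustrative only (it is not Eq. (6) or part of GENE); the per-test level alpha/m keeps the probability of any false rejection across all m independence tests below alpha:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Flag dependence per test while controlling the family-wise error rate.

    Each of the m tests is run at level alpha / m (Bonferroni correction),
    so P(any false rejection) <= alpha across the whole family of tests.
    """
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Hypothetical p-values from m = 3 residual-independence tests.
flags = bonferroni_reject([0.001, 0.02, 0.5], alpha=0.05)
```

With three tests the per-test threshold is 0.05/3 ≈ 0.0167, so only the first (p = 0.001) is rejected; a raw 0.05 threshold would also have rejected the second.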
### **Other Comments Or Suggestions**
1. **Typo**
We sincerely appreciate your careful reading, and we will make sure to correct this in the revised manuscript.
### **Questions For Authors**
1. **Extension to the Post-nonlinear Case**
Thank you for this forward-looking suggestion. Extending GENE to post-nonlinear (PNL) models is indeed a promising direction. While PNL models introduce additional complexity (e.g., nonlinear transformations of both causes and effects), the core idea of leveraging implicit function theory to characterize identifiability can generalize. For instance, the existence of implicit functions in PNL mechanisms could similarly mask causal directions, necessitating adaptive criteria like residual independence. However, formalizing strong/weak identifiability in PNL settings requires careful theoretical work, as identifiability hinges on stricter functional constraints. We plan to explore this extension in future work, adapting GENE’s framework to address PNL-specific challenges while retaining its unified approach to identifiability.
Diversifying Robot Locomotion Behaviors with Extrinsic Behavioral Curiosity | Accept (poster) | Summary: This paper introduces Quality Diversity Inverse Reinforcement Learning (QD-IRL), a framework that integrates quality-diversity (QD) optimization with inverse reinforcement learning (IRL) to enable robots to learn diverse locomotion behaviors from limited demonstrations. The key innovation is Extrinsic Behavioral Curiosity (EBC), which rewards agents for discovering novel behaviors, encouraging a broader exploration of the behavior space. The proposed method is tested on multiple robot locomotion tasks and improves the performance of some QD-IRL instances. The results show that EBC can surpass even expert performance in some cases.
Claims And Evidence: The claims made in the paper are well-supported by experimental results, particularly across three benchmark environments (Halfcheetah, Walker2d, Humanoid).
Methods And Evaluation Criteria: The proposed method is well-justified for the problem of diverse behavior generation in robotics. The criteria are appropriate and comprehensive for evaluating diverse locomotion behaviors.
Theoretical Claims: The theoretical foundation is solid, particularly:
- The derivation of the EBC reward bonus using an indicator function for novel behaviors.
- Lemma 3.1, which provides a probabilistic guarantee that EBC increases the likelihood of discovering new behaviors.
The proofs appear sound, and the methodology aligns well with existing QD and IRL literature.
Experimental Designs Or Analyses: The experimental setup is robust. Clear comparisons showing improvements with EBC. However, more complex imitation tasks can enhance the expressiveness of the method.
Supplementary Material: The supplementary material is well-structured, though it would be helpful to add detailed failure case analyses and sensitivity studies on hyperparameters.
Relation To Broader Scientific Literature: The paper is well-positioned within the fields of Imitation Learning (IL), Inverse Reinforcement Learning (IRL), and Quality Diversity (QD) Optimization.
Essential References Not Discussed: The related work is well discussed.
Other Strengths And Weaknesses: - Strengths
- Novel combination of IL and QD for diverse behavior generation.
- Extrinsic Behavioral Curiosity (EBC) is a simple yet effective exploration mechanism.
- Weaknesses
- The experiments in the paper need to be further strengthened.
- Source code not provided.
Other Comments Or Suggestions: - Important concepts should be briefly introduced when first mentioned, such as MAP-Elites, VPPO, etc.
- The measure function m defined in the experiments does not seem to reflect curiosity-driven exploration.
- The experiments were conducted only in a limited set of MuJoCo environments. Imitation tasks based on real-world data (e.g., human motion capture data) could better highlight the advantages of the proposed method.
Questions For Authors: - How is a sufficiently explored region defined?
- From the perspective of imitation learning theory, how can the learned policy outperform the expert policy?
- Is there an optimal balance between imitation rewards and EBC rewards?
- What happens if the expert demonstrations are of low quality?
- In Figure 2, the improvement in reward for EBC in the Halfcheetah and Walker2d tasks does not seem significant. Is this related to the difficulty of the tasks?
- Some more concise mutual-information-based methods[1-5] also achieve diverse behavioral policies. What are the advantages of QD-IRL compared to these approaches?
[1] Eysenbach B, Gupta A, Ibarz J, et al. Diversity is all you need: Learning skills without a reward function[J]. arXiv preprint arXiv:1802.06070, 2018.
[2] Li Y, Song J, Ermon S. Infogail: Interpretable imitation learning from visual demonstrations[J]. Advances in neural information processing systems, 2017.
[3] Strouse D J, Baumli K, Warde-Farley D, et al. Learning more skills through optimistic exploration[J]. arXiv preprint arXiv:2107.14226, 2021.
[4] Peng X B, Guo Y, Halper L, et al. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters[J]. ACM Transactions On Graphics (TOG), 2022.
[5] Fu H, Tang K, Lu Y, et al. Ess-InfoGAIL: Semi-supervised imitation learning from imbalanced demonstrations[J]. Advances in Neural Information Processing Systems, 2023.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## We feel gratitude for the valuable insights and suggestions, and below is our detailed response.
- - -
**Q1**: More complex imitation tasks can enhance the expressiveness of the method.
**A1**: We have added experiments with additional tasks and measure. Please refer to our response to Q3 of reviewer Cxuk.
- - -
**Q2**: It would be helpful to add sensitivity studies on hyperparameters.
**A2**: We have conducted a hyperparameter study for the $q$ hyperparameter, the scale of the EBC reward; these results are mentioned in Appendix E.4. Other hyperparameters follow the settings in original implementations since these are not newly introduced by our method.
- - -
**Q3**: Source code not provided.
**A3**: We will open-source the code and seeds when the paper is published.
- - -
**Q4**: Important concepts should be briefly introduced, e.g. MAP-Elites, VPPO, etc.
**A4**: The details of MAP-Elites can be found in Section 2.1. For VPPO, we only briefly mention it in Section 2.1 as Vectorized PPO in the context of PPGA and in Appendix A2 l.671-673. We will expand the explanation both in the main text and in the appendix.
- - -
**Q5**: The measure function **m** defined in the experiments does not seem to reflect curiosity-driven exploration.
**A5**: The measure value itself is not the curiosity. Rather, whether or not a region of the measure space has been visited represents the behavioral-level curiosity.
- - -
**Q6**: Imitation tasks based on real-world data (e.g., human motion capture data) could better highlight the advantages of the proposed method.
**A6**. We have conducted additional experiments on various imitation learning locomotion tasks (see our response to Q3 of reviewer Cxuk), and we are happy to extend our method to imitation tasks based on real-world data in our future work.
- - -
**Q7**: How is a sufficiently explored region defined?
**A7**: The "sufficiently explored region" is mentioned on l.240, page 5; we will remove "sufficiently" from the text. Basically, as soon as a region is visited, the EBC reward is zero for subsequent visits, to facilitate exploitation of high-performing policies in that region.
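To make the visit-gated bonus described above concrete, here is a minimal sketch (our own illustration, not the paper's implementation; `cell_of` is a hypothetical uniform discretization of a measure space in $[0,1]^d$):

```python
class EBCBonus:
    """Binary curiosity bonus: q for the first visit to a measure-space region, 0 afterwards."""

    def __init__(self, q=2.0, resolution=10):
        self.q = q
        self.resolution = resolution
        self.visited = set()

    def cell_of(self, measure):
        # Hypothetical discretization: map each measure dimension in [0, 1] to a grid index.
        return tuple(min(int(m * self.resolution), self.resolution - 1) for m in measure)

    def __call__(self, measure):
        cell = self.cell_of(measure)
        if cell in self.visited:
            return 0.0   # region already visited: no further bonus
        self.visited.add(cell)
        return self.q    # first visit: scaled bonus q

bonus = EBCBonus(q=2.0)
r1 = bonus([0.31, 0.72])  # first visit to this cell -> 2.0
r2 = bonus([0.33, 0.74])  # same cell -> 0.0
```

This also illustrates why the bonus is bounded in $[0, q]$: the unscaled reward is either 0 or 1.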
- - -
**Q8**: From the perspective of imitation learning theory, how can the learned policy outperform the expert policy?
**A8**: When we talk about performance, we typically mean the QD score. Hypothetically, it is possible for the learned policy to achieve a higher cumulative reward, since the expert need not be the optimal policy. However, we do not claim that the learned policy outperforms the expert. In Humanoid (see Fig.2), we do observe a higher QD score and coverage compared to PPGA with the true reward. However, as shown in Fig.4, none of the techniques outperforms PPGA with the true reward combined with EBC.
- - -
**Q9**: Is there an optimal balance between imitation rewards and EBC rewards?
**A9**: We studied the scale of the EBC reward, $q$, in Appendix E.4; $q=2$ appears to be the best choice. A sufficiently high $q$ is needed for behavioral-level exploration to encourage a higher QD score. It is impossible to find the exact optimal choice, since $q$ is continuous with infinitely many choices and the optimum varies from task to task, so we chose $q=2$ in our experiments.
- - -
**Q10**: What happens if the expert demonstrations are of low quality?
**A10**: Because the demonstrations are selected to be diverse, the expert data are already sub-optimal. If their performance is much lower than even a locally optimal solution, then policy improvement seems difficult. However, the same can be said for all imitation learning methods, since the demonstrations are assumed to be of desirable quality (otherwise there is little point in imitating them).
- - -
**Q11**: The improvement in the Halfcheetah and Walker2d tasks does not seem significant.
**A11**. We have performed a **Tukey HSD Test** on Table 7, comparing our EBC-enhanced algorithms to their counterparts, and we find that 7/9 effects are positive and 5 of these are significant with $p \leq 0.05$. Also refer to our answer to Q4 of U4H2 for further explanation.
- - -
**Q12**: Mutual-information-based methods also achieve diverse behavioral policies. What are the advantages of QD-IRL?
**A12**: The mentioned references assume it is possible to sample skills from a skill distribution, and then condition policies based on the given skill variables. In our work, the measures are not known in advance and are determined from the evaluation. The key difference is that we can optimise diversity across a particular measure rather than be limited to a particular skill distribution. So typically, unless the distribution of said techniques is uniform across the entire feasible space, QD algorithms will have more emphasis on diversity. We will discuss this difference in the updated paper.
- - -
## We hope our response fully addresses your concerns.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for answering my questions and for providing additional experimental results. Although the task demonstration could be stronger, this work demonstrates novelty and potential. Therefore, I have revised my score to accept (4).
---
Reply to Comment 1.1.1:
Comment: The authors are pleased that your concerns have been addressed. Thank you again for your time and your effort to review our submission, as well as your valuable suggestions to improve our paper. | Summary: This work proposes a new paradigm called Quality-Diversity Inverse Reinforcement Learning (QD-IRL) as well as a new component to encourage the acquisition of novel and diverse behaviors, called Extrinsic Behavioral Curiosity (EBC). The goal of the QD-IRL framework is to enable the agent to learn diverse and performant policies from limited demonstration data and can be integrated on top of existing imitation learning (IL) approaches. Thus, the authors can combine the best of both approaches, overcoming the problems of vanilla IL, which struggles to learn diverse behavioral patterns. This framework achieves that by using IRL to learn the reward function, and then it utilizes Differentiable QD, specifically, QD-RL, to optimize the solutions archive. The framework is evaluated on locomotion tasks based on the performance gains it offers when integrated into existing IL methods.
Claims And Evidence: Most of the claims are supported by clear evidence. However, the proposed framework is quite generic, and thus, I believe the authors should have conducted experiments on more diverse tasks rather than focusing solely on locomotion. The selection of a more diverse set of tasks would have contributed to a more constructive evaluation.
Methods And Evaluation Criteria: Aside from the comments above, which also apply here, I believe the evaluation criteria are on point. In general, for QD algorithms the coverage and QD score present a clear overview of the performance of such algorithms.
Theoretical Claims: No problems were found in the theoretical claims of the manuscript.
Experimental Designs Or Analyses: I believe there is some vital information missing from the manuscript. The authors never refer to how they construct the archive for the QD algorithm. Additionally, there is not a single reference to what behavior descriptor they utilize for each experiment, so the reader can understand how solutions diversify. Lastly, even though the locomotion tasks are well-known, I think it is important for the authors to add more information about the tasks such as the policy's inputs.
Supplementary Material: The supplementary material is thorough and provides good insight into the proposed approach and the implementation details.
Relation To Broader Scientific Literature: This work is novel enough, and it presents a very interesting extension of IRL to the QD optimization paradigm. Moreover, the fact that the proposed approach can be implemented on top of existing algorithms is very positive. Nonetheless, I am a little concerned about the results in Fig. 2. The general improvements in performance EBC has to offer, in some cases are very minimal. An example would be the Halfcheetah and Walker2d experiments, where the QD increase in most of the algorithms is not that significant.
Essential References Not Discussed: The authors have provided all the essential references required for the reader to comprehend the proposed approach.
Other Strengths And Weaknesses: One other strong point of the manuscript is that it is well written and most of the figures are very intuitive. On the other hand, it would have been very positive if the authors had provided some videos of the optimal solutions in the archive for the reader to understand how diverse the solutions are, since for such high-dimensional tasks, it is hard to illustrate the outcome of each solution in the archive.
Other Comments Or Suggestions: No other comments.
Questions For Authors: * Are the EBC values bounded?
* Could the authors explain how they did the visualization for Fig. 3 and elaborate a bit more on what this figure represents?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## We appreciate the valuable insights and suggestions, and below is our detailed response to address your concerns.
- - -
**Q1**: More diverse tasks other than locomotion will be beneficial.
**A1**: While we limit the scope of the experiments to robot locomotion tasks, we agree that including more diverse tasks would better demonstrate the capability of our algorithm. In particular, we have extended our method to more diverse tasks and will add these results to the revised paper. The table below is the same one mentioned in our response to Q3 of reviewer Cxuk (reproduced here for convenience):
| Game | Measure | Model | QD-Score | Coverage |
|-----------|------------------|-----------|-----------------------|--------------------|
| hopper | jump | GAIL | 64017±5558 | 100.0±0.0 |
| hopper | jump | GAIL-EBC | 81987±6890 | 100.0±0.0 |
| humanoid | angle | GAIL | 63771±7113 | 53.0±3.0 |
| humanoid | angle | GAIL-EBC | 170273±19711 | 96.0±0.0 |
| humanoid | jump | GAIL | 18812±6412 | 41.0±31.0 |
| humanoid | jump | GAIL-EBC | 90631±61894 | 85.0±15.0 |
| ant | feet_contact | GAIL | -484468±3264 | 75.84±0.0 |
| ant | feet_contact | GAIL-EBC | -52521±383213 | 77.28±3.2 |
- - -
**Q2**: The authors never refer to how they construct the archive for the QD algorithm.
**A2**: We use the archive construction rules based on MAP-Elites, which is mentioned in section 2.1.
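For concreteness, the MAP-Elites insertion rule we rely on can be sketched as follows (our own illustrative sketch, not the paper's code): each measure cell keeps only the highest-fitness solution seen so far.

```python
def insert(archive, cell, solution, fitness):
    """MAP-Elites rule: keep one elite (best-fitness solution) per measure cell."""
    incumbent = archive.get(cell)
    if incumbent is None or fitness > incumbent[1]:
        archive[cell] = (solution, fitness)
        return True   # archive improved (new cell filled, or better elite found)
    return False      # candidate rejected: incumbent elite is at least as good

archive = {}
insert(archive, (3, 7), "policy_a", 1.5)   # new cell -> added
insert(archive, (3, 7), "policy_b", 0.9)   # worse than incumbent -> rejected
insert(archive, (3, 7), "policy_c", 2.1)   # better -> replaces the elite
```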
- - -
**Q3**: It would be helpful to introduce the behavior descriptor for each experiment, and the details of each tasks.
**A3** : The information about the behavior descriptor is referenced at various locations in the paper under the terminology “measure” (which is the terminology that was used in the PPGA paper). Specifically, the measure used is highlighted in Section 4 as follows:
“the measure function maps the policy into a vector where each dimension indicates the proportion of time a leg touches the ground.” Moreover, we have clarified the definition of the state (policy inputs) and action (policy outputs) of each task in our revised paper.
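As a hedged sketch of the measure just quoted (our illustration, with a hypothetical `contact_flags` log of per-step, per-leg ground-contact booleans):

```python
def feet_contact_measure(contact_flags):
    """Measure: for each leg, the proportion of timesteps the foot touches the ground.

    contact_flags: list over timesteps; each entry is a tuple with one 0/1 flag per leg.
    """
    num_steps = len(contact_flags)
    num_legs = len(contact_flags[0])
    return [sum(step[leg] for step in contact_flags) / num_steps
            for leg in range(num_legs)]

# A 4-step episode of a 2-legged walker: each leg touches the ground in 3 of 4 steps.
m = feet_contact_measure([(1, 0), (1, 1), (0, 1), (1, 1)])  # -> [0.75, 0.75]
```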
- - -
**Q4**: The general improvements in performance EBC has to offer, in some cases are very minimal.
**A4**: Actually, when comparing performance, we should compare each base method with the same method plus EBC (e.g., GAIL vs. GAIL-EBC) in terms of the QD-score. In the QD-score plots, only 2 of the 9 comparisons (3 pairs $\times$ 3 environments) are not significantly better, namely DiffAIL-EBC vs DiffAIL and VAIL-EBC vs VAIL on Halfcheetah. We highlight that EBC improves the QD score of GAIL, VAIL, and DiffAIL by up to 185%, 42%, and 150%, respectively, and that the standard deviations are very small. We have performed a **Tukey HSD Test** on Table 7, comparing our EBC-enhanced algorithms to their counterparts, and we find that 7/9 effects are positive and 5 of these are significant with $p \leq 0.05$.
- - -
**Q5**: It would be helpful to provide videos to illustrate the learned diverse solutions.
**A5**: Thanks for the suggestion. One of the benefits of diversity is that the archives can be used for adaptation to new environments. In the updated submission, we will include a video of adaptation to new environments, which demonstrates the types of behaviors in the evolved archives as well as how this diversity can be exploited.
- - -
**Q6**: Are the EBC values bounded?
**A6**: Since the unscaled EBC reward is either 0 or 1, the scaled EBC rewards are bounded in $[0,q]$, where $q$ is the hyperparameter that controls the weight.
- - -
**Q7**: Could the authors explain how they did the visualization for Fig. 3 and elaborate a bit more on what this figure represents?
**A7**: Fig.3 represents the overall QD performance of the archive using a heatmap. The colors represent the fitness levels (quality), while the spread across the two-dimensional measure/behavior space represents the diversity.
- - -
## We sincerely hope these responses fully address your concerns. | Summary: This paper proposes to combine Quality Diversity (QD) algorithms with Inverse Reinforcement Learning (IRL) problems. The authors introduce Quality Diversity Inverse Reinforcement Learning (QD-IRL), a method that uses rewards estimated from demonstrations (via GAIL, VAIL, and DiffAIL) with the PPGA quality diversity algorithm. The key contribution is Extrinsic Behavioral Curiosity (EBC), a reward mechanism that encourages exploration in unvisited areas of the measure space. Their experiments on three MuJoCo locomotion tasks (Halfcheetah, Walker2d, and Humanoid) to show that EBC significantly improves the performance of QD-IRL methods. The authors also demonstrate that EBC can enhance standard PPGA when applied to the true reward function.
Claims And Evidence: The primary claim about EBC improving QD-IRL methods is supported by the experimental results. The performance improvements across metrics like QD-score, coverage, and reward are clearly demonstrated in the figures and tables.
However, the claim that QD approaches help with "adaptation in changing environments" (line 46) and "unpredictable real-world scenarios" (line 14) lack supporting evidence.
The authors do not provide experiments showing how the learned diversity could be practically useful, e.g. for scenarios like damage adaptation or hierarchical control, as done in prior works [1,2].
Such experiments would be useful to know if the EBC mechanism can really make agents more resilient.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally appropriate for the problem. The QD metrics (QD-Score, Coverage, Best Reward, Average Reward) are standard and suitable for evaluating quality diversity algorithms.
However, the evaluation would be stronger if it included more diverse environments beyond the three locomotion tasks that all share similar objectives (maximizing forward progress) and measures (foot contact patterns). Adding tasks with different properties, such as an Ant Omni task from [5], would better demonstrate the generalizability of the approach.
Also, there is a problem with the method design. While GAIL and VAIL rewards are updated at every iteration, the fitnesses stored in the archive are not regularly updated (as acknowledged in the limitations section, lines 907-910). This means that solutions in the archive are evaluated based on potentially outdated reward functions. This design choice could lead to poor performance because the archive might contain solutions that were highly rated under an old reward function but would score poorly under the current one. For example, AURORA [8] regularly updates the descriptors of the solutions stored in the archive (as the measure function slightly changes every iteration).
Theoretical Claims: While the proof of Lemma 3.1 appears to be correct, I find one Lemma assumption intriguing: the MDP reward appears to be non-Markovian as it depends on the full trajectory (episode measure).
Experimental Designs Or Analyses: The authors appropriately test their approach with three different IRL methods (GAIL, VAIL, and DiffAIL) with and without EBC and compare against true reward baselines.
The experiments effectively show that EBC improves the QD-Score and Coverage in the 3 environments under study.
It would also be worth mentioning that the BestReward of EBC variants takes longer to converge (which is normal, as PPGA now optimizes for $fitness+q\times EBC$).
The computational requirements seem overly demanding (48 hours on 4 A40 GPUs per experiment).
Supplementary Material: I read all the appendix. In particular, I found the hyperparameter study from E.3 interesting. Also, I really enjoyed the demonstrators selection illustration (Figure 6), and having a detailed reminder about PPGA (A.2).
Relation To Broader Scientific Literature: To the best of my knowledge, this paper is the first to apply quality diversity to an inverse reinforcement learning setting. The authors implement their approach using the recent PPGA algorithm and introduce the EBC reward to enhance exploration. However, as presented, the EBC reward seems limited to the Gradient Arborescence subfamily of QD approaches (like CMA-MEGA and PPGA) and may not be applicable to other QD algorithms such as MAP-Elites, PGA-ME, and DCRL [3].
Essential References Not Discussed: A notable omission is the connection to improvement emitter mechanisms from CMA-ME [6], which use a similar approach to encourage exploration in empty areas of the archive.
Other Strengths And Weaknesses: Strengths:
- The proposed EBC reward is clear, sound, and well-motivated
- The results effectively show improvements in coverage and QD-score
Weaknesses:
- The authors describe their work as a "framework" (line 97 and abstract), but it's only applied to a specific QD algorithm (PPGA) and relies heavily on operations specific to gradient arborescence approaches. Indeed, the introduced EBC is only used when “branching solutions” and when “walking the search policy”, which are two steps very specific to PPGA and other gradient arborescence approaches like CMA-MEGA. Given the strong reliance on PPGA and gradient arborescence methods, the authors might consider renaming the algorithm to MEGA-IRL or PPGA-IRL rather than presenting it as a general framework
- The lack of experiments demonstrating the practical utility of the learned diversity (e.g. damage adaptation, hierarchical control...)
- Figure 4 is redundant with Figure 2 - the additional variant could simply have been included in Figure 2
- policy archives in Figure 3 have inconsistent colorbars, making the results difficult to interpret.
Other Comments Or Suggestions: - Consider reporting CCDF plots [4] (also called "archive profiles", see [5, 2]) to better characterize the distribution of episode returns in the final policy archives.
- Line 45 2nd column, the reference is likely meant to be [7]
- The reference to differential quality diversity (DQD) appears twice in the bibliography
- The size of the markers in Figure 2 makes it difficult to read
- Lines 377-379 should clarify that the baselines are only detailed in the Appendix
- Line 279: absolute values appear to be missing around $c_0$
Questions For Authors: - Lemma 3.1: the MDP reward appears to be non-Markovian, as it depends on the full trajectory. Is that normal? How does this affect the theoretical guarantees of your approach?
- Each experiment takes 48 hours to run on 4 A40 GPUs (line 746). Why does it take so long to run? How much faster would it be to run with other QD algorithms like CMA-MEGA?
- While you justify the usage of Quality-Diversity algorithm by the fact that it "allows adaptation in changing environments" (line 46), you do not provide any experiments showing why the learned diversity is useful. Could you demonstrate how the repertoires you obtain can be used (e.g., for damage adaptation, hierarchical control) as done in previous works [1,2]?
- You describe your approach as a "framework" (line 97 and abstract), but it's only applied to PPGA and relies on steps specific to gradient arborescence methods. How would EBC be implemented in other QD algorithms that don't use the same branching and walking mechanisms?
[1] Chalumeau, Felix, et al. "Neuroevolution is a competitive alternative to reinforcement learning for skill discovery."
[2] Grillotti, Luca, et al. "Quality-diversity actor-critic: learning high-performing and diverse behaviors via value and successor features critics."
[3] Faldor, Maxence, et al. "Synergizing quality-diversity with descriptor-conditioned reinforcement learning."
[4] Batra, Sumeet, et al. "Proximal policy gradient arborescence for quality diversity reinforcement learning."
[5] Flageat, Manon, et al. "Benchmarking quality-diversity algorithms on neuroevolution for reinforcement learning."
[6] Fontaine, Matthew C., et al. "Covariance matrix adaptation for the rapid illumination of behavior space."
[7] Cully, Antoine, et al. "Robots that can adapt like animals."
[8] Grillotti, Luca, and Antoine Cully. "Unsupervised behavior discovery with quality-diversity optimization."
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Thank you for the valuable suggestions. Here is our response to questions and concerns.
- - -
**Q1**. This paper lacks evidence support for the significance of QD (e.g. the ability of adaptation).
**A1**. The claim that QD approaches help with adaptation in changing environments is reasonable and supported by prior works. We do notice that on l.46 the given reference does not adequately reflect the adaptation scenario, so we will provide a few more suitable references. While the claim is not central to our work, we performed few-shot adaptation tasks for Humanoid-hurdles and find that GAIL-EBC significantly outperforms GAIL on the return, and is comparable to the original QDAC performance of Grillotti et al.:
| Algorithm | height 0.026 | height 0.053 | height 0.079 | height 0.105 |
|---|---|---|---|---|
| GAIL | 864.37$\pm$331 | 329$\pm$65 | 198$\pm$26 | 107$\pm$65 |
| GAIL-EBC | 2742$\pm$2252 | 396$\pm$105 | 275$\pm$23 | 130$\pm$89 |
- - -
**Q2**. More diverse IL tasks are needed to demonstrate the generalizability of the method.
**A2**. We have conducted additional experiments with diverse measures. Please refer to our response to Q3 of reviewer Cxuk for details.
- - -
**Q3**. While GAIL and VAIL rewards are updated at every iteration, the fitnesses stored in the archive are not regularly updated (as acknowledged in the limitations section, lines 907-910).
**A3**. That is indeed a limitation of our work that we have already acknowledged, and we are happy to address this issue in future work. Moreover, Algorithm 2, which dynamically adjusts the fitness scores, mitigates the impact of this problem by computing a running average of past elite fitnesses.
- - -
**Q4**. EBC reward is non-Markovian and depends on whole trajectory, which may affect the theoretical guarantee.
**A4**. The combination of non-Markovian rewards and PPO has been explored and supported in prior work. For example, [1] validated that episode-based reward structures, when properly decomposed into per-step rewards, enhance PPO's convergence and sample efficiency. Hence, we safely assume the convergence guarantee of PPO in our Lemma 3.1. Empirically, we have also validated the effectiveness of the EBC reward.
[1] Arjona-Medina, Jose A., et al. "Rudder: Return decomposition for delayed rewards." Advances in Neural Information Processing Systems 32 (2019).
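As an illustration of the decomposition mentioned above (our own sketch, not RUDDER itself), an episode-level bonus can be redistributed uniformly over the steps so that a step-wise algorithm like PPO can consume it while the episode return is preserved:

```python
def redistribute(step_rewards, episode_bonus):
    """Spread an episode-level bonus uniformly across the episode's steps,
    so the per-step reward stream sums to the original return plus the bonus."""
    per_step = episode_bonus / len(step_rewards)
    return [r + per_step for r in step_rewards]

# A 3-step episode with return 2.0 receives an episode-level bonus of 2.0;
# the shaped per-step rewards approximately sum to 4.0.
shaped = redistribute([1.0, 0.0, 1.0], episode_bonus=2.0)
```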
- - -
**Q5**. BestReward of EBC variants takes longer to converge (which is normal, as PPGA now optimizes for fitness+q×EBC).
**A5**. In general, the higher the coverage, the longer it will take to evolve the elite solutions for all the covered cells. We will mention this in the text on page 7, l.364, in the text about Fig.2.
- - -
**Q6**. The computational requirements seem overly demanding (48 hours on 4 A40 GPUs per experiment).
**A6**. We apologize that this part of the text is unclear. We use 1 GPU per experiment but run multiple experiments at the same time. We will rewrite this part.
- - -
**Q7**. This method should not be named as a framework. As presented, the EBC reward seems limited to the Gradient Arborescence subfamily of QD approaches (like CMA-MEGA and PPGA) and may not be applicable to other QD algorithms.
**A7**. This is a limitation that we have acknowledged in Appendix G. Please note that we have implemented the EBC reward on top of different imitation learning methods, which is why it is usefully conceived as a framework. While MAP-Elites does not involve an RL problem, PGA-ME and DCRL seem compatible with our framework. For instance, the EBC reward would allow taking a step in the direction of the policy which generates new measures. Given the wide range of experiments, we leave this to future work.
- - -
**Q8**. The connection of CMA-ME and EBC regarding the exploration should be discussed.
**A8**. We claim that EBC allows a **policy gradient** algorithm to update the policy towards empty behavioral areas **directly**, which is more effective than the exploration of CMA-ME, which does not utilize gradient information. For more details, please refer to Q2 from Reviewer Cxuk.
- - -
**Q9**. Figure 4 is redundant with Figure 2 - the additional variant could simply be included in Figure 2.
**A9**. Figure 4 provides a more direct comparison, showing specifically that EBC can improve PPGA with the true reward over the condition without EBC. We separated the figures so they are easier to discuss in dedicated paragraphs in the text.
- - -
**Q10**. Policy archives in Figure 3 have inconsistent colorbars.
**A10**. Please note that Figure 3 is plotted only for comparing the coverage metric, not the fitness (l.392). Therefore, the color (which represents fitness) is not our focus.
- - -
**Q11**. Consider reporting CCDF plots to better characterize the distribution of episode returns in the final policy archives.
**A11**: Thanks for the suggestion. We will report the CCDF plots in our revised paper.
- - -
## We sincerely hope these responses fully address your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed answers, these address most of my concerns. I have updated my score for this paper. Depending on how the authors address the remaining issues, I may adjust my evaluation again before final decision.
----
There is still one major point I disagree on:
> A7: […] While MAP-Elites does not have an RL problem, PGA-ME and DCRL seem compatible for our framework. For instance, the EBC reward would allow to take a step in the direction of the policy which generates new measures. […]
I agree the EBC reward can technically be integrated to PGA-ME and DCRL. However, the way it would be integrated to PGA-ME and DCRL is significantly different than from PPGA. In PGA-ME and DCRL, this reward would be added to the **emitter mechanism**, whereas in this work, it is added to the **gradient arborescence** mechanism. If you plan on adding it to the emitter mechanism, then you also need to compare to the improvement emitter from CMA-ME which has a similar mechanism.
I do not think you need to compare to PGA-ME, DCRL, or CMA-ME. However, the claimed contribution, currently listed as EBC “can also significantly improve existing SOTA QD-RL algorithm” (line 107-108), is **too strong** compared to what you do in the rest of the paper, as you **only apply EBC to the gradient arborescence mechanisms of a specific algorithm**.
I believe **your contribution is valuable**, but would be more accurately described as: *you introduce an EBC reward that improves exploration in Gradient Arborescence QD algorithms; and you show that, when tested on the SOTA gradient arborescence algorithm PPGA, it enhances its exploration capabilities.*
----
Other comments:
> We claim that EBC allows a policy gradient algorithm to update the policy towards empty behavioral areas directly, which is more effective compared with the exploration of CMA-ME which doesn't utilize the gradient information.
I mentioned CMA-ME in my initial review because the improvement emitter in CMA-ME already contains a mechanism to explicitly maximize behavioral diversity. While I acknowledge this mechanism differs from your proposed EBC reward, and **I don't believe you need to conduct direct comparisons with CMA-ME**, it's important to acknowledge this related prior work in your Related Work section. Your paper would be strengthened by explicitly discussing how your EBC approach relates to and differs from these existing behavioral diversity techniques.
> That's indeed the limitation of our work that we already acknowledged, and we are happy to address this issue in our future work. Moreover, Algorithm 2, which dynamically adjust the fitness scores, mitigates the impact of this problem by computing a running average of past elite fitnesses.
Thank you very much for the clarification. Indeed, the mechanism from MAP-Elites Annealing probably alleviates this issue. I think this limitation, together with your provided explanation from this rebuttal, should appear in the main paper (and not in appendix) as they are both quite important.
---
Reply to Comment 1.1.1:
Comment: The authors are pleased to see that most of the concerns have been addressed. Below you can find our response to the remaining issues.
**Q1.** EBC in QD vs Arborescence QD algorithms.
**A1.** The authors agree that the current framing of the contribution is too general given that we do not have supporting experiments. Therefore, we will rewrite where needed in the text that the EBC reward improves exploration in the context of Gradient Arborescence QD algorithms (rather than QD-RL algorithms in general).
**Q2.** Your paper would be strengthened by explicitly discussing how your EBC approach relates to and differs from these existing behavioral diversity techniques.
**A2.** The authors agree that a more detailed comparison would be beneficial, and particularly note that there is not sufficient discussion of related works about behavioral diversity methods in the context of QD. We will therefore improve the related work section accordingly.
**Q3.** Clarification on the limitation of fitness computation.
**A3.** The authors are pleased that the explanation of how we use Algorithm 2 has clarified that we do alleviate the problem. Indeed, the authors agree that it is best to highlight the limitation as well as how Algorithm 2 can alleviate the problem in the main text. So the authors will update the paper accordingly, along with suitable background references to related problems and solutions where needed.
Thanks again for your valuable suggestion and we hope your concerns are fully addressed. | Summary: Existing imitation learning algorithms fail to learn diverse behaviors. To address this, the paper introduces the QD-IRL framework that applies QD optimization algorithms to IRL problems. To further improve the exploration of QD-IRL, the paper introduces Extrinsic Behavioral Curiosity (EBC) to encourage policies to explore areas that are not covered. Experimental results show that EBC achieves better performance.
Claims And Evidence: The claim is overall clear. However, similar frameworks seem to have been proposed in existing papers, e.g., Yu et al. (2024) cited in the paper. It would be better to have a discussion about the relationship.
The claim of the limitation of PPGA is not convincing to me. The paper claims that *the fitness term $f$ heavily influences PPGA’s search policy update direction* and *PPGA frequently becomes stuck in local regions*. However, as is shown in Eq. (2) in the paper, PPGA optimizes a weighted sum of fitness term $f$ and behavior measure terms $m_j$. The weights $c$ are optimized by CMA-ES, which tries to find the best weights to maximize the QD-Score improvement. Therefore, as the QD-Score considers both quality and diversity, PPGA is expected to adapt the weights $c$ dynamically to encourage the policies to explore the areas with low quality or no policy to improve QD-Score, rather than heavily optimizing the quality. Thus, PPGA does not seem to have these issues. It is not fully clear to me why EBC performs well.
Methods And Evaluation Criteria: The proposed method overall makes sense for the problem.
Theoretical Claims: The paper provides a slightly trivial lemma showing that optimizing the EBC reward helps cover more areas in the behavior space.
Experimental Designs Or Analyses: - The experiments are not sufficient. The methods are only evaluated on three MuJoCo tasks with the same measure function (i.e., the proportion of time each leg touches the ground) on the same dimension (2). What are the performances on the tasks with higher or lower dimensions of measures (e.g., Ant and Hopper)? What are the performances on the tasks with other types of measure functions?
- The impact of the demonstrations is not analyzed. It may be critical to the performance of the method. What is the impact of the number of the demonstrations? Do they need to be diverse? What is the impact of the quality of the demonstrations?
- The advantages of the QD algorithms over the classical IRL algorithms are not analyzed. It would be helpful to add the pure IRL algorithms (without PPGA) with and without EBC bonus as baselines as well.
- I am glad that the QD metric values and their standard deviations are presented in Table 7 in the appendix. However, as each experiment was only conducted with 3 seeds, the standard deviations (and the averaged performance) in both Figure 2 in the body and Table 7 in the appendix may not be valid. It would be helpful to run more (>4) seeds and utilize the statistical tests to show the significance of the results.
- The seeds are not shown, and the code is not available currently, which may be harmful to the reproducibility of the results.
Supplementary Material: The authors did not submit supplementary material.
Relation To Broader Scientific Literature: According to the results, the proposed EBC method may also improve the performance (coverage) of classical QD-RL algorithms.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: The cross (x) markers in the figures are mixed up. It would be better to improve their presentation.
Questions For Authors: - PPGA also optimizes behavior diversity and aims to maximize the QD-Score. Why does PPGA with EBC achieve a better QD-Score than PPGA without EBC?
- What is the relationship with Yu et al. (2024) cited in the paper? It would be better to have a discussion.
- What are the performances on the tasks with other dimensions and other types of measure functions?
- What is the impact of the demonstrations?
- In Humanoid, DiffAIL-EBC even performs better than PPGA with true reward. Can you provide an analysis of it?
- Is it necessary to compare the average rewards of the methods? According to the definition, when a policy with a new behavior but a relatively low reward is added to the QD archive (which is what we intend to do), the average reward will fall instead. This conflicts with the goal of QD.
- Will the code and the seeds be open-sourced when the paper is published? This may be important for the reproducibility and validity of the experimental results.
I would be happy to raise my score if the concerns and questions are answered.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Thanks for the valuable insights and suggestions.
- - -
**Q1.** It would be better to discuss the relationship with Yu et al.'s work.
**A1.** The key difference between Yu’s work and our work lies in two aspects:
- Yu’s work adopts a **single-step** reward bonus (calculated per (s,a) pair), while our method uses **episode-based reward** bonus aligned with whole-episode measurements.
- Yu’s work focuses on QD-IL, whereas our framework extends to QD-RL.
We will add these discussions in our revised paper.
- - -
**Q2.** Why use EBC given CMA-ES already handles quality/diversity?
**A2.** The EBC reward supplements the PPGA algorithm: while PPGA works by evolving random coefficients for each measure dimension, the EBC reward, added to the fitness, encourages the policy to unlock new areas of the measure space. Because the bonus is independent of the local measure coordinates, it enables a more direct, global, and efficient search for new behavioral areas.
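As an illustration of the episode-based mechanism described above, here is a minimal sketch of a coverage-style bonus added to the episode fitness; the function name, binning scheme, and bonus magnitude are all hypothetical and not taken from the paper:

```python
def episode_coverage_bonus(measure, visited, bins=10, bonus=1.0):
    """Toy episode-based bonus: reward an episode whose whole-episode
    behavior descriptor (measure, each component in [0, 1]) lands in a
    previously unvisited cell of a discretized measure space."""
    cell = tuple(min(int(m * bins), bins - 1) for m in measure)
    if cell not in visited:
        visited.add(cell)       # new behavioral area unlocked
        return bonus            # bonus is added to the episode fitness
    return 0.0                  # already-covered region: no extra incentive

visited = set()
r1 = episode_coverage_bonus([0.12, 0.80], visited)  # first visit to cell (1, 8)
r2 = episode_coverage_bonus([0.13, 0.81], visited)  # same cell: no bonus
```

The bonus depends only on whether the visited cell is new, not on the local measure coordinates, which matches the "direct, global" search argument above.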
- - -
**Q3.** More experiments are needed on tasks with different measure dimensions and types of measures.
**A3.** We tested EBC across tasks with varying measure dimensions (**m**):
- Jump (1D: lowest foot height)
- Angle (2D: body angle)
- Feet-contact (4D: leg-ground time)
GAIL-EBC consistently outperforms GAIL in all scenarios:
| Game | Measure | Model | QD-Score | Coverage |
|-----------|------------------|-----------|-----------------------|--------------------|
| hopper | jump | GAIL | 64017±5558 | 100.0±0.0 |
| hopper | jump | GAIL-EBC | 81987±6890 | 100.0±0.0 |
| humanoid | angle | GAIL | 63771±7113 | 53.0±3.0 |
| humanoid | angle | GAIL-EBC | 170273±19711 | 96.0±0.0 |
| humanoid | jump | GAIL | 18812±6412 | 41.0±31.0 |
| humanoid | jump | GAIL-EBC | 90631±61894 | 85.0±15.0 |
| ant | feet_contact | GAIL | -484468±3264 | 75.84±0.0 |
| ant | feet_contact | GAIL-EBC | -52521±383213 | 77.28±3.2 |
- - -
**Q4.** How do demonstration (demo) quantity, diversity, and quality impact performance?
**A4.** Experiments with varying demo counts in Humanoid environment:
| Demos | Model | QD-Score | Coverage |
|-------|-----------|------------------|---------------|
| 10 | GAIL | 2576765±82806 | 68.68±3.2 |
| | GAIL-EBC | 5822582±254060 | 97.0±0.16 |
| 4 | GAIL | 1886725±551004 | 88.34±4.3 |
| | GAIL-EBC | 5704650±150716 | 97.84±0.04 |
| 2 | GAIL | 2676218±360663 | 70.44±0.2 |
| | GAIL-EBC | 5803908±1320888 | 97.3±0.3 |
| 1 | GAIL | 1718577±134816 | 75.24±1.12 |
| | GAIL-EBC | 4948921±203595 | 98.7±1.02 |
**Key observations:**
- With 1 demo, GAIL-EBC’s QD-Score drops by 15% vs. the 10-demo setting.
- Using non-diverse elites (top-4 best-reward):
| Algorithm | QD-Score | Coverage |
|----------------|--------|---------------|
| GAIL | 2832078±319164 | 75.3±7.9 |
| GAIL-EBC | 4074729±306686 (-30% vs 4-demos) | 98.12±0.4 |
- - -
**Q5**. It would be helpful to add the pure IRL algorithms (without PPGA) with and without EBC bonus as baselines.
**A5**. Without the QD algorithm, there would be no QD-Score, coverage, or average reward metrics. So we would only be able to compare the maximal fitness, which is not our objective.
- - -
**Q6**. It would be helpful to use statistical tests to show the significance of the results.
**A6**. We performed a **Tukey HSD Test** on Table 7; comparing our EBC-enhanced algorithms to their counterparts, 7/9 effects are positive and 5 of these are significant with $p \leq 0.05$.
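A Tukey HSD test is usually run through a statistics package; as a self-contained stand-in that makes the seed-count concern concrete, here is an exact two-sample permutation test on hypothetical per-seed QD-Scores (all numbers illustrative, not from the paper):

```python
from itertools import combinations
from statistics import mean

def permutation_pvalue(a, b):
    """Exact two-sided permutation test on the difference of means.
    With 3 seeds per group there are C(6, 3) = 20 relabelings, so the
    two-sided p-value can never go below 2/20 = 0.1 -- one reason more
    seeds are needed before claiming significance."""
    pooled = list(a) + list(b)
    observed = abs(mean(b) - mean(a))
    hits = total = 0
    for group_a in combinations(range(len(pooled)), len(a)):
        rest = [i for i in range(len(pooled)) if i not in group_a]
        diff = abs(mean(pooled[i] for i in rest)
                   - mean(pooled[i] for i in group_a))
        if diff >= observed - 1e-9:
            hits += 1
        total += 1
    return hits / total

baseline = [64017, 61200, 66800]   # hypothetical per-seed QD-Scores
with_ebc = [81987, 79500, 84100]
p = permutation_pvalue(baseline, with_ebc)   # 0.1: groups fully separated
```

With 4+ seeds per group the attainable resolution improves (2/C(8, 4) ≈ 0.029), which is in line with the request above for more seeds.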
- - -
**Q7**. The code is not available.
**A7**. We will open-source the code and the seeds when the paper is published.
- - -
**Q8**. PPGA also optimizes behavior diversity and aims to maximize the QD-Score. Why does PPGA with EBC achieve a better QD-Score than PPGA without EBC?
**A8**. We kindly refer you to A2. PPGA by itself does not explore sufficiently; our EBC bonus, synergized with CMA-ES, facilitates exploration and exploitation more effectively (see line 244 of our paper).
- - -
**Q9.** Why does DiffAIL-EBC outperform PPGA with true reward in Humanoid?
**A9.** As shown in Fig. 4, PPGA's own performance improves significantly with EBC (improved exploration). However, DiffAIL-EBC does not surpass PPGA-EBC, which aligns with expectations. See the analysis in lines 420-428 (page 8).
- - -
**Q10.** Is comparing average rewards necessary?
**A10.** While QD-score is the primary metric, average reward provides insights into archive quality. We retain it for supplementary analysis but emphasize QD metrics as the core evaluation.
- - -
## We hope these answers address your concerns. | null | null | null | null | null | null |
SPHINX: Structural Prediction using Hypergraph Inference Network | Accept (poster) | Summary: SPHINX proposes an unsupervised framework for latent hypergraph inference that models higher-order interactions directly from point-wise data. The method employs a sequential slot attention mechanism to predict hyperedge probabilities and uses differentiable k-subset sampling to convert these probabilities into a discrete hypergraph structure. This latent structure is then integrated into any hypergraph neural network, enabling improved performance on both inductive and transductive tasks, as demonstrated through extensive experiments and ablation studies.
Claims And Evidence: The paper’s claims are supported by clear and convincing empirical evidence. Extensive experiments show improvements in hypergraph accuracy and downstream task performance, effectively validate the approach.
Methods And Evaluation Criteria: The proposed methods—sequential slot attention for hyperedge inference and differentiable k-subset sampling for discretization—are well-motivated and appropriate for the challenge of inferring latent hypergraph structures. I especially love the fact that SPHINX can be used with any hypergraph neural network decoder and eliminating the need for heavy optimisation tricks.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental design is robust and comprehensive.
Supplementary Material: The supplementary material provides a code base to reproduce the results.
Relation To Broader Scientific Literature: The pipeline could be inspiring for other graph structure learning methods.
Essential References Not Discussed: The paper cites most of the key works in hypergraph learning and unsupervised relational inference.
Other Strengths And Weaknesses: I think the hyperparameter M is a really strong prior and needs to be carefully tuned. It would be better if there is some way to select M (or heuristically set an appropriate M).
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive feedback on our work. Below, we address the key points raised and we will incorporate all suggestions into the final version of the paper.
**Selecting the number of hyperedges M**
Having a fixed maximum number of hyperedges is a limitation that we are sharing with most of the hypergraph predictor methods in the literature and we agree with the reviewer that having a fully dynamic model, allowing dynamic number of hyperedges, is an important direction.
The experiments presented in the Appendix (Fig 5) are designed to understand to what extent this affects the performance of the model on the synthetic setup (where we know what is the real number of hyperedges needed). The results show that, as expected, having a too small M causes a drop in performance while having an M that is larger than the golden standard does not affect the performance.
To better understand this behaviour, we measured the average number of distinct hyperedges predicted when providing more slots than required. The results are as follows:
| Slots | Unique Hyperedges | MSE |
|------------|------------------|-----------|
| **1 slot** | 1.00 | 0.00007 |
| **2 slots** | 1.96 | 0.00002 |
| **3 slots** | 2.15 | 0.000017 |
| **4 slots** | 2.14 | 0.000007 |
We believe that this is a very interesting result, as it shows that when equipped with more slots than necessary the model learns some form of redundancy (by predicting the same hyperedge multiple times) instead of hallucinating fake relations.
---
Rebuttal Comment 1.1:
Comment: Thanks for adding hyperparameter analysis for $M$. I would like to keep my positive score. | Summary: The authors focus on unsupervised hypergraph inference. Three key desiderata are identified: applicability to a broad type of tasks, compatibility with any hypergraph process architecture, and ease of optimization. They propose SPHINX that adapts the slot attention for sequential hyperedge prediction, and is differentiable by leveraging contrained k-subset sampling. SPHINX is compatible with popular hypergraph neural networks. Experiments are conducted on synthetic and real-world datasets. Extensive results and analysis verify the superiority of SPHINX, and how its key components take effect.
**Update after rebuttal**
I have read the response from the authors. Since my original recommendation was to accept, I will maintain my recommendation.
Claims And Evidence: The claim in contributions is supported by experimental results, including accurate hypergraph discovery and how it benefits downstream tasks.
Methods And Evaluation Criteria: 1. SPHINX is designed under the guidance of three desiderata pointed out by the authors. SPHINX is well-motivated and reasonably designed.
2. Clustering-based hyperedge discovery and k-subset sampling to allow backpropagation are reasonable for end-to-end hypergraph inference.
3. Designing particle simulation to verify the effectiveness of SPHINX on hypergraph prediction is reasonable because the ground truth are known.
4. The transductive and inductive settings for hypergraph inference are widely accepted.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is rather comprehensive.
1. Experiments are conducted on both synthetic datasets and real-world datasets. Personally, I think it is creative to design the particle simulation to verify the accuracy of hypergraph discovery.
2. SPHINX is compared with representative baselines and state-of-the-art methods
3. Hypergraph inference is studied under both the inductive and transductive settings.
4. Various ablation studies are conducted to validate the key features of SPHINX, e.g., sequential prediction, k-sampling, etc.
Supplementary Material: I have checked the supplementary material.
Relation To Broader Scientific Literature: This paper is related to structural inference, higher-order network learning, hypergraph neural networks, etc.
Essential References Not Discussed: A previous work that uses sequential prediction for graph generation is not reviewed:
Efficient Graph Generation with Graph Recurrent Attention Networks. In NeurIPS, 2019.
Other Strengths And Weaknesses: Strengths:
S1. SPHINX is technically solid and empirically significant under variant experimental settings.
S2. It is innovative to apply the slot attention in computer vision to hypergraph clustering.
Weaknesses:
W1. SPHINX may not be general enough to infer arbitrarily structured hypergraphs. Please see Q1 for details.
Other Comments Or Suggestions: Personally, I think more technical descriptions of k-subset sampling, a key technique in SPHINX, should be added, either in the main text or the appendices.
Questions For Authors: Q1. I have several questions on hypergraph inference from the algorithmic perspective.
Q2. How to determine the number of total hyperedges M, and the size of each hyperedge k, i.e., the number of nodes it contains. Are they specified before hypergraph inference? How to decide the best M and k? Specifically, M and the largest k reflect the intrinsic order of higher-order interaction. Thus, I believe it is crucial for understanding the complexity of the interaction.
Q3. Does your algorithm allow hyperedges of different sizes? Besides, I note that the authors analyze the computational complexity in Appendix C. Is it possible to report the running time?
Q4. How do you determine the order of sequential prediction for hyperedges? Will that affect the performance?
Q5. Can you analyze the inferred hypergraphs on real-world datasets? It will help the readers understand the meaning of the hyperedges and better justify the contribution of ``adding a new layer of interpretability to the model''. The authors may consider reporting some statistics of the hypergraphs (e.g., number of hyperedges and the largest size of hyperedges) and conducting some case studies.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments and feedback. We thank the reviewer for pointing out Liao et al, which shares similarities with our work in sequential structure prediction and offers inspiration for future improvements. We will include a discussion in the paper.
**Different hyperedge sizes**
SPHINX allows hyperedges of different sizes. The argument k in the sampling algorithm does not affect the number of learnable parameters so it can vary among hyperedges. For simplicity and to reduce hyperparameter search space, we fix k across all hyperedges. However, if a more diverse hypergraph is needed, an array of k-values can be given as input.
**Choosing M and k**
As mentioned in the paper, all experiments treat M and k as hyperparameters.
Dynamic hyperedge sizes: We agree that a dynamic k could enhance flexibility and is a promising direction for improvement. An alternative to hand-picking k is to select it according to the probability distribution: defining k as the number of nodes above a probability threshold or the rank with a significant gap in the distribution. Though still non-differentiable, this removes the need for predefined hyperedge sizes.
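The two heuristics sketched above (a probability threshold, or the largest gap in the sorted distribution) can be written down directly; this is a hypothetical sketch, not code from the paper:

```python
def select_k(probs, threshold=0.5):
    """Pick a hyperedge size k from per-node membership probabilities.

    Returns two illustrative choices:
    - k_threshold: number of nodes whose probability exceeds a cutoff;
    - k_gap: cut the sorted distribution after its largest drop.
    Both remove the need for a predefined k, but remain non-differentiable.
    """
    p = sorted(probs, reverse=True)
    k_threshold = sum(1 for x in p if x > threshold)
    gaps = [p[i] - p[i + 1] for i in range(len(p) - 1)]
    k_gap = max(range(len(gaps)), key=gaps.__getitem__) + 1
    return k_threshold, k_gap

# Three nodes clearly belong to the hyperedge, three clearly do not:
k_t, k_g = select_k([0.92, 0.15, 0.88, 0.05, 0.85, 0.08])
```

On this toy distribution both heuristics agree (k = 3); in general the gap rule is more robust when the probabilities are not well calibrated around a fixed cutoff.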
Sensitivity to the number of hyperedges M: Having a fixed maximum number of hyperedges is a limitation that we share with most hypergraph predictors in the literature. The experiments in Fig. 5 examine how this impacts performance in the synthetic setup. As expected, a too-small M causes a drop in performance, while an M larger than the oracle (2) does not affect performance. To understand this behaviour, we measured the average number of unique hyperedges. Results show that, when equipped with more slots than necessary, the model learns a form of redundancy (predicting the same hyperedge multiple times) instead of hallucinating fake relations.
|Slots|Unique Hedges|MSE|
|-|-|-|
|**1** |1.00|0.00007|
|**2**|1.96|0.00002|
|**3**|2.15|0.000017|
|**4**|2.14|0.000007|
That said, we agree that developing a fully dynamic model with adaptive k and M is an important future direction.
**Insights on the inferred hypergraph**
Evaluating the accuracy of predicted latent hypergraphs in real-world data is a critical but challenging problem that remains open. Our datasets lack annotations for higher-order structures, and even if such annotations existed, there is no guarantee they would be optimal for the task.
However, to understand the type of hypergraph predicted by our model in the real-world datasets, we analyze the NBA dataset, an inductive dataset that is easier to visualize and interpret. For a model with 6 hyperedges and 4 nodes per hyperedge:
- the average node degrees for the predicted hypergraphs are [1.97, 1.98, 1.99, 1.99, 1.98, 1.95, 1.96, 1.96, 1.96, 1.95, 4.29]. The first 10 nodes correspond to players, and the model focuses uniformly on all of them; the last node represents the ball, which the model learns to include in most hyperedges. Thus the predicted structures are star-like hypergraphs, aligning with our intuition that player movement is highly influenced by the ball's position.
- the average percentage of duplicated hyperedges across examples is 2.74%. This shows a high diversity in our predictions, confirming that the model captures not just the most likely connections but also a scene-specific structure.
**Description of k-subset sampling**
To maintain readability, we initially omitted certain technical details, such as the formulation of k-sampling. However, we agree that including this information will make the paper more self-contained and reproducible. We will add a section in the appendix.
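Ahead of that appendix section, here is a sketch of the hard forward pass of one common k-subset sampler (Gumbel-top-k); it is shown for intuition only and is not necessarily the exact scheme SPHINX uses, whose training-time relaxation makes the sampling differentiable:

```python
import math, random

def gumbel_topk(logits, k):
    """Sample a size-k subset: perturb each logit with Gumbel noise
    g = -log(-log(U)) and keep the k largest. Differentiable k-subset
    samplers relax this hard top-k so gradients can flow to the logits."""
    noisy = [l - math.log(-math.log(random.random())) for l in logits]
    order = sorted(range(len(logits)), key=lambda i: noisy[i], reverse=True)
    return sorted(order[:k])    # indices of the k selected nodes

random.seed(0)
subset = gumbel_topk([2.0, -1.0, 0.5, 1.5], k=2)
```

Repeated calls sample different size-k subsets, with high-logit nodes selected more often; the hyperedge-size constraint is enforced by construction.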
**Running time**
Following your suggestions, we measured time per iteration on NBA(batch size 128, 11 nodes) and ModelNet(12311 nodes) using a Quadro RTX 8000 GPU. While ensuring architectural comparability, our code is not optimized for large-scale data. More engineering improvements can further enhance scalability.
| Metric | ModelNet: SPHINX | ModelNet: TDHNN | NBA: SPHINX | NBA: TDHNN | NBA: GroupNET |
|-|-|-|-|-|-|
| **Training Time (sec)** | 0.32 | 0.77 | 0.018 | 0.21 | 0.071 |
| **Inference Time (sec)** | 0.26 | 0.76 | 0.030 | 0.22 | 0.030 |
**Order of sequential prediction for hyperedges**
We thank the reviewer for pointing this out. Since we do not have supervision at the hypergraph level, we believe order ambiguity is less pronounced than in graph-generation tasks (e.g., GRAN). However, sequential prediction still introduces equivalence classes in the latent space, where any predicted order yields the same hypergraph, potentially causing model confusion.
To address this, we initialize slot attention deterministically, using the same random sequence for all examples, allowing the model to learn a canonical order if one exists. However, we agree that this is a naive way of imposing an order and we think that there is room for improvement in that regard. We thank the reviewer for identifying the GRAN paper which might offer inspiration for avoiding order-ambiguity.
---
Rebuttal Comment 1.1:
Comment: Thanks for your thoughtful discussion. Ideally, the model's prediction is invariant w.r.t. the prediction order, or the predictions form an equivalence class. However, such property needs theoretical support. A canonical order may remove ambiguity in practice, but it is unclear if the order affects the optimality. Different practitioners may choose different orders. Thus, I think it is worthwhile to resolve this issue in the future, which will benefit both theoretical and practical aspects. On the whole, I will keep my evaluation. | Summary: This paper introduces the SPHINX model, which aims to infer a latent hypergraph structure suitable for the final task in an unsupervised manner from input features, to support higher-order relationship processing in the absence of a readily available hypergraph structure. The process is divided into three steps: First, the hypergraph predictor infers a latent hypergraph based on the input features. Next, a k-subset sampling algorithm is used to transform the obtained probability distribution into specific incidence relationships. Finally, the predicted hypergraph is applied to a standard hypergraph neural network to generate higher-order representations. Experimental results show that SPHINX not only excels in inferring latent hypergraphs but also effectively enhances the performance of downstream tasks in both inductive and transductive settings.
Claims And Evidence: Some of the arguments in the paper are not sufficiently substantiated. Although the authors claim that SPHINX can infer latent hypergraphs that are highly consistent with the true higher-order structures, the validation on real-world datasets is not adequate. There is no direct evidence to prove the match between the inferred hypergraphs and the actual higher-order relationships; the authors only provide indirect proof through the improved performance of downstream tasks. Additionally, the paper claims that the “model is easy to optimize,” but it does not provide a detailed analysis of the convergence speed and stability of the optimization process, nor does it offer comparative experiments to support this claim. As a result, the persuasiveness of this claim is weak.
Methods And Evaluation Criteria: The proposed method is rational to some extent, yet its evaluation criteria are limited. In addition to focusing on the improved performance of downstream tasks, the assessment of the hypergraph predictor should also consider multiple aspects of the hypergraph structure itself, such as its rationality, sparsity, and interpretability. However, the paper mainly focuses on the performance of downstream tasks and lacks rich indicators to evaluate the quality of the hypergraph structure itself. This single evaluation method may not fully reflect the effectiveness and superiority of the method.
Theoretical Claims: The paper lacks sufficient theoretical analysis regarding the model's generalization ability and its applicability across different data distributions, which undermines the robustness of the model's theoretical foundation.
Experimental Designs Or Analyses: The experimental design has some deficiencies, especially in the selection of datasets. Although both synthetic and real-world datasets are included, the diversity of the real-world datasets is insufficient, which limits the validation of the model's effectiveness on different types of data and tasks. This affects the comprehensive evaluation of the model's actual performance. Increasing the diversity of datasets is crucial for verifying the model's robustness and generalization ability.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the existing literature to some extent, but the elaboration is not deep enough. Hypergraph inference, as an emerging field, intersects with areas such as graph neural networks and structural prediction. However, when discussing related work, the authors mainly focus on recent studies and lack sufficient review of classical theories and methods. This makes the background introduction of the paper not comprehensive enough, making it difficult to fully demonstrate its innovativeness and significance.
Essential References Not Discussed: Some relevant works are not cited or discussed in the paper. For example, in the field of graph neural networks in recent years, there have been some works on the learning and prediction of dynamic graph structures. These methods share some similarities with hypergraph inference in terms of ideas, but they are not mentioned in the paper.
Other Strengths And Weaknesses: Strengths:The paper proposes a novel unsupervised hypergraph inference model that is compatible with existing hypergraph neural network architectures and can enhance the performance of downstream tasks. Additionally, the design of the model is innovative, especially the use of the k-subset sampling algorithm to generate discrete hypergraph structures, which provides a new approach for hypergraph inference.
Weaknesses:In addition to the previously mentioned issues of insufficient evidence, limited evaluation criteria, and lack of theoretical analysis, the scalability of the model is also a concern. When dealing with large-scale datasets, the computational complexity of the hypergraph predictor may be high, which could affect the efficiency and practicality of the model.
Other Comments Or Suggestions: None
Questions For Authors: 1.How can the consistency between the inferred hypergraph structure and the true higher-order relationships of the hypergraph structure be verified on real-world datasets? Are there any more direct validation methods other than the improved performance of downstream tasks?
2.In Section 3.1, the authors generate discrete hypergraph structures using a hypergraph predictor and a k-subset sampling algorithm. However, regarding the interpretability of the generated hypergraph structures, can the authors provide a more detailed analysis?
3.In Section 4, the authors demonstrate the performance of SPHINX on several benchmark datasets. However, regarding the computational cost when dealing with large-scale graph data, do the authors have any plans to optimize the algorithm to enhance its scalability?
4.In the experimental section, the authors primarily conducted experiments on synthetic datasets and a few common benchmark datasets. However, the diversity and complexity of these datasets are limited, which prevents a comprehensive validation of the method's effectiveness on different types of graph data and tasks.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed review and valuable feedback on our paper. We would like to address your concerns and questions.
**Hypergraph evaluation for real-world datasets**
Quantitatively evaluating the accuracy of predicted latent hypergraphs in real-world datasets is a very important but also very challenging problem that remains open in the community. Our datasets lack annotations for higher-order structures, and even if such annotations existed, there is no guarantee they would be optimal for the task. This aligns with challenges in the explainability community, where models are typically tested on synthetic data. Additionally, research in the rewiring community suggests that even when a ground-truth (hyper)graph is available, it is often suboptimal for solving the task. All of this motivated us to construct the synthetic dataset, where we can enforce a strong connection between the hypergraph structure and the label.
However, to better interpret the type of hypergraph predicted by our model in the real-world datasets, we analyze the NBA dataset, an inductive dataset that is easier to visualize and interpret. For a model with six hyperedges and four nodes per hyperedge:
- the average node degrees for our predicted hypergraphs are [1.97, 1.98, 1.99, 1.99, 1.98, 1.95, 1.96, 1.96, 1.96, 1.95, 4.29].
The first 10 nodes correspond to players, and the model focuses uniformly on all of them; the last node represents the ball, which the model learns to include in most hyperedges. Thus, the predicted structures are star-like hypergraphs, aligning with our intuition that player movement is highly influenced by the ball's position.
- the average percentage of duplicated hyperedges across examples is 2.74%. This shows a high diversity in our predictions, confirming that the model captures not just the most likely connections but also learns a dynamic, scene-specific structure.
Regarding the diversity of the datasets, we especially picked them to span both the inductive and transductive setup, with a wide variety of sizes: NBA (11 nodes) ModelNet40 (12311 nodes), NTU (2012 nodes).
***Model is easy to optimize* without analysis**
As mentioned in the introduction, by *easy to optimise* we mean that our model does not require the additional regularisation losses that most previous works need to stabilise training (e.g. sparsity regularisation, reconstruction regularisation). We apologise for not being clearer in this respect and are happy to further clarify this claim if the reviewer thinks it necessary.
**Missing review of classical methods and dynamic graph structures**
We aimed to provide a broad overview, covering 2018-2024 papers from both machine learning and closed-form solution methods. We're happy to expand it and would appreciate any specific suggestions from the reviewer.
In the section *Structural Inference on Graphs*, we discuss advancements in the graph prediction literature. However, we apologize if any relevant works were overlooked and we will do our best to do a more comprehensive review in the final version.
**Analysis of the model’s interpretability**
The interpretable character of our model is indeed given by the discrete structure of our hypergraph which enables: a) explicit inspection of the latent structure used by the final model, and b) discovery of meaningful correlations in the input data. For a concrete example, please refer to the NBA analysis above, which illustrates how our model captures interpretable patterns in real-world data.
**Scalability for large-scale data**
For a model with N nodes and M hyperedges, each containing K nodes, the computational complexity is O(M·N + M·log N·log K). This can become a challenge when both the number of nodes and hyperedges grow significantly. The primary computational bottleneck lies in the slot-attention algorithm. To enhance scalability for large datasets, a potential optimization is to precompute an initial clustering of nodes and limit slot-attention computations to selecting hyperedges within each cluster rather than across the entire dataset. This adjustment would reduce the complexity to O(M·N' + M·log N'·log K), where N' represents the maximum cluster size.
Following your suggestions, we measured time and memory per iteration on NBA (batch size 128, 11 nodes) and ModelNet (12,311 nodes) using a Quadro RTX 8000 (49GB). While ensuring architectural comparability, our code is not optimized for large-scale datasets. Further engineering improvements can further enhance scalability.
| Metric | ModelNet40: SPHINX | ModelNet40: TDHNN | NBA: SPHINX | NBA: TDHNN | NBA: GroupNET |
|-|-|-|-|-|-|
| **Training Time (sec)** | 0.32 | 0.77 | 0.018 | 0.21 | 0.071 |
| **Inference Time (sec)** | 0.26 | 0.76 | 0.030 | 0.22 | 0.030 |
| **Memory Usage (MB)** | 9156.17 | 20559.32 | 187.61 | 508.00 | 3261.53 |
While our model's memory requirements increase with dataset size, it remains significantly more efficient than previous models in the literature. | Summary: The paper introduces SPHINX, a novel model designed to infer latent hypergraph structures in an unsupervised manner solely from task-dependent signals. Recognizing the limitations of traditional graph models in capturing higher-order interactions, SPHINX employs a sequential soft clustering approach combined with constrained k-subset sampling to generate discrete hypergraph structures. These structures are compatible with existing hypergraph neural networks and can be optimized end-to-end without additional regularization losses. Through extensive experiments on four challenging datasets, including both synthetic and real-world data, the authors demonstrate that SPHINX effectively infers meaningful hypergraphs that enhance performance in both transductive and inductive tasks, outperforming existing hypergraph prediction methods.
Claims And Evidence: The paper makes several key claims:
1. SPHINX can accurately infer latent hypergraph structures in an unsupervised manner solely from task-dependent signals.
2. The inferred hypergraphs are interpretable and enhance performance in both transductive and inductive tasks.
3. SPHINX outperforms existing hypergraph prediction methods across multiple datasets.
These claims are supported by comprehensive experimental evidence, including ablation studies that highlight the importance of sequential prediction and k-subset sampling. The model is evaluated on both synthetic and real-world datasets, demonstrating superior performance metrics compared to baseline and state-of-the-art methods. Additionally, the paper provides qualitative visualizations of the inferred hypergraphs, aligning closely with ground-truth structures in synthetic settings, further reinforcing the validity of the claims.
Methods And Evaluation Criteria: The proposed method, SPHINX, integrates a sequential soft clustering mechanism with constrained k-subset sampling to infer hypergraph structures. Specifically, it utilizes a slot-attention mechanism adapted for sequential prediction to address ambiguity issues inherent in parallel hyperedge prediction. The k-subset sampling ensures that each hyperedge contains exactly k nodes, facilitating stable optimization without the need for additional regularization.
For evaluation, the authors employ both synthetic and real-world datasets. The synthetic Particle Simulation dataset allows for direct assessment of hypergraph inference accuracy by comparing predicted hyperedges with ground truth. Real-world datasets, including the NBA SportVU, ModelNet40, and NTU datasets, are used to evaluate the downstream performance of SPHINX in inductive and transductive tasks. Metrics such as Average Displacement Error (ADE), Final Displacement Error (FDE), and overlap with ground-truth hyperedges are utilized to measure performance.
Theoretical Claims: The paper does not present significant theoretical claims or proofs. The focus is primarily on the empirical performance of the SPHINX model in inferring hypergraph structures and enhancing downstream task performance. Therefore, there are no theoretical proofs to verify.
Experimental Designs Or Analyses: The experimental design is robust and well-structured. The authors conduct extensive ablation studies to isolate the contributions of sequential prediction and k-subset sampling, demonstrating their necessity for optimal performance. The use of both synthetic and real-world datasets allows for comprehensive evaluation of the model's capabilities in controlled and practical scenarios. Additionally, the comparison with a range of baseline and state-of-the-art methods across different tasks and datasets ensures that the performance improvements are consistent and significant. The inclusion of qualitative visualizations further aids in understanding the effectiveness of the inferred hypergraphs.
Supplementary Material: Yes, the supplementary material has been reviewed. It includes detailed explanations of the SPHINX model components, additional experimental results, visualizations of the inferred hypergraphs, and descriptions of the synthetic Particle Simulation dataset. The appendix also discusses potential limitations, broader impact, and provides an overview of existing dynamic hypergraph predictors. These sections enhance the understanding of the model's functionality and its performance across various scenarios.
Relation To Broader Scientific Literature: The paper situates its contributions within the broader context of graph and hypergraph neural networks, as well as structural inference in machine learning. It builds upon existing works in neural relational inference and hypergraph neural networks, addressing their limitations by enabling unsupervised hypergraph inference that is both inductive and transductive. SPHINX distinguishes itself by introducing sequential slot-attention and constrained k-subset sampling, which are not extensively explored in current literature. By comparing with a wide range of related methods, the paper highlights its novel approach to hypergraph structure prediction and its superior performance, contributing significantly to the field of higher-order relational modeling.
Essential References Not Discussed: The paper appears to comprehensively review the relevant literature in the domains of hypergraph neural networks and structural inference. However, it does not mention recent advancements in dynamic hypergraph learning or specific k-subset sampling techniques that might be pertinent. Including references to the latest works in these areas could provide a more thorough context and strengthen the positioning of SPHINX within the current research landscape.
Other Strengths And Weaknesses: Strengths:
1. Innovation: SPHINX introduces a novel combination of sequential slot-attention and constrained k-subset sampling, addressing key limitations in current hypergraph inference methods.
2. Comprehensive Evaluation: The extensive experiments on both synthetic and real-world datasets, along with thorough ablation studies, provide strong evidence of the model’s effectiveness.
3. Interpretability: By generating discrete hypergraph structures, SPHINX offers enhanced interpretability, allowing for better understanding and visualization of high-order interactions.
4. Applicability: The model’s compatibility with various hypergraph neural network architectures and its performance in both inductive and transductive settings make it versatile for different applications.
Weaknesses:
1. Fixed Hyperedge Size: The requirement of fixed k-node hyperedges may limit the model’s flexibility in scenarios where the size of high-order interactions varies.
2. Predefined Hyperedge Count: The necessity to set a maximum number of hyperedges (M) beforehand could be restrictive in dynamic environments where the number of interactions is not known a priori.
3. Limited Theoretical Insight: The paper focuses heavily on empirical results, with limited discussion of the theoretical underpinnings of why the proposed method works effectively.
Other Comments Or Suggestions: The paper is well-written and presents a clear narrative of the problem, proposed solution, and experimental validation. Including more diverse real-world applications beyond trajectory prediction, such as social network analysis or biological interaction networks, could further demonstrate the versatility of SPHINX. Additionally, exploring dynamic hyperedge sizes or adaptive hyperedge counts in future work would address some of the current limitations and enhance the model’s applicability.
Questions For Authors: 1. Have you considered extending SPHINX to handle hyperedges of varying sizes, and if so, what challenges do you anticipate?
2. How does SPHINX perform with larger-scale datasets in terms of computational efficiency and memory usage?
3. Can you provide more insights on how sensitive the model's performance is to the chosen maximum number of hyperedges (M)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback. We appreciate the time you've taken to review our work, and we would like to address your concerns and questions.
**Fixed hyperedge size**
We want to mention that, while the cardinality k is indeed non-learnable, SPHINX does allow hyperedges of different sizes. The coefficient k in the k-sampling algorithm does not affect the number of learnable parameters, so it can differ from one hyperedge to another. For simplicity and to reduce the hyperparameter search space, we use a fixed k across all hyperedges. However, if a more diverse hypergraph structure is needed, an array of k-values can be provided as input.
We agree that a dynamic k could enhance flexibility and is a promising direction for improvement. One possible adaptation is selecting k for each hyperedge based on probability distribution statistics. For instance, defining k as the number of nodes above a probability threshold or the rank at which a certain gap appears in the distribution. While this approach remains non-differentiable w.r.t k, it eliminates the need for pre-defined hyperedge cardinality by allowing dynamic values.
While these adaptations are straightforward, they were not included in the current study. Exploring fully learnable hyperedge cardinality is an interesting future direction.
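The two non-differentiable heuristics mentioned above — choosing k as the number of nodes above a probability threshold, or as the rank where the largest gap appears in the sorted membership probabilities — can be sketched as follows. Function names and the example probabilities are illustrative, not taken from the paper's code:

```python
import numpy as np

def threshold_k(probs, threshold=0.5):
    """Pick hyperedge cardinality k as the number of nodes whose
    membership probability exceeds a threshold (at least one node
    is always kept)."""
    return max(1, int(np.sum(probs > threshold)))

def gap_k(probs):
    """Pick k as the rank at which the largest gap appears in the
    descending-sorted membership probabilities."""
    sorted_p = np.sort(probs)[::-1]       # descending order
    gaps = sorted_p[:-1] - sorted_p[1:]   # consecutive gaps
    return int(np.argmax(gaps)) + 1       # number of nodes above the largest gap

probs = np.array([0.9, 0.85, 0.8, 0.1, 0.05])
k1 = threshold_k(probs)  # 3 nodes above the 0.5 threshold
k2 = gap_k(probs)        # largest gap occurs after rank 3
```

Both rules yield a per-hyperedge k without any extra learnable parameters, matching the rebuttal's point that k remains non-differentiable but need not be fixed in advance.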
**Predefined Hyperedge Count / Sensitivity w.r.t. M**
Having a fixed maximum number of hyperedges is indeed a limitation that we are sharing with most of the hypergraph predictor methods in the literature.
The experiments presented in the Appendix (Fig 5) are designed to understand to what extent this affects the performance of the model on the synthetic setup (where we know what is the real number of hyperedges needed). The results show that, as expected, having a too small M causes a drop in performance while having an M that is larger than the golden standard does not affect the performance.
To better understand this behaviour, we measured, for each hypergraph, the average number of distinct hyperedges predicted when providing more slots than required. The results are as follows:
| Slots | Unique Hedges | MSE |
|------------|------------------|-----------|
| **1 slot** | 1.00 | 0.00007 |
| **2 slots** | 1.96 | 0.00002 |
| **3 slots** | 2.15 | 0.000017|
| **4 slots** | 2.14 | 0.000007|
We believe that this is a very interesting result, as it shows that when equipped with more slots than necessary the model learns some form of redundancy (by predicting the same hyperedge multiple times) instead of hallucinating fake relations.
**Computational efficiency and memory usage on large-scale data**
For a model with N nodes, M hyperedges, and cardinality of hyperedges K, the complexity of our model is O(M × N + M × logN × logK). This becomes problematic when the number of nodes and the number of hyperedges is simultaneously very large.
The most computationally intensive component is the slot-attention algorithm. To improve scalability for large-scale datasets, one potential optimization is to precompute an initial node clustering and restrict slot-attention computations to selecting hyperedges within each cluster (local structure) rather than across the entire graph. This would reduce complexity to O(M×N′+M×logN′×logK), where N’ is the maximum cardinality of a cluster.
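The cluster-restricted selection described above can be sketched as follows. The toy top-k feature-norm score stands in for the actual slot-attention update, which is elided; all names are illustrative, not from the paper's implementation:

```python
import numpy as np

def cluster_restricted_hyperedges(x, labels, k):
    """For each precomputed cluster, select a k-subset of its own nodes
    as a hyperedge, so the per-hyperedge cost scales with the cluster
    size N' rather than with all N nodes. A top-k feature-norm score
    replaces the real slot-attention update purely for illustration.

    x: (N, d) node feature matrix; labels: (N,) cluster assignments.
    """
    hyperedges = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]           # nodes of cluster c (size N')
        scores = np.linalg.norm(x[members], axis=1)  # toy relevance score
        top = members[np.argsort(scores)[::-1][:k]]  # k-subset within the cluster
        hyperedges.append(sorted(top.tolist()))
    return hyperedges
```

Because each selection only scores the N' members of one cluster, the overall cost replaces the N-dependent terms with N'-dependent ones, as in the stated O(M×N′ + M×logN′×logK) bound.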
Following the reviewer's suggestions, we also measured the time and memory consumption per iteration on the NBA dataset (batch size: 128, 11 nodes per example) and the ModelNet dataset (a hypergraph with 12,311 nodes). All experiments were conducted on a single Quadro RTX 8000 GPU with 49GB of memory. While we ensured architectural comparability between models, we note that our model has not yet been optimized for large-scale datasets; further engineering improvements could enhance scalability.
| Metric | SPHINX (ModelNet40) | TDHNN (ModelNet40) | SPHINX (NBA) | TDHNN (NBA) | GroupNET (NBA) |
|-|-|-|-|-|-|
| **Training Time (sec)** | 0.32 | 0.77 | 0.018 | 0.21 | 0.071 |
| **Inference Time (sec)** | 0.26 | 0.76 | 0.030 | 0.22 | 0.030 |
| **Memory Usage (MB)** | 9156.17 | 20559.32 | 187.61 | 508.00 | 3261.53 |
While our model's memory requirements increase with dataset size, it remains significantly more efficient than previous models in the literature.
**Additional related work on dynamic hypergraphs and k-subset sampling algorithms**
We thank the reviewer for the suggestions. We will do our best to incorporate a review of recent k-subset sampling advances together with more dynamic hypergraph learning methods. We welcome any particular suggestions of relevant work that we are missing.
We appreciate the suggestions and agree that enabling hypergraph inference unlocks numerous real-world applications. While our experiments focused on a subset of topics, we are eager to explore broader real-world applications in future work. | null | null | null | null | null | null |
TraceGrad: a Framework Learning Expressive SO(3)-equivariant Non-linear Representations for Electronic-Structure Hamiltonian Prediction | Accept (poster) | Summary: This paper presents TraceGrad, a strategy to learn SO(3)-equivariant Hamiltonian with SO(3)-invariant trace as a guiding label, built upon their mathematical relations. It aims to overcome the tradeoff between SO(3)-equivariance constraints and NNs’ nonlinear expressiveness. TraceGrad brings improvement to baseline models in ablation studies.
## update after rebuttal
The authors' answers to my questions are thorough and informative. I've maintained my score.
Claims And Evidence: Claims about improvement in prediction accuracy are supported by experimental results.
Methods And Evaluation Criteria: The benchmark datasets are comprehensive, and evaluation criteria are reasonable.
Theoretical Claims: I could not check the correctness of proofs.
Experimental Designs Or Analyses: Experimental design and analyses look sound to me.
Supplementary Material: I didn’t review the supplementary material.
Relation To Broader Scientific Literature: Equivariant ML models are actively developed in the AI4Materials community. I’m not sure how widely TraceGrad is applicable to such models (see Questions).
Essential References Not Discussed: Related works to my knowledge are comprehensively discussed.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: * There’re some errors in using single quotes, e.g., Line 309.
* The discussion of equivariant operators in Sec. 5.1 seems redundant with Related Works.
Questions For Authors: * The expressiveness is mostly added through nonlinear transformations on the invariant features. Could you comment on the inductive bias this approach assumes, and whether it’s reasonable in physics?
* Is the TraceGrad strategy applicable to learning materials properties other than full Hamiltonian?
* What’s the scope of applicability of TraceGrad? In other words, what requirements does the backbone model need to satisfy for TraceGrad to be applicable?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: 1.
We will correct the quote errors and reduce redundancy in the discussion.
2.
In this work, we explicitly introduce the trace quantity, i.e., the square of the Frobenius norm of the Hamiltonian $\mathbf{H}$, as an SO(3)-invariant quantity in our neural network. In physics, invariants often reflect the fundamental mathematical structure underlying physical laws and can serve as the foundation for deriving other quantities with equivariant properties. For example, in special relativity, the Lorentz spacetime interval remains invariant under changes in reference frames, from which one can derive equivariant quantities such as velocity and momentum. Similarly, in molecular systems, energy is an invariant quantity, and its gradient with respect to atomic coordinates yields the force, which is equivariant.
We extend this principle—where invariant quantities induce equivariant ones—from specific physical examples to the context of neural network representation learning. This provides a structured inductive bias that guides the model in learning physically meaningful, symmetry-respecting representations.
3.
As Reviewer gNai pointed out, the TraceGrad mechanism is highly generalizable. For other symmetry-equivariant tensorial physical quantities, the squares of their Frobenius norms are always symmetry-invariant quantities. These invariants can serve as supervision signals from which the corresponding equivariant quantities can be derived.
We validate the effectiveness of the proposed TraceGrad method on the energy/force field prediction task. Due to the limited time during the discussion period, we conducted experiments on two representative datasets, MD17-Aspirin and MD17-Malonaldehyde [1]. We use the same setup of these datasets as Liao and Smidt [2]. Note that in this task, the regression target $E$ (energy) is an SO(3)-invariant quantity ($l=0$), while $\mathbf{F}$ (force) is an SO(3)-equivariant quantity ($l=1$). The model typically learns SO(3)-equivariant features $\mathbf{f}$, which are then transformed into SO(3)-invariant features, from which $E$ is regressed.
Subsequently, the force field at a given position is obtained by differentiating $ E $ with respect to the atomic coordinates: $\mathbf{F}_i = -\frac{\partial E}{\partial \mathbf{r}_i}$, where $\mathbf{r}_i$ is the position vector of the $ i $-th atom. This approach ensures energy conservation. Given the specificity of this task, we integrate the baseline model, namely Equiformer [2] with our proposed TraceGrad method as follows:
First, we use the SO(3)-equivariant features $\mathbf{f}$ encoded by the baseline model Equiformer as input, and construct SO(3)-invariant non-linear features $z$ according to our method (Sections 4 and 5 of our paper). We then use the trace quantity $\mathbf{T}$ to supervise the learning of $z$. Given $\mathbf{F}$ as a column vector with $l=1$, here $\mathbf{T}$ simplifies to $\mathbf{T} = \mathbf{F}^T \cdot \mathbf{F}$. From $z$, we induce the SO(3)-equivariant features $\mathbf{v}$ with more non-linearity, which are then fed back into the baseline model for the subsequent encoding and decoding phases, where $E$ is regressed and finally $\mathbf{F}$ is constructed from the gradients of $E$. We train Equiformer+TraceGrad under the same experimental conditions as those used in the original Equiformer paper [2], with maximum feature degree ($l_{max}$) set as 2. The experimental results are as below:
For the MD17-Aspirin dataset, Equiformer achieves an energy MAE of 5.3 meV and a force MAE of 7.2 meV/Å. In comparison, Equiformer+TraceGrad achieves lower MAE values, with 5.06 meV for energy and 5.65 meV/Å for force. For the MD17-Malonaldehyde dataset, Equiformer achieves an energy MAE of 3.3 meV and a force MAE of 5.8 meV/Å, while Equiformer+TraceGrad yields improved performance with an energy MAE of 3.21 meV and a force MAE of 4.68 meV/Å. **These experimental results show that our TraceGrad method improves the prediction accuracy for both energy and force, demonstrating that its effectiveness is not limited to the prediction of electronic-structure Hamiltonians and their downstream physical quantities, but has broader application potential**. Since our method is fundamentally general, we plan to extend it in the future to predict other physical properties, such as the force constant matrix, Born effective charges, and more. **We will add these experimental results and discussions in the revised paper**.
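The energy-to-force construction used in this experiment, $\mathbf{F}_i = -\frac{\partial E}{\partial \mathbf{r}_i}$, can be illustrated with a toy distance-based (hence SO(3)-invariant) energy. Central finite differences stand in for the autograd call used in the actual models, and all names and the spring energy are illustrative:

```python
import math

def energy(positions):
    """Toy SO(3)-invariant energy: pairwise spring terms that depend
    only on interatomic distances (rest length 1.0)."""
    E = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = math.dist(positions[i], positions[j])
            E += 0.5 * (d - 1.0) ** 2
    return E

def forces(positions, eps=1e-6):
    """F_i = -dE/dr_i via central finite differences (a stand-in for
    differentiating the network output in the real model)."""
    F = []
    for i in range(len(positions)):
        fi = []
        for a in range(3):
            plus = [list(p) for p in positions]; plus[i][a] += eps
            minus = [list(p) for p in positions]; minus[i][a] -= eps
            fi.append(-(energy(plus) - energy(minus)) / (2 * eps))
        F.append(fi)
    return F

# Two atoms stretched beyond the rest length attract each other:
F = forces([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
```

Because the force is the exact gradient of a single scalar energy, line integrals of F around closed paths vanish, which is the energy-conservation property the rebuttal refers to.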
[1] Chmiela et al. Towards exact molecular dynamics simulations with machine-learned force fields. Nature Communications, 2018.
[2] Liao and Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. ICLR, 2023.
4.
To apply the TraceGrad method, the backbone model only needs to be an end-to-end differentiable strictly SO(3)-equivariant neural network model, such as QHNet, DeepH-E3, or Equiformer.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanations. I have no further comment.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We would like to sincerely thank you for your time and effort in reviewing our paper. We greatly appreciate the positive feedback and the confidence you have shown in our work . Your thoughtful comments have been incredibly helpful for this paper.
Thank you again for your valuable contribution!
Best regards,
Authors of this paper | Summary: The author propose a new technique to enhance the non-linear expressiveness for equivariant architectures. The idea is simple and straightforward, first generate an invariant feature (like energy) and equivariant feature (like position). The gradient of the invariant feature (energy) with respect to the equivariant feature (position) is an equivariant feature (force). The non-linearity is enforced on the generation of the invariant features. The author validate their methods on two K-S Hamiltonian prediction benchmarks.
Post rebuttal: The authors have addressed most of my concerns.
Claims And Evidence: To evaluate the effectiveness of TraceGrad, more general tasks such as machine learning force fields and quantum property regression are needed.
Methods And Evaluation Criteria: Yes, but does regressing against $\mathrm{tr}(H H^*)$ have any physical meaning? I think a more sensible or physics-inspired regression target would be the basis-transformed version as in [1], i.e., $\mathrm{tr}(C H H^* C^*)$.
[1]: Li Y, Xia Z, Huang L, et al. Enhancing the Scalability and Applicability of Kohn-Sham Hamiltonians for Molecular Systems[C]//The Thirteenth International Conference on Learning Representations.
Theoretical Claims: Why is TraceGrad better than the paradigm of scalar-tensor interaction that enforces the non-linearity on the scalar? Does the expressivity differ? Maybe a theoretical analysis could be beneficial here.
Experimental Designs Or Analyses: Yes, but an efficiency comparison is needed. I am worried that TraceGrad will be significantly more expensive than the scalar-tensor interaction paradigm.
Supplementary Material: No
Relation To Broader Scientific Literature: It is useful for general geometric deep learning.
Essential References Not Discussed: Yes, but I believe a special related work section on predicting the Hamiltonian matrix is needed.
Other Strengths And Weaknesses: The idea of using the gradient as a vector feature is novel and could serve as a useful alternative to the scalar-tensor interaction paradigm or CG tensor products. But the application to predicting the Hamiltonian is questionable. More extensive evaluations on other tasks such as machine learning force fields, or a discussion of why the method only works for the Hamiltonian, are necessary.
Other Comments Or Suggestions: 1. Many equations are not carefully written, and things like $loss$ should be wrapped with \mathrm{} in latex.
Questions For Authors: 1. Can you generalize your idea, for example, to the Hessian matrix? I think the Hessian matrix also has equivariant properties and could be decomposed into a set of irreducible representations. A more generalized derivation for higher-order spherical tensors could be beneficial.
2. Can you provide a detailed breakdown of your computational costs?
3. How do you approach the instability (occurrence of NaNs) of the network induced by taking the gradient? Do you make any normalizations? If so, how?
4. Can you apply your method to machine learning force field or molecular property prediction (e.g. dipole moment)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1.**We have found that our TraceGrad method also significantly improves energy/force prediction tasks** (please refer to the 3rd item of our response to Reviewer AGnk), demonstrating the generality of our method. We plan to follow the reviewer's suggestion and validate our method on more molecular property prediction tasks in the future work.
2.**Although both our method and that of Li et al. construct SO(3)-invariant quantities for supervised learning, the motivations and mechanisms are very different**. Li et al. construct SO(3)-invariant quantities, which are the orbital energies after diagonalizing the Hamiltonian. Using this quantity to supervise the regression model aims to ensure that the orbital energies derived from the model’s output Hamiltonian are as close as possible to the true energies.
In contrast, our introduction of SO(3)-invariant quantities is intended to supervise high-quality SO(3)-invariant non-linear representations. The goal of these features is to effectively inject non-linear expressiveness into SO(3)-equivariant features, enabling the fine regression of complex SO(3)-equivariant targets. **The work of Li et al., mentioned by the reviewer, was accepted and published online only about a week before the ICML 2025 submission deadline**, which left us with insufficient time to incorporate it into our work. However, we will certainly cite this paper and explore the possibility of combining it with our approach in future research.
The SO(3)-invariant quantities we construct, $\mathbf{T} = \text{tr}(\mathbf{H} \cdot \mathbf{H}^\dagger)$, quantify the total coupling strength encoded in the Hamiltonian matrix, representing the overall amplitude of electronic interactions in the system. It is a symmetry-invariant scalar that reflects the global energy scale of $\mathbf{H}$ and serves as a physically meaningful regularization target.
From a machine learning perspective, it serves as an excellent target to supervise SO(3)-invariant representations.
**Moreover, this construction is easily extendable to other physical quantities**. For example, in force field prediction tasks, one could construct the trace quantity of the force, which reflects the strength of the force,
as a supervision signal, train high-quality SO(3)-invariant feature representations, and then induce SO(3)-equivariant features through the gradient mechanism.
In some cases, using $tr( \mathbf{C} \mathbf{H} \mathbf{H}^\dagger \mathbf{C}^\dagger)$ can also be a reasonable choice. In particular, when $\mathbf{C}^\dagger \mathbf{C} = \mathbf{I}$, this expression reduces exactly to our definition, $tr( \mathbf{H} \mathbf{H}^\dagger)$. However, its applicability is not as general as that of our proposed formulation, and it is also less straightforward to implement in practice. For instance, in the case of other tensorial physical quantities, such as the force $\boldsymbol{F}$, which is a vector, or the electron-phonon coupling tensor, which is a rank-3 tensor, our definition remains valid, whereas $tr( \mathbf{C} \mathbf{H} \mathbf{H}^\dagger \mathbf{C}^\dagger)$ may no longer be applicable. Therefore, our trace-based construction offers a unified, symmetry-invariant, and easily implementable approach for supervising or regularizing such tensor quantities across a wide range of physical systems.
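The invariance of the trace quantity is easy to verify numerically: conjugating $\mathbf{H}$ by any unitary $\mathbf{R}$ (e.g., a rotation of the basis) leaves $\mathrm{tr}(\mathbf{H}\mathbf{H}^\dagger)$ unchanged, since $\mathrm{tr}(\mathbf{R}\mathbf{H}\mathbf{R}^\dagger(\mathbf{R}\mathbf{H}\mathbf{R}^\dagger)^\dagger) = \mathrm{tr}(\mathbf{R}\mathbf{H}\mathbf{H}^\dagger\mathbf{R}^\dagger) = \mathrm{tr}(\mathbf{H}\mathbf{H}^\dagger)$. A minimal check (illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Random unitary R (QR factor of a complex Gaussian matrix); it plays the
# role of a change of basis / rotation acting on H by conjugation.
R, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))

H_rot = R @ H @ R.conj().T
T = np.trace(H @ H.conj().T).real           # trace quantity of H
T_rot = np.trace(H_rot @ H_rot.conj().T).real  # trace quantity after rotation
# T and T_rot agree up to floating-point error.
```

The same cyclic-trace argument is what makes the construction carry over to other tensorial quantities such as forces, as discussed above.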
3.**A theoretical analysis on the advantages of the TraceGrad method**.
First, our method constructs SO(3)-invariant quantities, i.e., the trace quantity, to directly supervise SO(3)-invariant features, allowing for the effective learning of informative SO(3)-invariant features that capture the intrinsic symmetry properties of the mathematical structure of $\mathbf{H}$. This helps the model learn informative invariant features and ultimately deliver them to the equivariant features to assist in Hamiltonian prediction. Second, the proposed gradient mechanism, i.e., $\mathbf{v} = \frac{\partial z}{\partial \mathbf{f}}$, where $\mathbf{v}$ and $\mathbf{f}$ are used to regress $\mathbf{H}$ and $z$ is used to regress $\mathbf{T}$, reflects the partial derivative relationship between $\mathbf{H}$ and $\mathbf{T}$, i.e., $\mathbf{H} = \frac{\partial \mathbf{T}}{\partial Conj(\mathbf{H})}$, where $Conj(\cdot)$ denotes the complex conjugate, imposing stronger physical constraints on the relationships between the components of the equivariant features. In contrast to the conventional gated activation mechanism, which can be expressed as $\mathbf{v} = z \cdot \mathbf{f}$, this approach enables effective joint learning of $z$ and $\mathbf{v}$, with supervision provided by $\mathbf{T}$ and $\mathbf{H}$.
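The gradient mechanism above can be checked numerically on a toy case: if $z$ is any non-linear function of the invariant $\|\mathbf{f}\|^2$, then $\mathbf{v} = \frac{\partial z}{\partial \mathbf{f}}$ automatically rotates with $\mathbf{f}$, i.e., it is SO(3)-equivariant. Central differences stand in for autograd, and the choice of $z$ is illustrative, not the paper's learned feature:

```python
import numpy as np

def z(f):
    """Toy SO(3)-invariant scalar: a non-linear function of |f|^2,
    standing in for the learned invariant feature z."""
    return np.tanh(f @ f)

def grad_z(f, eps=1e-6):
    """v = dz/df via central finite differences (autograd in practice)."""
    return np.array([(z(f + eps * e) - z(f - eps * e)) / (2 * eps)
                     for e in np.eye(3)])

f = np.array([0.3, -0.2, 0.5])
R = np.array([[0.0, -1.0, 0.0],   # 90-degree rotation about the z-axis
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

# Equivariance of the gradient: grad_z(R f) equals R grad_z(f).
ok = np.allclose(grad_z(R @ f), R @ grad_z(f), atol=1e-5)  # → True
```

This contrasts with the gated construction $\mathbf{v} = z \cdot \mathbf{f}$, which scales $\mathbf{f}$ by an invariant; the gradient form additionally ties the components of $\mathbf{v}$ to the partial-derivative structure described in the rebuttal.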
4.We will present the related works on Hamiltonian prediction in a dedicated section for clarity.
5.We will revise the equation rendering to ensure standard formatting.
6.Please refer to the 1st item of responses to Reviewer gNai for the discussion on computational cost.
7.We use layer normalization techniques to stabilize the training process.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comment. I will increase my score to 3.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We would like to sincerely thank you for your thoughtful feedback and the constructive comments you have provided. Your suggestions have been immensely helpful, and we truly appreciate the time and effort you dedicated to reviewing our paper. We are grateful for the higher rating and the confidence you have shown in our work.
Best regards,
Authors of this paper | Summary: This paper introduces TraceGrad, a framework that integrates strong non-linear expressiveness with strict SO(3)-equivariance for electronic structure Hamiltonian prediction. The approach first constructs theoretical SO(3)-invariant trace quantities derived from Hamiltonian targets, using them as supervisory signals to learn invariant features. A gradient-based mechanism is then employed to generate SO(3)-equivariant encodings of varying degrees from these learned invariant features. Empirical evaluations on eight benchmark datasets demonstrate improvements in predicting physical quantities and accelerating density functional theory (DFT) computations.
Claims And Evidence: The paper makes two key claims:
* Addressing the Challenge of Combining Non-Linear Expressiveness with SO(3)-Equivariance
* The authors propose a novel approach to this challenge by systematically bridging SO(3)-invariant and SO(3)-equivariant representations.
* The framework first supervises SO(3)-invariant features to ensure strong non-linear expressiveness and subsequently derives SO(3)-equivariant representations through a gradient-based mechanism.
* The mathematical derivations in Section 4 provide a solid theoretical foundation, and empirical results in Section 6 and Appendix H support the claim.
* Significant Performance Gains in Hamiltonian Prediction on Eight Benchmark Datasets
* The authors report that TraceGrad outperforms state-of-the-art methods across eight datasets from the DeepH and QH9 benchmark series.
* This claim is supported by empirical results in Section 6 and Appendix H, which demonstrate improved prediction accuracy and DFT acceleration.
Methods And Evaluation Criteria: * Methods: The approach is well-grounded in equivariant neural networks for Hamiltonian prediction.
* Evaluation Criteria: The chosen metrics align with standard benchmarks in quantum chemistry.
Theoretical Claims: * The construction of SO(3)-invariant trace quantities and the gradient-based mechanism linking invariant and equivariant representations are mathematically sound.
* The authors assert in Introduction and Remark 4.3 that the gradient mechanism induces expressive SO(3)-equivariant representations while maintaining physical interpretability, providing an advantage over gated mechanisms. Empirical ablation results (Appendix H) partially support this claim, but it remains unclear whether model variants using gating mechanisms have comparable parameter counts to those using TraceGrad. A direct comparison in terms of model complexity would strengthen this argument.
Experimental Designs Or Analyses: * The experimental setup is well-structured and comprehensive.
* However, there is no detailed analysis of parameter counts, which is crucial for assessing model expressiveness. The authors state that $g_{\text{nonlin}}(\cdot)$ is implemented as a three-layer fully connected module with a large hidden size, injecting significantly more parameters into the architecture. Since larger models tend to have higher expressiveness, a fair comparison requires increasing the parameter counts of baseline models (e.g., by widening hidden layers or deepening architectures). Providing such comparisons would clarify TraceGrad’s advantages over other methods.
Supplementary Material: Yes. I reviewed the supplementary code and didn't find any obvious mistakes.
Relation To Broader Scientific Literature: TraceGrad’s contribution to equivariant graph neural networks is general and could inspire new model designs for other molecular property prediction tasks beyond Hamiltonian prediction.
Essential References Not Discussed: No essential references appear to be missing from the discussion, as the submission adequately contextualizes its contributions.
Other Strengths And Weaknesses: Strengths
* The paper is well-organized and clearly written.
* The proposed method is theoretically sound and provides an innovative solution to the equivariance-expressiveness tradeoff.
Other Comments Or Suggestions: * A detailed parameter count analysis for various model variants would strengthen the paper and clarify the technical contribution of TraceGrad.
Questions For Authors: * Have the authors analyzed the impact of the hidden size of $g_{\text{nonlin}}(\cdot)$ on model performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. **Response to the reviewer's question about the computational burden**:
First, the branch decoding the trace quantity $\mathbf{T}$ from the SO(3)-invariant features $z$ is only required during the training phase and does not need to be activated during inference; thus, the parameters associated with this branch are unnecessary at inference time. Second, the key difference between our proposed gradient-based mechanism for constructing non-linear SO(3)-equivariant features and the original gated mechanism is that the feature construction has changed from $\mathbf{v} = z \cdot \mathbf{f}$ to $\mathbf{v} = \frac{\partial z}{\partial \mathbf{f}}$. Given the network that learns the SO(3)-invariant features $z$, the gradient operation itself does not introduce additional parameters.
The reviewer's suggestion to analyze the computational cost is reasonable. Unfortunately, existing automated FLOPs counting tools like fvcore, torchprofile, and torchstat do not support precise quantification of the computational complexity for frameworks based on equivariant neural network packages such as E3NN. Therefore, in our paper, we measure the computational efficiency of our method by testing the average inference time under the same GPU/CPU hardware conditions.
In the paper’s Appendix J, we provide a comprehensive comparison of GPU computational costs and corresponding accuracy for different models across four representative databases. From the experimental results in Table 7 of our paper, **we find that incorporating our TraceGrad method results in only a slight increase in inference time compared to the baseline models, i.e., DeepH-E3 or QHNet**. Given the substantial accuracy improvements introduced by the TraceGrad method, this minor increase in computational time is considered acceptable for practical applications. **In contrast, simply doubling the depth of DeepH-E3 or QHNet leads to a significant rise in inference time while providing only limited accuracy improvements.**
Meanwhile, DeepH-E3+TraceGrad and QHNet+TraceGrad exhibit significantly better accuracy performance compared to DeepH-E3$^{\times 2}$ and QHNet$^{\times 2}$, respectively. At the same time, the inference times of DeepH-E3+TraceGrad and QHNet+TraceGrad are considerably lower than those of DeepH-E3$^{\times 2}$ and QHNet$^{\times 2}$, respectively. **These findings highlight the superiority of the TraceGrad method in enhancing model expressiveness and improving accuracy performance while maintaining computational efficiency.**
To further address the reviewer's concern regarding the efficiency differences between the classical gated mechanism and our TraceGrad method, we introduce an additional experimental setup to evaluate inference efficiency. Specifically, we test QHNet+Gate, where the gradient-based mechanism for constructing the non-linear equivariant feature $ \mathbf{v} $ is replaced with a classical gated mechanism, following the definition provided in Appendix H of the paper.
Experimental results show that **QHNet+Gate** achieves inference times of 0.243s and 0.184s on the QS and QD datasets, with $ \text{MAE}^H_{\text{all}} $ values of 1.796 meV and 4.217 meV, respectively. In comparison, **QHNet+TraceGrad** takes 0.248s and 0.187s on the same datasets, with $ \text{MAE}^H_{\text{all}} $ values of 1.191 meV and 2.819 meV, respectively. These results demonstrate that the TraceGrad method introduces only a minor increase in inference time compared to the traditional gated mechanism, while achieving significant improvements in accuracy.
In addition to reporting the GPU inference time, **we also present the inference times of QHNet and QHNet+TraceGrad on a single CPU thread in the paper’s Appendix K**. Experimental results show that while combining TraceGrad introduces only a slight increase from the inference time of QHNet on the CPU, it delivers significant improvements in accelerating the convergence of DFT methods. **Notably, the time saved by TraceGrad for DFT calculations far exceeds the minimal additional time introduced by TraceGrad for the deep model's inference.**
2. We agree with the reviewer’s view that TraceGrad’s contribution to equivariant graph neural networks is indeed general. In addition, **we have found that it demonstrates its effectiveness on another task, namely energy and force field prediction**. For more details, please refer to the 3rd item of our response to Reviewer AGnk.
3. We have reduced the hidden size of $g_{\text{nonlin}}(\cdot)$ by half (from 1024 to 512) and conducted experiments on the QH9-Stable (QS) database. The experimental results show that the $MAE^H_{\text{all}}$ metric increases from 1.191 to 1.347, but still significantly outperforms the baseline method (1.962). This suggests that while the parameter size does impact the accuracy improvements introduced by our method, the effect is relatively limited.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Some of my concerns have been addressed.
* Regarding R1 and R3, the response partially resolves my concerns. While the ‘SO(3)-invariant decoder’ is not activated during inference, the branch that generates SO(3)-invariant features z still introduces additional parameters. Given that model expressiveness is closely tied to the total number of trainable parameters, disregarding these so-called extra parameters when discussing expressiveness and model complexity is not entirely appropriate. A direct comparison of parameter counts should be provided to clarify this point.
* Regarding R2, I appreciate the authors’ efforts in exploring the model’s application to other tasks.
Accordingly, I will maintain my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We would like to express our sincere gratitude for your time and effort in reviewing our paper. We truly appreciate your constructive feedback and the positive comments you provided.
Best regards,
Authors of this paper | Summary: This paper proposes to enhance the Hamiltonian prediction networks with additional invariant supervision and an additional gradient branch. The authors observe that the trace of $H H^T$ is rotation invariant and can be used to supervise the learning of zero order features. Additionally, the gradient of a network using the zero order feature as input w.r.t. the zero and non zero order features is rotation equivariant, so it can be added back to the original equivariant feature. The authors combine these two techniques and test with DeepH-E3 and QHNet models and test on material datasets and small molecule datasets.
Claims And Evidence: - Using the trace of $H H^T$ to supervise zero order feature learning seems to be a reasonable design.
- However, I have a doubt regarding the gradient part. Wouldn't taking the local gradient w.r.t. the equivariant feature result in a linear scaling of the original equivariant feature? I.e., it won't change the direction of the equivariant feature. For example, for an order-1 feature $v\in \mathbb{R}^3$, the zero-order feature from the CG decomposition of $v\otimes v$ would be proportional to the dot product $v\cdot v$. As a result, if we take the gradient w.r.t. $v$, the quadratic dot-product term reduces to a term linear in $v$. This seems to hold even after applying non-linear neural networks, due to the chain rule. So the final equivariant feature from the gradient branch would be a scaling of the original equivariant feature.
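The single-feature case of this concern can be checked numerically: any invariant of the form $z = g(v\cdot v)$ has gradient $2\,g'(v\cdot v)\,v$, a pure rescaling of $v$. A minimal pure-Python check, with a hypothetical nonlinearity $g=\tanh$ standing in for the network:

```python
import math

# For a single order-1 feature v, an invariant z = g(v . v) has gradient
# dz/dv = g'(v . v) * 2v, i.e. a pure rescaling of v (hypothetical g = tanh).
v = [0.3, -1.2, 0.8]
s = sum(x * x for x in v)                  # the invariant dot product v . v
k = 1.0 - math.tanh(s) ** 2                # tanh'(s), via the chain rule
grad = [k * 2.0 * x for x in v]            # dz/dv

# Collinearity check: the cross product of v and dz/dv vanishes
cross = [v[1] * grad[2] - v[2] * grad[1],
         v[2] * grad[0] - v[0] * grad[2],
         v[0] * grad[1] - v[1] * grad[0]]
print(all(abs(c) < 1e-12 for c in cross))  # True: the gradient only rescales v
```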
Methods And Evaluation Criteria: - The benchmarked dataset includes both periodic and non-periodic systems, which is a strength.
Theoretical Claims: - Most equations are descriptive rather than proving something.
Experimental Designs Or Analyses: - The experimental design seems to be sound.
Supplementary Material: - I looked over the appendix.
Relation To Broader Scientific Literature: - The related works are adequately discussed.
Essential References Not Discussed: - Not I am aware of.
Other Strengths And Weaknesses: - The experimental results are promising.
- The theoretical analysis of the proposed gradient branch might be enhanced.
Other Comments Or Suggestions: - I did not notice typo.
- Some ablation studies like (+Trace, +Gate, +Grad) might be presented in the main text.
Questions For Authors: - I noticed the ablation studies in the appendix (Table 5) about DeepH-E3 (+Trace, +Gate, +Grad), which I think are important results. Did the authors observe similar trends for QH9 datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. **Clarification regarding whether the gradient mechanism can change the direction of features**: In Theorem 2 of our paper, for simplicity and to highlight the core ideas, we selected $\textbf{f}$ as a basic feature component of degree $l$. In this case, applying the gradient operation directly to $\textbf{f}$ yields a $\textbf{v}$ that changes only in magnitude, not in direction. In the actual application of this theory, however, the gradient operation is applied to a series of feature components $\textbf{f}_1, \textbf{f}_2, \cdots, \textbf{f}_C$ of degree $l$, producing a corresponding new series of components $\textbf{v}_1, \textbf{v}_2, \cdots, \textbf{v}_C$.
These components are then combined to form a new feature, $\textbf{v} = \sum_{1 \leq c \leq C} w_c \textbf{v}_c$, where $w_c$ represents the combinational coefficients. In this case, both the direction and magnitude of the new feature $\textbf{v}$ are different from those of the original features $\textbf{f}_1$, $\textbf{f}_2$, ..., $\textbf{f}_C$. In fact, our method involves a more in-depth extension (please refer to lines 178-249 in section 5.1 of our paper for details). For different feature components, e.g., $\textbf{f}^{(k)l_i}$ and $\textbf{f}^{(k)l_j}$ in Eq. (2) of our paper, as long as they share the same degree $(l_i=l_j)$, we can construct SO(3)-invariant and SO(3)-equivariant features using them, even if their magnitudes and directions are different. Through such operations and linear combinations, we can ultimately encode new features with flexible direction and magnitude, significantly enhancing the expressive power of the features. **Therefore, while the gradient operation on a single feature component itself does not change its direction, by combining multiple feature components and leveraging gradients and linear combinations, we can encode richer, more expressive feature representations with varying directions and magnitudes.**
We would like to thank the reviewer for their valuable feedback. We will clarify this analysis in the theoretical section of our paper to ensure a more accurate and comprehensive understanding of the gradient-based mechanism we proposed.
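The direction-changing effect described above can be illustrated with a toy construction (this is an illustrative example, not the paper's exact architecture): a hypothetical invariant mixing two degree-1 components, $z=\tanh(\mathbf{f}_1\cdot\mathbf{f}_1+\mathbf{f}_1\cdot\mathbf{f}_2)$, has gradient w.r.t. $\mathbf{f}_1$ equal to $k\,(2\mathbf{f}_1+\mathbf{f}_2)$, which is not collinear with $\mathbf{f}_1$:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Two degree-1 components with different directions
f1 = [1.0, 0.0, 0.0]
f2 = [0.0, 1.0, 0.0]

# Illustrative invariant mixing both components: z = tanh(f1.f1 + f1.f2)
s = dot(f1, f1) + dot(f1, f2)
k = 1.0 - math.tanh(s) ** 2                            # tanh'(s)
grad_f1 = [k * (2.0 * a + b) for a, b in zip(f1, f2)]  # dz/df1 = k * (2 f1 + f2)

# The gradient feature is no longer collinear with f1: its direction changed
print(any(abs(c) > 1e-6 for c in cross(f1, grad_f1)))  # True
```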
2. We will move the ablation studies into the main text of the revised version of our paper.
3. We have added ablation studies on the QH9-Stable (QS) database, where the notation for each experimental setup is consistent with the definitions provided in Appendix H. The results are summarized in the table below. As can be observed, the key conclusion on this database is consistent with that in the DeepH-E3 benchmark series: each individual component (Trace and Grad) of our method contributes positively to the overall performance, and their combination leads to further improvements. Notably, the gradient-based mechanism (Grad) consistently achieves higher accuracy compared to the gated mechanism (Gate).
**Table**: Experimental results measured by the $MAE^H_{all}$, $MAE^H_{diag}$, $MAE^H_{nondiag}$, $MAE^{\epsilon}$, and $Sim(\psi)$ metrics on the QH9-stable (QS) database using 'ood' split strategy. $\downarrow$ means lower values correspond to better accuracy, while $\uparrow$ means higher values correspond to better performance. The units of MAE metrics are meV, while $Sim(\psi)$ is the cosine similarity which is dimensionless.
| **Method** | $MAE^H_{all}$ ↓ | $MAE^H_{diag}$ ↓ | $MAE^H_{nondiag}$ ↓ | $MAE^{\epsilon}$ ↓ | $Sim(\psi)$ ↑ |
|----------------------|--------------------|----------------------|----------------------------|----------------|------------|
| QHNet (Baseline) | 1.962 | 3.040 | 1.902 | 17.528 | 0.937 |
| QHNet+Trace | 1.874 | 2.936 | 1.815 | 16.724 | 0.940 |
| QHNet+Gate | 1.796 | 2.845 | 1.741 | 15.696 | 0.940 |
| QHNet+Grad | 1.604 | 2.587 | 1.558 | 11.393 | 0.942 |
| QHNet+TraceGate | 1.516 | 2.426 | 1.465 | 10.568 | 0.945 |
| QHNet+TraceGrad | **1.191** | **2.125** | **1.139** | **8.579** | **0.948** | | null | null | null | null | null | null |
Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting | Accept (poster) | Summary: The paper proposes an approach to learn the weights of an ensemble of forecasting models based on meta-features of datasets. The approach first featurizes a time-series dataset then predicts optimal weights of models and average them to obtain predictions. The model is trained with a collection of datasets to learn the optimal weight combinations conditioned on the meta-features to a dataset. Experiments are conducted on real-world datasets against time-series base models.
## update after rebuttal
Based on the rebuttal, I decided to increase my score. I still think the paper will need another submission as the changes will be consequential.
**Results comparison.** I reviewed the results provided by the authors and think they will greatly improve the paper; they provide much better baselines than the initial ones.
However, I cannot provide a full assessment, as those results would need a major revision of the paper and another full round of review to carefully verify the fully redacted experimental protocol. For instance, how the ensembles are selected (validation error), why AG / Chronos produce large numbers (it could be that the methods fail catastrophically, but that needs some justification given their high scores on public leaderboards), …
> In contrast, TimeFuse can adaptively predict the optimal weights for each test sample.
I totally see that this could explain the performance improvement; the only caveat is that I think the evaluation setup is a bit restricted, see my comment below on the evaluation protocol.
**Evaluation protocol.** I appreciate the clarification on zero-shot results: you perform inference on a dataset that was not seen during training which is what I understood when reading your paper.
My point is that most experiments are not done in this setup (only Tab 5 does it), and even in this setup, lots of datasets are highly similar (e.g., training on PEMS04/07/08 and evaluating on PEMS03).
However, other methods such as Chronos-Bolt are really in this setup, where they are evaluated on time series that were not seen. I believe the experimental protocol would carry more weight if most of the pretraining were done on one set of datasets and evaluation performed on a distinct one (for instance, using the gift-eval training split and evaluating on their test datasets).
Claims And Evidence: Most of your results are reported on the datasets used for training your meta-learning model. However, the standard practice is to report on hold-out datasets, as the performance can be inflated otherwise (see [Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning](https://www.jmlr.org/papers/v23/21-0992.html) for instance). If you report performance on those datasets, then you should compare with methods that are also trained on those datasets, for instance foundation models such as Chronos-Bolt fine-tuned on those datasets, whose performance would likely be quite a bit better. In Tab 5, you report performance on “unseen” datasets, but those are mostly the same, as it is just the prediction length that is being changed; only PEMS is a true hold-out dataset.
Methods And Evaluation Criteria: In addition to discussing more recent prior work on ensembling for time series, the paper should also consider non-naive ensembling baselines. Taking only the mean and median is a really weak baseline, and it is the only one considered. At the very least, I would recommend comparing with the Caruana approach used in AutoGluon-TimeSeries and AutoSklearn2. I would also recommend looking at portfolio/zero-shot configurations as a second non-naive baseline (described for instance in [Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning](https://www.jmlr.org/papers/v23/21-0992.html)).
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, they currently suffer from two limitations:
* the performances are mostly reported on the datasets used for training as opposed to what is frequently being done in meta-learning
* only very naive baselines are considered for the ensembling
Supplementary Material: No.
Relation To Broader Scientific Literature: The forecasting part is well addressed but the part discussing ensembling is very limited and does not include recent work on ensembling for time-series.
Essential References Not Discussed: The paper seems to claim that it is the first to consider ensemble across different models/architectures:
> these studies are confined to a single-model architecture, focusing on training multiple same-type models based on different dataset views for a static homogeneous ensemble. Our work, in contrast enables dynamic heterogeneous ensemble of models with various architectures
At least, it provides no previous reference whereas considering ensemble across models is an obvious approach and has been discussed for instance in those two papers: [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) and [Multi-Objective Model Selection for Time Series Forecasting](https://arxiv.org/pdf/2202.08485) to mention just two (there are also many references outside of time-series, for instance AutoGluon-Tabular or AutoSklearn2 that I mentioned above). I would also recommend referring to [hasson23a](https://proceedings.mlr.press/v202/hasson23a/hasson23a.pdf) for a discussion on ensembling where the conditioning is done on items, timestamps and forecasting horizon.
Other Strengths And Weaknesses: Strength:
* The paper is well written and easy to read.
* The approach proposed is sound and conditioning on meta-features could improve the final performance of ensembles
Weaknesses:
* Very weak baselines considered
* Performance is mostly reported on the training (meta) datasets
* Discussion in related work of ensemble for time-series forecasting mostly missing
Other Comments Or Suggestions: NA
Questions For Authors: I would recommend to perform all evaluations on unseen datasets using leave-one-out between one of the 16 datasets and also to compare with non-naive ensembling methods such as Caruana or portfolio configurations.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Thank you for the thoughtful review and constructive feedback! With our best efforts, we conducted numerous additional analyses to address your concerns. The full results are at https://anonymous.4open.science/api/repo/ICML25Reb-034C/file/Results.pdf?v=b00206ad. Please understand that due to the strict 5000-character limit, we can only summarize the most critical results and insights in the rebuttal :)**
# Q1: Compare with AutoGluon/AutoSklearn2 ensemble and Chronos-Bolt
We implement and test the following approaches following your suggestions:
- **Ensemble**: forward selection (Caruana), portfolio, and zero-shot techniques from AutoGluon and AutoSklearn2.
- **Foundation Model**: Chronos-Bolt-Base, both zero-shot and finetuned.
- **AutoML Ensemble**: AutoGluon-timeseries with all 24 available base models (with `high_quality` presets).
They are trained and evaluated on all 16 datasets, which are categorized into 3 task types:
- **Long-term Multivariate**: ETTh1/h2/m1/m2, Weather, Electricity, Traffic
- **Short-term Multivariate**: PEMS03/04/07/08
- **Short-term Univariate**: EPF (NP/PJM/BE/FR/DE)
We report the averaged MAE for each task type below (with **1st***/**2nd**/*3rd* best highlighted), please see full results for each dataset and metrics in **Table 1** in the link.
|Task Type|TimeFuse|ForwardEns|PortfolioEns|ZeroShotEns|AutoGluon|ChronosBoltFT|ChronosBolt|
|-|-|-|-|-|-|-|-|
Long-Multi|**0.289***|*0.300*|**0.297**|0.311|44.266|0.598|51.992|
Short-Multi|**21.598***|*22.712*|**22.286**|23.823|43.075|41.447|44.285|
Short-Uni|**0.258***|0.274|0.273|0.274|**0.265**|*0.269*|0.282|
## Key Findings
1. **TimeFuse shows a clear advantage across various task types, and we summarize the reason below.**
2. **Caruana/portfolio/zero-shot ensembles search for the optimal fusion weights at the dataset level, but remain static at test time.** In contrast, TimeFuse can adaptively predict the optimal weights for each test sample.
3. **Chronos is limited to univariate forecasting only and is pretrained with a short prediction length (64)**. In Chronos's paper, evaluations are also limited to univariate short-term (4-56) tasks. Our experiments reveal that Chronos struggles on long-term or multivariate forecasting tasks. While fine-tuning offers improvements, its performance on such tasks is still far from optimal.
4. **AutoGluon shares similar limitations with Chronos, as it also only supports univariate prediction, and Chronos is one of its main base models.** Additionally, AutoGluon inference is notably slow on long-multi tasks (e.g., ~20 hours to predict on the Traffic dataset with 862 variates), primarily due to (i) the need for independent inference per variate, and (ii) many AutoGluon base models lacking GPU acceleration and running solely on CPUs.
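The sample-level weighting in finding 2 can be sketched in miniature: a fusor maps one test sample's meta-features to softmax weights over the base models' predictions. The linear fusor, feature values, and weights below are all hypothetical placeholders, not the paper's trained model:

```python
import math

# Hypothetical illustration of sample-level fusion: a (here, linear) fusor
# maps one test sample's meta-features to softmax weights over the base
# models' predictions. All numbers below are made up for illustration.

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

preds = [1.8, 2.4, 2.1]                      # 3 base-model forecasts for one sample
meta = [0.5, -1.0]                           # e.g. trend strength, spectral entropy
W = [[0.2, -0.3], [0.1, 0.4], [-0.5, 0.1]]   # assumed fusor weights, one row per model

logits = [sum(w * f for w, f in zip(row, meta)) for row in W]
weights = softmax(logits)                    # sample-specific fusion weights
fused = sum(w * p for w, p in zip(weights, preds))
print(round(sum(weights), 6), min(preds) <= fused <= max(preds))
```

A static ensemble would reuse one fixed `weights` vector for every sample; here the weights change with `meta`, which is the distinction finding 2 draws.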
# Q2: Clarification on the evaluation protocol
We appreciate your point on the necessity of testing meta-learning methods on unseen datasets, but we would like to clarify two aspects:
1. **Table 5 presents valid zero-shot results**: TimeFuse does not train on data from the target dataset in any predict length for zero-shot results. For instance, the results for ETTh1 are obtained with the fusor trained solely on 6 other long-term forecasting datasets. Similarly, results for PEMS03 involve training the fusor exclusively with data from PEMS04/07/08.
2. **TimeFuse is not a "pure" meta-learning approach**: We note that TimeFuse is a model fusion framework, it cannot directly predict new datasets without training base models on the new data first.
**Therefore, we opted to compare TimeFuse with other methods (e.g., Caruana/portfolio/Chronos) that also access new data for fine-tuning/searching optimal weights to answer Q1, following your suggestion.**
# Q3: Discussion of related works
We greatly appreciate the suggested papers. We have read these articles and the references cited therein, gaining significant insights from them. We will revise our paper to more accurately reflect our position. Specifically, we believe that compared with existing works, **TimeFuse's core novelty and contribution lie in introducing a methodology for instance-level dynamic ensembling based on the input sample's meta-features, and demonstrating its broad effectiveness in practical applications**. We will discuss these additional related works in the paper to more accurately and comprehensively articulate TimeFuse's relationship with existing research and its position within the literature.
**Finally, we want to thank you again for the thoughtful review, they have been very helpful in improving this paper. We’ve dedicated over 60 hours to implementing the new algorithms, training models, testing, and organizing results, and we will continue expanding those results to further enhance the paper’s quality. We hope these responses address your concerns and would be glad to continue the discussion if you have any further questions!** | Summary: This paper introduces TIMEFUSE, a novel framework designed for adaptive fusion of multiple heterogeneous forecasting models to enhance time series forecasting. Key findings indicate that no single model consistently outperforms others across all samples; each excels in specific scenarios. The method addresses this by adaptively fusing model outputs based on input meta-features (statistical, temporal, spectral). A learnable fusor dynamically determines optimal weights for each model. Extensive experiments show that TIMEFUSE outperforms individual state-of-the-art models, achieving improved accuracy on up to 95.1% of test samples.
Claims And Evidence: Yes, the authors’ claims: that (1) no universal model winner and (2) the effectiveness of adaptive fusion have been validated by empirical results.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria presented in the paper are well-suited for addressing the problem at hand. Given the key insight that no single forecasting model excels universally, it is natural to combine the strengths of multiple base forecasting models to achieve consistently strong performance across all tasks. Moreover, extracting multiple meta-features is a reasonable approach that standardizes the input for each forecasting model without requiring separate processing to meet each model’s specific requirements.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are mostly sound. The choice of baseline models includes a diverse set of recent state-of-the-art models, and the datasets used are standard benchmarks widely recognized in forecasting research.
However, two things are not very clear:
(1) It seems that the fusor is trained on a hold-out validation set and directly minimizes the overall forecasting loss; then what is the training objective of the individual base models? Do they minimize the overall forecasting loss or their own specific losses?
(2) The authors only compare the proposed approach with static ensemble methods and single base models, while there could probably be many other dynamic ensemble methods that produce (soft) combination of models, or MoE that combine predictions from multiple experts. The authors should have a discussion on this aspect.
Supplementary Material: Reviewed all parts of supplementary materials. They include extensions and details of the experiments from the main text that support the claims.
Relation To Broader Scientific Literature: Unlike traditional static ensemble methods (Kourentzes et al., 2014; Oliveira & Torgo, 2015), TIMEFUSE dynamically integrates diverse models using interpretable meta-features inspired by prior research on feature extraction (Barandas et al., 2020). It extends earlier ensemble ideas (Yu et al., 2017; Choi & Lee, 2018) by enabling heterogeneous models and dynamic sample-level weighting, thus bridging the gap between ensemble methods and state-of-the-art single-model forecasting techniques (Wang et al., 2024; Wu et al., 2023).
Essential References Not Discussed: Yes, the author missed discussing a related work. Although the following work didn't employ that many base models and extract different meta-features, the overall problem it frames is very similar:
Han et al., (ICDM 2022), Dynamic Combination of Heterogeneous Models for Hierarchical Time Series
Other Strengths And Weaknesses: Weakness:
1. The framework requires specifying a fixed set of predictors. If there are new available models one needs to retrain the fuser to obtain the updated set of weights.
2. In line 234, oversampling all datasets to match the size of the largest task may break the internal structure of that dataset, particularly for time series with strong temporal dependency.
Other Comments Or Suggestions: None
Questions For Authors: 1. Do all forecasting base models accept the same input format? Should there be an extra processing step to obtain model-specific input?
2. It is still not clear to me why training on raw features will lead to overfitting. The number of samples that raw features provide should be much more than aggregated values from meta-features.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Thank you for your recognition and thoughtful review! We've conducted extensive analyses to address your concerns. Full results: https://anonymous.4open.science/api/repo/ICML25Reb-034C/file/Results.pdf?v=b00206ad. Please understand that due to the 5000-character limit, we can only summarize key findings here.**
# E1: Training objective of base models
**Base models are trained using their standard prediction losses; the overall (fused) forecasting loss in Eq (1) and (2) is only for training the fusor.** In other words, the base models’ training and inference remain entirely standard—we do not modify their input format, loss functions, or inference procedures. Please refer to Appendix A.3 Implementation Details – Base Forecasting Models for more details.
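The two-stage setup above can be sketched in miniature: base-model outputs are held fixed and only fusion weights are fit on held-out predictions. Everything below is a toy assumption (static softmax-parameterized weights, made-up numbers), not the paper's Eq. (1)-(2), where the fusor predicts per-sample weights from meta-features:

```python
import math

# Miniature sketch of the protocol described above: base-model outputs are
# frozen (each model already trained on its own loss), and only fusion
# weights are fit on held-out predictions by minimizing the fused MSE.

def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [x / s for x in e]

target = [1.0, 2.0, 3.0, 4.0]
base_preds = [[1.2, 2.1, 3.3, 3.9],         # frozen base-model forecasts
              [0.8, 1.7, 2.6, 3.5],
              [1.5, 2.6, 3.4, 4.6]]
n = len(target)

logits = [0.0, 0.0, 0.0]
for _ in range(300):                         # gradient descent on fused MSE
    w = softmax(logits)
    fused = [sum(wi * p[k] for wi, p in zip(w, base_preds)) for k in range(n)]
    dw = [sum(2 * (fused[k] - target[k]) * p[k] for k in range(n)) / n
          for p in base_preds]               # dL/dw_i
    inner = sum(wi * g for wi, g in zip(w, dw))
    logits = [l - 0.2 * wi * (g - inner)     # chain rule through the softmax
              for l, wi, g in zip(logits, w, dw)]

w = softmax(logits)
mse = sum((sum(wi * p[k] for wi, p in zip(w, base_preds)) - target[k]) ** 2
          for k in range(n)) / n
print(abs(sum(w) - 1.0) < 1e-9, mse < 0.02)
```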
# E2: Compare with advanced ensemble/MoE algorithms
**Thank you for the great suggestion. We test 4 more advanced ensemble methods from AutoSklearn2 and AutoGluon:**
1. **Forward Selection**: Builds an ensemble by iteratively adding the model that most improves current predictions on a validation set.
2. **Portfolio**: Optimizes the model weights to directly minimize validation loss, similar to risk minimization.
3. **Zero-Shot**: Computes the agreement/similarity among model predictions and assigns higher weights to models that have more agreement with others.
4. **AutoGluon**: (AutoML) learns the optimal ensemble of 24 base models (with `high_quality` presets).
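The forward-selection procedure (method 1 above) can be sketched as a greedy loop; this is a simplified toy version of Caruana-style selection, not AutoGluon's exact code:

```python
# Toy sketch of Caruana-style forward selection (method 1 above): greedily
# add, with replacement, the model whose inclusion most lowers validation
# MAE of the uniformly averaged ensemble.

def mae(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def forward_selection(model_preds, target, rounds):
    selected = []                              # model indices, with replacement
    for _ in range(rounds):
        best_err, best_i = None, None
        for i in range(len(model_preds)):
            trial = selected + [i]
            ens = [sum(model_preds[j][k] for j in trial) / len(trial)
                   for k in range(len(target))]
            err = mae(ens, target)
            if best_err is None or err < best_err:
                best_err, best_i = err, i
        selected.append(best_i)
    return selected

target = [1.0, 2.0, 3.0]
model_preds = [[1.1, 2.2, 2.9],   # accurate model
               [0.5, 1.0, 1.5],   # biased low
               [2.0, 3.0, 4.0]]   # biased high
selected = forward_selection(model_preds, target, rounds=4)
print(selected.count(0) >= 1)     # the accurate model gets picked
```

The implied ensemble weights are the selection frequencies, e.g. `selected.count(i) / len(selected)` for model `i` — fixed per dataset, which is the sense in which this baseline stays static at test time.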
They are trained and evaluated on all 16 datasets:
- **Long-term Multivariate (LM)**: ETTh1/h2/m1/m2, Weather, Electricity, Traffic
- **Short-term Multivariate (SM)**: PEMS03/04/07/08
- **Short-term Univariate (SU)**: EPF (NP/PJM/BE/FR/DE)
We report the averaged MAE for each task type below (with **1st***/**2nd** best highlighted), please see full results for each dataset and metrics in **Table 1** in the link.
|Task Type|TimeFuse|ForwardEns|PortfolioEns|ZeroShotEns|AutoGluon|
|-|-|-|-|-|-|
LM|**0.289***|*0.300*|**0.297**|0.311|44.266|
SM|**21.598***|*22.712*|**22.286**|23.823|43.075|
SU|**0.258***|0.274|0.273|0.274|**0.265**|
## Key Findings
1. **TimeFuse consistently outperforms others across diverse task types.**
2. **Caruana, portfolio, and zero-shot ensembles optimize fusion weights at the dataset level, but remain static at test time and thus underperform TimeFuse's sample-level adaptive ensemble.**
3. **AutoGluon is limited to univariate forecasting and is extremely slow for long-multivariate tasks, due to per-variate inference and lack of GPU support in many base models.**
We note that we also evaluated the pretrained foundation time-series model Chronos. Due to space constraints, please kindly refer to our response to **Reviewer oty5 Q1** for more details.
# R1: Related work
**Thank you for pointing out this related work. We have carefully reviewed the paper and its references and will include a detailed discussion in our revised version.**
# W1 & W2: Clarification on expanding model zoo and oversampling
We appreciate the weaknesses you pointed out and would like to clarify two points:
1. **Retraining the fusor incurs minimal computational cost**—usually just a few minutes. This allows easy integration of new models by simply adding their predictions to the meta-training data and retraining the fuser.
2. **We use oversampling to balance the data distribution and prevent the pattern of minority datasets from being overwhelmed by larger ones.** We understand concerns about its impact on data representation quality, a potential solution is to adopt more advanced data augmentation techniques to achieve distributional balance.
# Q1: Input format of base models
**Yes, all forecasting base models accept the same input format.** As mentioned in our response to E1, the base models’ training and inference remain entirely standard, with no additional processing required.
# Q2: Why not raw features?
**Sorry for the confusion. We’d like to clarify that using raw features directly as meta features results in higher-dimensional meta features, rather than more samples.** For example, consider an input time series sample $X$ with $D$ variates and length $L$. Using raw features means treating $X \in \mathbb{R}^{L \times D}$ as a single input sample (instead of $D$ samples) to the fusor. In contrast, meta features compress information from $X$ into a lower-dimensional space, reducing the risk of overfitting when learning from complex, high-dimensional raw features. We also ran additional experiments using raw features to validate this point; please see **Table 6** in the full results. **Results show that using meta features consistently outperforms raw features by a significant margin across all datasets.**
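To make the dimensionality difference concrete, here is a small illustrative sketch (the specific statistics below are placeholders, not our actual meta-feature set):

```python
import numpy as np

L, D = 96, 7  # input length and number of variates (ETT-style setting)
X = np.random.randn(L, D)  # one input time series sample

# Raw features: the whole sample flattened into a single fusor input vector.
raw_features = X.reshape(-1)  # dimension L * D = 672

# Meta features: compact per-sample descriptors (placeholder statistics only;
# the actual set also covers temporal, spectral, and multivariate features).
per_variate = np.concatenate([X.mean(axis=0), X.std(axis=0)])  # 2 * D values
cross_corr = np.corrcoef(X.T)[np.triu_indices(D, k=1)]         # D*(D-1)/2 values
meta_features = np.concatenate([per_variate, cross_corr])

print(raw_features.shape)   # (672,)
print(meta_features.shape)  # (35,): a far lower-dimensional fusor input
```

Even in this toy setting, the fusor input shrinks from 672 raw values to 35 meta features, which is why overfitting risk is reduced.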
**Thank you again for your recognition :) We’ve devoted over 60 hours to get the new results and analysis, which we believe significantly strengthen the paper. We hope our responses have addressed your concerns and would be glad to discuss any further questions.**
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and my concern was mostly addressed. I also thank the authors for additional efforts on new experiments to strengthen the paper. Overall the paper is technically novel and therefore I maintain my score.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer VM6w,**
**Again, we sincerely appreciate your thoughtful review and encouraging feedback!**
We are pleased to hear that your previous concerns are mostly addressed. Should you have any additional suggestions, we would be more than happy to engage in further discussions and make any necessary refinements to the manuscript :)
**All the best wishes,**
**Authors** | Summary: The manuscript introduces TimeFuse, a fusion method for time series forecasting models. Specifically, TimeFuse uses the outputs of a model zoo and uses meta features of input time series to train a fusion model that predicts the ensemble weights of the individual models from the model zoo. The meta feature set uses various feature sets (statistical, temporal, spectral, multivariate). The authors evaluate their method on 16 datasets and compare the results of TimeFuse with simple mean/median ensemble strategies, along with other further analysis (size of the model zoo, ablation of the meta features, zero-shot performance).
Claims And Evidence: One major claim of the paper is that TimeFuse as an ensemble method outperforms individual models and other ensemble models (top-k mean/median). Another major claim of the paper is that this can be effectively achieved by the presented meta-feature set. Overall, the evidence presented in the paper is through empirical evaluation on the evaluated datasets that beyond mere error reduction provides insight into the fusion weights, size of the model zoo, and learned fusor weights. While the overall evaluation is sound, there are some specific shortcomings of the evaluation that I will address in the next section.
Methods And Evaluation Criteria: ### Datasets
The authors use the widely used long-term forecasting benchmark along with the PEMS and EPF benchmarks. Since the evaluation covers several benchmarks, I think the manuscript offers a sound empirical evaluation of their method. However, the long-term time series benchmark has limitations. These have been explored in recent position papers (https://arxiv.org/abs/2502.14045) and in the NeurIPS 2024 time series workshop (https://neurips.cc/virtual/2024/workshop/84712#wse-detail-108471, see Christoph Bergmeir's talk). I think it is important for the time series forecasting field to move towards more extensive evaluation, as proposed in several recent papers (GiftEval: https://arxiv.org/abs/2410.10393; FEV (from the Chronos paper): https://openreview.net/forum?id=gerNCVqqtR).
In addition, the evaluation presented here has specific issues that I further elaborate in the "Experimental Designs Or Analyses" section.
### Baselines
The authors use the individual models in the model zoo as baselines along with mean/median top-k ensembles. While these baselines are sound, I think additional baselines need to be added to effectively support the claims of this work.
1/ Table 6 shows the ablation and comparison to TSFEL features. Since one of the contributions of this paper is the meta feature set, I think the TSFEL results should be added to the main results section with all evaluated forecasting horizons. The improvements by the introduced meta feature set are small, so I'm wondering if the improvement holds up for the entire benchmark.
2/ I think the mean/median ensembles are too simplistic to serve as baselines for other ensemble approaches, and I also think the comparison to other ensemble-based approaches is missing. I would suggest comparing their method against AutoGluon-TimeSeries as an alternative AutoML package that implements an ensemble/meta method for time series forecasting. I appreciate that the model zoo might be rather different, but it would give a meaningful evaluation against other established AutoML/ensemble packages in that domain.
Additionally, I would suggest that the authors evaluate their method against the ensemble method implemented in AutoGluon (https://openreview.net/forum?id=XHIY3cQ8Tew) with their model zoo (forward selection algorithm):
Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models. In Proceedings of the twenty-first international conference on Machine learning, page 18.
These comparisons would substantiate the claims in the paper and demonstrate improvements against stronger baselines/ensembling methods.
3/ The authors mention that raw features would not work as well as the proposed ensemble approach. However, I would kindly ask the authors to substantiate the claim by showing benchmark results with raw features.
Theoretical Claims: There are no theoretical claims in this work.
Experimental Designs Or Analyses: One specific issue with the specific setup in this paper is that the historical window length is fixed to 96 time steps. One issue that this is introduces is that some of the baseline methods perform much worse in the benchmark presented in this work compared to their original paper. For example, I compared the results in Table 2 in this work with the original PatchTST paper (and with this recent position paper: https://arxiv.org/abs/2502.14045) and found that the error for PatchTST in this work is consistently higher. This is likely attributed to the 96 historical window length As such, the improvement that is showed in this paper is only true relative to some baselines that have been run under suboptimal conditions (96 instead of 512 historical window length). This likely also affects the baseline results TSMixer. I appreciate that there has been earlier work that indicated that the performance of some of the transformer variants does not improve with longer input length. But I think the suboptimal conditions chosen here for PatchTST and TSMixer impact the findings. I would suggest the authors to compare against the baseline models when run against longer input windows.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: As mentioned before previously, this work does not compare against other AutoML/ensemble methods like AutoGluon-TimeSeries and does not compare against strong ensemble baselines such as forward selection. Additionally, for the zero-shot results the paper does not mention and/or compare against recent zero-shot pretrained time series models (TimesFM, Chronos, or even TabPFN-v2). Thus, it is unclear whether the presented method improves over other AutoML packages or ensemble methods or the zero-shot results present improvements over pre-trained models.
Essential References Not Discussed: As mentioned throughout the review, these essential references are missing from the discussion and/or the evaluation:
AutoGluon-TimeSeries: (O. Shchur, A. C. Turkmen, N. Erickson, H. Shen, A. Shirkov, T. Hu, and B. Wang. AutoGluon-Timeseries: Automl for probabilistic time series forecasting. In International Conference on Automated Machine Learning, pages 9–1. PMLR, 2023.)
Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models. In Proceedings of the twenty-first international conference on Machine learning, page 18.
Pretrained time series models should also be discussed as they would be an alternative way to obtain zero-shot results.
Other Strengths And Weaknesses: The paper is clearly written and the empirical study is presented well. I appreciate the simplicity of the approach and I think that the work here gives good suggestions the training methodology through oversampling and alternating batching. As discussed, the weaknesses are mostly with specific issues in the experimental setup and choice of baselines.
Other Comments Or Suggestions: n/a
Questions For Authors: I summarize my main questions to the authors here:
1/ How would the individual model results change if the input context length is increased to 336 or 512 as proposed by earlier work (for example PatchTST)? Would TimeFuse still improve over the baselines? If I compare to the results of the original PatchTST paper, I suspect that the improvement over PatchTST might be much smaller when run with a longer input length, at least for the long-term benchmark datasets.
2/ What would the full results look like for the TSFEL feature set? What improvement has TimeFuse over this feature set?
3/ Does TimeFuse improve over other AutoML/ensemble baselines?
4/ Would the TimeFuse ensemble method improve over stronger ensemble baselines (like forward selection)?
I would consider raising my score if the authors address these points.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Great thanks for your thoughtful review and constructive feedback! With our best efforts, we conducted numerous additional analyses to address your concerns. Full results: https://anonymous.4open.science/api/repo/ICML25Reb-034C/file/Results.pdf?v=b00206ad. Please understand that due to the 5000-character limit, we can only summarize the most critical findings in the rebuttal :)**
# Q1: Experiment with long input length
**Following your suggestion, we trained multiple base models and TimeFuse with longer input context lengths $L$ (both 336 and 512) on 7 long-term forecasting datasets.** Given limited time, we trained PatchTST and models newer than it, except for TimeMixer, which encounters OOM on a V100-32GB GPU when training with L=336 or higher.
We report the averaged MSE for $L=96/336/512$ below (with 1st*/2nd best highlighted); please see full results for each dataset and metric in **Table 4** in the link.
|InputLen (L)|TimeFuse|PatchTST|TimeXer|PAttn|iTransformer|TimesNet|
|-|-|-|-|-|-|-|
|L=96|**0.257***|0.285|**0.269**|0.306|0.287|0.311|
|L=336|**0.248***|0.266|0.270|**0.264**|0.288|0.324|
|L=512|**0.245***|0.260|0.265|**0.260**|0.287|0.329|
## Key findings
1. **Across different $L$, TimeFuse consistently outperforms individual models by dynamically fusing their predictions.**
2. **Both PatchTST and PAttn benefit from longer input lengths.** At L=336/512, PAttn replaces TimeXer as the best model, with PatchTST showing comparable performance.
3. **TimeXer and iTransformer are insensitive to input length, while TimesNet degrades as input length increases.** Despite these mixed trends, TimeFuse effectively learns each model’s strengths across samples to produce more accurate predictions.
Additionally, we want to highlight that we use TSLib for implementing all models with unified APIs and recommended hyperparameters. While there may be slight differences from the original PatchTST implementation, we confirm that **our PatchTST results align with (often better than) those reported in recent papers (e.g., TimeMixer)**.
# Q2 & M1: Results and clarification on TSFEL feature set
Please see **Table 6** in the full results for more results with the TSFEL feature set.
**We want to clarify that our goal in proposing a new meta-feature set was NOT to outperform TSFEL (nor do we claim this as a core contribution)**, but rather to (i) avoid TSFEL's engineering issues and (ii) demonstrate that TimeFuse can integrate with various meta-feature sets and still perform well.
More specifically, we propose the compact set alongside TSFEL for three main reasons:
1. **Unexpected NaN values in TSFEL**: We observed that TSFEL often outputs a large number of NaN values, e.g., 15,876 NaNs on the weather testset. **Table 5** in the full results shows NaN counts across datasets. TSFEL’s documentation offers no clear explanation or fix. In our implementation, we impute NaNs with the feature-wise mean.
2. **Lack of multivariate features in TSFEL**: TSFEL computes features based on a single variable. We believe that inter-variable relationships are also important. Therefore, we constructed a smaller meta-feature set that explicitly includes such information.
3. **Demonstrating TimeFuse’s flexibility**: We see TimeFuse’s ability to work with different meta-feature sets as an advantage: with a well-engineered implementation, TSFEL and other existing sets (e.g., catch22) can all be seamlessly integrated into TimeFuse. Users can also design their own feature sets based on domain needs or interpretability considerations.
# Q3 & Q4 & M2: Comparison with AutoGluon, stronger ensemble algorithms, and foundation time-series model.
Please see **Table 1** in the full results for a comprehensive comparison with the suggested baselines. **Due to space constraints, we kindly refer you to our response to Reviewer oty5 Q1 for detailed analysis.**
**In short, TimeFuse outperforms the ensemble baselines with its test-time adaptive ensembling capability. AutoGluon and Chronos are limited by their univariate-centric design and short pretrained prediction horizons, leading to especially poor performance on long-term or multivariate forecasting tasks.**
# M3: Performance with raw features
**Please see Table 6 in full results for the performance of using raw features.**
Using either our feature set or TSFEL consistently outperforms raw features by a significant margin.
# D1: Discuss on the evaluation datasets
**We greatly appreciate you pointing out the related works on the limitations of long-term time series benchmarks.** We have carefully read the literature and found them very insightful, relevant discussion will be included in the paper.
**We want to thank you again for the thoughtful review. We’ve dedicated over 60 hours to get the new results and will include them to enhance the paper’s quality. We hope our responses address your concerns and are happy to discuss if you have any further questions!**
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. I appreciate the much stronger baselines and updated settings to run those baselines that the authors selected to update their results. However, I agree with reviewer oty5 that the findings regarding AutoGluon/Chronos require some justification as they are inconsistent with other findings in both public leaderboards and literature.
Nevertheless, I have increased my score.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer tYpH,**
**Thank you once again for your thoughtful review and encouraging feedback, it means a lot to us!**
**Please also allow us to briefly clarify our findings regarding AutoGluon and Chronos:**
1. Both have only been evaluated on short-term univariate forecasting in prior work. To the best of our knowledge, **we are the first to evaluate them in a long-term forecasting setting (and it took us much effort)**, where we observed a prediction collapse issue with Chronos when the prediction length exceeds its pretrained horizon of 64.
> Note: In the original AutoGluon and Chronos papers, the prediction lengths vary across datasets without clear justification. On average, they are short: **18.07 (AutoGluon)** and **21.95 (Chronos)**, with some **as low as 4**, please see Table 8/3 in the respective papers at these links: [[AutoGluon](https://arxiv.org/pdf/2308.05566#page=18)] [[Chronos](https://arxiv.org/pdf/2403.07815#page=31)].
2. Neither method natively supports multivariate time series (MTS). Both AutoGluon and Chronos operate on a per-variable basis, requiring separate forecasting for each variable. This not only increases inference time but may also limit their performance compared to recent deep learning models designed for direct MTS forecasting.
**We hope this further addresses your concerns and will include all new results and open-source the corresponding code for full transparency. Thank you again for your valuable support!**
> We'd be truly grateful if you're open to reassessing in light of the new efforts we made to improve the paper :)
**All the best wishes,**
**Authors** | Summary: This paper proposes TIMEFUSE, a framework designed to improve time-series forecasting accuracy by adaptively fusing the predictions of multiple (pre-)trained forecasting models. The core idea is to train a “fusor” model that predicts a suitable set of fusion weights based on a set of expert-designed meta-features extracted from each input time series. By leveraging statistical, temporal, and spectral descriptors to characterize each input sample, TIMEFUSE dynamically weights and combines the outputs of different base models, thereby taking advantage of each base model’s complementary strengths. Experimental evaluations on standard long-term (e.g., ETT, Electricity, Weather, Traffic) and short-term (e.g., PEMS traffic, EPF electricity price) forecasting benchmarks show improvements over state-of-the-art individual models. TIMEFUSE also empirically demonstrates its task-agnostic ability to zero-shot generalize to completely unseen datasets.
Claims And Evidence: The main claims of the paper are (1) no single model can predict all data instances well, and (2) it is possible to train a simple model based on a set of expert-designed meta features to predict the combination weights for all models to get adaptive predictions. Both claims seem intuitively reasonable and are backed up well with motivating examples and empirical results.
Methods And Evaluation Criteria: The methods follow the main claims well.
The datasets, the experimental settings (look-back length and forecasting length) and the main evaluation metrics (MSE and MAE, etc.) follow the common practice in MTSF.
Theoretical Claims: To my understanding, there are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experimental designs, i.e., main results, ablation studies, and sensitivity to the model zoo size, seem reasonably complete and can empirically back up the advantages of the proposed method.
However, using multiple models instead of a single model requires more resources and time for both training and inference. The tradeoffs between forecasting accuracy and (a) training time and (b) inference time for different model zoo sizes should also be reported to the readers.
Supplementary Material: Though the implementation details were provided in the appendix, the code is not publicly available.
Relation To Broader Scientific Literature: To my understanding, there are no clear relations to a potential broader scientific literature, besides MTSF and its applications.
Essential References Not Discussed: To my knowledge, this paper refers to a reasonably good number of references.
Other Strengths And Weaknesses: S1. The paper is largely well-written and easy to follow.
Besides the experimental designs (forecasting accuracy vs. training and inference time under different model zoo sizes) to be clarified, some other weaknesses include:
W1. It seems the TSFEL feature set is pretty comprehensive and shows similar performance; why not just use it? What is the specific technical novelty of the TIMEFUSE meta-features compared with TSFEL?
W2. The training objective is not clear. Specifically, what is the optimal combination weight, and how is it obtained? Should we choose the best single model, or the best weighted sum of the models, even though all models might either overestimate or underestimate? How are ties broken? Please clarify the details and justify the design choices.
Other Comments Or Suggestions: C1. Although TIMEFUSE demonstrates good average forecasting accuracy, it would be very useful if the authors could provide a per-instance breakdown of the best models, as in Figure 1. This can demonstrate how far TIMEFUSE is from the ideal case, and relevant discussions about what might be missing would be very beneficial for understanding this question.
C2. I like Figure 4. Besides the weights, can you also include the forecasting accuracy of each model in Figure 4, such that it would be easy to see whether TIMEFUSE assigns higher weights to more accurate models?
C3. I like Figure 6, too. Can the authors also include the domain and properties of each dataset in this figure, to further enhance the interpretability of TIMEFUSE?
Questions For Authors: Q1. Ensemble learning was a popular learning paradigm to enhance accuracy before the foundation model era. However, with the emergence of foundation models, a popular understanding is that large models beyond a certain scale can be much more powerful than ensembles of smaller models. Can the authors envision the possibility of large models for MTSF (there are already quite a few explorations), and whether or how TIMEFUSE can still be useful with large MTSF models?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Thank you for the thoughtful review and constructive feedback! With our best efforts, we conducted numerous additional analyses to address your concerns. Full results: https://anonymous.4open.science/api/repo/ICML25Reb-034C/file/Results.pdf?v=b00206ad. Please understand that due to the 5000-character limit, we can only summarize the most critical insights in the rebuttal :)**
# W1: Why not TSFEL
We propose a compact meta-feature set alongside TSFEL for three main reasons:
1. **Unexpected NaN values in TSFEL**: We observed that TSFEL often outputs a large number of NaN values, e.g., 15,876 NaNs on the weather testset. **Table 5** in the full results shows NaN counts across datasets. These NaNs are scattered across different samples and features. TSFEL’s documentation offers no clear explanation or fix. In our implementation, we impute NaNs with the feature-wise mean.
2. **Lack of multivariate features in TSFEL**: TSFEL computes features only on a single variable. We believe that for multivariate time series, inter-variable relationships are important. Therefore, we constructed a smaller meta-feature set that explicitly includes such information.
3. **Demonstrating TimeFuse’s flexibility**: We see TimeFuse’s ability to work with different meta-feature sets as an advantage: with a well-engineered implementation, TSFEL and other existing sets (e.g., catch22) can all be seamlessly integrated into TimeFuse. Users can also design their own feature sets based on domain needs or interpretability considerations.
# W2: Training objective and other details
- **Training objective:** Explicitly computing optimal model weights for each sample requires *solving a least squares problem for each sample*, which is computationally expensive and unnecessary. In practice, our training objective in Eq. (1) directly optimizes the fusor via backpropagation.
- **How to ensemble:** Our goal is to learn the best weighted sum of base models. This has several advantages over selecting the best single model: (i) **no need to break ties** since we’re not picking just one model; (ii) **more flexible integration**: for example, if one model overestimates and another underestimates, their combination can outperform either model alone. In cases where all models over-/underestimate, our approach can fall back to single-model selection. In practice, such cases can also be mitigated by including more diverse models in the zoo.
We will carefully revise the paper to better clarify these details.
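As an illustrative sketch of the weighted-sum fusion described above (toy shapes and numbers; not our actual fusor architecture), the fusor's raw outputs are normalized into fusion weights, and the fused forecast is the weighted sum of base-model predictions, so complementary over-/under-estimation can cancel:

```python
import numpy as np

def fuse(base_preds, logits):
    """Weighted-sum fusion: base_preds has shape (K, H, D) for K base models;
    logits (K,) are the fusor's raw outputs, normalized via softmax."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()  # fusion weights, guaranteed to sum to 1
    return np.tensordot(w, base_preds, axes=1)  # (H, D) fused forecast

# Toy case: one model overestimates and one underestimates by the same margin.
truth = np.ones((4, 2))                         # horizon H=4, D=2 variates
preds = np.stack([truth + 0.25, truth - 0.25])  # K=2 base model predictions
fused = fuse(preds, logits=np.zeros(2))         # equal weights
print(np.abs(fused - truth).max())              # 0.0: the errors cancel
```

With strongly skewed logits the same computation falls back toward single-model selection, e.g. `fuse(preds, np.array([100.0, -100.0]))` essentially returns the first model's prediction.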
# C1: Sample-level breakdown analysis
**Great suggestion! Please see Table 3 in the full results, where we report the sample-level win rate and average rank for all 13 base models and TimeFuse, using the same settings as Fig 1.**
Overall, TimeFuse achieves a **46.11%** win rate, with base models ranging from **0.12% to 9.89%**. For average rank, TimeFuse scores **2.37**, compared to **4.09–13.83** for base models, significantly outperforming any single model at the sample level. For cases where TimeFuse doesn’t outperform all base models, your point about “all models over/underestimating” is likely a cause. This might be addressed by adding a prefiltering step to select which models participate in the ensemble.
# C2: Update Figure 4
Please see **Figure 1** in the full results for an updated version, where we order the base models by performance. Generally, TimeFuse tends to assign higher weights to more accurate base models.
# C3: Clarification on Figure 6
Thank you for the suggestion. We note that Fig. 6 shows the fusor weights jointly trained across all long-term forecasting datasets. Its goal is to reveal the relationship between model weights and **input sample** properties, so the characteristics of a single dataset should not affect the results in Fig. 6.
# Q1: On large MTSF models
**We believe ensemble methods can complement large MTSF models rather than compete with them.** In fact, existing AutoML frameworks like AutoGluon already use pretrained MTSF models as base models in ensembles and achieve improved performance.
Additionally, current large MTSF models still have notable limitations: for example, we found that Chronos (a pretrained MTSF model) struggles with long-term or multivariate forecasting due to its univariate-centric design. Please kindly refer to our response to Reviewer oty5 Q1 for more details.
# W0: Accuracy-time tradeoff
**Since TimeFuse's fusor has a very simple architecture, its computational cost is minimal compared to the base model.** As shown in the **Table 2** of full results, the batch inference time of TimeFuse is nearly identical to that of the base model, indicating negligible overhead. We will include a detailed discussion in the paper.
**We want to thank you again for the thoughtful review. We’ve dedicated over 60 hours to get the new results and will include them to enhance the paper’s quality. We hope our responses address your concerns and are happy to discuss if you have any further questions!**
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My concerns are mostly addressed. I also agree with other reviewers' concerns on the evaluation setup. Under the expectation that these rebuttal discussions and experimental findings will be included in the revision for the final paper, I decided to increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer F3V1,
Again, we thank you for your thoughtful review and encouraging feedback, it means a lot to us! We will incorporate all new results into the paper and open-source the corresponding code for full transparency. Thank you again for your support!
All the best wishes,
Authors | null | null | null | null | null | null |
CommVQ: Commutative Vector Quantization for KV Cache Compression | Accept (poster) | Summary: This paper introduces CommVQ, which significantly reduces KV cache memory in long-context LLMs while preserving accuracy. It uses additive quantization with a lightweight encoder and a RoPE-commutative codebook for efficient self-attention integration. In practice, CommVQ enables 1-bit KV cache quantization with minimal accuracy loss.
Claims And Evidence: As listed at the end of the Introduction section, the main claims made in the submission are:
(1) maintaining performance with per-token quantization without using small groups
(2) realizing real-world efficiency gains through the commutative property of the RoPE matrix and the characteristics of self-attention
(3) enabling 1-bit KV cache quantization
All three points are justified in the experimental section.
Methods And Evaluation Criteria: This paper follows the common practice of KV cache compression work. The selected benchmarks, i.e., LongBench and InfiniteBench, and the evaluation criteria make sense for long-context scenarios.
However, I believe LongBench has many more subtasks besides the reported 8 tasks. Also, it would be great to see the needle-in-a-haystack test.
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, it follows default setting of LongBench and InfiniteBench
Supplementary Material: Yes, I double checked A.5 Table 10
Relation To Broader Scientific Literature: It is nice to see 1-bit KV Cache quantization can still maintain the performance in the long context scenario!
Essential References Not Discussed: NA
Other Strengths And Weaknesses: There are no details on the Triton implementation of the proposed method. How is the kernel implemented, and why is it faster than the baseline?
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and thoughtful suggestions.
**1. Full Longbench and Needle-in-a-Haystack results**
The full LongBench consists of 21 tasks in total. Following prior works such as KIVI and KVQuant, we report results on the same eight representative tasks for fair comparison and consistency. To address your comment, we now provide results on the full LongBench benchmark, including all 21 tasks, using the **LLaMA-3.1 8B** model.
The results below show that CommVQ-2 maintains accuracy across nearly all subtasks, while CommVQ-1 achieves competitive performance even under extreme 1-bit compression.
| | FP16 | KIVI-2 | CommVQ-2 | KIVI-1 | CommVQ-1 |
|:----------------------:|:-------:|:--------:|:----------:|:--------:|:----------:|
| Avg. bit | 16 | 3.00 | 2.00 | 2.00 | 1.03 |
| Qasper | 25.19 | 22.71 | 24.67 | 4.99 | 18.86 |
| QMSum | 23.31 | 24.33 | 24.36 | 9.57 | 23.02 |
| MultiNews | 26.82 | 27.29 | 26.48 | 9.20 | 24.34 |
| TREC | 72.50 | 72.50 | 72.50 | 38.75 | 69.00 |
| TriviaQA | 91.65 | 92.06 | 91.92 | 25.07 | 91.61 |
| SAMSum | 43.49 | 43.26 | 43.98 | 11.93 | 41.83 |
| LCC | 52.47 | 51.32 | 53.02 | 17.67 | 48.78 |
| RepoBench-P | 49.01 | 47.53 | 46.92 | 16.40 | 42.08 |
| NarrativeQA | 31.69 | 31.47 | 32.20 | 2.87 | 29.80 |
| MultifieldqaEN | 29.16 | 27.50 | 29.47 | 7.11 | 24.93 |
| MultifieldqaZH | 19.95 | 19.92 | 19.47 | 4.83 | 18.97 |
| HotpotQA | 17.18 | 20.14 | 19.97 | 7.88 | 16.48 |
| 2wikimQA | 16.36 | 17.14 | 16.98 | 5.85 | 14.11 |
| Musique | 11.64 | 11.99 | 12.43 | 3.77 | 9.40 |
| Dureader | 29.66 | 27.76 | 26.59 | 7.79 | 22.06 |
| GovReport | 34.54 | 34.12 | 31.87 | 9.16 | 26.41 |
| Vcsum | 16.15 | 15.94 | 16.39 | 9.40 | 15.45 |
| Lsht | 46.00 | 45.00 | 45.50 | 17.25 | 31.50 |
| PassageCount | 6.02 | 8.19 | 8.04 | 5.83 | 8.29 |
| PassageRetrievalEN | 98.45 | 97.25 | 98.55 | 23.02 | 97.28 |
| PassageRetrievalZH | 77.72 | 67.45 | 84.25 | 2.76 | 89.00 |
| **Average** | 39.00 | 38.33 | **39.31**| 11.48 | **36.34**|
These results further validate CommVQ’s generalization across a wide range of long-context tasks and domains.
We also provide the results for the **Needle-in-a-Haystack** test using the **LLaMA-3.1 8B** model. The NIAH result figures are included in this link: https://github.com/commvq/CommVQ/blob/main/fig1.png
We can see that CommVQ-2 preserves the full retrieval capability of the FP16 baseline, while our 1-bit quantized version, CommVQ-1, significantly outperforms its counterpart, KIVI-1.
**2. Triton kernel implementation details**
Our Triton kernels implement the techniques described in Section 4 of the main paper. Specifically, they include:
1. A kernel that fuses RoPE application with commutative codebook decoding, reducing intermediate memory operations;
2. A mixed-precision batched matrix multiplication for efficient computation;
3. Loading low-bit representations on the fly, avoiding the need to upcast them to higher precision as required in native PyTorch implementations.
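To illustrate point 3, here is a minimal NumPy sketch of on-the-fly additive-quantization decoding (toy sizes and variable names are placeholders, not our actual Triton code): each token's vector is reconstructed by summing one selected centroid from each codebook just before it is used.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_books, n_centroids, n_tokens = 8, 4, 16, 5   # toy sizes (placeholders)

# Additive quantization: one codebook per stage, shape (n_books, n_centroids, d)
codebooks = rng.standard_normal((n_books, n_centroids, d))
# Low-bit KV cache: one small integer code per (token, codebook)
codes = rng.integers(0, n_centroids, size=(n_tokens, n_books))

# Decode on the fly: each token vector is the SUM of its selected centroids
decoded = codebooks[np.arange(n_books), codes].sum(axis=1)  # (n_tokens, d)

# Same computation as an explicit loop, for clarity
expected = np.zeros((n_tokens, d))
for t in range(n_tokens):
    for b in range(n_books):
        expected[t] += codebooks[b, codes[t, b]]
assert np.allclose(decoded, expected)
```

In a fused kernel, this lookup-and-sum can be performed in registers right before the attention matmul, so the full-precision cache is never materialized in memory.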
Together, these optimizations reduce memory access overhead and improve compute utilization by fully leveraging Triton’s capabilities. This implementation is under active development and will continue to be optimized. We will release the Triton kernels, along with code and implementation details, upon acceptance. | Summary: This paper proposes a novel method, CommVQ, for compressing the KV cache in LLMs. The core innovation lies in using additive vector quantization—treating each token’s key/value vector as a unit rather than quantizing individual scalars—and designing a “commutative” codebook that allows efficient integration with RoPE. Experimental results on multiple long-context benchmarks and reasoning benchmarks show that CommVQ reduces memory usage while maintaining high accuracy relative to other KV cache compression baselines. The authors also provide an implementation that demonstrates real memory savings, enabling longer context sizes and larger batch sizes on a single GPU.
## update after rebuttal
The authors have addressed my questions. I keep my opinion that this paper leans toward being accepted.
Claims And Evidence: The major claims are supported by experiments (e.g. reduced KV size and better performance)
Methods And Evaluation Criteria: Overall make sense.
Could consider more datasets with long generations, such as MATH/AIME.
Theoretical Claims: The complexity computation part is correct.
For the RoPE part, it does not contain a rigorous proof, but no issues stand out.
Experimental Designs Or Analyses: Experiment design is valid.
The paper needs to report throughput/latency to show efficiency.
Supplementary Material: Yes, the ablation experiments.
Relation To Broader Scientific Literature: This work is about KV cache compression, related to literature on quantization, token eviction, etc.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The commutative codebook idea is elegant and likely generalizable to many LLMs using RoPE-based positional encoding.
- The experiments show strong performance on multiple tasks at extremely low bit-rates (1–2 bits).
Weaknesses:
- No throughput or latency comparison to existing compression baselines is provided. Demonstrating real-world decode speed on multiple tasks or system loads would bolster confidence in the claimed advantages.
- The method does not seem to combine well with other KV cache compression or retrieval-based methods.
- Lack of exploration of how domain shifts (in the data used to learn the dictionary vs. real downstream data) might affect compression quality.
- Benchmark datasets: consider LongBench v2, MATH, or AIME.
Other Comments Or Suggestions: - Some details on how to incorporate the commutative constraints during codebook training could be expanded.
- The paper might include best practices for calibrating the dictionary if the user’s data distribution significantly shifts from the calibration domain.
- It would be good to release code/implementation details in the supplement/appendix.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive feedback. Below, we address the key concerns raised.
**1. Latency comparison**
Please see our response to **Reviewer 86AS (2. Latency comparison with the FP16 baseline and prior methods such as KIVI)** for a detailed latency comparison. In summary, CommVQ-1 is consistently faster than KIVI-1, especially as context length increases, as measured on the **LLaMA-3.1 8B** model using a single NVIDIA H100 80G GPU.
**2. Compatibility with other compression methods**
CommVQ is compatible with other KV cache compression techniques like token eviction and quantization.
For **token eviction or retrieval-based methods**, which retain only key tokens during decoding, CommVQ integrates naturally by quantizing just those essential tokens. This allows further compression as only selected tokens are encoded and decoded using the learned codebook.
CommVQ can also benefit from **codebook quantization**, reducing storage and speeding up decoding via low-bit matrix operations.
In short, CommVQ is **orthogonal** to other methods and can be combined with them for greater compression. Exploring such combinations is a promising direction for future work.
**3. Robustness under domain shift**
Thanks for pointing this out. In practice, we find that our codebooks and encoder trained on general pre-training datasets (e.g., FineWeb-Edu) transfer reasonably well across tasks and domains. To validate this point, we conducted an ablation study on the **LLaMA-3.1 8B** model to show CommVQ-2's perplexity changes compared to the FP16 baseline on 4 datasets:
1. **FineWeb-Edu**: a **general** dataset.
2. **GSM-8K**: a **math** benchmark.
3. **Repobench-p**: a **code** retrieval and completion benchmark.
4. **KV_Retrieval in InfiniteBench**: a **synthetic** UUID key-value retrieval benchmark.
The first dataset represents in-domain evaluation, while the last three represent evaluations with domain shifts—i.e., the codebooks and encoder are trained on **general text** and tested on **math**, **code**, and **synthetic UUID data**.
| | FineWeb-Edu | GSM-8K | Repobench-p | KV_Retrieval |
|:---:|:---:|:---:|:---:|:---:|
| FP16 | 10.17 | 5.67 | 2.20 | 31.93 |
| CommVQ-2 | 11.54 | 6.14 | 2.78 | 32.72 |
| PPL Diff | 1.37 | 0.47 | 0.58 | 0.79 |
We find no significant increase in perplexity (PPL) due to domain shifts when compared to in-domain evaluations. This suggests that our method performs consistently well across domains that differ from the calibration data, including synthetic UUID data, which is unlikely to appear in the calibration set. Overall, we conclude that our method is robust and generalizable under domain shifts.
Finally, if a significant domain shift is encountered, we recommend further fine-tuning the encoder and codebook on domain-specific data—similar to best practices in other calibration-based quantization methods.
**4. Additional benchmark (e.g., LongBench v2)**
Thank you for the suggestion. Due to the limited time available during the rebuttal phase, we conducted additional evaluations on **LongBench v2** using the **LLaMA-3.1 8B** model and compared them against **KIVI**, **KVQuant**, and **VQLLM**. KIVI and KVQuant fail to produce meaningful output with 1-bit quantization, so their results are omitted from the table. As shown below, CommVQ continues to outperform the baseline methods at comparable average quantization bit levels.
| Method | Avg. bit | Easy | Hard | Short | Medium | Long | Overall |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FP16 | 16 | 27.1 | 25.4 | 30.6 | 24.2 | 22.2 | 26.0 |
| KIVI-2 | 3.00 | 25.7 | 24.8 | 25.6 | 25.7 | 23.1 | 25.1 |
| KVQuant-2 | 2.33 | 26.2 | 21.0 | 26.7 | 20.1 | 22.4 | 23.0 |
| VQLLM-2 | 2.00 | 15.6 | 17.7 | 21.1 | 15.3 | 13.0 | 16.9 |
| **CommVQ-2** | 2.00 | 24.5 | 26.0 | 28.3 | 23.7 | 24.1 | **25.4** |
| VQLLM-1 | 1.00 | 7.8 | 6.8 | 8.9 | 8.4 | 1.9 | 7.2 |
| **CommVQ-1** | 1.03 | 25.5 | 22.2 | 26.7 | 21.4 | 22.2 | **23.5** |
**5. Explanation on how to incorporate the commutative constraints during codebook training**
Thank you for pointing this out. The commutativity constraint is enforced by restricting each 2D subvector in the codebook to the form `[[x, y], [-y, x]]`, which ensures commutativity with RoPE rotations. In other words, for each 2D subvector, we only learn two scalars—x and y—and use them to construct the subvector for computation. We will provide a more detailed explanation in our future revision.
**6. Code and Implementation**
Upon acceptance, we will open-source our training pipeline, model weights, and optimized Triton kernels to facilitate reproducibility and further research. Please refer to the response to **Reviewer 9BoD (2. Triton kernel implementation details)** for explanations of the Triton kernel implementation.
---
Rebuttal Comment 1.1:
Comment: Thanks for sharing the additional results. Could you provide a throughput comparison under different batch sizes and sequence length settings? This information would be particularly useful because kV cache compression is especially beneficial when serving large batch sizes and longer sequences.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We present a latency comparison with KIVI under various batch sizes and sequence length configurations, using the LLaMA-3.1 8B model. The batch sizes (BS) are set to 1, 2, 4, and 8, while the sequence lengths are varied across 8K, 16K, 32K, 64K, and 128K (the model’s maximum) tokens.
We start with the smallest batch size and gradually increase the sequence length until KIVI encounters an out-of-memory (OOM) error. Once that occurs, we move on to the next batch size. For each configuration, we record the total latency for generating one next token, measured in seconds per token (s/token). The results are summarized in the table below.
Across all tested settings, our method consistently achieves lower latency compared to KIVI. In general, the advantage becomes more pronounced as the sequence length increases across different batch sizes. We attribute this to our method’s higher compression rate and efficiency-oriented design.
| | Method | 8K | 16K | 32K | 64K | 128K |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| BS=1 | KIVI-1 | 0.045 | 0.057 | 0.102 | 0.190 | 0.297 |
| | **CommVQ-1** | **0.031** | **0.032** | **0.050** | **0.085** | **0.152** |
| BS=2 | KIVI-1 | 0.053 | 0.090 | 0.162 | OOM | - |
| | **CommVQ-1** | **0.035** | **0.053** | **0.088** | **0.158** | - |
| BS=4 | KIVI-1 | 0.088 | 0.159 | OOM | - | - |
| | **CommVQ-1** | **0.056** | **0.091** | **0.161** | - | - |
| BS=8 | KIVI-1 | 0.154 | OOM | - | - | - |
| | **CommVQ-1** | **0.117** | **0.206** | - | - | - | | Summary: * This paper leverages additive quantization by introducing a lightweight encoder and codebook to compress the KV cache, which can then be decoded with a simple matrix multiplication.
* The authors design a codebook that commutes with Rotary Position Embedding (RoPE), and utilize an Expectation-Maximization (EM) algorithm to learn the codebook, which allows for efficient integration of decoding into the self-attention mechanism to reduce computation overhead.
* The paper shows an impressive 87.5% reduction in FP16 KV cache size while maintaining accuracy on standard long-context datasets.
Claims And Evidence: * "Existing quantization techniques treat each scalar in the KV cache independently, CommVQ performs quantization at the vector level. The method uses additive quantization" This seems like a very promising insight.
* "We refine our codebook to be RoPE-commutative. This refinement enables us to reformulate the self-attention computation to incorporate the decoding process more efficiently. " Great to see focus on efficiently achieving the required quantization, making deployment easier.
* The codebook is learned using a simple neural network by minimizing the reconstruction error, and the technique used is EM. It would have been great to see some more insights behind the choice of this technique.
Methods And Evaluation Criteria: * Neural network-based learnt codebook with EM algorithm to minimize the reconstruction error. Overall this approach makes sense and is intuitive.
* Datasets used are standard for long-context evaluation for LLMs.
* "We provide an ablation study on how to choose Nc′, R and g in Appendix A.4." These choices seem crucial for CommVQ's performance, and it would have been good to have at least the insights behind them in the main body of the paper.
Theoretical Claims: * Equations in Section 4.2 seem convincing and make sense. However, a lot of the key details seem to be moved to the appendix. It would be helpful to at least have the main insights from them in the main body of the paper.
Experimental Designs Or Analyses: * The experimental section is well set up and the baselines are appropriate. I was surprised to see the evaluation with only models of 8B parameters, where the KV cache is still relatively small. Given that the KV caches of larger models would greatly benefit from quantization, it would be helpful to see CommVQ's performance on 70B or larger models.
* Performance gains of CommVQ-2 seem to be marginal. This seems to suggest that CommVQ only performs well for 1-bit quantization. My concern here is how useful gains on 1-bit quantization are; do operators typically use 1-bit quantized models? It would be great to see some evidence for this.
* I am missing where the initial claim of 87.5% reduction is coming from. Is this for reducing from 16-bit to 2-bit? In that case, the 2-bit quantization seems to have marginal gains. Most of the gains seem to be in 1-bit, but then the reduction should be 93.75% rather than 87.5%.
* Figure 2 has impressive results, I liked the focus on reduced cost and overhead and the extensive demonstration of the same.
Supplementary Material: I skimmed over the appendices.
Relation To Broader Scientific Literature: * This is a very promising direction and quantization of KV Cache has extremely practical consequences. I liked the way the problem was structured for both maintaining quality and having efficient inference.
Essential References Not Discussed: References are adequate.
Other Strengths And Weaknesses: * Promising overall direction with a dual focus on quality and cost of compression.
* Insights on commutativity of ROPE is very useful.
Other Comments Or Suggestions: N/A
Questions For Authors: * I'd encourage the authors to provide more insights into whether 1-bit quantization has actual practical benefits. At first glance, it seems a little strange that 16 bits can be compressed to 1 bit with a win on both efficiency and quality.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and positive assessment of our direction and formulation. Below, we address the key concerns raised.
**1. Experiments on larger models (e.g., 70B)**
We focused on 8B models (LLaMA-2, Mistral, and LLaMA-3.1) due to their popularity and our resource constraints. However, to address your suggestion, we have now evaluated CommVQ on the **LLaMA-3.1 70B** model under 1-bit quantization using LongBench. Results are shown below:
| Method | Avg. bit | Qasper | QMSum | MultiNews | TREC | TriviaQA | SAMSum | LCC | RepoBench-P | Average |
|:------------:|:--------:|:--------:|:--------:|:------------:|:--------:|:-----------:|:---------:|:--------:|:----------------:|:---------:|
| FP16 | 16 | 45.46 | 23.75 | 27.76 | 75.50 | 93.18 | 46.23 | 37.62 | 55.24 | 50.59 |
| KIVI-1 | 2.00 | 7.90 | 6.92 | 11.13 | 37.50 | 52.21 | 8.99 | 24.02 | 21.26 | 21.24 |
| **CommVQ-1** | 1.03 | 37.96 | 22.03 | 22.17 | 58.50 | 92.41 | 38.63 | 33.16 | 44.93 | **43.72** |
These results show that CommVQ generalizes well to the 70B model, achieving strong accuracy using a 1-bit KV cache. This confirms CommVQ’s scalability and practical utility in larger models.
**2. Actual practical benefits for 1-bit quantization**
Compressing the KV cache to 1-bit offers several practical benefits. The most significant benefit is a 16× memory reduction compared to the standard FP16 KV cache. These memory savings are applicable regardless of the model's precision, meaning the model doesn't need to operate at 1-bit to take advantage of 1-bit KV cache quantization.
Such a substantial memory reduction not only allows a single GPU to handle longer context lengths and larger batch sizes (as shown in Figure 2 of our main paper), but it also significantly reduces data transfer time when offloading the KV cache to non-GPU memory. This benefit is especially notable in offloading scenarios, which are common when serving large models or deploying on edge devices. In these cases, limited PCIe bandwidth often makes KV cache transfer a major bottleneck. Reducing the size of the KV cache helps mitigate this issue, leading to faster overall inference.
**3. Clarification on "87.5% reduction"**
We clarify that the 87.5% reduction refers to the reduction from **FP16 (16-bit) KV cache to 2-bit**, i.e., (1 - 2 / 16) = 87.5%. This applies to CommVQ-2. For CommVQ-1 (1-bit), the reduction reaches **93.75%**, enabling even more aggressive compression.
**4. More insights behind the codebook, encoder and EM algorithm**
Thank you for your suggestions. We chose to use a simple neural network as the encoder, as it proved to be simple yet effective in our preliminary experiments. We selected the EM algorithm because it offers two advantages:
1. The EM algorithm **converges very quickly**, requiring far fewer iterations than gradient-based approaches.
2. The EM algorithm incorporates principled techniques—such as **soft assignment** and **annealing**—to prevent **mode collapse**, where some centroids become obsolete after a few iterations. This has been one of the major challenges in quantization algorithms.
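For illustration, here is a minimal soft-assignment EM codebook update with temperature annealing (a simplified k-means-style sketch of the ideas above, not our exact algorithm; all names are our own):

```python
import numpy as np

def em_codebook(X, n_centroids=8, iters=20, temp=1.0, anneal=0.9):
    """Fit a codebook to the rows of X with soft-assignment EM.

    Soft responsibilities (a softmax over negative squared distances) keep
    every centroid receiving update mass, mitigating mode collapse; annealing
    the temperature sharpens the assignments over iterations.
    """
    rng = np.random.default_rng(0)
    C = X[rng.choice(len(X), n_centroids, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, k) sq. dists
        logits = -d2 / temp
        logits -= logits.max(axis=1, keepdims=True)          # stable softmax
        r = np.exp(logits)
        r /= r.sum(axis=1, keepdims=True)                    # E-step: responsibilities
        C = (r.T @ X) / r.sum(axis=0)[:, None]               # M-step: weighted means
        temp *= anneal                                       # anneal toward hard assignment
    return C

X = np.random.default_rng(1).standard_normal((256, 4))
C = em_codebook(X)
# Nearest-centroid reconstruction should beat a single-centroid (mean) baseline
err = ((X - C[((X[:, None] - C) ** 2).sum(-1).argmin(1)]) ** 2).sum()
base = ((X - X.mean(0)) ** 2).sum()
assert err < base
```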
Regarding the codebook configurations, we included the ablation study in the Appendix primarily due to page limitations. As suggested, we will provide more insights in the main paper in future revisions. | Summary: In general, this paper quantizes the KV cache into a 1-bit representation and then uses the 1-bit representation to combine some basis vectors for the attention process. Intuitively, it is equivalent to decomposing the KV cache into a combination of a finite number of basis vectors to speed up the overall calculation process. It performs well on long sequence processing benchmarks such as Long-Bench and Infinite-Bench, proving the effectiveness and efficiency of the solution.
Claims And Evidence: The paper supports the authors' claims well.
Methods And Evaluation Criteria: This paper does not build its own benchmarks and mainly uses the typical benchmarks (LongBench and InfiniteBench) to evaluate the effectiveness and efficiency of models. In addition, we argue that the RULER benchmark should be included in this paper.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The typical benchmarks (LongBench and InfiniteBench) are used to evaluate the effectiveness and efficiency of models. We argue that this paper should add the RULER benchmark to evaluate context retrieval ability.
Supplementary Material: Yes. This appendix mainly supplements the method details and experimental details.
Relation To Broader Scientific Literature: This paper is related to the current sparse attention mechanism including KV cache compression and eviction.
Essential References Not Discussed: No
Other Strengths And Weaknesses: This method can compress KV-cache into a 1-bit form and performs well on typical long-sequence benchmarks. However, I still have some concerns.
At the experimental level, the paper's method performs poorly on retrieval tasks. However, RULER and other retrieval tasks are now core benchmarks for evaluating long-sequence processing. It is recommended that the paper's quantization method be analyzed to determine whether it can be used for retrieval tasks.
At the model level, decomposing the KV cache into a combination of basis vectors can achieve 1-bit cache quantization, but I think it is better to perform low-rank decomposition directly. Low-rank decomposition can guarantee the model's performance and be easily accelerated on the underlying hardware. It is recommended that low-rank decomposition be included as a baseline to prove the necessity and advantages of 1-bit quantization.
Other Comments Or Suggestions: Please refer to the 'Other Strengths And Weaknesses' part.
Questions For Authors: Please refer to the 'Other Strengths And Weaknesses' part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and helpful suggestions. Below, we address your concerns.
**1. Applicability to retrieval tasks such as RULER**
We appreciate the suggestion to evaluate additional retrieval-specific tasks. We have included results on the **RULER** benchmark using the **LLaMA-3.1 8B** model. We set the context length to 128K. As shown below, CommVQ-2 achieves the highest average score among the methods that can achieve an average quantization bit of 2. CommVQ-1 also retains competitive retrieval ability under the extreme 1-bit quantization.
| Method | Avg. bit | Niah1 | Niah2 | Niah3 | MKey1 | MKey2 | MKey3 | MValue | MQuery | VT | CWE | FWE | QA1 | QA2 | Avg |
|:-----------:|:----------:|:-------:|:-------:|:-------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:------:|:-------:|:------:|:------:|:------:|
| FP16 | 16 | 99.4 | 96.6 | 99.6 | 97.4 | 68.9 | 55.7 | 89.3 | 97.3 | 59.0 | 0.1 | 75.0 | 71.4 | 41.4 | 73.2 |
| KVQuant | 2.33 | 89.0 | 36.0 | 40.8 | 37.0 | 1.0 | 0.0 | 24.3 | 26.4 | 36.7 | 0.3 | 66.0 | 25.4 | 24.4 | 31.3 |
| KIVI | 2.00 | 31.0 | 0.6 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.16 | 0.4 | 30.8 | 12.2 | 8.0 | 6.5 |
| **CommVQ-2** | 2.00 | 97.2 | 91.4 | 92.4 | 95.0 | 61.0 | 4.8 | 78.0 | 88.4 | 49.1 | 0.26 | 72.9 | 68.4 | 39.6 | **64.5** |
| **CommVQ-1** | 1.03 | 63.0 | 53.6 | 11.6 | 64.4 | 18.8 | 0.0 | 23.6 | 24.0 | 28.6 | 0.18 | 67.7 | 63.4 | 37.0 | **35.1** |
These results demonstrate that CommVQ can preserve retrieval capability even under aggressive compression while performing better than other methods under the same compression rate.
Apart from RULER, we also conducted experiments on the **Needle-in-a-Haystack test**, which also focuses on retrieval; please refer to our response to **Reviewer 9BoD (1. Full LongBench and Needle-in-a-Haystack results)** for details. The NIAH result figures are presented in this [link](https://github.com/commvq/CommVQ/blob/main/fig1.png).
In summary, CommVQ under 2-bit quantization can preserve the full retrieval capability of the FP16 model, while our 1-bit quantization version performs better on the NIAH test than KIVI's 1-bit quantization version. Moreover, from Table 2 in our main paper, CommVQ achieves good performance on retrieval tasks (namely R.PK, R.Num, and R.KV), especially in the extreme 1-bit quantization case where CommVQ significantly outperforms the baselines. Our comprehensive results demonstrate our method's effectiveness on retrieval tasks.
**2. Comparison with low-rank decomposition**
Thanks for the suggestion, and we chose to compare CommVQ with **Palu**, a state-of-the-art low-rank decomposition KV cache compression method published at ICLR 2025. We conducted experiments on LongBench using the **LLaMA-3.1 8B** model. We set the context length to 128K. As shown below, CommVQ consistently outperforms Palu at both 2-bit and 1-bit quantization levels. This shows that our method is **more effective** than low-rank decomposition methods under various compression rates.
| Method | Avg. bit | Qasper | QMSum | MultiNews | TREC | TriviaQA | SAMSum | LCC | RepoBench-P | Average |
|:----------------:|:----------:|:--------:|:-------:|:-----------:|:-------:|:----------:|:--------:|:-------:|:--------------:|:---------:|
| FP16 | 16 | 25.19 | 23.31 | 26.82 | 72.50 | 91.65 | 43.49 | 52.47 | 49.01 | 48.05 |
| Palu-30% (3 bits) | 2.10 | 11.71 | 24.25 | 26.16 | 66.50 | 86.73 | 42.99 | 50.14 | 51.13 | 44.95 |
| **CommVQ-2** | 2.00 | 24.67 | 24.36 | 26.48 | 72.50 | 91.92 | 43.98 | 53.02 | 46.92 | **47.98** |
| Palu-60% (3 bits) | 1.20 | 2.80 | 16.37 | 11.06 | 51.08 | 3.43 | 6.56 | 18.66 | 21.36 | 16.42 |
| **CommVQ-1** | 1.03 | 18.86 | 23.02 | 24.34 | 69.00 | 91.61 | 41.83 | 48.78 | 42.08 | **44.94** | | Summary: This paper introduces CommVQ, a novel approach to compress the KV cache during inference, particularly when processing long contexts. Unlike existing scalar-based quantization methods, CommVQ employs vector quantization at the token level using a learned encoder and codebook approach. CommVQ makes two key innovations: (1) leveraging additive quantization to compress each vector in the KV cache into a low-bit representation and (2) designing a codebook that commutes with RoPE, allowing for efficient integration with the self-attention mechanism. This approach achieves impressive compression rates while maintaining high accuracy compared to baseline methods, even enabling effective 1-bit quantization with minimal performance degradation.
Claims And Evidence: The motivation for using vector quantization is clear, while the proposed method further addresses the computational cost of naive VQ.
Methods And Evaluation Criteria: The design of CommVQ makes sense, as it addresses the critical efficiency bottleneck of combining VQ with RoPE embedding. Moreover, experimental results show the effectiveness of the proposed method.
Theoretical Claims: The paper's claims are correct. However, it lacks some detailed proofs, such as why the RoPE embedding is commutative in Property 1. Therefore, I would expect the authors to focus more on this part, as understanding it is very important for the design of CommVQ.
Experimental Designs Or Analyses: The experimental designs include various long-context benchmarks; however, only LLaMa-3.1-8B-Instruct, LLaMA-2-7B, and Mistral-7B are included. Therefore, I would expect more experiments to be conducted on the latest models, such as Qwen-2.5.
Moreover, I would expect a latency comparison with FP16 attention and prior quantization methods such as KIVI. The current results show the speedup of the new codebook; however, the picture is still unclear when comparing to other methods.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your positive feedback and insightful suggestions. Below, we address your concerns.
**1. CommVQ applied to latest models such as Qwen-2.5**
To evaluate our method's generalization to the latest models, we applied CommVQ to the **Qwen-2.5 7B** model and evaluated it on LongBench. Due to the limited time for rebuttal and compatibility issues (e.g., KIVI and KVQuant do not officially support Qwen in their open-sourced code), we used **KV-4**, the built-in 4-bit KV cache quantization method (HQQ) in HuggingFace Transformers v4.46.2, as our baseline.
| Method | Avg. bit | Qasper | QMSum | MultiNews | TREC | TriviaQA | SAMSum | LCC | RepoBench-P | Average |
|:-------------:|:--------:|:--------:|:--------:|:------------:|:------:|:-----------:|:---------:|:--------:|:----------------:|:--------:|
| FP16 | 16 | 13.04 | 20.70 | 22.47 | 72.50 | 89.47 | 46.16 | 58.97 | 64.51 | 48.48 |
| KV-4 (HQQ) | 4 | 4.59 | 12.58 | 7.85 | 35.92 | 25.70 | 10.63 | 15.00 | 15.12 | 15.92 |
| **CommVQ-2** | 2 | 14.58 | 22.57 | 24.05 | 68.00 | 87.04 | 45.34 | 55.46 | 61.48 | **47.31** |
CommVQ achieves strong performance while compressing the KV cache to an average of 2 bits, significantly outperforming the default 4-bit quantization baseline. This indicates that our method can be effectively applied to the latest models. We also demonstrate that CommVQ can be applied to much larger models, such as the **LLaMA-3.1 70B** model, in our response to **Reviewer yREq (1. Experiments on larger models)**. All these results show the broad applicability and effectiveness of CommVQ.
**2. Latency comparison with the FP16 baseline and prior methods such as KIVI**
We report latency per generated token (in seconds) using the **LLaMA-3.1 8B** model on a single NVIDIA H100 80G GPU. Both KIVI and our method are optimized with Triton kernels. We set the batch size to 1 and vary the context length. As shown below, CommVQ-1 is consistently faster than KIVI-1, especially as the context length increases.
| Context Length | FP16 | KIVI-1 | CommVQ-1 |
|:---------:|:--------:|:--------:|:-----------:|
| 8K | 0.024 | 0.045 | **0.031** |
| 16K | 0.026 | 0.058 | **0.033** |
| 32K | 0.031 | 0.102 | **0.051** |
| 64K | 0.037 | 0.190 | **0.085** |
| 128K | 0.051 | 0.297 | **0.152** |
While both CommVQ and KIVI appear slower than the FP16 baseline in our current measurements, we believe this is primarily due to practical factors such as the usage of flash attention (the FP16 model uses flash attention by default, while KIVI and our method currently do not support flash attention during the generation stage), Triton kernel launch overhead, and the hardware and model we chose. Importantly, as the context length grows, CommVQ achieves better efficiency than KIVI. We are actively optimizing our Triton implementation and expect further latency improvements in future versions.
**3. Lack of some detailed proofs, such as why RoPE embedding is commutative in Property 1**
We thank the reviewer for pointing this out. Due to the page limit, we have omitted some proofs in the main paper. We will add detailed proofs to the main paper in our future revisions.
As for why RoPE embedding is commutative in Property 1, since
$$
R_m^i C =
\begin{pmatrix}
\cos m\theta_i & -\sin m\theta_i \\
\sin m\theta_i & \cos m\theta_i
\end{pmatrix}
\begin{pmatrix}
x & y \\
-y & x
\end{pmatrix}
=
\begin{pmatrix}
x\cos m\theta_i + y\sin m\theta_i & y\cos m\theta_i - x\sin m\theta_i \\
x\sin m\theta_i - y\cos m\theta_i & y\sin m\theta_i + x\cos m\theta_i
\end{pmatrix}
$$
and
$$
C R_m^i =
\begin{pmatrix}
x & y \\
-y & x
\end{pmatrix}
\begin{pmatrix}
\cos m\theta_i & -\sin m\theta_i \\
\sin m\theta_i & \cos m\theta_i
\end{pmatrix}
=
\begin{pmatrix}
x\cos m\theta_i + y\sin m\theta_i & y\cos m\theta_i - x\sin m\theta_i \\
x\sin m\theta_i - y\cos m\theta_i & y\sin m\theta_i + x\cos m\theta_i
\end{pmatrix}
$$
We can see that they are equivalent, so we can confirm Property 1.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' rebuttal. Based on the current results, it establishes a new state of the art for KV cache compression. However, as the context length increases, it becomes gradually much slower than FP16, which is not caused only by the Triton overhead. Based on these observations, I keep my score for this work. I think it should be accepted.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 86AS,
Thank you for your thoughtful and positive feedback. We appreciate your recognition of our contributions and your insights. Your observations will guide our future optimizations and research directions. Thank you again for your review and for supporting the acceptance of our work. | null | null | null | null |
Conformal Prediction with Cellwise Outliers: A Detect-then-Impute Approach | Accept (poster) | Summary: Conformal Prediction (CP) provides prediction intervals (PIs) with guaranteed coverage for black-box models under exchangeability assumptions. However, cellwise outliers (isolated contaminated entries in test features) break this exchangeability, leading to unreliable PIs. This paper addresses this challenge by introducing a detect-then-impute conformal prediction framework to robustly handle cellwise outliers.
This paper proposes two novel conformal prediction algorithms, PDI-CP and JDI-CP, and provides a distribution-free coverage analysis under detection and imputation procedures. The paper establishes coverage error bounds for PDI-CP, while JDI-CP achieves a finite-sample 1 − 2α coverage guarantee. In Sections 6 and 7, the authors run experiments on synthetic and real datasets to demonstrate that the proposed algorithms are robust and efficient.
=============
I thank the authors for the responses, which clarified my concerns.
I have read them and keep my score unchanged.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: Theorems are correct under assumptions, but the assumptions are restrictive. Assumption 3.1 is critical for ODI-CP but unrealistic in practice. Assumption 3.2 may not hold for correlated features.
Experimental Designs Or Analyses: The experiments are reasonable overall, but a new simulation setting with linear, homoscedastic, light-tailed data could be added.
Supplementary Material: There is no supplementary material so far.
Relation To Broader Scientific Literature: The key contributions of the paper are positioned at the intersection of cellwise outlier detection, conformal prediction, and missing data imputation, building on and extending prior work in these areas.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Weakness**
1. **Computational Cost**: JDI-CP requires O(n) pairwise operations, which could be inefficient for large datasets.
2. **Logic**: The logic of the introduction is confusing and should foreground the paper's research questions. The authors should highlight that the purpose of this paper is to use the DI method to solve cellwise outlier issues, rather than presenting notation in the introduction.
3. **Innovation**: This paper seems to simply combine DI and CP; I don't see any theoretical innovation. The authors cite many results proved in prior work, such as Theorem 3.5, and focus too much on outlier detection instead of CP.
4. **Application**: Assumption 3.1 assumes all cellwise outliers are detected, but this seems impossible in the real world.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >**Q1**: Author should highlight the purpose of this paper instead of giving the notation in introduction.
We sincerely appreciate this constructive critique of our manuscript's organization. In response to your suggestion, we will comprehensively restructure the introduction in the future revision to:
1. **Problem Focus**
- Lead with the critical challenge of cellwise contamination in conformal prediction
- Highlight gaps in existing methods' ability to handle this data corruption
2. **Solution Framework**
- Clearly articulate our methodological innovations
- Emphasize how we address previously unsolved problems
3. **Presentation Strategy**
- Defer technical notation to methodology sections
- Maintain narrative flow while preserving rigor
These adjustments will prioritize readability and better highlight the scientific significance of our approach.
>**Q2**: Assumption 3.1 is unrealistic in practice.
We appreciate your inquiry regarding Assumption 3.1 and we acknowledge its imperfection.
- We'd like to stress that Assumption 3.1 is theoretically necessary for model-free and distribution-free coverage guarantees and can be satisfied by choosing a small detection threshold in practice.
- Additionally, we also present a total variation coverage gap of our method when Assumption 3.1 is violated. If the total variation between processed calibration data and test data is small, the coverage can still be approximately controlled. Please refer to Reviewer **vdr2**'s Q1 for details and an additional experiment about Assumption 3.1 (due to the space limit).
We would be happy to discuss this further if additional clarification would be helpful.
>**Q3**: Assumption 3.2 may not hold for correlated features.
Thanks for your question and we’d like to make the following explanations. For correlated features:
- Assumption 3.2 can be satisfied if we use **one-class SVM** method to learn the score $s_j$ for each coordinate.
- In simulations, we also used detection methods that do not satisfy Assumption 3.2, such as **DDC** in Figure 3 and **cellMCD** in Figure 4(b).
- In Section 5, we provided theoretical results for our method when Assumption 3.2 does not hold.
>**Q4**: A new experiment with setting linear, homoscedastic, light-tailed could be added.
Thank you for your advice. In fact, we have included the experimental results for the setting you suggested in Appendix D.2 but didn't mention them in the main text due to space limitations. We apologize for any resulting confusion and will add this setting to the main text once an additional page is available in the revision.
>**Q5**: JDI-CP requires O(n) pairwise operations, which could be inefficient for large datasets.
We appreciate this insightful observation about our method's computational aspects.
1. **JDI-CP Design Considerations**:
- Primary objective: Finite-sample coverage guarantee
- Trade-off: Achieves robustness (demonstrated in Figure 7) at some computational cost
- Reference: Similar trade-offs exist in Barber et al. (2021)
2. **Future Directions**:
- Actively researching more efficient implementations
- Will optimize computational performance while maintaining theoretical guarantees
[1] Barber, R. F., Candès, E. J., Ramdas, A., et al. Predictive inference with the jackknife+. The Annals of Statistics, 2021.
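As a point of reference, the jackknife+ interval of Barber et al. (2021) can be sketched as below; the leave-one-out "model" here is a deliberately crude stand-in for retraining a real regressor, and the data-generating choices are ours:

```python
import math
import random

random.seed(1)
n, alpha = 100, 0.1
# Toy data: y = x + noise.
X = [random.uniform(-1, 1) for _ in range(n)]
Y = [x + random.gauss(0, 0.2) for x in X]

def loo_mu(i):
    # Leave-one-out "model": predicts x plus the mean residual of the
    # remaining points -- a crude stand-in for retraining a regressor.
    bias = sum(Y[j] - X[j] for j in range(n) if j != i) / (n - 1)
    return lambda x: x + bias

x_test = 0.3
lo, hi = [], []
for i in range(n):
    mu_i = loo_mu(i)
    r_i = abs(Y[i] - mu_i(X[i]))  # leave-one-out residual
    lo.append(mu_i(x_test) - r_i)
    hi.append(mu_i(x_test) + r_i)
lo.sort()
hi.sort()
k = math.ceil((1 - alpha) * (n + 1))
lower = lo[n - k]   # the floor(alpha * (n + 1))-th smallest lower value
upper = hi[k - 1]   # the ceil((1 - alpha) * (n + 1))-th smallest upper value
# [lower, upper] is the jackknife+ interval with a 1 - 2*alpha guarantee.
```

The O(n) leave-one-out fits are exactly the computational trade-off discussed above: each interval endpoint aggregates n model variants rather than one.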
>**Q6**: This paper seems simply combining DI and CP. Author cites many theories which are proved in prior work like theorem 3.5. Focus too much on detection instead of CP.
Thanks for your insightful comments on the contribution of our approach. We’d like to provide some clarifications to highlight the contribution of our work.
- Firstly, our approach fundamentally differs from naive DI+CP combinations. As discussed and shown in Appendix A.1, such a direct combination cannot achieve coverage control. Therefore, we proposed new techniques to adaptively deploy DI on calibration data and proposed PDI-CP and JDI-CP.
- Regarding Theorem 3.5, this is a negative result we obtained to illustrate the necessity of Assumption 3.1, and we have not found similar results in other papers. Additionally, we have provided a new counterexample under the CQR score, please refer to Reviewer **vdr2**'s Q2.
- Finally, we believe that detection is essential to cope with cellwise outliers in predictive inference tasks. The classic CP methods under exchangeability assumption have been extensively studied. Here, we are concerned with the nonexchangeable problem caused by cellwise outliers, which is challenging and has not been studied before. The detection and imputation steps are used to identify and remove those outliers, so we can construct informative conformal prediction intervals. The detection is key to constructing exchangeable processed features in ODI-CP and JDI-CP, which rebuilds exchangeability between calibration data and test data and enables coverage control.
Hope these explanations are acceptable for you! | Summary: This paper addresses conformal prediction with feature-wise outliers in the test sample. It assumes access to a detection oracle satisfying the sure detection and isolated detection assumption, and impute the values of the outlier features. After the detection and imputation procedure, split conformal prediction and Jackknife+ are applied, respectively, resulting in two kinds of prediction sets. A distribution-free finite sample coverage guarantee is proved for the latter. Beyond isolated detection, the coverage guarantee worsens with the difference of two detection set, which is empirically shown small. Experiments on synthetic and real datasets with synthetic perturbations show valid coverage and controllable sizes of the prediction set.
Claims And Evidence: The claim in L48 lacks support: it states that weighted conformal prediction is unsuitable for CP with outliers because the distribution shift cannot be estimated. For example, localized conformal prediction reweighs the calibration samples according to their distance to the test sample, without estimating the distribution shift. WCP is clearly applicable, since the experiments also report Tibshirani et al.'s WCP method. There could be more discussion of why weighted conformal prediction is less competitive in this setting.
Methods And Evaluation Criteria: 1. The major concern is the strength of Assumptions 3.1 and 3.2. They assume oracle access to an outlier detector with a zero false negative rate. Assumption 3.2, which is necessary for finite-length prediction sets, additionally excludes outliers in the joint feature space. Figure 2 shows that the false discovery rate is around 0.4, which indicates that the assumptions are not even approximately satisfied.
2. The form of non-conformity score is restricted to absolute deviation.
Theoretical Claims: For Theorem 4.3, the coverage lower bound can be vacuous for large calibration sizes, since max E_i is non-decreasing.
I have not checked the proofs.
Experimental Designs Or Analyses: 1. The experiments show that the proposed method is insensitive to different detection and imputation methods, which indicates the flexibility of the design.
2. Results with more recent weighted conformal prediction methods will consolidate the claim, such as localized conformal prediction.
3. Datasets with real outliers will be more convincing. The current results on real datasets introduce artificial perturbations.
Supplementary Material: I have reviewed section D.1.
Relation To Broader Scientific Literature: The most significant contribution of this paper is the first to consider conformal prediction with cellwise outliers, and proposes an algorithm with valid distribution-free coverage guarantee for finite samples (Theorem 4.4).
Essential References Not Discussed: Related works on weighted conformal prediction are listed but not fully addressed on why they are not applicable with outliers. Additionally, localized conformal prediction as a special form of WCP is not discussed. They are free of estimation of distribution shift which is claimed in the paper as a reason for incompetence.
[1] Leying Guan. Localized conformal prediction: A generalized inference framework for conformal prediction. Biometrika, 110(1):33–50, 2023.
[2] Rohan Hore and Rina Foygel Barber. Conformal prediction with local weights: randomization enables local guarantees. arXiv preprint arXiv:2310.07850, 2023.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Are there contamination models other than Equation 1 that are worth consideration?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: >**Q1**: Concerns focus on Assumptions 3.1 and 3.2, assuming detection with FNR=0. Assumption 3.2, crucial for finite-length prediction intervals, also excludes joint feature space outliers. Figure 2 shows FDR is around 0.4, indicating unmet assumptions.
Thanks for your valuable questions! We acknowledge the imperfection of Assumption 3.1 and explain as follows.
- Assumption 3.1 (FNR=0), also used in Liu et al. (2022) and Wasserman & Roeder (2009), can be met by choosing a small detection threshold in practice and is theoretically essential for a model-free coverage guarantee. When Assumption 3.1 doesn't hold, we derived a new bound showing the coverage gap depends on the total variation between the processed calibration and test data (see Reviewer **vdr2**'s Q1 due to the space limit). We also added experiments by varying the DDC detection thresholds $\sqrt{\chi_{1,p}^{2}}$ (adjusting $p$) based on Setting C in Appendix D.2. When FNR$\neq$0, our method can still approximately control the coverage.
|$p$|0.5|0.7|0.9|0.99|
|-|-|-|-|-|
|FNR |0|0.005|0.008|0.013|
|FDR|0.793|0.669|0.340|0.035|
|PDI coverage|0.909|0.907|0.901|0.902|
|JDI coverage|0.904|0.904|0.899|0.895|
- We'd like to clarify that the primary purpose of Assumption 3.2 is to ensure exchangeability of $\\{\hat{\cal{O}}\_i\cup\mathcal{O}^*\\}\_{i=n_0}^{n+1}$ (see Lemma 3.3), not to guarantee finite-length prediction intervals (PIs). In Section 5, we also provide theoretical coverage results when Assumption 3.2 doesn’t hold. In simulations, we also used detection methods that do not satisfy Assumption 3.2, such as **DDC** in Figure 3 and **cellMCD** in Figure 4(b), which don't lead to infinitely wide PIs.
- Actually, Assumption 3.1 requires FNR equal to zero, while the frequency of $\tilde{\cal{T}}\_{n+1}=\hat{\cal{T}}\_{n+1}=\varnothing$ in Figure 2 means the false discovery (positive) rate (FDR) is zero.
>**Q2**:Why weighted conformal prediction is less competitive?
WCP requires precise likelihood ratio estimation, but arbitrary cellwise outliers in our setting render this impossible. Figures 3 and 11 highlight this limitation, showing WCP fails to meet target coverage.
>**Q3**:Localized conformal prediction as a special form of WCP is not discussed.
Thanks for your advice and we apologize for overlooking these references.
- LCP controls conditional coverage under i.i.d. test and calibration data, differing from our focus on distribution shift caused by cellwise outliers in test data, which LCP cannot cope with. We’ll add a detailed discussion of LCP and these references in the revision.
- We also compare our method with baseLCP, calLCP (Guan,2023), and RLCP (Hore&Barber, 2023) through experiments. Clean data follows Section 5.1.1 of Hore&Barber(2023), and cellwise outliers are generated via Equation (10) in our paper with $\epsilon=0.03$. Results show LCP methods fail to provide informative PIs.
||Bandwidth|0.1|0.2|0.4|0.8|1.6|
|-|-|-|-|-|-|-|
|baseLCP|Coverage|0.90|0.90|0.90|0.90|0.95|
||Infinite PI%|0.90|0.90|0.90|0.90|0.81|
|calLCP|Coverage|0.94|0.94|0.94|0.94|0.93|
||Infinite PI%|0.88|0.88|0.88|0.87|0.66|
|RLCP|Coverage|0.90|0.90|0.90|0.90|0.90|
||Infinite PI%|0.90|0.90|0.90|0.86|0.48|
|PDI|Coverage|0.90|||||
|| Infinite PI\%|0|||||
|JDI|Coverage|0.89|||||
||Infinite PI\%|0|||||
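For readers unfamiliar with LCP-style methods, the kernel-weighted conformal quantile they rely on can be sketched as below. The Gaussian kernel and the convention of placing the test point's own weight at $+\infty$ are standard weighted-CP choices but our own illustrative assumptions here; the sketch also shows mechanically how a small bandwidth leaves too little calibration mass and forces an infinite (uninformative) interval:

```python
import math

def localized_quantile(scores, x_cal, x_test, alpha, h):
    # Kernel weights localize the calibration scores around x_test; the
    # test point's own weight sits at +infinity (standard weighted-CP
    # convention), so with a tiny bandwidth the accumulated calibration
    # mass never reaches 1 - alpha and the quantile escapes to infinity.
    w = [math.exp(-((xc - x_test) ** 2) / (2 * h * h)) for xc in x_cal]
    total = sum(w) + 1.0  # +1.0 is the unnormalized weight at x_test itself
    acc = 0.0
    for s, wi in sorted(zip(scores, w)):
        acc += wi / total
        if acc >= 1 - alpha:
            return s
    return math.inf

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
x_cal = [0.05 + 0.1 * i for i in range(10)]
q_wide = localized_quantile(scores, x_cal, 0.5, alpha=0.25, h=100.0)
q_narrow = localized_quantile(scores, x_cal, 5.0, alpha=0.25, h=0.01)
```

With a huge bandwidth the weights are nearly uniform and the usual finite quantile is returned; with a tiny bandwidth and a far-away test point, `q_narrow` is infinite, mirroring the high "Infinite PI%" rows above.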
>**Q4**:The form of non-conformity score is restricted to absolute deviation.
For simplicity, we use the absolute residual score, but our method supports various non-conformity scores like CQR score. Please refer to Reviewer **vdr2**'s Q2 for a discussion about CQR.
>**Q5**:Theorem 4.3's coverage lower bound may be vacuous with large calibration size.
After checking our proof, we found the expansion to $\max_{i= n_0,\ldots,n}E_i$ is unnecessary and the last inequality in (B.10) (Appendix B.3) should be removed. The modified coverage gap is given by:$$\mathbb{P}\left\\{\hat{q}\_{\alpha}^+(\\{R_i^*-S_{\hat{\mu}}\cdot E_i\cdot|\tilde{\cal{O}}\_{n+1}\setminus\mathcal{O}^*|\\}\_{i=n_0}^n)<R_{n+1}^*\leq\hat{q}\_{\alpha}^+(\\{R_i^*+S_{\hat{\mu}}\cdot E_i\cdot|\tilde{\cal{O}}\_{n+1}\setminus\cal{O}^*|\\}\_{i=n_0}^n)\right\\},$$which will not be vacuous for large calibration size.
Thank you for catching this key technical nuance!
>**Q6**:Dataset with real outlier will be more convincing.
To further demonstrate robustness, we test our method on a riboflavin gene expression dataset (Liu et al., 2022) with confirmed cellwise outliers. Our method maintains coverage above the target $1-\alpha=0.9$, while LCP methods ($h=0.1$) fail to provide meaningful PIs.
||SCP|WCP|baseLCP|calLCP|RLCP|PDI|JDI|
|-|-|-|-|-|-|-|-|
|Coverage|0.83|0.85|0.90|0.96|0.90|0.93|0.95|
|Length|1.82|Inf|Inf|Inf|Inf|3.08|3.29|
>**Q7**:Are other contamination models worth considering?
The Tukey-Huber Contamination Model generates casewise outliers, assuming most samples follow a target distribution $F$, while the others follow an arbitrary distribution $H$:$$X\sim(1-\epsilon)F+\epsilon H,$$where $\epsilon\in[0,1)$ is the contamination ratio.
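A sampling sketch of this contamination model follows; the particular choices of $F$ and $H$ are our illustrative assumptions, and the comment contrasts it with the cellwise model of Equation (1):

```python
import random

random.seed(2)

def tukey_huber_sample(n, eps, sample_f, sample_h):
    # With probability 1 - eps draw the whole case (row) from F,
    # otherwise from H: contamination is casewise, unlike the
    # cellwise model in Equation (1), where individual entries
    # of a row are replaced.
    return [sample_h() if random.random() < eps else sample_f()
            for _ in range(n)]

clean = lambda: [random.gauss(0, 1) for _ in range(3)]   # F
gross = lambda: [random.gauss(20, 1) for _ in range(3)]  # H
X = tukey_huber_sample(1000, 0.05, clean, gross)
n_outliers = sum(1 for x in X if x[0] > 10)  # roughly eps * n rows
```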
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Most of my concerns and questions are addressed. I have raised the rating from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: Many thanks for the review and raising your rating! If you have any other questions, concerns, and comments, please let us know. We would like to provide our responses and address them in the future revision. Thank You! | Summary: This paper proposes a DI-CP framework to handle cellwise outliers in conformal prediction. The key idea is first to detect outliers in the test feature vector and then impute them before applying conformal prediction. To maintain exchangeability, a similar detection-imputation process is used to calibration samples.
The authors propose two methods: (1) PDI-CP (Proxy Detection-Imputation CP), which applies detection and imputation separately; and (2) JDI-CP (Joint Detection-Imputation CP), which modifies the detection rules to ensure theoretical coverage guarantees.
The paper provides theoretical guarantees, including a finite-sample coverage bound for JDI-CP and empirical results showing that DI-CP performs robustly under contamination.
Claims And Evidence: This paper presents a novel and relevant problem in conformal prediction, but the theoretical guarantees depend on overly strong assumptions about detection accuracy. The method may fail if detection is imperfect or if too many inliers are misclassified as outliers.
Methods And Evaluation Criteria: There exist two problems:
1. Strong assumptions on detection accuracy (Assumption 3.1 is unrealistic).
The method assumes perfect outlier detection. This is impractical because most real-world detection methods have false negatives.
If detection is imperfect, theoretical guarantees do not hold, making the results less applicable to real-world data.
2. Excessive false positives break the method.
The authors claim that choosing a larger detection threshold guarantees Assumption 3.1. However, this results in too many false positives.
Excessive imputation may change the calibration distribution, violating exchangeability assumptions (Lemma 3.3 fails).
There is no analysis of how misclassified inliers affect conformal validity.
3. Appendix A.1 considers Direct-ODI and Direct-PDI, which do not modify calibration samples. This setting contradicts the main method’s justification that calibration samples must be processed to maintain exchangeability.
Theoretical Claims: 1. Theoretical results assume that imputation does not introduce significant bias. However, imputation methods systematically shift feature distributions, leading to biased conformity scores.
Experimental Designs Or Analyses: Empirical evaluation lacks robustness analysis.
The paper does not study sensitivity to different detection thresholds and imputation choices.
Supplementary Material: I did not review the supplementary code as part of my evaluation. My review is based on the theoretical justifications, experimental results, and clarity of the main paper.
Relation To Broader Scientific Literature: This paper reviews the literature of cellwise outliers, conformal prediction without exchangeability, predictive inference with missing data, and conformal inference for outlier detection. It also lists the key difference of settings or tasks for works in the related area.
Essential References Not Discussed: I think authors have achieved a fairly complete literature review.
Other Strengths And Weaknesses: Strength:
Novel problem setting: The problem of cellwise outliers in conformal prediction is important and underexplored. The paper provides a structured approach to address it.
Exchangeability considerations: The idea of applying detection and imputation jointly to calibration and test samples is a novel extension that ensures that conformal prediction remains valid despite outliers.
Other Comments Or Suggestions: There are a lot of math notations proposed in this paper. Could the author provide the math notation table in the appendix?
Questions For Authors: 1. The method assumes perfect outlier detection. How does coverage degrade when detection is imperfect?
2. If choosing a larger detection threshold ensures Assumption 3.1, doesn’t this break exchangeability by misclassifying inliers as outliers? How does this affect empirical coverage?
3. Theorem 4.3 and Theorem 4.4 assume that imputation does not change the residual distribution. How do results hold when imputations introduce bias?
4. How does the method perform when detection is imperfect?
5. Have you tested other imputation strategies?
6. Can you provide a sensitivity study showing how conformal coverage varies under different imputation and detection methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >**Q1**: Assumption 3.1 is impractical. How does coverage degrade when detection is imperfect?
Thank you for your insightful question!
- We acknowledge the imperfection of Assumption 3.1 and have obtained a new coverage gap bound in total variation: if there are still outliers in the test point $\tilde{X}\_{n+1}$ after the detection procedure, Lemma 3.3 will not hold because the exchangeability between $(\check{X}\_{n+1}^{\rm{DI}},Y_{n+1})$ and $\\{(\check{X}\_i^*,Y_{i})\\}\_{i=n_0}^n$ is broken, where $\\{\check{X}\_i^*\\}\_{i=n_0}^n$ are the ODI features. At this point, there is a total variation coverage gap of the prediction interval (PI):$$\mathbb{P}\\{Y_{n+1}\in\hat{C}^{\rm{ODI}}(\tilde{X}\_{n+1})\\}=\mathbb{P}\left\\{|\hat{\mu}(\check{X}\_{n+1}^{\rm{DI}})-Y_{n+1}|\le\hat{q}\_{\alpha}^+(\\{R_i^*\\}\_{i=n_0}^n)\right\\}\ge 1-\alpha-\frac{1}{n-n_0+1}\sum_{i=n_0}^n d_{\rm{TV}}(\check{X}\_{n+1}^{\rm{DI}},\check{X}\_i^*).$$Notice that we can still have approximate coverage control if $d_{\mathrm{TV}}(\check{X}_{n+1}^{\mathrm{DI}}, \check{X}_i^*)$ is small. A similar bound can be obtained for PDI-CP and JDI-CP, and we will add these results in the revision.
- In addition, we add a new experiment to show the influence of imperfect detection. Please see **Q6** for details.
>**Q2**: If choosing a larger detection threshold ensures Assumption 3.1, doesn’t this break exchangeability? How does this affect empirical coverage?
- According to the construction, we know that false positives do not break the data exchangeability in ODI-CP and JDI-CP. Since we use $\tilde{\cal{O}}\_{n+1}$ to approximate $\cal{O}^*$ in PDI-CP, it will break the exchangeability between processed test data and calibration data because $\tilde{\cal{O}}\_{n+1}$ depends only on test data. As we stated in Theorem 4.3, the coverage gap is affected by the number of false discoveries $|\tilde{\cal{O}}\_{n+1}\setminus\cal{O}^*|$.
- In Appendix D.3, we summarized the empirical false discovery rate (FDR) and true positive rate (TPR) of the detection methods in our simulation. We will include this discussion in future revision, using the table from **Q6** as an illustrative example. As the threshold decreases, the number of discoveries increases, leading to a higher FDR. Notably, the table demonstrates the empirical coverage of our method remains robust to variations in FDR.
>**Q3**: Theorem 4.3 and 4.4 assume imputation doesn't change the residual distribution.
We apologize for any confusion. Here, $F_{R}$ denotes the distribution function of the ODI-CP residuals $\\{R_i^*\\}\_{i=n_0}^n$, **not** the residuals from the raw calibration data before the DI procedure. Thus, our method doesn't assume "imputation preserves the residual distribution." We appreciate your attention and will clarify this notation in the revision.
>**Q4**: Have you tested other imputation strategies? How coverage varies under different imputation and detection methods?
- In addition to Mean Imputation, we evaluated two other imputation methods in Section 6.2: **kNN** and **MICE**. Figure 5 shows our method maintains robust empirical coverage across all imputation strategies.
- Our analysis in Sections 6.1-6.2 compares coverage and length of PI across various detection and imputation methods. Figures 4-5 show our method consistently maintains robust coverage control across all configurations with stable interval lengths.
>**Q5**: Direct-ODI and Direct-PDI contradict the main method’s justification.
We apologize for any ambiguity and would like to clarify:
- Direct-ODI and Direct-PDI in Appendix A.1 are naive combinations of DI with CP, which are **not** our proposed methods.
- Figure 8 shows Direct-PDI fails to maintain proper coverage while our PDI-CP successfully achieves target coverage. This empirical evidence confirms simply combining DI with CP (without our proposed modifications) cannot guarantee valid coverage.
We appreciate your careful review and will revise Appendix A.1 for clarity.
>**Q6**: The paper does not study sensitivity to different detection thresholds and imputation choices.
Regarding the sensitivity of our method:
1. **Imputation methods**: Figure 5 in Section 6.2 demonstrates our method maintains robust performance across different imputation choices.
2. **Detection thresholds**: We conduct additional experiments by varying the DDC detection threshold $\sqrt{\chi_{1,p}^2}$ (adjusting $p$) based on Setting C in Appendix D.2. As shown in the table, our method demonstrates robust coverage performance when threshold changes.
|$p$|0.5|0.7|0.9|0.99|
|-|-|-|-|-|
|FNR |0|0.005|0.008|0.013|
|FDR|0.793|0.669|0.340|0.035|
|PDI coverage|0.909|0.907|0.901|0.902|
|JDI coverage|0.904|0.904|0.899|0.895|
Please let us know if you would like any additional details.
>**Q7**: Could you provide a math notation table?
Thanks for your helpful advice. We will add a math notation table to the appendix of the revision.
Hope these explanations are acceptable for you!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Most of my concerns are addressed. I have raised my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your efforts in reviewing our work and raising your score! If there are any additional insights or suggestions you would like to share, we are eager to hear them. Thank you once again for your support! | Summary: When some entries of the test features are contaminated, the paper introduces a detect-then-impute conformal prediction framework. This framework first applies an outlier detection procedure to identify contaminated entries in the test features and then uses an imputation method to fill in the identified outliers. Moreover, the authors apply the detection and imputation procedures to the calibration set, ensuring the construction of exchangeable features for the conformal prediction interval of the test label. Two practical algorithms including PDI-CP and JDI-CP are proposed, with lower bounds on marginal coverage probability established under certain conditions. Numerical experiments on both synthetic and real datasets are provided to demonstrate the performance of the proposed algorithms.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The paper evaluates the proposed method on synthetic data and two real datasets and compares its performance against several relevant baseline methods.
Theoretical Claims: I checked the theoretical proofs of Lemma 3.3 and Proposition 3.4 and believe they are correct.
Experimental Designs Or Analyses: I have reviewed all experimental parts in the paper.
Supplementary Material: I reviewed the parts related to numerical studies in the supplementary material.
Relation To Broader Scientific Literature: The paper proposes a framework to construct prediction sets with marginal coverage guarantees when the test input feature is contaminated.
Essential References Not Discussed: No critical references appear missing.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: 1. The validity of Assumption 3.1 appears to depend heavily on the quality of the detection rule, which may be restrictive in practice. In Theorem 3.5, the negative result is derived using a specific form of the prediction set. Would the conclusion change if other adaptive prediction sets, such as those based on conformalized quantile regression (CQR), were used instead?
2. How does the contamination rate in the test features affect the length of the prediction sets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >**Q1**: The validity of Assumption 3.1 (sure detection) depends on quality of detection, which may be restrictive in practice.
Thanks for your valuable question! We acknowledge the imperfection of Assumption 3.1 and make the following explanations.
- Sure detection/screening conditions are commonly used in existing works (Wasserman&Roeder,2009; Liu et al.,2022) to ensure all relevant variables are retained. We adopt a similar condition in Assumption 3.1 for model-free coverage guarantee.
- Assumption 3.1 is essential for meaningful prediction intervals (PIs): if outliers persist in $\check{X}\_{n+1}^{\rm{DI}}$, Theorem 3.5 shows PIs can become infinitely wide in expectation.
- Violating Assumption 3.1 impacts our method by introducing a total variation coverage gap: if outliers persist in the test point $\tilde{X}\_{n+1}$ after detection, Lemma 3.3 fails because the exchangeability between $(\check{X}\_{n+1}^{\rm{DI}},Y_{n+1})$ and $\\{(\check{X}\_i^*,Y_{i})\\}\_{i=n_0}^n$ is broken, where $\\{\check{X}\_i^*\\}\_{i=n_0}^n$ are the ODI features. At this point, there is a total variation coverage gap of the PI:$$\mathbb{P}\\{Y_{n+1}\in\hat{C}^{\rm{ODI}}(\tilde{X}\_{n+1})\\}=\mathbb{P}\left\\{|\hat{\mu}(\check{X}\_{n+1}^{\rm{DI}})-Y_{n+1}|\le\hat{q}\_{\alpha}^+(\\{R_i^*\\}\_{i=n_0}^n)\right\\}\ge 1-\alpha-\frac{1}{n-n_0+1}\sum_{i=n_0}^n d_{\rm{TV}}(\check{X}\_{n+1}^{\rm{DI}},\check{X}\_i^*).$$The impact is mild if $d_{\rm{TV}}(\check{X}_{n+1}^{\rm{DI}}, \check{X}_i^*)$ is small; similar bounds apply to PDI-CP and JDI-CP.
- In practice, Assumption 3.1 (TPR=1) can be satisfied when the detection threshold is small; our method still maintains target coverage in simulations and real examples even if some outliers were not completely detected. For your convenience, we also added a new experiment under different DDC detection thresholds $\sqrt{\chi\_{1,p}^{2}}$ (varying $p$) based on Setting C in Appendix D.2, which shows Assumption 3.1 is exactly satisfied (TPR=1) at $p=0.5$. When TPR<1, our method can still achieve approximate control of coverage.
|$p$|0.5|0.7|0.9|0.99|
|-|-|-|-|-|
|TPR|1|0.995|0.992|0.987|
|FDR|0.793|0.669|0.340|0.035|
|PDI coverage|0.909|0.907|0.901|0.902|
|JDI coverage|0.904|0.904|0.899|0.895|
[1] Wasserman, L. and Roeder, K. High dimensional variable selection. The Annals of Statistics, 2009.
[2] Liu, Y., Ren, H., Guo, X., Zhou, Q., and Zou, C. Cellwise outlier detection with false discovery rate control. Canadian Journal of Statistics, 2022.
>**Q2**: Would the conclusion in Theorem 3.5 hold with other adaptive prediction sets like conformalized quantile regression?
Thank you for your question! We prove a similar result for the CQR score, and add new simulation results based on CQR.
- The PI constructed from CQR is $\hat{C}(X)=[\hat{f}^{lo}(X)-\hat{q}\_n,\hat{f}^{up}(X)+\hat{q}\_n]$, where $\hat{f}^{lo}$ and $\hat{f}^{up}$ are the lower and upper quantile regression models, and $\hat{q}\_n$ is the quantile of empirical distribution of CQR computed on the calibration set.
- Following the proof of Theorem 3.5 in Appendix B.2, $Y_i=X_{i,1}+X_{i,2}$ where $X_{i,1},X_{i,2}\sim\rm{Uniform}([0,1])$ for $i\in[n+1]$. Suppose $\hat{f}^{lo}(x)=\beta_1^{lo}x_1+\beta_2^{lo}x_2$ where $\beta_2^{lo}\neq0$, and the test point $\tilde{X}\_{n+1}=(X_{n+1,1},Z_{n+1,2})^{\top}$ where$$Z_{n+1,2}=\frac{M+1}{\beta_2^{lo}} \mathbb{1}\\{\beta_1^{lo}\geq 1\\}+\frac{M+2}{\beta_2^{lo}}\mathbb{1}\\{0<\beta_1^{lo}<1\\}+\frac{M-\beta_1^{lo}+2}{\beta_2^{lo}}\mathbb{1}\\{\beta_1^{lo}\leq 0\\}$$for some large positive value $M$. If $\check{X}\_{n+1}^{\rm{DI}}$ still contains $Z_{n+1,2}$ and $\hat{C}(\tilde{X}\_{n+1})$ covers the true label, we have$$\max\\{\hat{f}^{lo}(\check{X}\_{n+1}^{\rm{DI}})-Y_{n+1},Y_{n+1}-\hat{f}^{up}(\check{X}\_{n+1}^{\rm{DI}})\\}\geq\hat{f}^{lo}(\check{X}\_{n+1}^{\rm{DI}})-Y_{n+1}\geq M,$$which means $\mathbb{P}(\hat{q}\_n\geq M)\geq\mathbb{P}(Y_{n+1}\in\hat{C}(\tilde{X}\_{n+1}))\geq1-\alpha$.
- We also conduct an experiment using CQR to construct PIs, where the Baseline used in our simulation can be considered as the optimal method for constructing split conformal PI for cellwise outlier, which masks calibration and test features by $\cal{O}^*$. This experiment will be added in future revisions.
||Baseline|ODI|PDI|JDI|
|-|-|-|-|-|
|Coverage| 0.905|0.902|0.900|0.885|
|Length|4.366|4.291|4.282|5.544|
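As a concrete illustration of the CQR construction discussed above, here is a minimal sketch of a split-conformal CQR interval $[\hat{f}^{lo}(x)-\hat{q}_n,\ \hat{f}^{up}(x)+\hat{q}_n]$. It assumes already-fitted lower/upper quantile models; all function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a split-conformal CQR interval, assuming fitted lower/upper
# quantile models f_lo and f_up (illustrative names, not the paper's code).
def cqr_interval(f_lo, f_up, X_cal, y_cal, X_test, alpha=0.1):
    # CQR nonconformity score: max(f_lo(x) - y, y - f_up(x))
    scores = np.maximum(f_lo(X_cal) - y_cal, y_cal - f_up(X_cal))
    n = len(y_cal)
    # Finite-sample-corrected empirical quantile of the calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    # PI: [f_lo(x) - q, f_up(x) + q]
    return f_lo(X_test) - q, f_up(X_test) + q

# Toy check on Y = X1 + X2 + noise with crude fixed-offset "quantile models"
rng = np.random.default_rng(0)
X_cal = rng.uniform(size=(500, 2))
y_cal = X_cal.sum(axis=1) + rng.normal(0, 0.1, size=500)
X_test = rng.uniform(size=(500, 2))
y_test = X_test.sum(axis=1) + rng.normal(0, 0.1, size=500)
f_lo = lambda X: X.sum(axis=1) - 0.1
f_up = lambda X: X.sum(axis=1) + 0.1
lo, up = cqr_interval(f_lo, f_up, X_cal, y_cal, X_test)
coverage = float(np.mean((y_test >= lo) & (y_test <= up)))  # close to 1 - alpha
```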
>**Q3**: How does the contamination rate affect the length of the prediction sets?
- Figure 6 in Section 6.3 shows the coverage and length across cell contamination probabilities with DDC and Mean Imputation, while Figures 12-13 in Appendix D.4 display results for kNN and MICE imputation.
- Results indicate the length of our method remains stable across varying contamination rates, whereas that of Baseline increases with higher contamination. Figure 3 demonstrates our method's competitive length and target coverage, surpassing the classical WCP method.
We hope these explanations address your concerns!
Fairness Overfitting in Machine Learning: An Information-Theoretic Perspective | Accept (poster) | Summary: This paper proposes new generalization bounds for fair machine learning. Based on the Mutual Information framework, these bounds show that the important factors governing fairness generalization are the size of the different subgroups and the mutual information between the hypothesis distribution and the subsets of the training set on which fairness is evaluated. Tighter versions of the bound based on more involved measures of mutual information are also derived. Developed for both Demographic Parity and Equalized Odds, the tightness of the proposed bounds is empirically evaluated on several datasets.
Claims And Evidence: The proposed fairness definitions do not seem to match the usual ones in the multiclass case:
- The fairness definition considered in Equation (1) for demographic parity seems to assume binary classes (at least to match the usual demographic parity definition; see the work of Agarwal et al. 2018 for example). However, in the paper, it is assumed that the predictions lie in a range $[0,a]$, which suggests a multi-label prediction setting; it is then not clear what the proposed formula really represents.
- To match the definition of equalized odds, $f$ should be the probability of having a positive label given a sample. While this initially fits the setup proposed in the paper (line 320, 1st column), it is later mentioned that it is possible to extend the proposed approach to the multiclass case by considering $f$ in $[0,a]$ (line 365, second column). It is then not clear what this measure really represents.
Methods And Evaluation Criteria: The methods and evaluation criteria appear to be appropriate. This is mainly a theoretical paper and complete proofs are provided.
Theoretical Claims: I only skimmed through the proof of Lemma 2 as it is one of the central results of the paper:
- The Efron-Stein inequality (Boucheron et al. 2013, Theorem 3.1) assumes that the function $g$ is square-integrable. However, this assumption is not mentioned in the paper and it is never formally proved that the considered functions respect this assumption.
Experimental Designs Or Analyses: To improve fairness, the paper proposes, in Section 6, to use a batch balancing approach. This is reminiscent of the work of Roh et al. (FairBatch: Batch Selection for Model Fairness, 2021) and connections should be discussed.
Supplementary Material: I looked at the Related Work (Appendix A), the proof of Lemma 2 (Appendix B.1), and the proof of Lemma 1 (Appendix E.1).
Relation To Broader Scientific Literature: Line 191 to 198, it is mentioned that the proposed result is the first to link generalization error to group imbalance. However, this is something that already appears in previous fairness generalization bounds based on the VC dimension where the bound is smallest when all the groups have the same size (for example, see Woodworth et al. 2017 in the list of missing references).
Essential References Not Discussed: The related work, relegated to Appendix A, misses several works that addressed the problem of generalization guarantees in fair machine learning, albeit with different proof techniques. A non-exhaustive list:
- Learning Non-Discriminatory Predictors, Woodworth et al., 2017
- A Reductions Approach to Fair Classification, Agarwal et al., 2018
- Randomized Learning and Generalization of Fair and Private Classifiers: from PAC-Bayes to Stability and Differential Privacy, Oneto et al, 2019
Other Strengths And Weaknesses: The intuition behind the Mutual Information terms appearing in the different bounds is hard to grasp as there is only very sporadic discussion on what they really capture and when they can be expected to be small or large. From the experiments in Figure 2, they seem rather large since the bound does not converge to $0$ but, instead, seems to reach a plateau when the number of examples exceeds $1500$.
Other Comments Or Suggestions: - There is a notation mismatch between the main paper and the proof of Lemma 1 ($\ell_E^F$ seems to become $F_E$ for example).
- Line 161, second column, it should be $\tilde{v}_i \neq v_i$.
- In the experiments, the plot at the bottom right of Figure 2 suggests that the actual generalization error is larger than the bound. This is probably due to inverted axis labels.
Questions For Authors: 1. Is the square-integrability assumption to apply the Efron-Stein inequality respected?
Would this concern be addressed in a satisfactory manner, I could increase my score. Furthermore, would the other reviews or the rebuttal show that I missed or misunderstood some key points, I would reconsider my stance.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Mismatch in Fairness Definitions for the Multiclass Problem
In our original formulation for the binary case, the prediction function $f =\hat{Y}$ outputs values in $\{0,1\}$ (i.e., a=1). For multiclass problems, we allow $f$ to take values in a bounded range $[0,a]$; however, the choice of $a$ does not affect the upper bound in our fairness guarantees.
To illustrate this in the context of Equalized Odds (EO), consider our use of the Total Variation (TV) loss as a fairness measure. For each true label $y \in \{0,1,2,3\}$, we define the TV loss as follows:
$$ \ell^{F_S}\_E(w,S \mid Y=y) = \frac{1}{2}\sum\_{c=0}^{3}\Bigl|P(\hat{Y}=c\mid Y=y,T=0)-P(\hat{Y}=c\mid Y=y,T=1)\Bigr|, $$
which in practice is approximated by
$$ \ell^{F_S}\_E(w,S \mid Y=y) = \frac{1}{2}\sum\_{c=0}^{3}\left|\frac{n\_{0,y,c}}{n\_{0,y}+2}-\frac{n\_{1,y,c}}{n_{1,y}+2}\right|. $$
We then define an aggregate function over the true labels: $g(z_u) = \sum\_{y\in\{0,1,2,3\}} \ell^{F\_S}\_E(w,z_u \mid Y=y) $. Our analysis shows that
$$ \sup\_{z\_u,\tilde{z}\_u^i} |g(z\_u)-g(\tilde{z}\_u^i)| \le \frac{2}{\min\_{(t,y)}\{n^{Z\_u}\_{t,y}+2\}}$$
The key point is that—even if the prediction function $f$ is allowed to take any value in $[0,a]$—changing one sample affects the probability estimates (and hence the TV loss) by at most a fixed amount (i.e., at most 1) regardless of $a$. In other words, while $f$’s output may be scaled by $a$, the impact on the fairness loss (measured in terms of probability differences) remains unchanged.
Thus, our technical arguments extend naturally to the multiclass setting. We acknowledge that our previous explanation did not clearly separate the role of the prediction magnitude (bounded by $a$) from the fairness loss itself, and we appreciate the opportunity to clarify that the fairness definitions (including EO) are applicable to the multiclass case under our bounded prediction assumption.
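The smoothed TV loss defined above (with the $+2$ smoothing in the denominator) can be sketched as follows; the `counts[t, y, c]` array layout and function name are assumptions made for illustration, not the paper's code.

```python
import numpy as np

# Sketch of the smoothed TV fairness loss from the rebuttal; counts[t, y, c]
# is the number of samples in sensitive group t with true label y that are
# predicted as class c (array layout is an assumption for illustration).
def eo_tv_loss(counts):
    counts = np.asarray(counts, dtype=float)
    n_ty = counts.sum(axis=2)                      # n_{t,y}
    probs = counts / (n_ty[:, :, None] + 2.0)      # n_{t,y,c} / (n_{t,y} + 2)
    # TV loss per true label y, then aggregated: g = sum_y l(w, S | Y=y)
    per_label = 0.5 * np.abs(probs[0] - probs[1]).sum(axis=1)
    return float(per_label.sum())

loss_equal = eo_tv_loss([[[5, 5], [3, 7]], [[5, 5], [3, 7]]])   # identical groups
loss_skewed = eo_tv_loss([[[4, 0]], [[0, 4]]])                  # fully disagreeing groups
```

When the two groups' predicted-class counts are identical, the loss is zero; disagreement across groups increases it, bounded by the change of at most a fixed amount per sample as argued above.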
## Is the square-integrability assumption to apply the Efron-Stein inequality respected?
Thank you for carefully reading the proof. You are correct that the Efron-Stein inequality assumes square-integrability, and we acknowledge that this assumption was not explicitly stated in Lemma 2 in the paper. However, in our case, this assumption is naturally satisfied because all the random variables involved are bounded. Specifically, as $f$ is bounded ($f<a$), the loss function $l$, for which we apply the Efron-Stein inequality through Lemma 2 (Eq. 65 or Eq. 82), is also bounded ($l<a$; see Eq. 190-191). Since any bounded random variable is square-integrable, this guarantees that the assumption is met. Formally, we have: \
$$ |l| \leq a. $$
To verify square-integrability, we need to show that
$\mathbb{E}[l^2] < \infty$.
Since $l$ is bounded, we have
$$ l^2 \leq a^2. $$
Taking expectations on both sides,
$$ \mathbb{E}[l^2] \leq \mathbb{E}[a^2] = a^2 < \infty. $$
Thus, our loss is always square-integrable. Therefore, the conditions for applying the Efron-Stein inequality are fully satisfied in our setting.
We will include this clarification and add the assumption explicitly to Lemma 2 in the revised version of the paper to ensure completeness. Given that this addresses your concern, we kindly ask you to reconsider your score. We appreciate your openness to revisiting your evaluation and would be happy to discuss any further points if needed.
## Related work:
Thank you for highlighting the related work on generalization guarantees in fair machine learning. We will address these references in the revision.
However, while (Woodworth et al., 2017; Agarwal et al., 2018) derive generalization guarantees for loss functions corresponding to DP and EO within specific algorithms, our work targets a more general algorithmic framework in the DP and EO setting—even when using other loss functions. Moreover, their guarantees primarily focus on overall sample complexity, whereas our bounds incorporate not only the total sample size but also additional factors such as the properties of the learning algorithm, the particular loss function, the dataset characteristics, and, importantly, the group balance in the dataset.
More closely related work is Oneto et al. (2019), which derives a generalization bound in Theorem 1 for fairness generalization with randomized algorithms. However, their bound, being a KL-divergence bound, can not be computed and evaluated in practice for any realistic setting. In contrast, our bounds—particularly Theorem 5—are computable, even for modern deep neural networks. This ensures that our results are not only theoretically rigorous but also practical and computable, allowing us to study fairness generalization errors in real-world settings.
We will incorporate this discussion into the paper and revise our claim accordingly.
Thank you for catching the typos. We will fix all these typos and the flipped axis labels of the figure in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for this rebuttal.
Unfortunately, I could not follow the explanation regarding the mismatch between the fairness definition studied in this paper and the standard definitions. More precisely, the loss discussed in the rebuttal seems different from the one presented in the paper. Furthermore, my concern is not whether the proposed upper bounds apply in the multi-class setting but rather what is the connection between the proposed fairness metrics and the ones that can be found in the literature in the multi-class case. They appear to be different.
The rebuttal addressed my concern regarding the missing assumption.
In light of this, I increased my score to 2.
---
Reply to Comment 1.1.1:
Comment: Thank you for the follow-up and for pointing out the confusion. We appreciate the opportunity to clarify the distinction.
We would like first to emphasize that in this paper, we focus mainly on the binary case (i.e., $a=1$ throughout the paper; we will make that clear in the final version). In the binary setting, we focus on two widely adopted notions: demographic parity (DP) and equalized odds (EO), which correspond to the independence and separation criteria in fairness. We chose to focus on these definitions due to their simplicity, widespread adoption, and analytical tractability in the binary setting [1,2].
For instance, demographic parity is typically expressed as
$ P(\hat{Y} \mid T = 0) = P(\hat{Y} \mid T = 1) $,
and equalized odds is given by
$ P(\hat{Y} = 1 \mid T = 1, Y = y) = P(\hat{Y} = 1 \mid T = 0, Y = y) $
for $y \in \{0, 1\}$. Although the notation in our paper may appear different at first glance, our fairness losses are mathematically equivalent to these standard definitions, as used in prior work (e.g., [2,6]). This equivalence is what enables our tractable theoretical analysis.
Regarding the multi-class extension, extending notions like DP or EO beyond binary outcomes is known to be nontrivial, and there is no universally accepted generalization in the literature [2-5]. Some works rely on information-theoretic quantities such as mutual information [3], others on distance correlation [4], or direct extension based on the binary formulation [5] above.
In our rebuttal, we presented a multi-class example to illustrate the flexibility of our theoretical tools. The fairness criterion used there aligns with Definition 1 in [5]. Our goal was to show that the theoretical framework developed in the main paper can easily extend beyond the binary case and apply to other fairness notions.
To conclude, while our paper primarily focuses on the binary case and provides a detailed theoretical analysis with a novel bounding technique, the flexibility of our framework offers significant value. Our tools can easily be extended to address other fairness definitions in more complex settings, including multi-class cases (as demonstrated in the rebuttal), thus opening avenues for future work. The key contribution remains the new bounding technique and the versatility of the theoretical tools developed, which have the potential to facilitate further progress not only in fairness setting but also in group-based loss settings.
We hope that we have addressed your concerns and we kindly ask you to reconsider your score.
[1] Han, Xiaotian, et al. FFB: A fair fairness benchmark for in-processing group fairness methods. In International Conference on Learning Representations, 2024.
[2] Mroueh, Y. et al. Fair mixup: Fairness via interpolation. In International Conference on Learning Representations,2021.
[3] Umang Gupta, Aaron M Ferber, Bistra Dilkina, and Greg Ver Steeg. Controllable guarantees for fair outcomes via contrastive information estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7610–7619, 2021.
[4] Dandan Guo, Chaojie Wang, Baoxiang Wang, and Hongyuan Zha. Learning Fair Representations via Distance Correlation Minimization. IEEE Transactions on Neural Networks and Learning Systems, pages 1–14, 2022.
[5] Denis et al. Fairness guarantee in multi-class classification. arXiv:2109.13642, 2023.
[6] Madras, David, et al. Learning adversarially fair and transferable representations. International Conference on Machine Learning. 2018. | Summary: The paper considers the generalization of (in terms of empirical violation of) fairness when presented with unseen data. Specifically, the goal is to provide a formal guarantee through information-theoretic fairness generalization bounds with mutual information (MI) and conditional mutual information (CMI). The theoretical and empirical results are provided, and the tightness and practical relevance of the bounds are analyzed across several (group-level) fairness notions, including DP and EOdds.
---
### After Rebuttal
Thanks to the authors for the rebuttal. The response is helpful and to the point, which further boosts my confidence in my evaluation of Accept (score 4). I confirm that I also carefully went through the comments from the other reviewers, as well as the rebuttals therein.
Claims And Evidence: The claims consists of several theoretical bounds, e.g., the relation of fairness generalization and the dependence between hypothesis and input data (Theorem 1), the relation between fairness overfitting and the selection mechanism (Theorem 2), the tightening of presented bounds (Theorems 3--4), the reduction of computational cost (Theorem 5), and the bounds for specific fairness notions (Theorems 6--7). The evidence includes the proofs of theorems and the empirical evaluations.
Methods And Evaluation Criteria: The method starts from theoretically capturing the bounds followed by improving them, and the further applying to specific group-level fairness notions. The evaluation criteria include the derived $\Delta L$-CMI, with different methods including DiffDP, DiffEodd, DiffEopp, HSIC, PRemover, etc.
Theoretical Claims: The theoretical claims are different generalization bounds for fairness, and the proofs are provided in appendix.
Experimental Designs Or Analyses: The experimental analyses include different aspects of the evaluation of bounds, including bound tightness, bound-error correlation, and the implication of batch balancing (during training).
Supplementary Material: I went through the supplementary material (but I did not check the proof line-by-line).
Relation To Broader Scientific Literature: The fairness overfitting and generalization bounds can have implications over broader fields where relevant group-level fairness notions are of interest.
Essential References Not Discussed: There are no significant missing references.
Other Strengths And Weaknesses: The strength of the paper comes from the organization of materials and the clear presentation of motivation, setting, theoretical analyses, and empirical evaluations.
The paper can be further improved by including some discussion from the side of lower bounds. While I understand that providing a lower bound might involve developing another set of theoretical results and is beyond the scope of current work, having some discussion (e.g., why a lower bound might be nontrivial, whether a lower bound can be shown to be above 0) can shed light on the whole picture and make it even more informative, i.e., the generalization bounds for fairness can be from both sides (instead of only gets upper bounded).
Other Comments Or Suggestions: Nothing specific in addition to the comments in above sections.
Questions For Authors: Since the impossibility results in previous literature have shown that, in general, not all group-level fairness notions can be achieved at the same time (Chouldechova, 2017, Kleinberg et al., 2017), and also that, the group-level fairness notion EOdds may not be attainable if the data distribution itself does not satisfy certain properties (Tang and Zhang, 2022). I am curious about the possibility of directly deriving the lower bound with the proposed theoretical analyzing framework.
---
Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. _Big Data_ 5, 2 (2017), 153–163.
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2017. Inherent trade-offs in the fair determination of risk scores. In _Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS’17)_.
Zeyu Tang and Kun Zhang. 2022. Attainability and optimality: The equalized odds fairness revisited. In _Proceedings of the Conference on Causal Learning and Reasoning_, Vol. 177. PMLR, 754–786.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your thoughtful question about deriving lower bounds within our theoretical framework, especially considering existing challenges in achieving group-level fairness notions like Equalized Odds (EOdds).
In our work, we focus on understanding how fairness measures observed during training generalize to new, unseen data. While our framework sheds light on the behavior of fairness interventions and their generalization properties, directly deriving explicit lower bounds for the fairness-accuracy trade-off, particularly for specific notions like EOdds, is fundamentally different. In particular, Information-theoretic bounds, including ours, rely on the Donsker-Varadhan variational representation for upper-bound generalization error. Hence, deriving a lower bound would require an alternative variational formulation that directly lower bounds the difference in expectation between the joint distribution and the product of marginals of the specific loss function, which is not trivial.
Regarding the impossibility results, as you've noted, prior research has shown that it's often impossible to satisfy all group-level fairness criteria simultaneously. For instance, Chouldechova (2017) and Kleinberg et al. (2017) discuss inherent trade-offs between different fairness measures. Additionally, Tang and Zhang (2022) highlight that achieving EOdds depends on specific data distribution properties; when these aren't met, deterministic classifiers may struggle to attain EOdds without incorporating randomness into their predictions. The key distinction is that impossibility results establish lower bounds on the population risk for different fairness loss functions, while our contribution provides upper bounds on fairness generalization error. Notably, a model can have low generalization error but still perform poorly in terms of fairness on the population level.
An interesting parallel future direction would be to explore connections between different fairness generalization bounds—for instance, relating DP-fairness generalization error to EO-fairness error. We will incorporate this discussion in the future directions section. | Summary: This paper studies fairness generalization error, i.e., how does model fairness extend to new, unseen data. The fairness generalization error is defined (in Eq. (3)) as the discrepancy between the fairness-population risk and the fairness-empirical risk. The authors study this from an information theory perspective, and derive upper bounds for fairness generalization error using two widely used fairness metrics: Demographic Parity (DP) and Equalized Odds (EOd). The proposed fairness error bounds are validated through experiments on two datasets: COMPAS and Adult.
Claims And Evidence: My first question about the paper is in terms of its motivation. The authors illustrated the fairness error in training and testing on COMPAS dataset in Figure 1, but the difference between training and testing error doesn't seem significant, even for ERM training. The authors show more COMPAS results in Figure 2, and in the appendix (which includes more results on COMPAS and Adult dataset), where the fairness generalization error (difference between training and testing fairness error) is not significant.
This raises the question of whether fairness overfitting is a significant issue, at least on the two datasets (COMPAS and Adult) presented. If the fairness generalization error remains small, the motivation for a theoretical study on fairness generalization error bounds may be weaker. Could the authors clarify why such an analysis is necessary given these findings?
Methods And Evaluation Criteria: The paper focuses on a theoretical study, but it is not quite clear how the proposed theory can be applied to practical estimation. To be specific:
1. the description of the learning algorithm $\mathcal{A}$ and hypothesis $W$ seems vague to me. Could the authors clarify line 108 (left column) - line 072 (right column)? Are there any assumptions or constraints on the learning algorithm $\mathcal{A}$?
2. The assumption that $|f| \in [0,a]$ is used in many theorems. Could the authors clarify how to determine the value of a in practice for estimating the upper bound?
3. The derived error bounds rely on using different permutations of a subset of training data of size $m$. However, there is no clear guideline on how to choose $m$ in practice. Could the authors elaborate on the discussion of $m$ in Remark 5?
4. Could the authors discuss the time complexity of computing the bound estimation? Does obtaining the bound require re-training the model for each new permutation?
5. From Figure 2 and more figures in the appendix, the derived bound seems to be the upper bound of the fairness generalization error. However, it's not clear if the derived upper bound is indeed tight. Could the authors elaborate more on how to interpret the experimental results? Or any evidence to support that the bound is tight?
Theoretical Claims: This paper proposes to derive the upper bound of fairness generalization error. However, it's not quite clear what new insights can be obtained from the theoretical study. For example, in Section 6, "Batch Balancing", the authors mention that balancing training data between different sensitive groups can help improve fairness, but this conclusion has been studied in many existing works, which use different techniques like resampling, reweighting, counter-factual data generation to balance data [1,2,3]. It's not clear what new insights can be obtained from the theoretical analysis.
Building on the points mentioned above: Based on Theorem 5 and 7, it seems that the mutual information term and the number of samples affect the upper bound. The authors may consider exploring potential new/practical insights from these two perspectives. For example, can we practically reduce the mutual information term (maybe through some regularization in training) or increase the training sample number (like data augmentation) to minimize the fairness error bound?
[1] Buda, Mateusz, Atsuto Maki, and Maciej A. Mazurowski. "A systematic study of the class imbalance problem in convolutional neural networks." Neural networks 106 (2018): 249-259.
[2] Jang, Taeuk, Feng Zheng, and Xiaoqian Wang. "Constructing a fair classifier with generated fair data." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021.
[3] Sagawa, Shiori, et al. "Distributionally Robust Neural Networks." International Conference on Learning Representations. 2019.
Experimental Designs Or Analyses: I understand that the paper primarily focuses on a theoretical study, but the experimental analysis is limited to only two datasets (COMPAS and Adult). Actually, the main paper presents results only on COMPAS, while all Adult dataset results are relegated to the appendix. To better demonstrate the practical applicability of the proposed theory, the authors may consider including a few larger datasets (like CelebA) and varying model architectures in their analysis.
Supplementary Material: I took a brief review of the supplementary material.
Relation To Broader Scientific Literature: This paper is related to fairness in machine learning, an important topic in trustworthy machine learning.
Essential References Not Discussed: See in *Theoretical Claims*
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: 1. In Eq. (4), is it gen_fairness or \bar gen_fairness?
2. Lemma 2: "such that ... for all j \not = i except on i, i.e., \tilde v_i \not = v_j". Is it \tilde v_i \not = v_i instead?
3. Theorem 1: the second square symbol seems not clear
4. In Figure 2, second row, third column, the authors may consider reporting the correlation coefficient and p-value to validate the linear relationship
Questions For Authors: My major concerns regarding the paper are about the following:
1. motivation to study fair generalization error bound
2. applicability, parameter setting, efficiency, and validity of using the derived theorems for error bound estimation
3. practical insights from the theory
4. experimental validation of the derived bound
Please check my comments in above sections of Claims And Evidence, Methods And Evaluation Criteria, Theoretical Claims, Experimental Designs Or Analyses for details.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback on our work. Below, we address each concern point by point.
**Motivation:** We respectfully disagree that there is a lack of motivation to study fairness generalization. As demonstrated in Figure 1, i) Compared to ERM, when fairness interventions are applied, the generalization error can become particularly significant (e.g., HSCI or PRemover), indicating that these techniques introduce generalization challenges that need further investigation. ii) In certain cases—especially with low-data— the fairness generalization error can be as high as 10–30% of the fairness training loss, which is significant. iii) Different fairness methods exhibit varying generalization behaviors, suggesting that multiple factors influence fairness generalization. Understanding these variations is essential for developing principled approaches to mitigating fairness-related overfitting.
**Experiments:** We kindly refer the reviewer to lines 397-404. Estimating our proposed bound follows the well-established protocol (Harutyunyan et al. 2021; Wang & Mao 2023) and particularly Dong et al. (2024), where permutation-based bounds have been proposed for standard generalization error. The main difference is that here: i) we are targeting a fairness loss. ii) MI terms involve a continuous variable.
Q: $a$ and $m$ in practice? \
A: $a$ in our paper is the upper bound of the predictor function $f$. Since this paper mainly considers the binary case $\{-1,1\}$, $a=1$ and requires no estimation. We will clarify this in the final version. \
$m$ is a hyperparameter. Like in our experiments, we recommend setting m=n−1 for practical reasons. \
Q: Re-training model for each permutation? \
A: No, computing the bound does not require retraining for each permutation. It only involves evaluating mutual information estimates over different subsets, with a computational complexity similar to Dong et al. (2024).
**Insights from theory:**\
**Theoretical:** In this paper, we take a significant step toward addressing fairness generalization errors by deriving the first rigorous **computable bounds** in the MI and CMI settings, using a novel bounding technique. Our results demonstrate that models generalize differently depending on the fairness approach used, as they memorize the different data attributes (V) at different scales. Furthermore, our bounds shed some light on how data balance influences fairness overfitting. \
**Practical:** Beyond these theoretical insights, generalization bounds can also inform practical strategies for improving fairness performance. As a proof of concept, our results suggest that balanced representation (1/H(n_0,n_1)) reduces fairness generalization errors. To validate this, we experimented with batch-balancing, demonstrating that this indeed improves generalization further confirming our theoretical finding. Note that from this perspective, our work can be seen as a theoretical guarantee for the previous works on data balancing highlighted by the reviewers. Additionally, for example, our findings suggest that controlling the MI term—such as by introducing gradient noise during training—could further improve fairness generalization.
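The batch-balancing strategy mentioned above can be sketched as follows; this is a minimal illustration of drawing equally many sample indices per sensitive group per batch, with illustrative names, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of group-balanced batching (illustrative, not the paper's code):
# each batch draws the same number of sample indices from every sensitive group,
# regardless of the groups' sizes in the full dataset.
def balanced_batch(group_indices, batch_size, rng):
    per_group = batch_size // len(group_indices)
    batch = np.concatenate(
        [rng.choice(g, size=per_group, replace=True) for g in group_indices]
    )
    rng.shuffle(batch)
    return batch

rng = np.random.default_rng(0)
groups = [np.arange(0, 100), np.arange(100, 140)]  # imbalanced: n0=100, n1=40
batch = balanced_batch(groups, 32, rng)            # 16 indices from each group
```

The minority group is oversampled (sampling with replacement), so each batch sees a balanced representation even when the dataset does not.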
Q: assumptions on A? \
A: A learning algorithm is a randomized mapping from the training data $S$ to the model weights $W$. Similar to other information-theoretic studies (e.g., Harutyunyan et al., 2021), we make no assumptions about A.
Q: How to interpret the experimental results? the bound is tight? \
A: In general, the primary goal of generalization bounds is not solely tightness—achieving this requires strong additional assumptions—but rather to provide theoretical insights into generalization behavior and capture the key factors. Information-theoretic bounds are generally among the tightest in learning theory. For instance, Wang & Mao (2023) (in Figure 2) report bounds that, while sometimes over four times larger than the observed empirical generalization error, remain valid and notably tighter than previous results, as they successfully capture the overall trends. \
In the context of fairness, using prior bounding techniques would yield overly loose bounds that fail to provide meaningful insights into fairness dynamics (see our discussion on Lemma 1). In contrast, our work introduces a novel bounding technique that not only results in tighter fairness bounds compared to previous techniques—e.g., Theorem 1 is tighter than Lemma 1 by a factor of $1/\sqrt{n}$—but also reveals fairness-specific properties, such as the influence of class balance (n0,n1). Empirically, our derived bounds consistently track the observed fairness generalization error across different models and datasets, further validating their effectiveness.
Q: p-value? \
A: The p-value for the scatter plot is 3.43×10⁻¹⁶, indicating an extremely strong statistical significance further validating our bounds.
We will fix the typos suggested by the reviewer.
SNS-Bench: Defining, Building, and Assessing Capabilities of Large Language Models in Social Networking Services | Accept (poster)
Summary: This paper introduces a benchmark dataset, SNS-Bench, to evaluate LLMs' capabilities in social networking services. The dataset consists of 8 NLP tasks centering around user postings compiled from the REDnote social networking platform. The authors evaluate over 25 closed- and open-source LLMs on SNS-Bench and show that their performance generally adheres to the scaling law.
## update after rebuttal
The authors addressed my concerns regarding using OCR to convert post images to texts, and the performance without translating Chinese texts into English. However, my main concern remains: the proposed datasets and tasks lack the sense of "personalized recommendation" and "social networking". While the authors did try to demonstrate the "personalization" concept in the Note-Hashtag task, the provided example only shows how they select post tags to "characterize" the post author based on that single post, without utilizing any personal attributes or behavioral histories. Furthermore, the "social networks" of the users (i.e., the connections and interactions of the user in question) are ignored in these datasets and tasks, making the claim of benchmarking for "Social Networking Services" questionable. As such, I am maintaining my overall rating.
And my apologies that I replied to the authors' rebuttal as an official comment, which is not visible to the authors. My rebuttal comment is pasted below:
"""
Thank the authors for demonstrating the "personalization" concept in the Note-Hashtag task. The provided example, however, appears to select post tags to "characterize" the post author based on that single post. I think we need a clear definition of "personalized recommendation" as the ground for this rebuttal. To my humble knowledge, the core of "personalized recommendation" is to tailor recommendations for each individual based on their personal attributes and behavioral histories, and I don't see such characteristics in the proposed tasks; neither do I see how "social networks" (i.e., the connections and interactions of the user in question) play a part in these tasks.
Admittedly, LLMs have deeply reshaped the landscape of many research fields, and my opinion about "personalized recommendation" and "social networks" tasks may be too shallow and conservative. I would be glad to learn how the authors perceive these concepts with the application of LLMs.
"""
Claims And Evidence: - The paper claims that SNS-Bench is designed to assess LLMs' capabilities in social networking services. However, the eight benchmark tasks primarily align with conventional NLP tasks centered around user notes (postings), without explicitly modeling the broader social networking context, such as user interactions, network structures, or temporal dynamics.
- Section 3.1 defines Personalized Recommendation as a task category, stating that models should deliver tailored content based on user interests, behavior, and past interactions. However, the proposed tasks, Note-Hashtag and Note-QueryGen, do not incorporate any personalization mechanism. They focus on general content tagging and query generation rather than adapting recommendations based on user-specific preferences or engagement history.
Methods And Evaluation Criteria: No, the proposed benchmark does not consider how a user's social network may affect their engagement with the media generated by their connections.
Theoretical Claims: N.A., this is a benchmark paper, no theoretical claim.
Experimental Designs Or Analyses: Other than the missing of user's social networking context, the experimental design and analysis are solid and comprehensive.
Supplementary Material: Reviewed A.2. Prompt Templates for Instructions, looks good to me.
Relation To Broader Scientific Literature: The proposed benchmark dataset is related in applying LLMs in social networking tasks.
Essential References Not Discussed: not in my awareness
Other Strengths And Weaknesses: Strengths:
- The paper evaluates a diverse set of both open-source and closed-source LLMs on the proposed benchmark, providing a broad comparative analysis.
- The SNS-Bench provides a framework to evaluate LLMs's capabilities in social media related tasks.
Weaknesses:
- The benchmark tasks are designed around the note format of the REDnote app, which may limit their generalizability to other social networking platforms with different content structures and interaction dynamics.
- The conversion of user note images to text using Optical Character Recognition (OCR) may miss critical visual elements that contribute to the meaning of note, potentially affecting the fidelity of content understanding.
Other Comments Or Suggestions: Typo:
"Large Language Models (LLMs) play an importent role in SNS..." --> "important"
Suggestion:
Improve justification for task selection. Some tasks, like Note-Gender, seem less well-defined in terms of real-world SNS applications.
Questions For Authors: - As the REDnote platform is mostly a Chinese community, did you translate the notes into English, or did you only select notes in English? If you performed translation, is there any performance difference before and after the translation?
- Could you explain where the concepts of "social networking" and "personalization" may apply in the benchmark?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1:
Rebuttal: **R4.Q1: Broader Social Networking Context**
Thank you for the insightful observation. The construction does incorporate key social interactions:
1. **Data Collection Pipeline (Section 3.2)**
- User engagement metrics reflecting community response
- Note categories and tags chosen by creators based on audience reception
- Comment sections capturing direct user interactions
2. **Example: Note-Hashtag**
- Candidate tags include the original creator’s selections (personal preference)
- Community-generated popular tags for similar content (crowd preference)
**R4.Q2: Personalization**
We give the example on personalization:
```
{"content":"Note Title: 108 Exercise + Reading Classics\nNote Content: Since discovering the Seed Law in October 2023, I’ve practiced 108 exercises and recited classics for over 7 months. My health has improved—better complexion, fewer wrinkles, and improved sleep. I've also experienced increased energy and financial gains.\nAs Seed Dad says, only action brings results.\nI hope everyone persists—each breakthrough leads to a better self!\nEvery drop of sweat is proof of our transformation!","candidates":"Enhance Energy,Grapefruit Jasmine,Must Visit Xinjiang,Cat Internal Deworming,Physics Tutoring for International Students,Stele Forest Museum,Sydney Spa,Buccellati Gardenia,108 Exercise,Tongue Piercing,Seed Law,Sharing What I Find Interesting,Women's Growth,Guojin Building","answer":"Seed Law,108 Exercise,Enhance Energy,Women's Growth"}
```
1. The tags (*"Seed Law"*, *"108 Exercise"*) are derived from:
- The author's consistent personal practice (7 months)
- Tangible health benefits (specific improvements listed)
- Community validation (implied by their selection)
2. The selected tags represent:
- A personal transformation journey (*"Women's Growth"*)
- Proven techniques with real impact (*"108 Exercise"*)
- A philosophical alignment (*"Seed Law"*)
3. This demonstrates how we capture personalization:
- Creator-defined preferences (original tags)
- Community-endorsed interests (popular tags)
**R4.Q3: Platform limitations**
We appreciate the concern about REDnote's note format. In our ongoing work (SNS-Bench-V2), we are addressing this through:
1. Multi-platform expansion:
- Collecting data from Twitter/X and Instagram.
- Designing unified task formulations that work across platforms.
2. Beyond note content:
- Incorporating threaded conversations.
- User history.
- Cross-post interactions.
**R4.Q4: Visual loss in OCR**
We take the visual element concern seriously and have implemented:
1. Strict quality control:
- Human verification for all OCR conversions (Appendix B)
- Automatic filtering of low-confidence OCR results
**R4.Q5: Typo and Tasks**
We will fix the typos and rewrite the detailed definition of tasks.
**R4.Q6: Results on Chinese Version**
Most cases are English, with Chinese translated into English (GLM4 + manual review). To provide more results, we have translated all English cases into Chinese (GLM4 + manual review). We will release the Chinese data.
The average Chinese results:
||Note-Taxonomy|Note-Hashtag|Note-QueryCorr|Note-MRC|Note-NER|Note-Gender|Note-CHLW|Note-QueryGen|**SNS-Bench**|
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
|Llama-3.2-3B-Instruct|10.50|6.48|23.25|13.22|4.05|64.25|24.81|32.84|22.42|
|Qwen2.5-1.5B-Instruct|12.09|34.06|42.42|17.61|42.55|68.91|50.69|32.43|37.59|
|Phi-3.5-mini-instruct(3.82B)|32.84|62.09|40.91|25.16|38.30|54.40|26.47|33.21|39.17|
|Phi-4-14B|57.16|63.60|44.74|41.17|46.18|89.64|27.1|31.43|50.13|
|Glm-4-9b-chat|53.09|80.21|42.11|27.75|56.69|88.08|37.76|37.51|52.90|
|Qwen2.5-7B-Instruct|47.90|77.54|43.52|54.37|62.51|89.12|36.20|37.24|56.05|
|Qwen2.5-32B-Instruct|66.65|84.08|51.38|64.87|65.59|90.67|39.00|35.78|62.25|
|Qwen2.5-72B-Instruct|68.29|87.83|55.90|59.66|65.75|92.23|49.01|39.65|64.79|
|Deepseek-v3|74.26|91.39|57.22|62.83|74.2|93.26|40.93|35.07|66.14|
|GLM-4-Plus|71.42|89.71|52.67|61.49|68.88|93.26|32.88|36.03|63.29|
|Gemini-1.5-pro|70.27|87.88|48.39|60.42|70.18|90.16|34.26|37.36|62.36|
|GPT-4o-2024-05-13|69.18|80.28|55.02|65.11|70.52|91.19|48.04|39.56|64.86|
(PS: Llama-3.2-3B-Instruct and Phi-4-14B show a performance drop.)
**R4.Q7: Social Networking & Personalization**
SNSbench captures social networking dynamics through:
1. Interaction Signals: Tasks use data with implicit social context—comments, user-generated tags, and replies. For example, Note-CHLW identifies highlight words from actual discussions, mirroring how platforms detect trending topics.
2. Personalization: Most tasks reflect personalized SNS behaviors, for example:
- Note-QueryGen: Generated queries mimic how users personalize searches based on interests (e.g., converting a skincare note into "best vitamin C serums for sensitive skin").
**We kindly invite you to review our responses and reconsider your assessments. Thank you for your time and consideration!**
Summary: The paper introduces SNS-BENCH, a comprehensive benchmark for evaluating large language models in social networking service tasks. It covers eight diverse tasks—from note taxonomy and sentiment analysis to query generation and entity recognition—using a dataset of 6,658 questions sourced from a major social platform. The study presents detailed experimental results across 25+ LLMs, highlighting performance variations and a scaling law that informs both the strengths and limitations of current models.
Claims And Evidence: The paper claims that SNS-BENCH offers a systematic and robust framework to assess LLMs’ capabilities in SNS contexts and that model performance improves with scale, with closed-source models generally outperforming open-source counterparts. Extensive experimental results, quantitative metrics (accuracy, F1, etc), and detailed analyses across eight tasks support these claims. The evidence appears convincing, though some claims would benefit from further discussion on dataset representativeness.
Methods And Evaluation Criteria: The paper employs a multi-step data collection and annotation process, ensuring diversity and quality through de-identification, manual review, and expert validation. Tailored evaluation metrics are defined for each of the eight tasks, including accuracy, F1 score, and so on. These methods and criteria are well-aligned with the goals of assessing LLM performance in complex, real-world SNS scenarios.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental design benchmarks over 25 LLMs using standardized prompts and metrics across eight distinct SNS tasks on a large-scale dataset. The analyses compare performance variations based on model size and type, providing clear insights into task-specific challenges and strengths.
Overall, the design is methodologically sound and the analyses are thorough.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths include the paper’s comprehensive and well-structured benchmark, detailed experimental analysis, and clear evaluation criteria that address diverse SNS tasks.
A notable weakness is the reliance on text-based interactions might overlook multimodal aspects inherent to modern social networking.
Another miner concern is the lack of insights or detailed analysis. The current analysis is more akin to a summary to the results.
Other Comments Or Suggestions: Expanding the discussion on future extensions—such as incorporating multimodal data—could further enhance the paper. Overall, the work is solid and offers valuable insights into the evaluation of LLMs in social networking contexts
Questions For Authors: Please check above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: **R3.Q1: Multimodal limitation.**
We sincerely appreciate the reviewer’s constructive feedback. The reviewer is absolutely right to highlight the importance of multimodal interactions in modern SNS platforms. In fact, we are already working on SNS-Bench-V2, which will incorporate image-text pairs (e.g., OCR-extracted text from note images, user-generated captions) and multimodal tasks (e.g., visual hashtag recommendation, sentiment analysis). This extension will align with real-world SNS content while maintaining our focus on nuanced social understanding. We will explicitly discuss this direction in the revised manuscript’s Broader Impact section.
**R3.Q2: Deeper analysis and insights.**
Thank you for prompting us to clarify our findings. Beyond aggregated results (Table 2), our analysis reveals critical task-specific and generalizable insights about LLMs’ social capabilities:
**1. General Insights:**
Even state-of-the-art models struggle with SNS-unique demands:
- Creativity: Underperform in Note-QueryGen (lexical diversity) and Note-CHLW (novelty detection) due to rigid training objectives.
- Depth: Struggle with Note-MRC (Complex) (evidence extraction) and Note-QueryCorr (Topic) (intent granularity), revealing gaps in social context modeling (Section 5.1).
**2. More Task-Specific Insights:**
- Note-Taxonomy: Small models (e.g., Qwen-1.5B) favor single-choice tasks (accuracy: 27.5%), while larger models (e.g., Llama3-70B) excel in multi-hop reasoning (65.12%), proving scale aids hierarchical reasoning (Table 2).
- Note-Hashtag: Most models (32B+) perform better on single-choice (Qwen-72B: 86.25%) than multi-choice (84.60%), except Claude-3.5, which leads in multi-choice (88.58%), suggesting superior multi-label adaptability (Figure 9).
- Note-MRC (Simple): Gemini/Qwen excel at binary relevance judgment (F1: ~90%) but fail to precisely extract answers (BLEU: ~55%), whereas DeepSeek-V3 achieves the best content extraction (ROUGE-L: 74.81%), highlighting a trade-off between relevance judgment and granular retrieval (Table 5).
These observations underscore that SNS challenges require both social and technical innovation—a theme we will emphasize in Section 6. We will also add a new subsection (5.3 Model Behavior Analysis) to consolidate these insights with supporting visualizations (e.g., confusion matrices for Hashtag tasks).
Summary: This paper aims to advance LLMs for Social Networking Services (SNS) by introducing a comprehensive benchmark, SNS-BENCH, derived from a social media platform, addressing the limitation of prior work that studies SNS tasks in isolation. The benchmark includes eight distinct tasks, such as note classification, sentiment analysis, and personalized recommendation, providing evaluation across various realistic dimensions. The authors evaluate over 25 LLMs on SNS-BENCH, providing insights into model performance across different categories. One main result is that closed-source models generally outperform open-source ones, but with a relatively small margin, and tasks involving complex emotions and long-text understanding remain challenging for current LLMs.
Claims And Evidence: Claims made in this work are overall well-supported. The comprehensiveness of the SNS-BENCH benchmark is well illustrated by its diverse sources and the different tasks it covers for SNS. The dataset itself is well-structured, paired with standard metrics for each subtask. Experiments are thorough in terms of including a variety of popular LLMs from both the open-source and closed-source communities. However, one caveat is that the authors motivate shifting away from focusing on isolated SNS tasks, yet the proposed SNS-BENCH benchmark still evaluates each SNS task in isolation after the breakdown, albeit with realistic and situated source data. This makes some claims about generalizing the conclusions beyond individual tasks less convincing.
Methods And Evaluation Criteria: The benchmark includes a rigorous annotation process with both automated and human verification, ensuring high data quality. Evaluations on each separate task are fair and standard, using widely-used metrics from the QA and text generation literature (F1, BLEU, etc.).
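For concreteness, the token-level F1 mentioned among these metrics is commonly computed SQuAD-style; the following is a minimal sketch of that standard definition, not the benchmark's exact implementation:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference answer (SQuAD-style)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts tokens shared between prediction and reference.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the cat sat", "the cat sat down"), 3))  # 0.857
```

Real evaluation scripts additionally normalize punctuation and articles before tokenizing; that step is omitted here for brevity.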
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experimental setup and analysis are well-documented, with clear descriptions of the evaluated models, the computational resources, and the experimental protocol. This work thoroughly compares the performance of different models and analyzes task-specific challenges. However, the discussion reads as a bit brief, likely due to limited space. Moving some graphs to the appendix would help. This work can be further strengthened by providing analysis into the social and online dimensions of SNS tasks, such as how different models cope with user diversity and time dynamics. These dimensions are naturally introduced by the source data into this benchmark, but the evaluation and analysis do not explicitly address those challenging aspects.
Supplementary Material: No
Relation To Broader Scientific Literature: The study emphasizes the necessity of a comprehensive SNS-specific benchmark, and highlights gaps in existing LLM capabilities regarding social interaction in the online culture.
Essential References Not Discussed: This paper is well-referenced.
Other Strengths And Weaknesses: One of the key weaknesses of this submission is its lack of creativity in designing a novel evaluation framework. The approach taken primarily involves deriving a benchmark from real-world user data and breaking it down into individual tasks with standard evaluation metrics. While this is a reasonable and common methodology, it does not push the boundaries of evaluation in the Social Networking Services (SNS) domain. The field would greatly benefit from more innovative evaluation paradigms that account for the complex nature of real-world SNS challenges, particularly those involving user interaction and diversity.
Other Comments Or Suggestions: 022: “in SNS remains challenging (Bandura).” → include year for this citation.
Figure 4,5,6,7,8,9: words in the graph are a bit too small to read.
Questions For Authors: Have you considered evaluating models under adversarial settings, simulating malicious users in online communities?
The dataset may have biases due to its reliance on a single social media platform, so I’m curious about further analysis on this aspect.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: **Response to Review 2:**
Thank you for your insightful feedback.
**R2.Q1: Isolated evaluation of SNS tasks.**
We acknowledge the concern regarding task isolation in SNS-Bench and appreciate the opportunity to clarify our design rationale.
1. Task-Specific Evaluation Necessity:
Social networking services (SNS) involve diverse interactions (e.g., content comprehension, sentiment analysis, recommendation), each requiring distinct capabilities. Isolated evaluation allows us to:
- Pinpoint strengths/weaknesses: For instance, models may excel at hashtag selection (structured tasks) but struggle with complex reasoning in Note-MRC (Figure 7).
- Guide targeted improvements: Task-specific metrics (e.g., F1 for Note-NER, ANLS for Note-QueryGen) reveal granular performance gaps (Table 5).
2. Holistic Insights via Aggregation:
While tasks are evaluated independently, our analysis synthesizes cross-task patterns (Section 4.3). For example:
- Closed-source models consistently outperform open-source ones (Table 2), suggesting general superiority in SNS contexts.
- Tasks requiring emotional/cultural understanding (Note-Gender) exhibit higher variance, highlighting challenges in social nuance (Section 5.1).
3. Real-World Generalization:
The benchmark’s diversity—8 tasks spanning 6,658 questions with varied formats (Table 1)—ensures broad coverage of SNS scenarios. By testing isolated but representative tasks, we simulate the multifaceted demands of real-world platforms (e.g., handling both classification and generation).
We agree that future work could explore interdependencies between tasks (e.g., joint training). However, our current design prioritizes diagnostic clarity, enabling actionable insights for model development. Thank you again for your valuable advice. We will emphasize this rationale in the revised manuscript.
**R2.Q2: Lack of creativity in evaluation framework.**
We appreciate the insightful feedback. While our benchmark adopts standard evaluation metrics for individual tasks, its core value lies in the authenticity and diversity of real-world SNS scenarios. Unlike synthetic or simplified datasets, SNS-Bench captures nuanced user behaviors (e.g., informal language, cultural references) and task interdependencies (e.g., sentiment influencing recommendations) from actual SNS platforms. This fidelity enables a more grounded assessment of LLMs’ practical utility in social contexts. Future work will build on this foundation to incorporate interactive and adversarial dynamics.
**R2.Q3: Citation and figure readability issues.**
Thank you for your careful review. We will address both points in the revised manuscript:
1. Citation update: The reference to Bandura will be updated to include the publication year (e.g., "Bandura, 2001").
2. Figure improvements: Figures 4–9 will be resized to ensure all text (axis labels, legends, annotations) is clearly legible in the final version.
Summary: This paper presents SNS-Bench, to assess LLMs on different social networking service tasks. It includes 8 tasks such as query content relevance. It evaluates 25+ LLMs and provides further insights.
## update after rebuttal
I maintain my score in support of the work after rebuttal.
Claims And Evidence: The central claim is that current LLMs do not perform ideally on SNS tasks. Table 2 well supports this claim.
Methods And Evaluation Criteria: Yes, it evaluates 25+ leading LLMs including Claude, GPT, Llama, Qwen etc.
Theoretical Claims: There are no theoretical claims, if the reviewer understands correctly.
Experimental Designs Or Analyses: Yes, the experimental design is sound (this is a benchmark paper, with a reasonable and comprehensive choice of LLMs).
Supplementary Material: Yes, the reviewer mainly reviews Appendix A which provides extensive prompt template used.
Relation To Broader Scientific Literature: Related benchmarks either do not cover complex scenarios with multiple network tools or rely heavily on GPT models. The paper addresses both of these problems well.
Essential References Not Discussed: The reviewer believes the paper provides a good list of references.
Other Strengths And Weaknesses: The paper presents a good problem - evaluating LLMs in SNS tasks.
Other Comments Or Suggestions: The font of Table 2 is too small. Please consider using more than one row for one task.
Questions For Authors: Please see above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: **Response to Review 1**:
**R1.Q1: Font size in Table 2.**
Thank you for your helpful suggestion. We will adjust the font size and optimize the layout of Table 2 (e.g., using multiple rows for tasks if needed) to improve readability in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I maintain my score and support the work to be accepted!
Aggregation Buffer: Revisiting DropEdge with a New Parameter Block | Accept (poster)
Summary: This paper revisits DropEdge. It claims that the robustness of GNNs degrades during training, which yields poor performance, and proposes the aggregation buffer, a block designed to address this problem.
## Update after rebuttal
I recommend accept as a poster.
Claims And Evidence: Yes.
The main claim is that DropEdge is helpful for robustness but harmful in terms of bias. The evidence for this claim is in Figure 2, and the claim is also intuitive and sounds correct to me (I actually know that DropEdge helps robustness from a paper called DropMessage).
Methods And Evaluation Criteria: Yes.
The method is to add a new aggregation block that satisfies two conditions (actually one condition, since C2 implies C1). From Figure 2 we see this method works. But in theory, why these two conditions are enough for the bias-robustness trade-off is a bit unclear.
Theoretical Claims: Yes. No obvious issue.
Theorem 4.1 basically says the proposed block satisfies the two conditions; I have checked the correctness.
Experimental Designs Or Analyses: Yes. No obvious issue.
Table 1 uses strong baselines; Table 2 uses several different encoders. The authors also include results on high-degree and low-degree nodes; I do not know why this is necessary, but it is harmless. The overall experiments are decent, at least in my batch.
Supplementary Material: Yes. All of them.
Relation To Broader Scientific Literature: The paper is related to DropEdge, JKNet.
Essential References Not Discussed: No
Other Strengths And Weaknesses: S1: The paper is clear and the analysis seems reasonable.
S2: The experiments are comprehensive.
W1: The improvements seem marginal.
W2: The paper aims to improve the robustness of GNNs, but the experiments are conducted on the original graph.
Other Comments Or Suggestions: No.
Questions For Authors: Q1: There is a recent paper [1] that report high performance of classic GNNs, e.g., the raw GCN achieves 85.10 on Cora. Is there difference between your setup and theirs?
[1]Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed review and valuable questions. We hope the following responses address your concerns.
**Why the two conditions are enough for the bias-robustness trade-off is unclear.**
Thank you for raising this insightful point.
Our two conditions and the layer-wise correction mechanism of AGG$_B$ are motivated by our discrepancy-bound analysis (Sec. 3.4).
Condition C1 encourages AGG$_B$ to adapt to structural variations, while C2 ensures stability by minimizing unnecessary changes when the graph structure is stable.
Together, these conditions mitigate representation inconsistencies caused by structural variations in a *layer-wise manner*.
Establishing a rigorous theoretical link between layer-wise corrections and robustness—defined on the *final output of a GNN*—is indeed challenging.
However, it is intuitive that consistent corrections at each layer yield a more robust final representation—a notion supported by our empirical results.
Moreover, in the case of a single-layer GNN, the layer-wise and final-output views coincide, offering a clear example where this connection holds.
**W1. The improvements seem marginal.**
We believe that a difference in experimental setups should be considered when evaluating the significance of improvements.
Our method aims to enhance trained GNNs—similar to curriculum learning (e.g., TUNEUP) or graph augmentation (e.g., GraphPatcher)—rather than training GNNs from scratch.
While prior works often use fixed hyperparameters for base GNNs, we performed a grid search even for the base models, boosting the *base accuracy* to improve upon.
Although this setup may yield smaller apparent gains, it more accurately reflects the true robustness and effectiveness of methods.
Notably, our approach is the only method that consistently improves performance across all datasets, suggesting that these gains stem from architectural changes--the integration of AGG$_B$--rather than hyperparameter tuning.
These results support our view that addressing structural inconsistency offers further opportunities to enhance GNN performance.
**W2. The paper aims to improve the robustness of GNNs, but the experiments are conducted on the original graph.**
Thank you for your helpful suggestion.
We focused on original graphs because our target is robustness among nodes with different structural properties within the same graph, rather than across graphs.
To evaluate this, we measured performance across groups based on node degrees (low vs. high) and structural roles (heterophilic vs. homophilic), which are closely linked to structural inconsistency.
Our method showed significant gains for low-degree and heterophilic nodes, supporting the claim that AGG$_B$ improves edge robustness.
Nevertheless, we agree that testing under graph perturbation is informative.
In response to your feedback, we conducted additional experiments using random edge removal in test graphs ([link - Table C](https://shorturl.at/GjlH9)).
AGG$_B$ significantly improves the standard GCN—even those trained with DropEdge—demonstrating its robustness benefits.
Furthermore, GNNs trained with DropEdge did not retain performance any better than those without it, reinforcing our claim that DropEdge alone is insufficient for robustness due to inherent inductive bias.
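The random edge-removal perturbation used in the additional experiment above can be sketched minimally as follows; `drop_edges` is a hypothetical helper, not the authors' code, and real DropEdge implementations operate on sparse adjacency matrices rather than Python edge lists:

```python
import random

def drop_edges(edges, drop_rate, seed=0):
    """Randomly remove a fraction of edges from an edge list (DropEdge-style).

    edges: list of (u, v) pairs; drop_rate: probability of removing each edge.
    A fixed seed keeps the perturbation reproducible across evaluation runs.
    """
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_rate]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
perturbed = drop_edges(edges, drop_rate=0.4)
print(len(perturbed), "of", len(edges), "edges kept")
```

At test time, the model's predictions on the perturbed graph are compared against those on the original graph to measure robustness to edge removal.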
**Q1: There is a recent paper [1] that report high performance of classic GNNs, (e.g. 85.10 on Cora). Is there difference between your setup and theirs?**
We carefully reviewed [1] and identified several key differences in experimental setups.
In our work, we use 10 random splits per dataset, running one trial per split.
We select hyperparameters based on validation results from 5 of these splits to avoid overfitting to a specific partition.
In contrast, [1] employs fixed public splits—using a single public split for Cora—with 5 runs using random weight initializations.
This approach may yield higher accuracy and lower variance by tuning specifically for that split, but it might not generalize well to other partitions.
Additionally, their hyperparameter search space is broader, including batch/layer normalization, residual connections, number of layers, additional linear transformations before and after GNN layers, and maximum epochs.
Our search includes dropout, hidden dimension, learning rate, and weight decay, following standard practices in previous GNN literature [2]. These differences likely explain the performance gap.
Thank you again for your insightful review and for considering our response.
[1] Classic gnns are strong baselines: Reassessing gnns for node classification.
[2] Pitfalls of Graph Neural Network Evaluation
Summary: This paper analyzes the robustness of GNNs under edge dropping and proposes the Aggregation Buffer (AGGB) as a solution, which enhances the robustness of GNNs through a two-step training strategy while maintaining the knowledge of the original model. AGGB addresses the shortcomings of DropEdge and improves the performance of GNNs under unstable graph structures.
Claims And Evidence: The main claim of the paper is that GNNs cannot effectively cope with the adjacency-matrix perturbations caused by DropEdge due to their aggregation operation, resulting in performance degradation. To this end, the authors propose AGGB, a module that enhances the robustness of GNNs, aiming to solve this problem. In the experimental section, the authors verify the effectiveness of AGGB through a series of comparative experiments. The experimental results provide evidence to support the claims in the paper.
Methods And Evaluation Criteria: The proposed method mainly includes Aggregation Buffer (AGGB), which is used to enhance the robustness of GNN under structural perturbations. The proposed evaluation criteria is appropriate.
Theoretical Claims: In the paper, several key theoretical claims are proposed. I checked the claims. In general, the theoretical claims are effectively supported by mathematical derivation and theoretical proof.
Experimental Designs Or Analyses: The experimental datasets are common datasets for semi-supervised learning. The evaluation method of degree bias and structural disparity is one of the key innovations of the experimental design in the paper. By evaluating the performance of the head node and the tail node respectively, the robustness of the model under different degree distributions can be better measured. For structural differences, nodes are divided into homogeneous nodes and heterogeneous nodes based on their homogeneity ratio. This is also an innovation.
Supplementary Material: Yes, I have read it. The supplementary materials contain further experimental results including theoretical proofs. I have verified the rationality of the proofs.
Relation To Broader Scientific Literature: GNN is widely used in various graph data tasks. DropEdge (Rong et al., 2019) proposed to enhance the robustness of the model by randomly deleting edges in the graph. However, this method only trains the model by perturbing the graph structure, and does not fundamentally solve the sensitivity of the GNN structure to perturbations. Based on this, this paper further proposes to introduce the AGGB module to correct the performance of GNN under perturbations, so that the model can adapt to different graph structure changes and improve robustness.
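For reference, DropEdge amounts to independently removing a fraction of edges at each training step; a minimal NumPy sketch of this style of augmentation (our own illustration, not the paper's implementation):

```python
import numpy as np

def drop_edge(edge_index, drop_rate, rng=None):
    """Keep each edge independently with probability 1 - drop_rate.

    edge_index: (2, E) array of [source; target] node indices.
    Returns a reduced (2, E') array, as in DropEdge-style augmentation.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= drop_rate
    return edge_index[:, keep]

edges = np.array([[0, 0, 1, 2, 3],
                  [1, 2, 2, 3, 0]])
reduced = drop_edge(edges, drop_rate=0.5, rng=np.random.default_rng(0))
```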
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The proposed method is a novel approach for improving the robustness of GNNs under graph perturbations, as it addresses an important issue of structural changes in graph data.
2. The paper does a good job of clearly explaining the theory behind the design and shows why existing methods fail to address these issues.
3. The experimental design is solid, with a wide range of datasets and a novel evaluation framework, including tests for robustness under GNNs.
Weaknesses:
There are no major weaknesses in this paper.
Other Comments Or Suggestions: See the questions.
Questions For Authors: 1. If there is still a large deviation between Q and the true distribution P, is it reasonable to assume that Q is sufficiently close to P?
2. In the current design, AGGB relies on the representations of all the first $l$ layers and the adjacency matrix. Does this design lead to information redundancy or introduce unnecessary noise? Is there a more streamlined way to integrate this information?
3. Currently, the method trains GNN first and then AGGB. Is it possible to introduce some AGGB mechanisms when pre-training GNN?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed review and valuable questions. We have reordered your questions since we believe Q1 and Q3 are closely related. We hope the following responses adequately address your concerns.
**Q2. Does the current design of AGG$_B$ which relies on all preceding representations lead to information redundancy or introduce unnecessary noise? Is there a more streamlined way to integrate this information?**
Thank you for raising this point.
There can be information redundancy in our design, but our rationale for integrating representations from all previous layers is to minimize the propagation of structural discrepancies.
Ideally, AGG$_B$ would fully correct inconsistencies at each layer; however, in practice, some discrepancies may remain unresolved in intermediate layers and can accumulate in deeper ones.
Relying solely on the immediately preceding layer $H^{(l-1)}$ risks carrying these issues forward, whereas leveraging all prior representations $H^{(0;l-1)}$ allows AGG$_B$ to access earlier, less corrupted information for more robust corrections.
For detailed analysis, we have strengthened our ablation study ([link - Table A, B](https://shorturl.at/2cblM)) (a) by including a single-layer variant $(D+I)^{-1}H^{l-1}W^{l}$ of AGG$_B$ and (b) running it for all datasets (beyond the 4 originally used).
Although this variant improves GNN performance in most cases, our original design consistently yields stronger results across all datasets and achieves the highest overall ranking.
Developing a more streamlined integration that minimizes information redundancy and noise is a promising future direction of this work.
**Q1. If there is still a large deviation between Q and the true distribution P, is it reasonable to assume that Q is sufficiently close to P?**
Thank you for this insightful question.
The assumption $Q \approx P$ is used in our approximation between Eq. (3) and Eq. (4):
$$
E_{P}[ \log Q(y_i |G_i) - \log Q(y_i|\tilde{G_i})] \approx D_{KL} (Q(y_i |G_i) \Vert Q(y_i|\tilde{G_i}))
$$
This approximation is adopted because the true distribution P is inaccessible.
We agree that assuming $Q \approx P$ is generally not valid when Q significantly deviates from P.
However, as the training proceeds with the bias term $D_{KL}(P(y_i|G_i) \Vert Q(y_i|G_i))$ optimized, it brings $Q$ closer to $P$, at least on the training distribution.
Furthermore, our two-step training scheme leverages this assumption in the loss function only after the base GCN is trained, which makes the assumption more reliable in practice.
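As a small numeric sanity check (our own illustration, with made-up distributions): when $Q$ is close to $P$, the expectation $E_{P}[\log Q(y|G) - \log Q(y|\tilde{G})]$ is indeed close to $D_{KL}(Q(y|G) \Vert Q(y|\tilde{G}))$.

```python
import numpy as np

# Categorical distributions over C = 3 classes (illustrative values only).
P  = np.array([0.70, 0.20, 0.10])   # true distribution (inaccessible in practice)
Q  = np.array([0.68, 0.22, 0.10])   # model output on the original graph, Q ~ P
Qt = np.array([0.50, 0.30, 0.20])   # model output on the reduced graph

lhs = np.sum(P * (np.log(Q) - np.log(Qt)))   # E_P[log Q(y|G) - log Q(y|G~)]
kl  = np.sum(Q * (np.log(Q) - np.log(Qt)))   # D_KL(Q(y|G) || Q(y|G~))
# The two quantities differ only by how the log-ratio is weighted (P vs. Q),
# so they agree closely whenever Q approximates P.
```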
To support the reliability of our framework, we conducted an experiment to indirectly assess the true distribution $P$ using the labels, via the formulation below.
$$
\frac{1}{N} \sum_{i=1}^N \sum_{c=1}^{C} y_i(c) (\log Q(c|G_i) - \log Q(c|\tilde{G}_i))
$$
Although this expression uses one-hot ground truth labels (thus only sampling one class per node) and the labels include noise, it still offers a useful proxy to assess the validity of the approximation.
Our results ([link - Figure A](https://shorturl.at/MeZ73)) show that its shape closely mirrors that of the approximated KL divergence, indicating that our approximation captures trends well.
Furthermore, AGG$_B$ shows robust optimization behavior even under this label-based approximation, demonstrating its effectiveness in terms of robustness optimization.
We will include this discussion in the final version.
**Q3. Currently, the method trains GNN first and then $AGG_B$. Is it possible to introduce a mechanism for training $AGG_B$ when pre-training GNN?**
This is an insightful question that relates closely to the assumption discussed above.
The assumption $P \approx Q$ is used once more in the bias term of Eq. (5), due to the inaccessibility of the true distribution:
$$
D_{KL} (P(y_i |G_i) \Vert Q_B(y_i|G_i)) \approx D_{KL} (Q(y_i |G_i) \Vert Q_B(y_i|G_i))
$$
In our two-step scheme, the pre-trained $Q$ is optimized to be close to $P$, making this approximation more valid, especially when restricted to training samples (Sec. 4.3).
However, if AGG$_B$ were trained jointly with the GNN from scratch, $Q$ would initially deviate significantly from $P$, and the robustness-controlled loss could interfere with the optimization of the standard loss, leading to suboptimal guidance.
Nonetheless, we agree that incorporating AGG$_B$ into the end-to-end training of GNNs would enhance the applicability and elegance of our approach.
As noted in our conclusion, this is a promising direction for future work, and we are actively exploring strategies to integrate it into joint training.
Thank you again for your insightful reviews and for taking the time to read our response. | Summary: This paper revisits DropEdge, a data augmentation technique for GNNs that randomly removes edges to enhance robustness. While DropEdge helps mitigate overfitting, its performance gains in supervised learning are limited due to an inherent inductive bias in GNN architectures. To address this, the authors propose Aggregation Buffer ($AGG_B$), a parameter block that improves GNN robustness and enhances DropEdge’s effectiveness. $AGG_B$ can be integrated as a post-processing step in any GNN model. Empirical results on 11 node classification benchmarks show that $AGG_B$ significantly improves accuracy and mitigates degree bias and structural disparity. The paper provides a theoretical analysis of DropEdge’s limitations and demonstrates that $AGG_B$ serves as a unifying solution to structural inconsistencies in graph data.
Claims And Evidence: The proposed \( AGG_B \) takes \( H^{(0:l-1)} \) as input, rather than the standard 1-hop neighborhood representation \( H^{l-1} \). It is well known that incorporating \( H^{(0:l-1)} \) can enhance performance, as demonstrated in works like JKNet. However, this raises an important question: **Is the performance improvement due to the aggregation function and loss function introduced by the authors, or simply due to the use of \( H^{(0:l-1)} \)?**
To clarify this, I strongly recommend an **ablation study** where everything remains unchanged except that \( H^{l-1} \) is used instead of \( H^{(0:l-1)} \). This would help isolate the true contribution of \( AGG_B \). Without this analysis, the authors' claim remains inconclusive.
Methods And Evaluation Criteria: 1. Novelty of the method is limited. For example, two stage training with dropedge is proposed in Tuneup [1].
2. the Chameleon dataset used in the paper is known to be problematic[2]. Please use the filtered dataset instead.
[1] Hu, Weihua, et al. "TuneUp: A Simple Improved Training Strategy for Graph Neural Networks." arXiv preprint arXiv:2210.14843 (2022).
[2] Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?." arXiv preprint arXiv:2302.11640 (2023).
Theoretical Claims: No, i did not check the proofs in the appendix.
Experimental Designs Or Analyses: 1. The performance improvement introduced by \( AGG_B \) appears to be marginal, with gains of less than 1% on many datasets. Considering the standard deviation, it remains inconclusive whether \( AGG_B \) is genuinely effective.
- I suggest the authors evaluate \( AGG_B \) on deep GNNs, as DropEdge tends to perform better with increased depth. This would provide a clearer understanding of its impact.
2. The authors make claims regarding heterophilous graphs, yet they only test on two heterophilous datasets, and one of which is problematic.
- A broader evaluation on more diverse heterophilous datasets is necessary to support these claims.
3. Given the marginal performance improvements and large standard deviations, it would be beneficial to include experiments on larger-scale datasets to assess \( AGG_B \)’s scalability and effectiveness.
Overall, the results presented are not convincing, and additional experiments—especially on deeper GNNs, more heterophilous datasets, and larger-scale benchmarks—are necessary to substantiate the claims.
Supplementary Material: Appendix A
Relation To Broader Scientific Literature: Two stage training of GNN with the use of Dropedge is proposed in Tuneup. The training framework is similar.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: The writing and presentation are clear and well-structured.
Other Comments Or Suggestions: N.A.
Questions For Authors: please see weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful review. Our response is organized around three points: (1) novelty, (2) ablation study on the AGG$_B$ design, and (3) additional experiments.
**Q1. The novelty of the method is limited. For example, two-stage training with DropEdge is proposed in TUNEUP.**
There were previous works that considered degree bias and structural disparity as separate issues.
Our main contribution is to offer a new perspective, reframing these issues as instances of a broader problem: **structural inconsistency**.
We propose AGG$_B$, which directly addresses this general problem.
To the best of our knowledge, no previous work has solved these two problems at once, or even considered them together, yet our AGG$_B$ consistently and significantly outperforms the approaches designed specifically for each individual problem.
TUNEUP performs two-stage training to utilize the pseudo-labels from a classifier for semi-supervised node classification.
They use DropEdge to reduce the overfitting caused by the imperfect knowledge in pseudo-labels; our approach is entirely different.
Our two-stage training is motivated by our theoretical results showing that existing GNNs are inherently unable to resolve structural inconsistency.
To bypass the inherent limitation of GNNs, we separate robustness optimization into a second stage and introduce a carefully designed parameter block, AGG$_B$, specifically tailored to optimize robustness effectively.
**Q2. Is the performance improvement simply due to the use of $H^{(0:l-1)}$?**
No. In response to your suggestion, we strengthened our ablation study ([link - Table A, B](https://shorturl.at/2cblM)) by including a single-layer variant, $(D+I)^{-1}H^{l-1}W^{l}$ and extending across all datasets (beyond the 4 originally used).
This variant satisfies conditions C1 and C2 while limiting the usable information to the immediate previous layer.
The results show that while alternative layer architectures provide gains under our training scheme and loss, our original AGG$_B$ consistently works the best.
It is also noteworthy that this experiment has a comparison with JKNet-style block, which also uses the information from all previous layers, and our AGG$_B$ consistently outperforms it.
Following your concerns, we also replaced Chameleon with its filtered version and added filtered Squirrel. The overall performance trends remain consistent.
In fact, the reason why we chose to use all previous layers is not because of performance.
If AGG$_B$ fails to fully resolve the structural discrepancies at intermediate layers, these inconsistencies may propagate into deeper layers, making the effect of AGG$_B$ partially reflected.
By referencing earlier representations, AGG$_B$ can access less corrupted information.
We will add more in-depth discussion on the choice of our parameter block in the final version.
**Q3. Marginal performance improvements; additional experiments are required**
We believe that a difference in experimental setups should be considered when evaluating the significance of improvements.
Unlike many studies that train GNNs from scratch, our method is applied to trained GNNs.
Few approaches—such as curriculum learning (e.g., TUNEUP) or graph augmentation (e.g., GraphPatcher)—operate in this setting.
While prior works often fix hyperparameters for base GNNs, we performed extensive grid searches, demonstrating that our method yields gains beyond what hyperparameter tuning can achieve—by addressing fundamental architectural limitations.
Although this setup may boost base accuracy and lead to smaller apparent gains, we believe it more accurately reflects the true effectiveness and robustness of our method.
Following your advice, we conducted three additional experiments:
(1) **Performance under edge removal** ([link - Table C](https://shorturl.at/GjlH9)) :
We directly evaluated robustness driven by AGG$_B$ under random edge removal. AGG$_B$ significantly outperformed standard GCNs—even those trained with DropEdge—demonstrating its edge robustness.
(2) **Experiments with deeper GCNs** ([link - Table D](https://shorturl.at/bhlls)) :
AGG$_B$ improved performance in 28 out of 30 configurations, with larger gains at greater depths, where increased aggregation makes models more vulnerable to structural inconsistency. Even on GCNs trained with DropEdge, AGG$_B$ boosted performance in 28 out of 30 cases, highlighting its distinct mechanism.
(3) **Experiments on larger datasets** ([link - Table E, F](https://shorturl.at/DvhF3)) :
On larger datasets, AGG$_B$’s performance remained consistent with earlier observations, underscoring its broad applicability.
We appreciate your valuable insights and important concerns about our work. We will incorporate these additional findings in the final version. Thank you again for your detailed feedback. | Summary: This paper analyzes DropEdge, which is widely used in GNNs. It shows that DropEdge has limited effectiveness for GNNs. Through theoretical analysis, the authors show the limitation comes from fundamental constraints in GNN architectures. They propose "Aggregation Buffer" (AGGB), a parameter block that can be added to any pre-trained GNN and trained with DropEdge. AGGB addresses structural inconsistencies in graphs, improving performance on 11 benchmark datasets while effectively mitigating common GNN problems like degree bias and structural disparity.
## After rebuttal
I am satisfied with the authors' response.
Claims And Evidence: I found the following claims in the paper.
1. DropEdge has limited performance gains in supervised learning tasks due to fundamental limitations in GNN architectures. This has been well verified with both theoretical analysis and empirical evidence
2. The limited effectiveness of DropEdge stems from the AGG operation in GNNs and its inability to maintain consistent representations under structural perturbations. The authors develop a theoretical framework using discrepancy bounds (Theorems 3.8 and 3.9) to show that unlike MLPs, GCNs cannot establish a constant discrepancy bound independent of input when adjacency matrices differ.
3. Aggregation Buffer (AGGB) effectively addresses GNN limitations. This is also well supported with both theory and empirical results.
4. AGGB consistently improves performance across different GNN architectures and datasets. Comprehensive results in experiment study support this claim.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this paper are well-justified and appropriate for addressing GNN robustness issues. The Aggregation Buffer design directly targets the identified theoretical limitations in GNN architectures with its degree-normalized linear transformation satisfying both edge-awareness and stability conditions. The two-step training approach is new and sound.
For evaluation, the paper employs a comprehensive approach using 11 diverse benchmark datasets, comparing against relevant baselines. The thorough ablation studies examine different AGGB architectures, loss functions, hyperparameter variations, and component contributions.
Theoretical Claims: Yes, I briefly checked the theoretical claims, but not in detail. I did not find errors.
Experimental Designs Or Analyses: Yes. I am convinced with the experimental design. Almost all benchmark datasets with different sizes are used. DropEdge and other drop based methods like Drop Message are included as baselines. I am convinced by the experiments. The ablation studies and in-depth analysis are also comprehensive.
I have a minor concern/suggestion.
The authors mention that they "use a 10%/10%/80% split for training, validation, and test, which is common for semi-supervised learning." I suggest the authors revise this sentence, since this setting is not the usual one in the literature. For Cora, for example, there are typically two settings: one with 20 training samples per class; the other with 1000 nodes for testing, 500 for validation, and the rest for training.
Supplementary Material: I checked the appendix. The code is also well-documented.
Relation To Broader Scientific Literature: This paper's work on improving GNN robustness through Aggregation Buffer connects to several research threads in graph learning. It extends theoretical understanding of DropEdge by providing a formal analysis of its limitations, building on discrepancy bound concepts commonly used in domain adaptation. The paper's findings on GNN vulnerability to structural perturbations align with literature on GNN over-smoothing. Its two-stage training approach with parameter freezing shares conceptual similarities with knowledge distillation techniques. By framing various structural inconsistency problems (degree bias, heterophily) as manifestations of edge-robustness limitations, the paper offers a unifying perspective that connects previously separate research directions in GNN architecture design.
Essential References Not Discussed: I think the related work part is already good. There are no other essential references that should be discussed.
I think it would be better if the author also included some discussions on general data augmentation techniques in graphs, not limited to random drop-based. For example, the ones based on mixup.
[1] G-Mixup: Graph Data Augmentation for Graph Classification (ICML 22)
Besides, for the drop-based ones, I also suggest adding some references about adaptive dropping, like
[1] Robust Graph Representation Learning via Neural Sparsification [ICML 20]
[2] Learning to Drop: Robust Graph Neural Network via Topological Denoising (WSDM 21)
[3] xAI-Drop: Don't use what you cannot explain (LOG 24)
Other Strengths And Weaknesses: Strengths
1. Strong theoretical foundation that connects the empirical limitations of DropEdge to fundamental properties of GNN architectures
2. Novel characterization of the bias-robustness trade-off in GNN training that explains previously observed phenomena
3. The proposed AGGB is simple yet effective, requiring minimal computational overhead
4. Extensive empirical validation across diverse datasets demonstrates practical utility
Weaknesses
1. The paper focuses exclusively on node classification tasks; application to other GNN tasks (e.g., graph classification, link prediction) is not explored
2. The theoretical analysis primarily focuses on GCNs; stronger theoretical connections to other GNN architectures would strengthen the paper
Other Comments Or Suggestions: I don't have other comments.
Questions For Authors: 1. the current paper focuses on node classification tasks, but could the Aggregation Buffer approach be effectively applied to other common GNN tasks such as graph classification and link prediction? Would the theoretical insights about discrepancy bounds and the bias-robustness trade-off transfer directly to these tasks, or would they require significant adaptation?
2. the theoretical analysis primarily focuses on GCNs, to what extent can your findings regarding discrepancy bounds be generalized to other GNN architectures like GAT, GraphSAGE, or GIN?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed review and insightful questions. We hope the responses below address your concerns.
**Revising the sentence explaining the way of data split**
Thank you for your thoughtful suggestion. We acknowledge that for datasets such as Cora, Citeseer, and Pubmed, the settings you mentioned are more commonly used.
In the final version, we will revise it to:
"Experiments are repeated 10 times, each with an independently randomized 10\%/10\%/80\% split for training, validation, and test, respectively."
We hope this clarification addresses your concern.
**Discussions related to other data augmentation techniques in graph, non-dropping(e.g G-Mixup) and adaptive dropping(e.g NeuralSparse and xAI-Drop) methods.**
G-Mixup is inspired by Mixup in computer vision and generates synthetic graphs by interpolating between estimated graphons of different groups. While effective for graph classification, it is not directly applicable to node-level tasks—our focus—since node classification typically involves a single large graph rather than many samples.
NeuralSparse trains a dedicated sparsification network to remove task-irrelevant (noisy) edges deterministically. In this sense, it functions more as an architectural variant preceding the GNN than as a stochastic augmentation method.
xAI-Drop drops nodes with low fidelity scores with high probability and can be viewed as a parametric, biased variant of DropNode. This method also produces reduced subgraphs (Definition 3.2), and such adaptive strategies could replace DropEdge in our framework to yield richer structural signals and potentially boost AGG$_B$'s performance.
We view this integration as a promising direction for future work and will include this discussion in the final version.
**W1. Could the Aggregation Buffer approach be effectively applied to other common GNN tasks such as graph classification and link prediction?**
For graph classification, the readout function aggregates node representations into a graph-level output.
We can view the input as a set of rooted subgraphs, $S\_i ={\\{G_j\\}}^{m}\_{j=1}$, where $m$ is the number of nodes in graph $i$.
The classification objective can then be expressed as $D_{KL}(P(y_i|S_i)||Q(y_i|S_i))$.
Since DropEdge produces a set of reduced subgraphs $\tilde{S}_i$, our bias–robustness framework can be applied similarly.
For link prediction, framed as a binary classification task for edge existence, the model input can be considered as a pair of rooted subgraphs, $\pi_{(u, v)} =\{G_u, G_v\}$.
The objective becomes $D_{KL}(P(y_{(u,v)}|\pi_{(u,v)})||Q(y_{(u,v)}|\pi_{(u,v)}))$, with data augmentation defined analogously on, $\tilde{\pi}_{(u,v)}$.
Although these tasks share the bias–robustness trade-off perspective, each requires task-specific adaptations. In graph classification, the set of node representations at each intermediate layer varies with node dropping, making discrepancy between different-sized sets hard to define. In link prediction, operations such as dot product, absolute difference, Hadamard product, or concatenation followed by an MLP require separate theoretical treatment. Thus, while the bias–robustness trade-off view can be extended, AGG$_B$’s layer-wise correction mechanism demands new theoretical insights and task-specific methods—a promising direction for future work.
**W2. To what extent can your findings regarding discrepancy bounds be generalized to other GNN architectures?**
Thank you for this question. We extended our discrepancy analysis beyond GCNs by considering the more generalized layer-wise update:
$$
\mathbf{H}^{(l)} = \sigma(\mathbf{A}^* \mathbf{H}^{(l-1)} \mathbf{W}_1^{(l)} + c \mathbf{H}^{(l-1)} \mathbf{W}_2^{(l)}),
$$
where $A^*$ is a (possibly normalized) adjacency matrix and $c$ is a constant. Under this formulation, Theorem 3.9 still holds, with constants $C_1$ and $C_2$ depending on the model parameters and adjacency differences; this can be shown by extending the arguments in our proof in Appendix D.
This abstraction encompasses a broader range of GNN architectures. For instance, inductive GCN corresponds to $A^* = (D+I)^{-1}A$ and $c=0$; GraphSAGE uses $A^* = D^{-1}A$ and $c=1$; GIN sets $A^* = A$ and $c = 1 + \epsilon^{(l)}$; and GAT corresponds to $c=0$, where we can bound the norm of $A^*$ due to its row-stochasticity.
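To make these instantiations concrete, here is a small NumPy sketch of the generalized update $\mathbf{H}^{(l)} = \sigma(\mathbf{A}^* \mathbf{H}^{(l-1)} \mathbf{W}_1^{(l)} + c \mathbf{H}^{(l-1)} \mathbf{W}_2^{(l)})$; the weights, activation, and toy graph are arbitrary placeholders, not part of the rebuttal.

```python
import numpy as np

def generic_layer(A_star, H, W1, W2, c, act=np.tanh):
    """One layer of the generalized update: sigma(A* H W1 + c H W2)."""
    return act(A_star @ H @ W1 + c * (H @ W2))

# Toy 3-node graph; D is the degree matrix, I the identity.
A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
D = np.diag(A.sum(axis=1))
I = np.eye(3)

# The instantiations listed above (eps for GIN chosen arbitrarily).
configs = {
    "inductive-GCN": (np.linalg.inv(D + I) @ A, 0.0),
    "GraphSAGE":     (np.linalg.inv(D) @ A, 1.0),
    "GIN (eps=0.1)": (A, 1.0 + 0.1),
}
H, W1, W2 = np.ones((3, 2)), np.eye(2), np.eye(2)
outs = {name: generic_layer(A_s, H, W1, W2, c) for name, (A_s, c) in configs.items()}
```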
Thus, our theoretical findings regarding discrepancy bounds apply to a wide range of GNN architectures. A formal proof for these cases will be included in the final version.
Thank you again for your insightful review and for taking the time to consider our responses. | null | null | null | null | null | null |
Predicting the Susceptibility of Examples to Catastrophic Forgetting | Accept (poster) | Summary: This paper reports on a large volume of observations regarding the learning speed and catastrophic forgetting in the context of continual learning, and proposes a new sampling strategy called SBS to improve the replay-based continual learning methods. The experiments consistently show the effectiveness of the proposed speed-based sampling, which highlights the practical value of the proposed strategy.
Claims And Evidence: Most of the claims are based on empirical study, and I don't have any objections to them. However, as they are based on empirical observation, the academic value of the work is limited. In particular, the sizes of $q$ and $s$ should be derived more theoretically for the method.
Methods And Evaluation Criteria: The datasets used are popular in the field of continual learning, and all the settings follow the standard convention. However, in some experiments, the hyperparameters need to be justified objectively.
Theoretical Claims: There's no theoretical claim in this paper.
Experimental Designs Or Analyses: The overall design of the experiments and analyses follows the standard convention.
Supplementary Material: I just looked over the overall things in the supplements, and couldn't find any serious faults.
Relation To Broader Scientific Literature: This is specific to the continual learning and less room for broader scientific literature.
Essential References Not Discussed: Most of the important references are discussed in the paper.
Other Strengths And Weaknesses: The strongest point of the paper is its extensive empirical study and interesting observations. However, despite the extensive effort, the academic value is limited to the empirical study. The authors should investigate the proposed method in more depth and provide a theoretical justification for the proposed strategy.
Other Comments Or Suggestions: None
Questions For Authors: - How can you determine the optimal values for q and s without such an arbitrary task?
- What is the reason that learning speed predicts catastrophic forgetting?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our extensive empirical study and the practical value of our findings.
Regarding the connection between learning speed and catastrophic forgetting, our primary contribution is identifying and characterizing this phenomenon. While a formal theoretical analysis is an exciting future direction, many fundamental insights in machine learning (e.g., double descent, simplicity bias) were first observed empirically before theory followed. Despite the lack of theory, our results have several more intuitive explanations: later-learned examples likely depend on more complex or composite features, making them more susceptible to forgetting when new tasks are introduced, as those features are more likely to break first. Another intuition is that a similar pattern occurs in human learning, where foundational skills persist longer than more complex ones.
On the choice of hyperparameters $q$ and $s$, our goal with SBS is to demonstrate that leveraging learning speed can improve continual learning across different algorithms and settings. While a more theoretically derived selection method could be valuable, it is non-trivial and beyond this paper’s scope. Instead, we propose a practical heuristic -- using a related self-supervised task (e.g., RotNet) to tune these parameters without additional labels. While RotNet was chosen arbitrarily, our experiments show that this approach consistently finds the best hyperparameters across all the setups we evaluated.
---
Rebuttal Comment 1.1:
Comment: I agree that theoretical justification may follow the empirical study, and in terms of completeness I wanted to note the necessity of such work for the future. In that sense, this work is on its way in that direction, but still a little premature with respect to that goal. | Summary: The manuscript addresses the challenge of selecting the most relevant examples to store in a memory buffer for rehearsal-based continual learning. Based on a preliminary analysis of the speed at which examples are learned and forgotten, the manuscript finds that the most complex samples are the fastest to be forgotten. Based on this, it introduces Speed-Based Sampling (SBS), a sampling strategy that excludes the $q$ fastest and $s$ slowest learned samples from the selection.
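For concreteness, one plausible reading of the SBS selection step as described above (a sketch under our own assumptions, not the authors' implementation): rank examples by a per-example learning-speed score and exclude the $q$ fastest- and $s$ slowest-learned fractions before filling the buffer.

```python
import numpy as np

def sbs_candidates(learning_speed, q=0.2, s=0.2):
    """Indices eligible for the replay buffer under a Speed-Based Sampling sketch.

    learning_speed: per-example score, e.g., the epoch at which each example was
    first learned (lower = learned faster). Excludes the fraction q of
    fastest-learned and the fraction s of slowest-learned examples; buffer
    samples would be drawn from the remaining indices.
    """
    order = np.argsort(learning_speed)      # fastest-learned first
    n = len(order)
    lo, hi = int(q * n), n - int(s * n)
    return order[lo:hi]

speed = np.array([1, 9, 3, 7, 5, 2, 8, 4, 6, 10])
eligible = sbs_candidates(speed, q=0.2, s=0.2)
```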
Claims And Evidence: The claims are supported by a thorough analysis.
Methods And Evaluation Criteria: The benchmarks are adequate.
Theoretical Claims: N/A
Experimental Designs Or Analyses: While the design of the benchmarks is adequate, I believe the manuscript should compare against other methods that seek to retain complex or diverse samples from the stream, such as [1,2]. Indeed, while the proposal addresses this by measuring the "learning speed" of each sample, [1] uses the loss of the examples as a proxy for complexity.
[1]: Buzzega, Pietro, et al. "Rethinking experience replay: a bag of tricks for continual learning." ICPR 2021.
[2]: Bang, Jihwan, et al. "Rainbow memory: Continual learning with a memory of diverse samples." CVPR 2021.
Supplementary Material: Yes, in particular the graphs regarding the preliminary analysis in the CIL setup.
Relation To Broader Scientific Literature: The considerations regarding the speed at which the examples are learned are based on existing literature and the results are known in literature [1,2]. However, the proposed SBS seems novel and relevant for the CL field.
[1]: Maini, Pratyush, et al. "Characterizing datapoints via second-split forgetting." NeurIPS 2022.
[2]: Millunzi, Monica, et al. "May the Forgetting Be with You: Alternate Replay for Learning with Noisy Labels." BMVC 2024.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1: As mentioned in lines 389-391, the results presented in Tab. 1 for the proposed SBS are obtained with the RotNet auxiliary task. However, given the high computational cost of training the network multiple times, a fairer comparison should use the fixed values for $q$ and $s$, as mentioned in l379. Indeed, while the manuscript mentions that "q=s=20% consistently enhances performance across all evaluated datasets", I did not find results to support such a claim. On the same note, I suggest including the computational cost (in terms of compute time) of SBS with the RotNet auxiliary task.
2: It was not clear to me if the experiments of Tab. 1 (main paper) and 2 (supplementary) were conducted on the TIL or CIL setting. From Sec 2.2 it seems to me that both figures and tables in the main paper are for TIL and that results for CIL are in the supplementary. However, App. B only mentions results for the qualitatives and the figures and I could not find the results for the main experiments in the CIL setting.
3: I did not understand the reasoning behind removing the $s$ slowest-learned examples from the dataset. According to the preliminary analysis, if the motivation is to retain the most complex samples, why would we need to discard the ones that would be the first to be forgotten?
Other Comments Or Suggestions: N/A
Questions For Authors: Overall the manuscript is well written and mostly easy to read and understand. I would consider raising my score upon addressing the concerns in “Other Strengths And Weaknesses" and the lack of similar methods that seek to retain complex or diverse samples from the stream.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for your willingness to consider raising your score. Below, we address your concerns in detail.
**concern 1**
The advantages of picking $q=s=20\%$ across different settings are scattered throughout the figures of the paper: see Figs 4(a-b), 16(a-b), 17(a-b), 18(a-c), 19(a-c), 20(a-c), 21(a-c) and 24(a-c). However, we will add a dedicated appendix section explicitly showcasing results for picking $q=s=20\%$, hopefully making this point clearer.
Additionally, we will add a new row to Table 1, reporting the results of SBS with these fixed hyperparameters, showing the advantages that can be gained by SBS with and without the hyper-parameter search. The results that will be added are:
|buffer|1k|10k|
|-|:-:|:-:|
|CIFAR-100-20|54.65|73.72|
|CIFAR-10-5|82.24|86.15|
|TinyImageNet-2|52.15|63.12|
to compare, the baseline results of random sampling (Table 1 in the original manuscript) are:
|buffer|1k|10k|
|-|:-:|:-:|
|CIFAR-100-20|51.75|71.25|
|CIFAR-10-5|79.48|82.4|
|TinyImageNet-2|49.81|61.48|
While removing hyperparameter tuning slightly reduces performance, SBS still consistently outperforms other methods across all settings. This demonstrates that SBS is effective even without extensive tuning.
We will also add computational cost for both the RotNet-based version and the fixed-hyperparameter version in the appendix to provide a complete picture.
**concern 2**
Thank you for catching this oversight. You are correct that Table 1 (main paper) and Table 2 (supplementary) both report results for TIL. We also conducted experiments for CIL, and while they were qualitatively similar (though lower in absolute performance due to the increased difficulty of CIL), they were mistakenly omitted from App. B.
We will correct this in the camera-ready version by including the CIL results in App. B.
**concern 3**
This is an interesting question. While we do not have a formal theoretical proof, we can provide an intuitive explanation based on our observations.
While the training error of networks tends to go to $0$, meaning that every example in the dataset is going to be learned at some speed, the test error often does not. The training examples that are learned slowest are often those that correspond to points near test points that the model simply did not learn, meaning they are inherently difficult for the model to generalize to, even in non-continual settings. Therefore, in CL settings, where the replay buffers are limited, we want to focus on examples that the model can generalize from, and removing the slowest-learned examples helps allocate space to more useful ones, improving overall performance. However, we note that the better the model can perform on the original task, the smaller the number of these unhelpful slower-to-learn examples, allowing us to remove fewer of them to get better performance.
**comparison to other methods**
You suggested comparisons with methods that retain complex or diverse samples, specifically Rainbow Memory (RM) [1] and LARS [2]. Below, we summarize how our method relates to them and provide additional quantitative comparisons.
**Rainbow Memory (RM)**
The original RM paper focuses on blurry-CIL settings, whereas our study considers disjoint settings. The RM paper itself notes that RM does not consistently improve performance in disjoint settings, often performing similarly to random sampling. Since SBS outperforms random sampling, this suggests that SBS is superior to RM in these settings.
That said, we acknowledge RM’s potential value and have already discussed its relation to SBS in App. G. We will make this connection more explicit in the main paper and include RM results in Table 1 for direct comparison.
**LARS (from BAGS of Tricks)**
LARS is presented in [2] as one of several "tricks" to improve CL and was not presented as a stand-alone sampling method expected to improve performance across different settings. LARS samples the buffer randomly, but removes examples from it in a loss-aware manner, where examples with low loss are removed more frequently. We agree that adding a comparison to LARS can strengthen our work, and we added such a comparison to the paper. To compare to SBS, we isolated LARS from the rest of the "tricks" suggested in [2], and evaluated it on the different datasets and buffer sizes in Table 1. Below are the results:
|buffer|1k|10k|
|-|:-:|:-:|
|CIFAR-100-20|51.82|71.41|
|CIFAR-10-5|79.17|84.93|
|TinyImageNet-2|50.22|62.78|
While LARS improves over random sampling in most cases, its gains are smaller than those achieved by SBS. Following your suggestion, we will add both RM and LARS results to Table 1 in the camera-ready version for a clearer comparison.
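For illustration, the loss-aware removal heuristic described above can be sketched as follows (our reading of the trick, not the reference implementation; the inverse-loss weighting and the function name are assumptions):

```python
import random

def loss_aware_removal_index(losses, seed=None):
    """Pick a buffer slot to overwrite: examples with low loss are
    removed more frequently (weight inversely proportional to loss)."""
    rng = random.Random(seed)
    weights = [1.0 / (loss + 1e-8) for loss in losses]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(losses) - 1  # guard against floating-point round-off
```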
---
[1] Bang, Jihwan, et al. Rainbow memory: Continual learning with a memory of diverse samples. (CVPR 2021)
[2] Buzzega, Pietro, et al. Rethinking experience replay: a bag of tricks for continual learning. (ICPR 2021)

---

Summary: In this work, the authors investigate catastrophic forgetting from a behavioral perspective, observing the connection between learning speed and forgetting: examples learned more quickly tend to be more resistant to forgetting. Motivated by this observation, the paper introduces Speed-Based Sampling (SBS), a simple yet general strategy for replay-based continual learning that selects replay examples based on their learning speed. Experiments show the advantages of the proposed SBS over uniform sampling and other sampling baselines.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, as no theoretical claims are included.
Experimental Designs Or Analyses: Note that the paper highlights the benefits of SBS to existing replay-based continual learning methods. However, the authors only compare their method with uniform sampling when utilizing continual learning methods. To make the claim sounder, the authors should also compare it with other sampling methods when utilizing replay-based continual learning methods.
Supplementary Material: Not attached.
Relation To Broader Scientific Literature: The proposed sampling method is tailored for continual learning tasks.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength:
1. The phenomenon that examples learned later are more likely to be forgotten while earlier learned examples are not is well studied and supported by extensive experiments.
Weaknesses:
1. The proposed method may be invalid for prevailing large-scale training. First, when training with large-scale data, people usually train on the dataset for one or two epochs, making the proposed ‘learning speed score’ inaccurate, as mentioned by the authors. Second, with a large-scale dataset, it would be computationally expensive to run the RotNet auxiliary task on the same dataset for selecting hyperparameters q and s to get the best performance shown in the paper. Although the authors suggest that simply setting q = s = 20% is a generally good choice, the performance of this choice is not clear in the paper.
2. The paper emphasizes the benefits of SBS to existing replay-based continual learning methods, but the authors only compare their method with uniform sampling when utilizing continual learning methods.
Other Comments Or Suggestions: Refer to Weaknesses part.
Questions For Authors: 1. In Table 1, methods ‘Max Ent’ and ‘IPM’ are almost consistently worse than random sampling, which seems unusual. Do you have any insights into this phenomenon?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate that you found our analysis comprehensive and our results well-supported. Below, we address your comments in detail:
**Comparison with non-uniform sampling in continual learning methods**
Most competitive continual learning methods rely on random sampling [1] because non-uniform sampling strategies generally fail to provide consistent improvements across continual learning methods and settings. SBS is, to our knowledge, the first sampling method that consistently enhances different continual learning methods, making this a key contribution of our work.
Some methods, like iCaRL [2], incorporate specific non-random sampling strategies (herding for iCaRL), but these are tightly coupled to the method itself and do not generalize well. For instance, herding does not work effectively with most competitive continual learning methods, while iCaRL is designed around herding and performs poorly with other sampling methods, including random sampling. However, for completeness, we will follow your suggestion and include additional results on continual learning methods with different sampling functions in the supplementary material and briefly discuss them in the main paper.
**Max entropy and IPM performance**
Max entropy (Max Ent) selects examples with the highest entropy in network logits. While a common baseline, it is rather a naive baseline, and it is known to perform poorly in practice, as also observed in our results.
IPM [3] selects a diverse subset of examples from a dataset, and tries to maximize the information that this set has. The original IPM paper was not focused on continual learning, and showed advantages of this subset in other fields, such as active learning, representation learning, and GAN training. However, prior work [4] indicates that its effectiveness in continual learning is inconsistent, which aligns with our findings. A possible explanation is that IPM prioritizes "informative" examples, which often overlap with slower-to-learn examples. Since our analysis shows that slower-to-learn examples are more prone to forgetting, this selection bias may explain IPM’s weaker performance. Interestingly, IPM performs better in settings where learning is easier or buffer sizes are large (e.g., CIFAR-100-20 with a 10k buffer), which is consistent with this hypothesis.
**Choice of hyper-parameters**
As noted in our response to reviewer hXyj, the performance of $q=s=20\%$ is already present in multiple figures throughout the paper, including Figs. 4(a-b), 16(a-b), 17(a-b), 18(a-c), 19(a-c), 20(a-c), 21(a-c), and 24(a-c), showing its advantages over random sampling. However, to make this clearer for future readers, we will add a dedicated paragraph consolidating these results in the camera-ready version. Additionally, we will include SBS results with these fixed hyper-parameters in Table 1, demonstrating that they still significantly outperform random sampling without requiring additional tuning or additional computation. The exact numeric results for $q=s=20\%$ that will appear in Table 1 are also provided in our response to reviewer hXyj.
--------------------------------------
[1] Wang, Liyuan, et al. "A comprehensive survey of continual learning: Theory, method and application." (PAMI 2024)
[2] Rebuffi, Sylvestre-Alvise, et al. "icarl: Incremental classifier and representation learning." (CVPR 2017)
[3] Zaeemzadeh, Alireza, et al. "Iterative projection and matching: Finding structure-preserving representatives and its application to computer vision." (CVPR 2019)
[4] Brignac, Daniel, Niels Lobo, and Abhijit Mahalanobis. "Improving replay sample selection and storage for less forgetting in continual learning." (ICCV 2023) | null | null | null | null | null | null | null | null |
---

A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making

Paper Decision: Accept (poster)

Summary: This work studies recent neurophysiological results through the use of computational modeling and reinforcement learning. The authors consider several different variants of the model (varying the connectivity and coding properties) and find that only certain of these models achieve high performance and similarity to neural activations. The authors' work provides several direct predictions that can be tested with new experiments. More generally, the authors' approach prescribes a framework for studying decision-making embedded in spatial navigation.
Claims And Evidence: All claims are supported by some evidence. However, I felt that M3 and M5's conjunctive coding of evidence and position (Figs. 3 and 7) was not convincingly shown. In particular, while it is clear that M1, M2, and M4 do not conjunctively encode evidence and position, the maps of M3 and M5 are not particularly localized (and differ from the maps of Nieh et al. (2021)). The authors compute the mutual information between E x Y and E x RY (and RE x Y) in Fig. 6 - could they do the same with the hippocampal data from Nieh et al. (2021) and compare the distributions? Something that quantifies how similar the E x Y coding in the model is to the hippocampal data would strengthen this claim.
Methods And Evaluation Criteria: The methods and evaluation make sense for the problem and were well motivated.
Theoretical Claims: No theoretical claims were made.
Experimental Designs Or Analyses: The experimental design and analysis were well done and appeared sound.
Supplementary Material: I reviewed the entirety of the Appendices.
Relation To Broader Scientific Literature: The paper explores a surprising neurophysiological result that challenges how the field thinks about place cells and their role in spatial navigation and episodic memory. The authors' result, finding that they expect grid cells to jointly encode space and evidence, is itself surprising and motivates greater study of the MEC-hippocampus-cortex circuit. These points were well made by the paper and the Introduction did a nice job of laying out the motivation.
Essential References Not Discussed: The authors include all "essential" references. However, I do think there are several relevant papers that are worth citing (which, I imagine the authors are familiar with):
1. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007796 - The joint encoding of space and evidence reminds me of this work on how grid cells can encode higher dimensional variables. The random projection method discussed by Klukas and co-authors would be interesting to consider in the context of the authors model.
2. https://proceedings.neurips.cc/paper_files/paper/2017/file/5f14615696649541a025d3d0f8e0447f-Paper.pdf - The use of RNN models to generate hypotheses about navigation seems related to the work of Kanitscheider and Fiete.
3. https://www.cell.com/neuron/fulltext/S0896-6273(11)00609-X (and other time cell related papers) - The idea that hippocampal neurons encode accumulating evidence (and temporal integration) is somewhat reminiscent of hippocampal cells encoding accumulating time.
Other Strengths And Weaknesses: **Strengths:**
1. This paper provides a general framework for testing hypothesis on hippocampal-MEC-cortical interactions.
2. This paper is able to replicate existing neurophysiological results and leads to new predictions on the role of grid cells that can be experimentally tested.
3. This paper is generally well written and motivated, and the experiments are well done.
**Weaknesses:**
1. As noted previously, the major weakness to my mind is that the conjunctive encoding of location and evidence is not convincing from the heat maps alone. Quantifying the extent of this joint encoding would strengthen this claim.
2. The results in Fig. 2A clearly show that only M3 and M5 are able to optimally perform the task. This has clear implications for building an AI system that can solve the task. But the mice in Nieh et al. (2021) don't perfectly solve the task. Looking at Fig. 1b in Nieh et al. (2021), it seems like the mice are solving the task at around 75% accuracy. M0, M0-star, and M4 all solve the task at around 70% by the end of training. I understand that the subsequent results provide additional support for M3/M5 having similarities to the neural recordings, but discussing this aspect of performance (that the mice did not perform the task as well as M3/M5) is important I think.
3. Models M0 and M0-star have the same number of units as the other models. But one thing that was not clear to me was whether they have the same number of trainable weights (since the Vector-Hash model has some fixed weights). Could one reason models M0 and M0-star do not perform as well until later in training be that they have more weights to optimize over?
Other Comments Or Suggestions: 1. Both $p$ and $h$ are used to reference the hippocampal units (i.e., $W_{pg}$ and $W_{hg}$).
2. (Very minor) I had not heard of the word "lacunae" before. Maybe change it to something more common?
3. "theoretical work can leverage this standardized yet rich task and readily test its predictions" - should this be "and its predictions can be tested"?
Questions For Authors: 1. How do the mutual information distributions (for E x Y) compare between real hippocampal data and the models?
2. How closely does the performance of models M0, M0-star, and M4 match that of the mice in Nieh et al. (2021)?
3. How does the number of trainable parameters differ between M0/M0-star and the other models?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their precious time and comprehensive feedback. We deeply appreciate their recognition of the significance of our work. In summary, the reviewer raised an insightful question regarding experimental v.s. model data comparison to strengthen our claims, and our interpretation of task performance, for which we discuss the implication of M5 for studying lapses. We provide results for when we shrink the number of trainable parameters of baselines to match that of M5, which do not alter our conclusion. We have fixed the typos on our end. **Together with the discussion below, does this address the reviewer’s questions?**
> The authors compute the mutual information…in Fig. 6. could they do the same with the hippocampal data from Nieh et al. (2021) and compare the distributions? Something to make it more quantifiable…would strengthen this claim.
Thanks for the insight. We’d like to first refer to Figs S2b, c in Nieh et al., which plot mutual information (M.I.) of hippocampal data, the same as our Fig 6. We share it here for convenience: https://imgur.com/a/acEavag.
This qualitatively matches the results of M3-M5 in Fig 6, showing models with both position and evidence encoded in grid cells give rise to higher ExY M.I. than when one variable is randomized. This makes sense for M4 because M.I. is not a metric for localization. However, as shown in Fig 4, M4 does not match experiments in having choice-specific neurons.
We don’t expect a quantitative match in M.I, because
1. Our environment is discretized;
2. We don’t have the same level of noisiness as the experiments;
3. Real neurons exhibit higher redundancy;
4. The real data is smoothed but we didn’t apply smoothing, hence the scale of M.I. is different. The smoothing also influences the localization in HPC maps.
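For reference, the kind of discrete mutual-information estimate underlying these comparisons can be computed directly from paired, binned samples; a minimal sketch (illustrative only, not the analysis code used in the paper or in Nieh et al.; here `xs` and `ys` would be, e.g., binned evidence and position labels):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X; Y) in bits, estimated from
    paired samples (e.g., binned evidence and position labels)."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) / (p(x) p(y)) = c * n / (count_x * count_y)
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

A shuffle control (as in the RE x Y / E x RY comparisons) would permute one of the two label sequences before calling the same function.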
> M0, M0-star, and M4 all solve the task at around 70% by the end of training…subsequent results provide additional support for M3/M5…but discussing this aspect of performance (that the mice did not perform the task as well as M3/M5) is important I think.
Indeed, the lapses phenomenon in mice (low performance) is an important ongoing research direction.
While early literature proposed lapses could be due to perceptual noise, more recent work like Ashwood et al., Nat Neurosci., 2022 pointed out that mice might be leveraging different strategies characterized by different states, while Pisupati et al., eLife, 2021 proposed the mice are potentially balancing exploration and exploitation when making mistakes. These statistical models have specific parameters to account for lapses based on their hypotheses and are evaluated based on how well they fit the data. While we find the lapses literature important, we don’t think the low performance of M0, M0+, & M4 would be informative for understanding lapses, as you carefully noted that they are limited in reproducing experimental findings.
While studying lapses is beyond our scope, future studies on lapses could leverage M5, validated as being able to reproduce experimental findings. It’d be interesting to investigate what the key ingredients are to observe lapses while preserving the properties we observed. To our knowledge, a mechanistic model for lapses would be fairly novel. **We will include this in our discussion.**
> Could one reason models M0 and M0-star do not perform as well until later in training be that they have more weights to optimize over? How does the number of trainable parameters differ between M0/M0-star and the other models?
This is a fair point. We focused on an equal number of neurons as a fair baseline for expressivity. Here we found the number of parameters trained by back prop doesn't play a key role:
The total number of parameters in an RNN with input size $I$, hidden size $H$, and output size $O$, with bias, is $(IH + HH + H) + (HO + O) = H^2 + H(I+O+1) + O$; under this count, M5 has ~$26,755$ back-prop-trainable parameters. We can apply similar reasoning to M0 and M0+, which have $1,172,843$ and $1,174,995$ gradient-trainable parameters, respectively. To match M5, M0 would need $H \approx 158$ and M0+ would need $H \approx 157$.
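The closed form above can be checked numerically; here is a small sanity check (the example sizes are arbitrary, not the models' actual dimensions):

```python
def rnn_param_count(I, H, O):
    """Trainable parameters of a vanilla RNN with bias: input-to-hidden
    (I*H), hidden-to-hidden (H*H), hidden bias (H), readout (H*O + O)."""
    return (I * H + H * H + H) + (H * O + O)

def rnn_param_count_closed(I, H, O):
    # Equivalent closed form: H^2 + H(I + O + 1) + O
    return H * H + H * (I + O + 1) + O
```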
We ran these mini-models for 3 trials each: the **conclusion in the main paper stays the same**. Please see the anonymized results here with mini-M0 & mini-M0+ (red and orange, w/ learning rate of 1e-5), in comparison with M5 and the original M0 and M0+: https://imgur.com/a/Qr5MmT4
Supplement: mini models w/ other learning rates, no change of conclusions: https://imgur.com/a/vCD2HRG
We notice our use of M0-star & M0+ interchangeably; we will use “M0+” consistently.
> Other Comments
Thanks! We'll ensure a consistent notation `h`. We’ll modify “lacunae” to gaps and correct the sentence.
---
Ashwood, Zoe C., et al. "Mice alternate between discrete strategies during perceptual decision-making." Nature Neuroscience (2022).
Pisupati, Sashank, et al. "Lapses in perceptual decisions reflect exploration." Elife (2021).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses to my questions. I apologize for not noting the MI plot in Nieh et al. - thank you for pointing me to it. I am convinced by the additional experiments with the change in number of trainable parameters for M0 and M0+. And I am glad to hear the authors will add more discussion on lapse of performance - I agree this is a really interesting future direction!
My one remaining question (which I should have put in the question section) is why the authors think the conjunctive tuning in the RNN is not as strong as in Nieh et al. Indeed, to me it looks like the tuning in Fig. 3 is quite weak and diffuse. If you applied smoothing, would they look more like what was seen in Nieh et al.? Or maybe some aspect of the task needs to be changed for stronger tuning? Any thoughts on this, and any added discussion in the paper on the fact that the tuning is weak, would be appreciated.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt and insightful follow-up. We are glad that the reviewer’s questions regarding lapses and the number of trainable parameters in baselines are resolved.
Here we address the follow-up regarding the tuning visualization of HPC activities:
> My one remaining question is (which I should have put in the question section) is why do the authors think the conjunctive tuning in the *HPC* is not as strong as in Nieh et al. Indeed, to me it looks like the tuning in Fig. 3 is quite weak and diffuse. If you applied smoothing, would they more like what was seen in Nieh et al.?
In short, **yes, if we apply smoothing, the conjunctive tuning in HPC looks more localized and stereotypical.** Notably, smoothing neural data is fairly standard in literature, likely for visualization purposes. For example, the Vector-HaSH paper smoothed and interpolated the HPC activities when showing the tuning curve for better visualization (see Fig 4b in Chandra et al., 2025 and their official [code repository](https://github.com/FieteLab/VectorHaSH/blob/main/Grid_place_tuning_curves_and_additional_expts_Fig1_4_6.ipynb)).
Here we demonstrate that applying smoothing enhances the localization of tuning curves in selected neurons from M4, M5: https://imgur.com/a/hIeKSar.
We follow a similar 2-stage processing procedure to Nieh et al. (Mutual Info Analysis in Methods): we apply a 1d Gaussian filter with a $\sigma_1$ of $1$, then threshold the result so that values less than $2$ standard deviations across the time series are set to $0$; we then apply a 1d Gaussian filter with a $\sigma_2$ of $1$ or $2$. We will add the above figure to our Appendix. We appreciate the reviewer’s comment.
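A minimal sketch of this two-stage procedure (pure Python for self-containment; in practice one would use `scipy.ndimage.gaussian_filter1d`, and the convention of zeroing values below `n_std` standard deviations of the filtered trace is our reading of the description):

```python
import math

def gaussian_filter1d(x, sigma, truncate=4.0):
    """1-D Gaussian smoothing with 'nearest' edge padding."""
    r = int(truncate * sigma + 0.5)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    total = sum(k)
    k = [w / total for w in k]
    n = len(x)
    return [sum(w * x[min(max(i + j - r, 0), n - 1)]
                for j, w in enumerate(k)) for i in range(n)]

def two_stage_smooth(trace, sigma1=1.0, sigma2=1.0, n_std=2.0):
    """Filter, zero out sub-threshold values, then filter again."""
    s = gaussian_filter1d(trace, sigma1)
    mean = sum(s) / len(s)
    std = (sum((v - mean) ** 2 for v in s) / len(s)) ** 0.5
    s = [v if v >= n_std * std else 0.0 for v in s]
    return gaussian_filter1d(s, sigma2)
```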
We’d like to emphasize that our mutual information analyses (Fig 6) and the place fields and evidence fields (Figs 4 & 8) provide supporting evidence on the overall HPC (conjunctive) tuning of each model, even though smoothing affects the quality of the activity visualization.

---

Summary: This paper introduces a series of models to investigate how animal brains may solve the accumulating towers task. Beginning with a simple RNN, the authors add model components until they arrive at extensions of the Vector-HaSH model of the hippocampus that also include a cortical model component. The authors then evaluate the models on their ability to solve the accumulating towers task, and also analyze how the added complexity enables increased model performance.
The authors predict that only models that allow conjunctive position-evidence tuning in grid cells exhibit the conjunctive position-evidence hippocampal representations identified in earlier experiments in mice.
## update after rebuttal
I have no remaining questions for the authors. I applaud that the authors commit to releasing the code for their experiments, and are putting effort into making their code understandable and useful for other researchers.
Claims And Evidence: Yes, the main claims are clearly laid out and supported well.
Methods And Evaluation Criteria: Yes, the accumulating towers task is suited well for this investigation.
Theoretical Claims: There are no theorems or proofs.
Experimental Designs Or Analyses: The experiments seem sound.
It would be helpful if the authors could include more information on how hyperparameters for model training were chosen, and how much those hyperparameters affect each of the half-dozen models that the authors built. A few extra sentences in appendix A.1 should be sufficient.
It would be helpful if the authors could include more equations in the paper, likely in an appendix. For example, the CAN equations should be included.
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: This paper uses the Vector HASH method https://www.biorxiv.org/content/10.1101/2023.11.28.568960v2 , a recent and state-of-the-art hippocampal model, as a core component in most of the models investigated. The authors build a series of models and compare their behavior with behavior that was observed in laboratory experiments on mice in earlier work. These models present significant extensions of the Vector HASH method, and advance our understanding of the hippocampus and brain regions that it connects to.
The paper is particularly exciting because the authors pay close attention to existing neuroscience literature, and marry this with complex but meaningful modeling.
Essential References Not Discussed: Relevant literature has been cited.
Other Strengths And Weaknesses: Strengths:
- The paper advances the field of hippocampal modeling by investigating how state-of-the-art existing models of the hippocampus might perform when integrated with other parts of the brain. While the work reviewed here is entirely theoretical (computer models), the work appears to be inspired by recent (2021) measurements taken in brains of mice. The paper’s alignment with existing laboratory measurements on the task that is being investigated (accumulating towers) makes this work particularly exciting. Last: the authors announce in the paper that they are performing experimental work to test hypotheses born out of the models built here.
- The work presented in the paper is quite comprehensive: the authors investigate half a dozen models, all of which are interesting from a computational neuroscience perspective.
- The paper is written well: while the paper is dense, it does a good job of conveying the material. The diagrams showing the model architectures make it easy to understand the experimental set-up.
- The authors highlighted that their work builds on the Vector-HaSH method published earlier. This enables readers who are familiar with the surrounding literature to instantly situate the paper.
Weaknesses:
- It would be helpful if the authors could include more equations in their paper. For example, the CAN or Vector-HaSH equations (which have of course been published elsewhere in papers cited by the paper reviewed here) are so central to the paper that they should in my opinion be included in the paper or in the appendix.
- While the authors promise to release the code for the experiments, they did not include it with the supplementary materials. While authors may be worried about their code getting stolen by reviewers, peer review can inspire improvements to experiment code that benefit the quality of the paper, and make it easier for other research groups to build on the paper, or even just benchmark against the models presented here.
Other Comments Or Suggestions: - It would be helpful if the authors could include more equations in their paper. For example, the CAN or Vector-HaSH equations (which have of course been published elsewhere in papers cited by the paper reviewed here) are so central to the paper that they should in my opinion be included in the paper or in the appendix.
Questions For Authors: The authors mention on line 682 (Appendix A.1) that different learning rates were used for different model configurations, and that a hyperparameter search was performed for some configurations, while other configurations used common “default” hyperparameter configurations. This made me wonder: how much would tuned hyperparameters have helped for models that used default hyperparameter configurations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their precious time, and their comprehensive feedback. We deeply appreciate their recognition of the significance and clarity of our work. In summary, the reviewer raised an insight regarding the role of hyperparameters, and made additional comments to help us enhance clarity. We share the hyperparameter tuning results and CAN implementation, which are added to the appendix. **Together with the discussion below, does this address the reviewer’s questions?**
> It would be helpful…how hyperparameters for model training were chosen…A few extra sentences in appendix A.1 should be sufficient.
> how much would tuned hyperparameters have helped for models that used default hyper parameter configurations?
Thank you for your attention to this detail. M0 and M0+ used a different learning rate due to gradient issues (line 681). We share the results of hyperparameter tuning over the learning rate (LR) [0.001, 5e-4, 1e-4, 5e-5] (here’s an anonymous colored version: https://imgur.com/a/Ve7B3Mo).
The current set of hyperparameters largely ensures fairly optimized performance, except that a slower LR of 1e-4 reduces M4's instability in Fig 2. **However, none of them changed the claims made in the paper.** The table includes mean [success rate]/[exploration time] +- sem at the terminating episode across 3 trials. The highest success rate and the lowest exploration time are bolded for each model.
| LR | M0 | M0+ | M1 | M2 | M3 | M4 | M5 |
|--------:|:------------------------------:|:------------------------------:|:-------------------------------:|:-------------------------------:|:-------------------------------:|:-------------------------------:|:-------------------------------:|
|0.00005 |61.69±9.27/27.49±3.05 |**74.13±5.90**/**24.80±0.22** |49.98±0.24/24.63±0.68 |**50.34±0.78**/**21.80±0.47** |93.76±1.28/28.88±0.42 |92.73±3.69/**25.23±0.93** |95.03±0.50/25.94±0.79 |
|0.0001 |**67.96±5.65**/**26.65±2.41** |72.54±1.40/25.13±0.60 |49.75±0.20/25.39±0.56 |50.19±0.21/23.33±0.96 |94.59±2.50/29.11±0.17 |**97.35±0.34**/26.53±0.70 |**97.78±0.55**/27.04±0.67 |
|0.0005 |0.00±0.00/200.00±0.00 |0.00±0.00/199.99±0.02 |49.99±0.55/**22.92±0.04** |49.79±0.46/27.66±1.77 |**97.62±0.87**/**22.42±0.04** |72.13±17.06/25.47±4.11 |97.52±2.40/**19.96±0.22** |
|0.001 |0.00±0.00/200.00±0.00 |0.00±0.00/200.00±0.00 |**50.18±0.21**/24.64±0.34 |49.95±0.29/26.67±1.84 |83.86±8.77/32.60±1.13 |55.11±7.09/28.14±1.84 |87.23±12.14/27.48±1.29 |
We also provide **an updated Fig 2** with the optimized LRs. The changes are a LR of 5e-5 for M0+ and 1e-4 for M4. This **doesn’t alter the conclusion** in Section 5.1 regarding M5’s learning efficiency & fast exploration: https://imgur.com/a/U0OVo2H. **We'll include this in the appendix.**
> It would be helpful if the authors could include more equations in the paper, likely in an appendix. For example, the CAN equations...
Thank you for pointing this out. We agree that including more equations would enhance the clarity. If accepted, we will include below in the appendix in the camera-ready version. Specifically, we implement $CAN()$ in Eq. (1), based on the Vector-HaSH repo [1]:
$g(t+1) = \textsf{CAN}[g(t), v(t)] = \mathbf{M} g(t),$
where $\mathbf{M}$ denotes a shift matrix depending on the velocity signal $v$. For simplicity, suppose we have two grid cells with periodicities 3 and 4 and use a single dimension instead of two dimensions (our case). Then, $\mathbf{M}$ is defined as a shift matrix in each grid module as follows:
$
M = U = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\
\end{bmatrix}
$
if $v$ shifts the bump activity to the right, otherwise $U^T$.
Formally, $M_{i,j} = 1$ if $(j - X_{k-1}) \bmod \lambda_k \equiv (i + v - X_{k-1}) \bmod \lambda_k$ for indices $X_{k-1} < i, j \le X_k$, where $X_k = \sum_{l=1}^{k} \lambda_l$ for all $k$; otherwise $M_{i,j} = 0$.
2D Vector-HaSH is a simple extension of this to two-dimensional grid states and velocities.
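As a concrete illustration (our sketch, not the authors' code), the block-diagonal shift matrix described above can be built in a few lines of NumPy from the $M_{i,j}$ rule; the periodicities `[3, 4]` match the rebuttal's example, while the function name and structure are our own assumptions:

```python
import numpy as np

def shift_matrix(periods, velocity=1):
    """Block-diagonal shift matrix: one cyclic-shift block per grid module."""
    n = sum(periods)
    M = np.zeros((n, n), dtype=int)
    offset = 0
    for lam in periods:
        for j in range(lam):
            # one-hot state j in this module moves to state (j + velocity) mod lam
            M[offset + (j + velocity) % lam, offset + j] = 1
        offset += lam
    return M

# Two modules with periodicities 3 and 4, as in the example above
M = shift_matrix([3, 4])
g = np.zeros(7, dtype=int)
g[0], g[3] = 1, 1   # one activity bump per module
g_next = M @ g      # bumps move to indices 1 and 4
```

Applying `M` lcm(3, 4) = 12 times returns any grid state to itself, reflecting the periodic structure of the modules.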
> code release.
We appreciate the reviewer’s emphasis on reproducibility and agree accessible code is important for advancing the field. We're finalizing documentation and ensuring the code is well-organized and user-friendly. We remain committed to releasing the full codebase upon acceptance (w/ the camera-ready version).
---
[1] https://github.com/FieteLab/VectorHaSH | Summary: This work is motivated to implement efficient reinforcement learning (RL) inspired by computation in hippocampus. It develops a multi-region brain model that incorporates hippocampal-entorhinal circuit based on the Vector-HaSH model. It shows a structured, content-addressable associative memory with neural representations is biologically grounded efficient RL solver. It proposes with a variants of models from M0 to M5 with different grid cell and place cell coding strategy, and inputs to the MLPs or RNNs. The results demonstrate joint integration model induces efficient learning, and evidence-position co-tuning in grid cells, how this model's results aligned with current experimental evidence.
Claims And Evidence: This work is based on existing Vector-Hash model, and extending its studies to a multi-region brain model across entorhinal, hippocampal and neocortical regions, and systematically exploring the role of joint integration and coding strategy to induce efficient learning. It proposes a variant of models to find the one leads to the most efficient learning (number of episodes taken to converge). It also further checks how the finding from models aligned with existing experimental findings, which presents a very solid study.
Methods And Evaluation Criteria: The study is conducted from a task-driven perspective and evaluates multiple constrained model variants to find alignments with experimental phenomena. This is orthogonal to the data-driven paradigm of directly fitting experimentally measured data, which can ignore critical components. However, only one task has been evaluated in the experiments; this framework could potentially extend to more diverse tasks and settings for a more comprehensive study.
Theoretical Claims: There are no theoretical proofs or claims.
Experimental Designs Or Analyses: The soundness/validity of the experimental designs has been evaluated, systematic explorations are conducted, and the results are solid. One major limitation is that the work is mostly biologically driven: it claims an efficient brain-inspired reinforcement learning rule, yet it does not directly compare with existing models on the artificial intelligence side.
Supplementary Material: The supplementary material covers sufficient experimental designs for reproducing the experimental results and additional results to increase the soundness of the work.
Relation To Broader Scientific Literature: This work covers a sufficient amount of literature and provides a comprehensive survey. The study is well grounded.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: As described above, this work introduces a systematic exploration of the computation in the hippocampus for decision making from a task-driven perspective, and it delivers interesting results on how a joint integration strategy contributes to efficient learning and aligns with existing experimentally discovered phenomena.
Other Comments Or Suggestions: N/A
Questions For Authors: Are there any model designs that do not directly follow biological constraints, or findings that are not aligned with experimental discoveries?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their precious time and comprehensive feedback. We deeply appreciate their generous recognition of the significance of our work. The reviewer shared an interesting insight regarding comparison with advanced AI models, and had a clarifying question on how our setup and findings compare with the biology. Here we address both. **Together with the discussion below, does this address the reviewer’s questions?**
> One major limitation is that the work is mostly biologically driven, as it claims to an efficient brain-inspired reinforcement learning rule, while it does not directly compare with existing models in artificial intelligence sides.
Thanks a lot for the insights, and we agree that a comparison with advanced AI models could be quite interesting, to see whether brain-inspired models could achieve SoTA on standard machine learning benchmarks (e.g., image classification, Atari). However, our main focus is to design a biologically-inspired model that can be applied to explain neuroscience experiments. This objective is aligned with previous neuroscience-application studies published in top ML venues that only use simple RNNs as baselines (e.g., Miller et al., NeurIPS, 2023; Valente et al., NeurIPS, 2022).
> Any model designs do not directly follow biological constraints, or findings are not aligned with experimental discoveries?
We think the biological constraints can be assessed through
1. the information flow among regions, and
2. the representation produced by the model.
While there are not sufficient neuroscience experiments (to the best of our knowledge) revealing the exact information flow among regions in the context of spatially-embedded decision-making tasks, M1-M5 are all reasonable hypotheses of (1) given the current neuroscientific understanding (elaborated in Section 4). However, as presented in our paper, M1, M2, M4 did not produce experimentally aligned representations (e.g., Figs 4, 8).
---
Miller et al. "Cognitive model discovery via disentangled RNNs." NeurIPS. 2023.
Valente et al. "Extracting computational mechanisms from neural data using low-rank RNNs." NeurIPS. 2022. | Summary: This paper aims at providing a mechanistic characterisation of how sensory and abstract (task-dependent) information is encoded and transmitted across different brain regions, including the hippocampus (HPC), medial and lateral entorhinal cortex (mEC and lEC), and the cortical circuity. The authors specifically focus on explaining the data reported in Nieh et al., 2021, which is an decision-making task based on sensory evidence accumulation under the spatial context. The model is largely based on the existing Vector-HASH+ model (Chandra et al., 2025), but with the additional cortex component (represented by an RNN) that takes in the hippocampus readout and map into actions. Several variants of the baseline Vector-HASH model are proposed, underlying different candidate intra- and inter-circuitry mechanisms. The authors then trained the models on the abstracted version of the towering task from Nieh et al., 2021, and show that post-training, the place cells indeed exhibit selectivity with respect to spatial location and sensory evidence. They also show that the sensory and task information are only well-separated in one specific circuitry configuration, and making predictions for the mEC-HPC circuitry.
Claims And Evidence: The paper is quite well-written, with comprehensive citations to relevant literature.
Some claims made by the authors are ungrounded and questionable, I list them and some general comments/questions below.
- The authors assume that direct projections from lEC and mEC constitute the firing of place cells. How could the authors enforce sparsity and spatial selectivity in place cells then? I do not see a mechanistic explanation for the necessary occurrence of place fields. I think the absence of inductive biases in constructing the place cells limits the model's capacity for predictions beyond those already presented in Nieh et al. 2021.
- The fact that the M0 and M0+ models do not yield good performance could be due to a poor implementation of the recurrent structure. I can imagine that a recurrent network designed for temporal integration (e.g., those used for explaining drift diffusion) would be able to perform the task while not requiring the complicated EC-HPC network. I hence think this is an unfair comparison and statement to make.
- The authors make claims about the criticality of sensory inputs from lEC in driving efficient learning and spatial navigation. However, I cannot draw such a conclusion from Figure 5. The lack of qualitative comparison with M3, the model that yields similar performance in the decision making task as the M5 model, undermines the validity of the authors' claim.
- The model is largely an adaptation of Vector-HASH+ to a simple RL setting. I do find the neural predictions and hypotheses interesting, but the paper is limited in terms of methodological novelty.
- It would be useful to apply the model to other tasks to substantiate the validity of the model, such as the two-armed bandit task from Mishchanchuk et al., 2024.
Overall, I credit the authors' mechanistic attempt, but I personally find many claims made in the paper to be ungrounded, with insufficient ablations to support them. Adding in the limited methodological novelty of the paper, I am leaning towards rejection. But I am happy to change my mind should the authors provide compelling empirical support for their claims.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There is not theoretical claims in the paper.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, all of them.
Relation To Broader Scientific Literature: The paper proposes a mechanistic model for predicting circuitry configurations across multiple brain regions that lead to cognitive behaviours. I think this is a useful application of the Vector-HASH+ model beyond spatial contexts.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A, see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer’s precious time & comprehensive feedback. We sincerely thank them for recognizing the significance & clarity of our work. The reviewer raised questions on the support for our claims, method novelty, and the biological grounding of models. Here we clarify with the relevant content in the paper. **Together with the discussion below, does this address the reviewer’s concerns?**
> The model is largely an adoption of the Vector-HASH+... I do find the neural predictions and hypotheses interesting, but…limited in terms of methodological novelty.
Thanks for acknowledging our findings. The architecture is based on Vector-HaSH (VH, Chandra et al., Nature, 2025), but **the “+” part is our original development (lines 215-218)**. We emphasize our application-driven work is among the first to apply VH systematically, with appropriate adaptation (e.g., the “+” part), to understand spatial decision-making w/ experimentally verifiable predictions. Similar studies exist w/ RNNs but lack the necessary bio details in modeling multi-region circuits.
> ...I accredit the authors' mechanistic attempt, but I personally find many claims made in the paper to be ungrounded, and no sufficient ablations exist to support...
Our M1-M5 are ablations of multiple hypothesized multi-region interactions. We refer to VH (Chandra et al., 2025) for the detailed bio grounding as they’ve substantially addressed them. We ensured further claims are supported w/ evidence. We address below the reviewer's specific concern on the sensory criticality claim, but happy to elaborate on other claims if needed:
> The authors make claims about the criticality of sensory inputs from lEC in driving efficient learning and spatial navigation. However, I cannot draw such conclusion from Fig 5. The lack of qualitative comparison with M3…undermines the validility of the authors' claim.
The sensory criticality claim is made for efficient navigation & low-d representation, **not** for learning. The navigation aspect is evident in Fig 2B (M3 vs M5) & lines 304-314; low-d representation aspect is in Fig 5, w/ M3 in Appendix E, referred to in line 413. Joint grid code is critical to efficient learning (M3 & M5 in Fig 2A, lines 299-303).
> The authors assumes direct projections from lEC and mEC consistutes the firing of place cells. How could the authors enforce sparsity and spatial selectivity in place cells?
Relevant bio groundings are inherited from & addressed in VH (Fig 6 in Chandra et al., 2025), so we didn’t reiterate. We'll include these details in the appendix:
**Sparsity:** Per line 200, $W_{hg}$ is a random projection matrix generated from a *standard Gaussian distribution* (Method in Chandra et al. (2025)), so half of the activations are 0 in expectation. The input is sparse because the firing of each grid module is one-hot per the model's inductive bias (line 208). ReLU is applied to `h` (eqns 2 & 3) to further enforce sparsity.
The number of unique grid states ($\prod \lambda^2_i$) is much smaller than the number of unique activated HPC states ($2^{N_h}$), so only a small number of HPC cells are >0.
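A minimal numerical sketch of this sparsity argument (our illustration; the periodicities and cell count are hypothetical, not taken from the paper): projecting a one-hot-per-module grid state through a standard Gaussian $W_{hg}$ followed by a ReLU zeroes roughly half of the HPC units.

```python
import numpy as np

rng = np.random.default_rng(0)
periods = [3, 4, 5]   # hypothetical grid module periodicities
n_h = 500             # hypothetical number of HPC cells

# grid state: one-hot activation within each module
g = np.zeros(sum(periods))
offset = 0
for lam in periods:
    g[offset] = 1     # pick the first state of each module
    offset += lam

W_hg = rng.standard_normal((n_h, len(g)))  # random Gaussian projection
h = np.maximum(W_hg @ g, 0)                # ReLU enforces sparsity

frac_zero = np.mean(h == 0)  # ~0.5 in expectation
```

Each pre-activation is a sum of standard normals (one per active grid cell), so it is negative, and hence zeroed by the ReLU, with probability 1/2.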
**Selectivity:** Each sensory state is associated w/ a specific grid state by updating $W_{hs}, W_{sh}$, so each sensory state is only associated with certain HPC states.
> …M0 and M0+ models do not yield good performance could due to poor implementation of the recurrent structure. I can imagine a recurrent networks designed for temporal integration…will be able to perform the task whilst not requiring the complicated EC-HPC network...think this is an unfair comparison...
Vanilla RNN is fairly powerful & extensively studied, e.g., Yang et al., Nat Neurosci, 2019 & Driscoll et al., Nat Neurosci, 2024 applied RNNs to NeuroGym tasks with temporal components.
And, our goal is to assess the added value of the structured EC-HPC networks, not to benchmark arbitrary network classes; changing the recurrent structure of baselines necessitates changing M1-M5’s RNN for fair comparison. This would be unnecessary and defeat the purpose of isolating the inductive bias introduced by EC-HPC circuits.
Our work highlights the utility of leveraging VH as a **hypothesis-generating testbed** to study the entorhinal-hippocampal-cortical network. A task-optimized recurrent structure misses the necessary bio details.
> ...useful to apply the model to other tasks…validity of the model...two-armed bandit...
The validity of the model is grounded in Chandra et al. (2025), and we emphasize the depth, not breadth, of results since
1. Our scope is spatially embedded decision-making with phenomena specific to HPC & related circuits. e.g., two-armed bandit task doesn't study the **spatial aspect** of HPC.
2. The tower task is justified for many reasons noted in lines 36-63, e.g., `we focus on this task to integrate our findings into a larger cohesive narrative that transcends the inherent scope limitations of stand-alone studies on arbitrarily chosen tasks`. | null | null | null | null | null | null |
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning | Accept (poster) | Summary: The paper proposes ZebraLogic, a newly developed benchmark dataset of logic grid puzzles derived from constraint satisfaction problems. The authors systematically evaluate LLM performance across different levels of problem complexity. The authors show "curse of complexity", where model accuracy declines significantly as puzzle complexity increases. The study also explores some strategies to improve reasoning performance, including Best-of-N sampling. The results suggest that scaling model size and test-time compute is insufficient to achieve reasoning in current LLMs.
Claims And Evidence: Why is self-refinement promising? It seems that majority voting (or self-consistency) yields better results than self-refinement.
Methods And Evaluation Criteria: The paper doesn't provide new methods.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I check the appendix.
Relation To Broader Scientific Literature: The paper proposes a new dataset for evaluating LLM reasoning capabilities.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: S1. The paper systematically evaluates various LLMs through extensive experiments.
S2. The proposed dataset is engaging and valuable.
S3. The paper is well-structured and easy to follow.
W1. Introducing a novel method inspired by the experimental results would strengthen the paper.
W2. The Best-of-N sampling with oracle selection has limited practical value. In most reasoning tasks, obtaining an oracle verifier is challenging.
W3. Reasoning capabilities in LLMs encompass various aspects, but the paper primarily focuses on logical reasoning. It would be beneficial to discuss other perspectives.
W4. Many observations in the paper align with well-known trends. For example, the significant performance drop as task complexity increases is expected. Providing deeper insights beyond these common trends would enhance the contribution.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review! We value your constructive feedback and will address your suggestions in the revised version.
---
#### Q1: Why Self-Refinement is promising? ...
Both self-refinement and majority voting are methods aimed at enhancing the reasoning performance of LLMs, and they are orthogonal to each other. For instance, self-refinement can be used to generate multiple outputs, which can then be evaluated through majority voting to select the most likely output. However, our goal is not to develop a state-of-the-art model for solving these puzzles but to demonstrate that all commonly used approaches, including self-refinement and majority voting, struggle to some extent with these challenges.
The reason we explore self-refinement is that reasoning models like O1 and R1 indicate that LLMs exhibit stronger reasoning capabilities when allowed to reflect on their previous steps, identify errors, and self-correct. Our focus is on understanding the scaling behavior of self-refinement (with multi-turn prompting) and how it compares to majority voting, highlighting the limitations shared across these approaches.
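To make the distinction concrete, here is a minimal sketch (ours, not from the paper's code) of how majority voting and oracle best-of-N selection differ over the same set of sampled answers:

```python
from collections import Counter

def majority_vote(answers):
    """Self-consistency: return the most frequent sampled answer."""
    return Counter(answers).most_common(1)[0][0]

def best_of_n_oracle(answers, gold):
    """Oracle selection (pass@n): correct iff any sample matches the gold answer."""
    return gold in answers

samples = ["B", "A", "B", "C", "B"]
majority_vote(samples)            # "B"
best_of_n_oracle(samples, "A")    # True, even though the majority answer is wrong
```

Self-refinement is orthogonal to both: each refined output could itself be one of the `samples` fed to either selector.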
---
#### W1
We believe our insights will encourage the development of better logical reasoning models, and we will discuss their implications for future research in the revised version.
Here are key points and experiments we will add:
- **Improved Training:** We will generate numerous training examples and explore reinforcement learning methods like GRPO to boost model reasoning. We will also test if training on ZebraLogic puzzles generalizes to domains like math and code generation.
- **Improved Inference:** We will investigate inference techniques to enhance reasoning, such as forced re-verification prompting and refined best-of-N sampling strategies.
---
#### W2
We agree that Best-of-N sampling with oracle selection has limited practical value and do not propose it as a solution for reasoning improvement. Instead, we aim to highlight the difficulty of these reasoning tasks by testing the limits of repeated sampling, even with a perfect oracle. This shows that current models struggle under ideal conditions, underscoring the tasks' challenge. We respectfully disagree that this is a paper weakness but will clarify this point to avoid confusion.
---
#### W3
We recognize that LLM reasoning includes domains like spatial, causal, and analogical reasoning beyond logical reasoning. However, we argue that logical reasoning is a foundational aspect of intelligence, justifying it as a critical starting point. Its structured nature enables precise evaluation and controlled experiments, vital for broader reasoning insights.
Additionally, our methodology for creating controllable reasoning problems is adaptable to other domains. For example, in spatial reasoning, we could generate grids with constraints for models to infer relationships. Many reasoning tasks share a logical basis involving constraint-solving, suggesting our approach provides a versatile framework for advancing various LLM reasoning types.
---
#### W4
We appreciate the feedback and agree that performance drops with task complexity across domains. However, we contend that the extent and nature of this decline in logical reasoning for LLMs are underexplored, which our work systematically examines.
Our paper provides new insights into LLM scaling limits in logical reasoning, including:
1. **Quantifiable Complexity Metrics:** We introduce ZebraLogic, a 1,000-puzzle benchmark, using search space size and Z3 conflict count to explain performance drops, offering insights beyond prior studies (Sec. 2.3, Fig. 8).
2. **Scaling Behavior Analysis:** We examine model size, sampling, and test-time compute, showing that even advanced methods (e.g., Llama-3.1-405B, pass@128) fail past certain complexity levels, questioning the "more scale" assumption (Sec. 4-6, Fig. 1).
3. **Reasoning Token Insights:** Our analysis of OpenAI’s o1 reveals heavy use of hidden chain-of-thought tokens (up to 10x more than GPT-4o), yet performance plateaus at high complexities, indicating inference-time reasoning trade-offs (Sec. 6, Fig. 3).
4. **Practical Implications:** We evaluate strategies like Best-of-N sampling and self-verification, noting their limits and proposing explicit step-by-step reasoning to enhance LLM capabilities (Sec. 5-6).
In summary, while the performance drop with complexity may align with general expectations, our work offers a rigorous, systematic dissection of this trend in logical reasoning, uncovering its boundaries and underlying causes. These contributions provide deeper insights that extend well beyond common observations, positioning ZebraLogic as a valuable tool for both understanding and addressing the reasoning limitations of LLMs.
---
Thank you again for the review! We will address your suggestions in the revised version. Please contact us with any further questions. | Summary: The paper introduces ZebraLogic, a benchmark dataset of 1,000 logic grid puzzles derived from constraint satisfaction problems (CSPs), to evaluate the scalability of large language models (LLMs) in complex non-monotonic reasoning. Key findings include the curse of complexity, and scaling model size or test-time compute (e.g., sampling, backtracking) offering limited improvements. The study evaluates GPT-4o, Llama-3, and specialized reasoning models (o1, R1).
Claims And Evidence: Mostly yes.
Methods And Evaluation Criteria: Yes. The evaluation criteria are reasonable, including both puzzle and cell-level accuracy.
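As a rough sketch of what these two metrics could look like for a solution grid (our illustration; the benchmark's actual scoring code may differ):

```python
def cell_accuracy(pred, gold):
    """Fraction of grid cells filled in correctly."""
    cells = [(r, c) for r in range(len(gold)) for c in range(len(gold[0]))]
    return sum(pred[r][c] == gold[r][c] for r, c in cells) / len(cells)

def puzzle_accuracy(pred, gold):
    """1.0 only if the entire grid is solved correctly."""
    return float(pred == gold)

gold = [["red", "cat"], ["blue", "dog"]]
pred = [["red", "cat"], ["blue", "fish"]]
cell_accuracy(pred, gold)    # 0.75
puzzle_accuracy(pred, gold)  # 0.0
```

Puzzle-level accuracy is the stricter metric: a single wrong cell zeroes it, while cell-level accuracy still rewards partial solutions.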
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes.
The main experiment and test-time scaling experiment are reasonable.
However, the paper lacks error analysis, such as qualitative failure case studies.
Supplementary Material: The supplementary material contains most of the paper's codes. It lacks environment requirements and a readme file, making it hard to reproduce.
Relation To Broader Scientific Literature: The paper is related to LogiQA (QA-style logical reasoning) and CLOVER (Neuro-symbolic LLM-solver integration).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper isolates logical reasoning from domain knowledge, ensuring a controlled evaluation.
2. The paper uses two complementary metrics to define problem complexity: search space size and Z3 conflicts.
3. The experiments test a broad range of models. The curse of complexity is effectively demonstrated.
4. Various inference-time strategies are tested, including Best-of-N sampling, majority voting, and self-verification. Results find that scaling alone is insufficient.
Weaknesses:
1. The paper does not deeply analyze why scaling fails.
2. The paper lacks a qualitative analysis of model errors.
3. The paper assumes that Z3 conflicts correlate with reasoning difficulty but does not experimentally validate this claim.
4. The paper only evaluates models in a one-shot setting, which might not be optimal for logical reasoning.
Other Comments Or Suggestions: 1. Figure 3 needs more captions to describe the results briefly.
Questions For Authors: 1. Can you briefly discuss why scaling fails?
2. Does the number of hidden reasoning tokens generated by o1 models correlate with puzzle difficulty?
3. Why does Best-of-N sampling with oracle selection significantly improve performance, while reward models fail to be as effective?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review! We address each of the concerns and questions raised, and we will incorporate these clarifications and additional analyses into the camera-ready version of the paper. We hope our responses adequately resolve the issues highlighted and kindly ask for your reconsideration of the scores if appropriate.
#### W1: The paper does not deeply analyze why scaling fails
Due to page constraints, our initial submission focused on empirical observations, but we plan to enhance the revised paper with a detailed examination of failure modes. This will include: (1) analysis of the number and types of unsatisfied constraints in model outputs, and (2) investigation of the correlation between clue ordering and solution correctness. These additions will provide deeper insights into the limitations of scaling for logical reasoning in LLMs. We also performed a qualitative analysis, which we discuss in the response to W2.
#### W2: The paper lacks a qualitative analysis of model errors
Thanks for the suggestion! While we included some failure case studies (e.g., Appendix C.1), page limits constrained a full qualitative analysis in the main text. We will address this in the revised version by integrating key error patterns:
- **Non-Reasoning Models:** Frequent hallucinations in initial and later steps (e.g., Llama-3.1-405B) lead to inconsistent deductions and incorrect outputs.
- **Reasoning Models:** Fewer early hallucinations (e.g., o1, R1), but errors persist in later steps due to incomplete backtracking.
- **Self-Verification:** Reasoning models excel with self-correction (e.g., R1’s “Wait”/“Correct” markers, o1’s clue revisits), absent in non-reasoning models.
- **Clue Rephrasing:** Periodic rephrasing of clues by reasoning models enhances constraint understanding and reduces errors.
#### W3: The paper assumes that Z3 conflicts correlate with reasoning difficulty but does not experimentally validate this claim
Thanks for the question. We recognize that "reasoning difficulty" has no universal definition, and we appreciate the concern about using Z3 conflicts as a general indicator beyond Z3. To clarify, Z3 conflicts reflect the reasoning difficulty for Z3, a leading systematic heuristic solver, and are thus not fully solver-agnostic in the way search space size is; still, we find the metric insightful. Search space size, like grid size in our study, indicates the challenge for uninformed reasoners, where a 10x increase means 10x more brute-force effort. Likewise, Z3 conflicts gauge difficulty for advanced solvers like Z3. Although we did not quantitatively validate the correlation between Z3 conflicts and perceived reasoning difficulty, our results using Z3 conflicts and grid size as proxies match scaling trends on benchmarks like AIME with respect to model size and inference cost.
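To make the grid-size proxy concrete, here is a minimal sketch (our own illustration for this reply, not the benchmark's code) of the brute-force search space of an N-house, M-attribute logic grid puzzle, where each attribute is a permutation of N values across the houses:

```python
from math import factorial

def search_space_size(num_houses: int, num_attributes: int) -> int:
    """Number of candidate grids for an N-house, M-attribute puzzle:
    each attribute independently permutes N values, giving (N!)^M."""
    return factorial(num_houses) ** num_attributes

# A 10x larger search space means ~10x more brute-force effort
# for an uninformed reasoner.
small = search_space_size(3, 3)  # 6^3 = 216
large = search_space_size(4, 4)  # 24^4 = 331776
```

Z3's conflict count plays the analogous role for an informed solver: it grows with how often systematic search must backtrack, rather than with raw enumeration size.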
#### W4: The paper only evaluates models in a one-shot setting, which might not be optimal for logical reasoning
In our preliminary experiments, we found that providing more few-shot examples does not improve performance; we will include these results in the revised paper. Few-shot examples are indeed more likely to help smaller non-reasoning models. However, our focus in this paper is to study the scaling behavior of reasoning models, rather than to find the best prompting strategies or the exact few-shot examples that yield state-of-the-art performance on reasoning benchmarks.
#### W5: The supplementary material lacks environment requirements and a readme file, making it hard to reproduce
Thanks for pointing this out! We will add a README file to the repository to help readers reproduce the results. The code itself is straightforward to run: the only complex part is parsing model outputs and calculating the metrics, while the rest is standard LLM inference and data processing.
#### Q1: Can you briefly discuss why scaling fails?
Please refer to the answer for W1.
#### Q2: Does the number of hidden reasoning tokens generated by o1 models correlate with puzzle difficulty?
Yes, as shown in Figure 6 and discussed in Section 6.1, the number of hidden reasoning tokens generated by o1 models is positively correlated with puzzle difficulty, where difficulty is reflected by the Z3 conflict metric as an indicator of the reasoning challenges faced by Z3.
#### Q3: Why does Best-of-N sampling with oracle selection significantly improve performance, while reward models fail to be as effective?
Oracle selection picks the model output closest to the ground truth, boosting performance because it effectively uses the ground truth as a reward signal, which is impractical in real use. We study reward models to explore their limits in evaluating LLM reasoning performance.
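The contrast can be sketched as follows (a toy illustration with made-up puzzle cells; `oracle_score` and the sample data are ours, not the paper's evaluation code):

```python
def best_of_n(candidates, scorer):
    """Best-of-N selection: keep the candidate that maximizes `scorer`."""
    return max(candidates, key=scorer)

def oracle_score(candidate, ground_truth):
    """Oracle reward: fraction of cells matching the ground truth.
    Impractical in practice, since it presumes access to the answer."""
    return sum(a == b for a, b in zip(candidate, ground_truth)) / len(ground_truth)

# Three sampled solutions to a 5-cell puzzle (toy data):
truth = [1, 0, 1, 1, 0]
samples = [[1, 0, 0, 1, 0],   # 4/5 cells correct
           [1, 0, 1, 1, 0],   # fully correct
           [0, 1, 1, 0, 0]]   # 2/5 cells correct

oracle_pick = best_of_n(samples, lambda c: oracle_score(c, truth))
# A learned reward model replaces `oracle_score` with a noisy estimate,
# so its pick can fall short of this upper bound.
```

Best-of-N with the oracle thus measures how often a correct solution appears among N samples; a reward model's gain is capped by how well its noisy score tracks the oracle's.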
---
Thank you for your review! We will address your suggestions in the revised version. We hope these clarifications and additional analyses adequately resolve the issues highlighted and kindly ask for your reconsideration of the scores if appropriate. | Summary: The paper introduces ZebraLogic, a benchmark of logic grid puzzles derived from constraint satisfaction problems (CSPs), to evaluate the logical reasoning capabilities of LLMs. Key findings include:
1. Curse of Complexity: LLM performance declines sharply as puzzle complexity (measured by search space size and Z3 solver conflicts) increases, even with model scaling or test-time compute.
2. Scaling Limitations: Larger models (e.g., Llama-3.1-405B) improve performance on simpler puzzles but fail in highly complex scenarios, suggesting fundamental reasoning limitations.
3. Hidden Reasoning Tokens: Models like o1 generate significantly more hidden chain-of-thought (CoT) tokens during inference, correlating with improved performance, though scaling plateaus at high complexity.
4. Test-Time Compute: Best-of-N sampling and self-verification prompts yield marginal gains but fail to overcome the curse of complexity.
Claims And Evidence: Yes
Methods And Evaluation Criteria: - ZebraLogic is well-designed, using CSPs to isolate logical reasoning from domain knowledge.
- Complexity metrics (search space size, Z3 conflicts) are appropriate.
- Prompting / Best-of-N sampling / Voting / Self-Verify metrics are common practice.
Theoretical Claims: The paper focuses on empirical findings.
Experimental Designs Or Analyses: - Broad evaluation of open/closed-source models across complexity levels.
- Lacks the latest models, e.g., o3-mini.
Supplementary Material: Yes, the supplementary materials are very detailed, including data, code, and evaluation results.
Relation To Broader Scientific Literature: The work aligns with prior studies on LLM benchmarking, especially for reasoning (like MATH).
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: 1. Typo: "rater" → "rather" (Page 13).
2. Consider evaluating newer models (e.g., o3-mini) to strengthen claims like scalability and token efficiency.
Questions For Authors: Could the authors expand their analysis to include newer models (e.g., o3-mini high/medium/low, Gemini 2.0 Flash Thinking, Grok-3, and QwQ-32B) to provide additional data points for validating the curse of complexity and hidden reasoning token dynamics? Such comparisons would strengthen the generalizability of the findings, particularly for scaling trends and hidden token analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review! We address each of the concerns and questions raised, and we will incorporate these clarifications and additional analyses into the revised version of the paper. We hope our responses adequately resolve the issues highlighted and kindly ask for your reconsideration of the scores if appropriate.
#### W1: Lack of evaluation on the latest models like o3-mini
Thank you very much for the suggestion! The o3-mini APIs were not stable at the time of our experiments, often leading to timeout errors and refusals such as "Invalid prompt: your prompt was flagged as potentially violating our usage policy." The o3-mini-high model was particularly unstable.
We recently re-ran the experiments; the current results are as follows:
| Model | XL | Large | Medium | Small | Cell Acc |
|-------------------------|------|-------|--------|-------|----------|
| o3-mini-2025-01-31-high | 75.5 | 87.5 | 97.1 | 99.7 | 95.7 |
| o1-2024-12-17 | 42.5 | 78.0 | 92.1 | 97.2 | 78.7 |
| deepseek-R1 | 28.5 | 73.5 | 95.7 | 98.4 | 80.5 |
| o3-mini-2025-01-31-low | 23.0 | 64.5 | 91.1 | 99.4 | 72.6 |
| o1-preview-2024-09-12 | 17.0 | 59.5 | 88.2 | 98.1 | 75.1 |
| o1-mini-2024-09-12 | 12.0 | 39.0 | 76.8 | 87.5 | 70.3 |
We will include the results in the camera-ready version.
#### Q1: Could the authors expand their analysis to include newer models (e.g., o3-mini high/medium/low, Gemini 2.0 Flash Thinking, Grok-3, and QwQ-32B) to provide additional data points for validating the curse of complexity and hidden reasoning token dynamics? Such comparisons would strengthen the generalizability of the findings, particularly for scaling trends and hidden token analysis.
Yes, we are more than happy to expand the analysis to include newer models, analyze their reasoning processes in more depth, and identify common failure modes to suggest future research directions. Please stay tuned for our leaderboard website, which will host the latest results and an easy-to-use tool for evaluating all LLMs.
#### Typo: "rater" → "rather"
Thank you for pointing this out! We will fix it in the camera-ready version and carefully proofread the paper to avoid similar typos.
---
Thank you so much for your review! We will address your suggestions in the revised version. We hope these clarifications and additional analyses adequately resolve the issues highlighted.
Claims And Evidence: 1. **Performance decline with complexity:** The authors demonstrate this through comprehensive evaluations across various model sizes and architectures, showing consistent performance drops as complexity increases.
2. **Limitations of model scaling:** The experiments show that even the largest models (e.g., Llama-3.1-405B) achieve near-zero accuracy on highly complex puzzles, supporting the claim that model scaling alone cannot overcome reasoning limitations.
3. **Optimal reasoning token ratio:** The claim that there exists an optimal ratio of reasoning tokens to Z3 conflicts is supported by their analysis of o1 models, though this evidence is more correlational than causal.
Methods And Evaluation Criteria: 1. The two complexity metrics (search space size and Z3 conflicts) provide complementary views of problem difficulty.
2. The categorization of puzzles into four complexity groups enables a clear analysis of performance trends.
3. The evaluation across multiple model architectures and sizes allows for robust comparative analysis.
4. The puzzle generation methodology using clue types and templates is sound, ensuring puzzles have unique solutions while maintaining varied difficulty levels.
Theoretical Claims: There are no theoretical proofs to verify in this paper. The authors establish that ZebraLogic is NP-complete through reduction from the Quasigroup Completion Problem. The paper primarily focuses on empirical evaluation rather than theoretical derivations.
Experimental Designs Or Analyses: 1. The authors evaluate a diverse set of models spanning different architectures, sizes, and both open-weight and proprietary systems.
2. One potential limitation is that the analysis of o1's hidden reasoning tokens relies on estimates, since these tokens aren't directly accessible. However, the authors acknowledge this limitation and plan to verify their estimates once o1-full is released.
3. The Best-of-N sampling experiments are insightful, though the choice of majority voting and reward models for candidate selection could be expanded to explore other selection strategies.
Supplementary Material: Yes, data and evaluation code. I haven't run the code.
Relation To Broader Scientific Literature: The paper's core contribution of showing how performance scales across multiple dimensions (model size, sampling, reasoning tokens) provides a more comprehensive picture. The paper builds on prior work on logical reasoning benchmarks like LogiQA [1] and related investigations of LLM reasoning limits. It extends previous research on grid puzzles [2, 3, 4] by systematically controlling for complexity.
[1] Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. Liu et. al, IJCAI 2020.
[2] Learning to Automatically Solve Logic Grid Puzzles, Mitra et al, EMNLP 2015
[3] Faith and Fate: Limits of Transformers on Compositionality, Dziri et al, NeurIPS 2023.
[4] Step-by-Step Reasoning to Solve Grid Puzzles: Where do LLMs Falter?, Tyagi et al, EMNLP 2024.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. The multi-dimensional analysis of scaling behavior (model size, sampling, reasoning tokens).
2. The inclusion of both open-weight and proprietary models for comprehensive evaluation.
**Weaknesses:**
1. The analysis of o1's reasoning process relies on limited visibility into its hidden reasoning tokens
2. The self-verification experiments could be expanded to explore more sophisticated reflection mechanisms
3. The paper doesn't extensively analyze what specific types of reasoning failures occur as complexity increases
Other Comments Or Suggestions: 1. A more detailed discussion of how these findings might inform training objectives for future models would strengthen the paper.
Questions For Authors: 1. The paper shows that o1 models generate ~10x more hidden reasoning tokens than standard models. Have you explored whether standard models could benefit from being allowed to generate similarly extensive reasoning steps, or are there architectural differences that make this approach uniquely effective for o1?
2. How might the findings from ZebraLogic generalize to other formal reasoning domains beyond logic grid puzzles? Do you expect similar complexity scaling behaviors for tasks like mathematical reasoning or program synthesis?
3. Have you analyzed what specific types of reasoning errors or failure modes emerge as puzzle complexity increases? This could provide insights into targeted interventions for improving reasoning capabilities.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review! We address each of the concerns and questions raised, and we will incorporate these clarifications and additional analyses into the revised version of the paper. We hope our responses adequately resolve the issues highlighted and kindly ask for your reconsideration of the scores if appropriate.
#### W1: The analysis of o1's reasoning process relies on limited visibility into its hidden reasoning tokens
We recognize that OpenAI's restrictions on raw reasoning tokens limited our o1 analysis. To gain deeper insights, we will analyze DeepSeek's R1 model's visible reasoning tokens in the revised paper.
#### W2: The self-verification experiments could be expanded to explore more sophisticated reflection mechanisms
Thank you for this insightful suggestion. We recognize the value of exploring more advanced reflection mechanisms to enhance our analysis. To address this, we will conduct a detailed investigation into sophisticated reflection approaches and incorporate the findings into the revised version of the paper, ensuring a more comprehensive evaluation of their impact.
#### W3 & Q3: The paper doesn't extensively analyze what specific types of reasoning failures occur as complexity increases
Thank you for your valuable suggestion regarding the analysis of reasoning failure modes. We acknowledge that the page constraints of the initial submission limited our ability to include a comprehensive discussion on this topic. To address this concern thoroughly in the revised paper, we will incorporate a detailed examination of specific failure modes, including:
- A quantitative breakdown of the number and types of constraints (e.g., uniqueness, clue-based, positional) that remain unsatisfied in LLM outputs, identifying prevalent error patterns such as failures in handling non-monotonic reasoning or spatial constraints as complexity scales.
- An examination of how the sequence and presentation of clues influence solution accuracy, testing for systematic biases or dependencies in the reasoning process, such as over-reliance on early clues or misinterpretation of later ones.
- Case studies from our human evaluation (Appendix C.1) featuring specific examples of reasoning breakdowns, such as incomplete backtracking or incorrect counterfactual assumptions, particularly in puzzles with large search spaces or high Z3 conflict counts.
Our analysis of the o1 outputs reveals two predominant failure modes that significantly impact its reasoning performance:
1) **Lazy Mode (Most Prevalent):** The model often uses brief summaries instead of detailed step-by-step reasoning (e.g., "going step-by-step" or "cycling through possibilities"), reducing solution robustness.
2) **Incorrect Proof of Impossibility:** The model wrongly claims clues are unsatisfiable, misinterpreting constraints like adjacency, leading to premature, incorrect conclusions and exposing limitations in complex constraint handling.
Thanks again for your valuable suggestion! We will address this in the revision.
#### Q1: The paper shows that o1 models generate ~10x more hidden reasoning tokens than standard models....
Thanks for the great question! We think the model architecture (i.e., the number of parameters, layers, attention heads, etc.) is not the key factor; rather, the data distribution and training methods are what matter. DeepSeek's R1 vs. V3 is a good example of this: training the base model on longer CoT data that triggers self-verification and self-correction is what enables R1 to outperform V3 on the ZebraLogic benchmark. A recent paper [1] also suggests that repeatedly appending the token "Wait" to force the model to think longer is effective for improving its reasoning performance.
[1] s1: Simple test-time scaling
#### Q2: How might the findings from ZebraLogic generalize to other formal reasoning domains beyond logic grid puzzles? Do you expect similar complexity scaling behaviors for tasks like mathematical reasoning or program synthesis?
Yes, we believe ZebraLogic's findings apply to other formal reasoning domains like mathematical reasoning and code generation, as many reasoning challenges are constraint satisfaction problems, central to our research. Although quantifying complexity in math or coding is harder than in our grid size and Z3 conflict measures, constraint-based reasoning principles suggest strong generalization potential.
#### Q3: Have you analyzed what specific types of reasoning errors or failure modes emerge as puzzle complexity increases?
Please refer to the answer for W3 for this question.
---
Thank you for your review! We will address your suggestions in the revised version. We hope these clarifications and additional analyses adequately resolve the issues highlighted and kindly ask for your reconsideration of the scores if appropriate.
MIB: A Mechanistic Interpretability Benchmark | Accept (poster) | Summary: This paper proposes a benchmark, called MIB, to evaluate whether the interpretability algorithm precisely and concisely recovers relevant
causal pathways or specific causal variables. MIB includes two tasks: 1) circuit localization, which identifies the connections in the model that are important for performing a task, and 2) causal variable localization, which compares different ways of projecting hidden features to find causal mediators. MIB covers many common tasks used in the field, standardizing evaluation and providing insights into the capabilities of MI methods.
Claims And Evidence: The claims in the paper are supported by the experiments.
Methods And Evaluation Criteria: - Both tracks in this benchmark only evaluate the faithfulness of the explanation; however, there are other criteria as well, such as completeness, minimality and human-interpretability [1,2]
- Previous work [3] criticizes the faithfulness in the circuit localization track and suggests considering the relative amount of pretraining compute needed to achieve comparable performance. Could you include this metric in the benchmark or discuss its utility compared to the faithfulness metric?
- In the causal variable localization track, the IOI task uses a different metric than other tasks and this metric is not presented in the text. Could you explain why we should use different metrics here? Also, I don't understand how we can calculate the logit of the high-level causal model in the IOI task. How do you compute this metric or could you point to the description in the manuscript if I miss it by any chance?
[1] Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small, ICLR 2023
[2] Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models, ICLR 2025
[3] Scaling and evaluating sparse autoencoders, ICLR 2025
Theoretical Claims: There is no theoretical result.
Experimental Designs Or Analyses: Please see the comment in the Evaluation section.
Supplementary Material: Yes
Relation To Broader Scientific Literature: This work provides a benchmark for developing MI methods.
Essential References Not Discussed: This benchmark does not discuss other metrics in prior work.
[1] Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small, ICLR 2023
[2] Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models, ICLR 2025
[3] Scaling and evaluating sparse autoencoders, ICLR 2025
Other Strengths And Weaknesses: - The presentation could be improved. For example, it'd be helpful if you could explain why the subgraph performance ratio locates components with a positive effect and the subgraph behavioral distance locates components with any strong effect.
- The notation is confusing and not explained. For example, in the causal variable track, $\mathcal{C}$ in interchange intervention overlaps with the notation of the circuit in the previous section, while later in the formula interchange intervention uses $\mathcal{A}$, which represents the high-level causal model.
Other Comments Or Suggestions: Can we, and should we, study the interaction of circuit localization and causal variable localization? More specifically, if we find a circuit with different types of features, do the observations in Sec 3 still hold?
Questions For Authors: Please see the question above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We have a detailed plan to address your points about readability and presentation. If accepted, we will use the additional page to incorporate this material into the main text.
> We use faithfulness, but should also discuss completeness, minimality, and human-interpretability
Thanks for this suggestion. One reason we propose using the area under the faithfulness curve is because this captures both **minimality** and **faithfulness** at the same time: methods that are better at locating the most important causal dependencies with fewer components will have higher faithfulness at smaller circuit sizes, thus increasing the area under the curve. Our metrics thus reward methods that are good at locating minimal circuits.
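The intuition that the area under the faithfulness curve rewards minimality can be sketched as follows (a minimal trapezoidal-rule illustration with made-up faithfulness values, not our actual evaluation code):

```python
def faithfulness_auc(sizes, faithfulness):
    """Trapezoidal area under the faithfulness-vs-circuit-size curve.
    Methods that reach high faithfulness at small circuit sizes score
    higher, so the metric rewards minimal circuits as well as faithful ones."""
    area = 0.0
    for i in range(1, len(sizes)):
        area += (sizes[i] - sizes[i - 1]) * \
                (faithfulness[i] + faithfulness[i - 1]) / 2
    return area

sizes = [0.0, 0.1, 0.5, 1.0]       # fraction of components kept (toy values)
minimal = [0.0, 0.9, 0.95, 1.0]    # locates the key dependencies early
diffuse = [0.0, 0.2, 0.6, 1.0]     # needs most of the model to be faithful
# faithfulness_auc(sizes, minimal) ~ 0.90 > faithfulness_auc(sizes, diffuse) ~ 0.57
```

Both curves reach full faithfulness at the whole model, but the method that finds a faithful small circuit earns a larger area.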
Measuring **completeness** is generally not computationally feasible. Wang et al. (2023) performed a detailed manual analysis on a single task; this allowed them to discover clusters of components with similar functional roles. Their notion of completeness involves ablating subsets of a particular functional cluster. In automatic circuit discovery settings, we don't know what the ground-truth clusters should be, and would thus need to enumerate all possible combinations of components (an exponential-time search). Thus, most work in this field does not measure completeness. Marks et al. (2024) propose ablating only the circuit to measure completeness, but it is much easier to destroy performance than to recover it (i.e., low scores when ablating the circuit don't necessarily imply that we have found all important causal dependencies). As for **human-interpretability**, we see this as the role of featurization methods, rather than circuit localization methods; higher performance in the causal variable localization track implies that the method is better at locating human-interpretable concepts in models. We will discuss these ideas and challenges explicitly in the final version; thanks!
> "Previous work criticizes faithfulness on circuits and suggests considering the relative amount of pretraining compute needed to achieve comparable performance...?"
We're a bit confused by this feedback; could you clarify? Gao et al. doesn't criticize faithfulness, and they compare training compute for SAEs. Circuit discovery typically does not involve training. More broadly, typical language modeling benchmarks use model size (akin to our minimality metric) and model performance (akin to our faithfulness metric), but tend not to compare models based on compute.
> "In the causal variable localization track, the IOI task uses a different metric, Why this metric?"
Because the high-level causal model for the IOI task predicts the output logits of the model (rather than a specific behavior), we cannot use accuracy. Instead, we measure the squared error between the high-level model logits and the actual language model logits. We have clarified this in the new draft. Please see L424-429 for how we compute the logit $O$ of the high-level causal model using a linear model over binary variables.
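As an illustrative sketch of this metric (the function names and weights below are hypothetical; the actual linear model over binary variables is specified at L424-429 of the paper):

```python
def high_level_logit(binary_vars, weights, bias=0.0):
    """Logit O predicted by a high-level causal model that is linear
    in binary causal variables (weights here are illustrative)."""
    return bias + sum(w * v for w, v in zip(weights, binary_vars))

def logit_squared_error(lm_logit, binary_vars, weights, bias=0.0):
    """Evaluation metric for the IOI task: squared error between the
    language model's logit and the high-level model's predicted logit."""
    return (lm_logit - high_level_logit(binary_vars, weights, bias)) ** 2

# e.g., two binary causal variables with hypothetical weights:
err = logit_squared_error(2.5, [1, 0], weights=[3.0, -1.0])
# (2.5 - 3.0)^2 = 0.25
```

Accuracy is undefined here because the high-level model's target is a continuous logit rather than a discrete behavior, which is why this track uses squared error instead.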
> We don't discuss metrics used in IOI paper, sparse feature circuits paper, and scaling SAEs paper
Please see responses above for a discussion of the metrics proposed by these papers. We will add this discussion to the revision.
> “why the subgraph performance ratio locates components with a positive effect ...” and “the causal variable track, in interchange intervention overlaps with the notation of the circuit...”
Please see the response to RnEi for an intuitive summary of the circuit localization metrics, and a description of how we will revise notation and presentation for clarity.
> “notation is confusing… $\mathcal{C}$ in interchange intervention overlaps with the notation of the circuit in the previous section, while later in the formula interchange intervention uses $\mathcal{A}$, which represents the high-level causal model”
Good point; we have standardized notation across tracks, and added the table for notation seen here: https://imgur.com/a/m4DDNH2. To avoid overloading notation, we now use $\mathcal{H}$ for a high-level causal model, and $\mathcal{C}$ for a circuit. $\mathcal{C}$ refers to a part of the computation graph, whereas $\mathcal{H}$ is an abstraction that does not necessarily map cleanly to the computation graph. We also made the notation for interchange interventions more transparent, with $\leftarrow$ indicating intervention.
> What about the intersection of the causal variable localization and circuit localization tracks?
We agree completely! Our evaluations focus primarily on localization *or* featurization, but future work should consider their intersection. We felt that good metrics for each were necessary before one could consider evaluating the two jointly. Future work could consider metrics that are compatible with circuits built on sparse features, pre-located causal variables, or neuron clusters rather than individual neurons or submodules.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I'd like to discuss some points in the review
**Q1:** I agree with the argument on minimality and the AUC metric. I think it should be highlighted as the motivation of the proposed metric.
About completeness, it'd be more convincing if you could justify `it is much easier to destroy performance than to recover it ` by showing that the ablation test does not provide any statistically significant evidence that the MI method finds all the explanations.
About human-interpretability, I don't totally agree that it could be expressed by higher performance in the causal variable localization track. The faithfulness metric shows how the features align with causal variables in the **predefined hypothesis posed by humans**. There could be the case that the model implements another algorithm, which is still human-understandable yet different from the hypothesis. For example, although it's counterintuitive, LLMs could perform arithmetic by converting the numbers to base 2 and performing carry-the-one. That said, I believe that it's challenging to design such a test without human-in-the-loop, and it should be acknowledged as a current limitation.
**Q2:** Sorry for the wording; I didn't mean that Gao et al. criticize faithfulness itself, but rather the way it is computed, i.e., by the drop in the loss, with a similar argument as in your rebuttal of Q1. More particularly, in Sec. 4.1 of their paper, they suggest considering the relative amount of pretraining compute needed to train a language model of comparable downstream loss. Although it could be resource-intensive, I believe that for the goal of a benchmark, where we are not limited by any constraint and want a complete assessment of the method, a more reliable metric is more useful.
The responses to other questions addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: **Q1:** Agreed, we will emphasize this point!
On **completeness**: the issue with significance testing is that any meaningful test would require access to the ground-truth set of causally influential components. For realistic models, this ground-truth set cannot be tractably computed without significant manual effort. One could likely achieve the same low performance by ablating an entire circuit or just 75% of it, so it's hard to derive a tractable and automatic signal. Importantly, **our InterpBench model addresses this limitation**! Its AUROC metric captures completeness, minimality, and faithfulness. We will add a note about completeness for non-InterpBench models to the limitations.
On **human-interpretability**: we believe your comment is compatible with ours! Our faithfulness metric captures the extent to which the causal variable—not the entire high-level model—aligns with the representation. The high-level model may differ from the hypothesis, but it would still be possible to modify the model's behavior in a predictable way, and this will be reflected in the scores. We will follow your suggestion and discuss the implications of different possible high-level models in the limitations; thank you!
**Q2:** Thanks for following up. In circuit localization, no method involves training (except UGS). Most methods involve gradient attributions or inference-time interventions, which are non-parametric and use little compute. In the causal variable track, only some of the methods are parametric. Given this methodological diversity, any notion of "training compute vs. performance" would only be applicable to a small subset of the methods, and runtime metrics could be misleading if applied to non-parametric and parametric methods.
Given the above discussion, adding such a metric to the core benchmark would be problematic. That said, because circuit discovery methods often don't require training, it's often possible to estimate the number of forward and backward passes needed to discover a circuit:
| Method | Num. Passes |
|-----------|-----------|
| AP | $O(d)$ |
| AP-IG-inputs | $O(d \cdot k)$ |
| AP-IG-activations | $O(d \cdot k \cdot L)$ |
| ActP | $O(d \cdot N)$ |
| IFR | $O(d)$ |
| UGS | $O(d \cdot e)$ |
Where $d$ is the number of examples used to find the circuit, $k$ is the number of interpolation steps, usually a small number (we use 10), $L$ is the number of layers in the model, $N$ is the number of components in the model—a large number, usually at least in the tens or hundreds of thousands—and $e$ is the number of training epochs. Given the good faithfulness and low number of passes needed for AP-IG-inputs, we believe it strikes the best balance between runtime/compute and performance. The very high number of passes needed for ActP often makes it not worthwhile for larger models. IFR performs relatively poorly but requires few passes, whereas UGS is usually somewhere in the middle w.r.t. runtime and performance. While helpful for our baselines, note that the number of passes isn't necessarily the main factor in runtime or compute for all circuit discovery methods, so we would hesitate to formalize this as a general metric.
For causal variable localization, DAS, DBM, and SAE (the parametric methods) have similar runtimes; PCA (a non-parametric but data-driven method) is faster, but tends to perform poorly. None (Full Vector) requires no training, nor any forward/backward passes before evaluation.
We hope this addresses your concerns. Please let us know if you have remaining questions or feedback!
---
Summary: The authors proposed a benchmark dataset for Mechanistic Interpretability (MI). The dataset consists of four tasks: 1) Indirect Object Identification (IOI), 2) Arithmetic with two digits, 3) Multiple-Choice Question Answering (MCQA), and 4) AI2 Reasoning Challenge (ARC). The goal of the benchmark is to test for circuit localization and causal variable localization. The authors include metrics for each of the goals and test the performance of several baselines on each task and each goal for different language models.
Claims And Evidence: The submission doesn’t make many theoretical or empirical claims, it does make observations about the current state of MI which is supported by their experiments.
Methods And Evaluation Criteria: It is hard to tell whether the evaluation criteria (the metrics they use to evaluate the baselines) are good or not; I think the presentation of the paper is far from ideal, at least for someone who does not already know most of the MI literature.
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, I checked the validity of the experimental design and it seems serious to me. They tried several baselines and several models on all of the tasks they proposed.
Supplementary Material: I skimmed through all the supplementary material, with little attention to detail.
Relation To Broader Scientific Literature: I find the paper to be very valuable in relation to the literature. Mechanistic interpretability is very popular now and having a common benchmark is definitely important.
Essential References Not Discussed: Not that I know of.
Other Strengths And Weaknesses: Strengths:
- As mentioned above, I think there is a lot of value in establishing a common benchmark for MI given the attention it is currently getting from the research community.
- I appreciate the environment the authors are trying to build around the benchmark. For example, including it in Hugging Face, or even providing compute to test the user models.
- I find valuable both the diversity of the tasks/tracks and the possibility of adding more tasks as the research and models evolve.
- I can see the authors put some thought into the experiments and the baselines they tested.
Weaknesses:
- My biggest problem with the paper is its readability. For theoretically oriented people, or even people who are not already deep in MI and NLP research, it is very difficult to read. Let me give a couple of examples.
- In 3.1, when they describe the circuit metrics, they mention that faithfulness is what is commonly used. Then they state the goals of faithfulness and argue that these goals need to be separated into two metrics, one for each goal, with notation I already find a bit confusing (F_{=}, F_{+}); then, with no justification, they propose a proxy to estimate these metrics. I was lost there.
- d_{model} in line 237 is not defined, which makes the next line even more difficult to parse. The whole sentence reads:
“Including one neuron of d_{model} in submodule u can be conceptualized as including all outgoing edges from u to 1/d_{model} of the degree they would have been compared to including all neurons in u.”
- The “faithfulness metrics” paragraph in Section 4 suffers from the same issue, although to a lesser extent, I would say.
Other Comments Or Suggestions: See weaknesses.
Would be very nice to include graphs for the causal relations described in text in 4.2. In the end, if I understand correctly, these causal graphs are fixed for the tasks at hand.
Questions For Authors: If the authors can describe in an actionable way how they will address the readability of the paper I am happy to increase my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for appreciating the value of our paper’s contribution and the validity of our experimental designs. We note your points about readability and presentation, and appreciate your willingness to reconsider your score on the basis of correcting them. We will make a number of changes to improve readability and presentation. If accepted, we will use the additional page given to us to incorporate the following material into the main text and increase the readability of the paper.
Plans to improve readability and presentation
===
**”Sec 3.1….argue they need to separate these goals into two metrics for each of the goals, the notation which I already find a bit confusing (F_{=}, F_{+}), and then with no justification they propose a proxy to estimate these metrics. I was lost there”**
We will add some more intuition behind the metrics in the final version of the paper. We realized that the $F_+$ and $F_=$ names were confusing because they are named after the ideal values they should take, rather than what they actually measure. To be more transparent, we have renamed $F_+$ to the **circuit performance ratio** (CPR), and $F_=$ to the **circuit-model distance** (CMD). In short, CPR measures how the circuit performs on the task metric $m$ as a ratio w.r.t. the full model’s performance, while CMD measures how closely the circuit replicates the full model’s input-output behavior. One should aim to maximize CPR if the goal is to find the best-performing circuit; one should aim to minimize CMD if the goal is to find a circuit that concisely implements the full model's behavior.
Both of these metrics are defined using integrals. It is impossible to compute these integrals in exact form without infinite samples. The trapezoidal rule (i.e., Riemann sum) is an established way of approximating integrals given a few values of the function—this is the proxy we allude to. In the revision, we will clarify by not using different names for the exact and empirical definitions. We will instead define CPR and CMD in exact form, and then simply say how we measure them in practice.
**”Would be very nice to include graphs for the causal relations described in text in 4.2. In the end, if I understand correctly, these causal graphs are fixed for the tasks at hand.”**
Yes, the causal graphs are fixed for the task at hand. We added a more comprehensive description of the causal variable localization track that includes causal models for each task and figures that illustrate the concepts and terminology. For example: https://imgur.com/a/6mgZfj2.
**"$d_{model}$ in line 237 is not defined, which makes the next line even more difficult to parse"**
Thanks for catching this! We will add the following clarification: "...where $d_{\text{model}}$ is the model's hidden size, or the number of neurons in the activation vector output of each layer."
Beyond these specific points you raised, we have standardized notation across tracks, and added a table of notation, which you may view here: https://imgur.com/a/m4DDNH2. To avoid overloading notation, we have now modified the text so that $\mathcal{H}$ is a high-level causal model and $\mathcal{C}$ is a circuit. $\mathcal{C}$ refers to a part of the computation graph, whereas $\mathcal{H}$ is an abstraction that does not necessarily map cleanly to the computation graph. We also made the notation for interchange interventions more transparent with the $\leftarrow$ indicating intervention.
Other comments
===
> "Submission doesn't make any theoretical or empirical claims"
Please see our response to reviewer bp8X on scientific novelty.
> "It is hard to tell whether the evaluation criteria (the metrics they use to evaluate the baselines) are good or not"
A sanity check for our evaluation criteria is that they recover known findings. In the circuit localization track, attribution patching with integrated gradients is significantly better than attribution patching, in line with prior work [1,2]. In the causal variable localization track, supervised methods outperform unsupervised methods, as expected, and as found in very recent work [3].
References
===
[1] Hanna et al. (2024). "Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms." COLM. https://arxiv.org/abs/2403.17806
[2] Marks et al. (2025). "Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models." ICLR. https://arxiv.org/abs/2403.19647
[3] Wu et al. (2025). "AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders." arXiv. https://arxiv.org/abs/2501.17148
---
Summary: Mechanistic Interpretability (MI) research has made significant strides, but it has lacked a consistent way of comparing methods. In this paper, the authors introduce Mechanistic Interpretability Benchmark (MIB), which splits the evaluation into two main tracks: Circuit Localization and Causal Variable Localization. Across four tasks, MIB allows researchers to test how well different MI methods can (1) identify the causal circuit responsible for a given task behavior, and (2) localize specific conceptual variables within a model’s hidden representations. Using MIB, the authors show that attribution patching and mask-optimization approaches perform best in circuit localization, whereas supervised methods outperform unsupervised ones in aligning causal variables.
Claims And Evidence: **Claim 1**: MIB provides a consistent evaluation framework, enabling direct comparison among MI methods.
- Evidence: The authors introduce a publicly hosted leaderboard for each of the two tracks. Researchers can submit their methods to get systematic performance scores, facilitating transparent head-to-head comparisons.
--> Convincing and clear
**Claim 2**: In the circuit localization track, attribution-based patching and mask-optimization methods are most effective.
- Evidence: Using the authors’ proposed faithfulness metrics F+ and F=, EAP-IG-inputs consistently achieves top results across multiple tasks (Tables 2 and 13).
--> Convincing and clear
**Claim 3**: In the causal variable localization track, supervised methods outperform unsupervised methods.
- Evidence: DAS and DBM achieve higher interchange intervention accuracy (faithfulness) across tasks such as MCQA and ARC (Table 3).
--> Convincing and clear
Methods And Evaluation Criteria: - The authors isolate two main dimensions of mechanistic interpretability: localizing all edges/nodes that implement a task and localizing a single conceptual variable.
- The faithfulness metrics are well-designed in each track.
Theoretical Claims: There is no rigid theoretical claim.
Experimental Designs Or Analyses: - The authors systematically benchmark multiple known MI methods across four tasks.
Supplementary Material: I have reviewed the detailed explanation on benchmark.
Relation To Broader Scientific Literature: MIB connects to ongoing attempts to standardize mechanistic interpretability evaluations.
Essential References Not Discussed: No critical reference omissions are apparent.
Other Strengths And Weaknesses: **Strengths**:
- Thorough coverage of existing methods and LLM models demonstrates both broad scope and significant differences in method performance.
**Weaknesses**:
- The task selection rationale could be more comprehensively justified. Why exactly these four, and do they ensure coverage for all important MI phenomena?
Other Comments Or Suggestions: Please refer to questions.
Questions For Authors: 1. Beyond “representative tasks,” what’s the deeper motivation for picking these four tasks specifically, and how can we be sure this set is sufficient to measure general MI progress?
2. Could you clarify how the weighted edge count integrates into the final scoring? Is it directly used in the F+ and F= metrics, or is it just an informative reference for circuit sparsity?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your positive assessment of the thoroughness of our experiments, the convincingness and clarity of our claims, and the design of our metrics!
> Regarding task selection rationale and task coverage:
We aimed to strike a balance between having tasks (1) of diverse difficulties and requiring diverse skills that capture the strengths and weaknesses of different models (L91-92), (2) that have and have not been studied in prior mechanistic interpretability work, and (3) that good open-source language models are capable of performing (Table 1; L131-133). To point (1), the four tasks we selected test linguistic/semantic understanding of text (IOI), mathematical reasoning (Arithmetic), scientific knowledge (ARC), copying from context (MCQA), and the ability to answer formatted multiple-choice questions (MCQA and ARC). To point (2), we included tasks that are more widely studied, such as IOI and Arithmetic, to ground the benchmark in existing and ongoing research and to foster buy-in from the mechanistic interpretability community. We purposefully selected ARC and MCQA to push the research community in a particular direction: towards tasks that are represented on real-world leaderboards. Tasks like IOI have dominated the MI literature (with dozens of papers evaluating on this task), but others such as MCQA have only been studied in 2 papers (neither of which analyzed the ARC dataset). ARC in particular is significantly more realistic than the tasks that have been studied to date using circuit discovery methods.
We intend for the benchmark to be a living resource (L433-438) and plan to expand the task list with community buy-in. We did not mean to imply (and thus do not state in the paper) that the four selected tasks are sufficient for understanding *all* LM behaviors. “[E]nsur[ing] coverage for all important MI phenomena” as requested would involve a task list capturing nearly every textual output an LM can produce! Regarding how we can know our task set is “sufficient to measure general MI progress”: MI progress has up until this point been driven by papers running analysis most commonly on one or two datasets, and our task set has already allowed us to draw meaningful comparisons not established in prior work (see response to bp8X for more on this).
> How do weighted edge counts factor into the $F_+$ and $F_=$ metrics?
$F_+$ and $F_=$ (which have been renamed to circuit performance ratio (CPR) and circuit-model distance (CMD), respectively; see response to RnEi) can be conceptualized as integrals of $f$ over the weighted edge counts. If we plot faithfulness $f$ against circuit size (measured via weighted edge count), then $F_+$ (CPR) is simply the area under this curve. The weighted edge count is a way to measure circuit size in a way that enables direct comparisons across edge-level and node-level circuits; it is basically (proportion of nodes in submodule $u$) times (number of edges outgoing from submodule $u$), summed over all $u$.
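A minimal sketch of this aggregation, where the submodule names, neuron counts, and edge counts are all hypothetical and chosen only to make the arithmetic transparent:

```python
# Weighted edge count: for each submodule u, (proportion of u's neurons
# included in the circuit) * (number of edges outgoing from u), summed
# over all submodules. All numbers below are invented for illustration.
submodules = {
    #  name:          (neurons_included, neurons_total, outgoing_edges)
    "attn_head_0.5":  (768, 768, 12),   # fully included head
    "mlp_3":          (256, 3072, 12),  # 1/12 of the MLP neurons included
    "attn_head_7.2":  (0, 768, 12),     # excluded submodule
}

def weighted_edge_count(subs):
    return sum(included / total * edges
               for included, total, edges in subs.values())

print(weighted_edge_count(submodules))  # 12 + 1 + 0 = 13.0
```

A node-level circuit (every submodule fully in or out) and an edge-level circuit thus land on the same scale, which is what enables the direct size comparison.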
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My main concerns were addressed in the rebuttal. I also read the other reviewers' comments and the corresponding responses. While I acknowledge the novelty in proposing an evaluation benchmark for Mechanistic Interpretability, as the authors aimed to do, I did not find additional strengths that would warrant a change in my score. Therefore, I will maintain my current score.
---
Summary: The authors introduce a benchmark designed to standardize evaluations of mechanistic interpretability (MI) methods. This benchmark offers consistent evaluation across standardized models, metrics, and intervention datasets, with two public leaderboards tracking method performance. The benchmark is divided into two specialized tracks: one evaluating circuit localization methods (identifying computational pathways within networks) and one assessing causal variable localization methods (identifying specific features representing causal variables in hidden representations). They evaluate several SOTA MI methods and discover that attribution and mask optimization techniques perform best for circuit localization, while supervised approaches outperform unsupervised methods for causal variable localization. The benchmark provides a systematic framework for comparing different MI approaches, offering valuable guidance for future research directions in mechanistic interpretability.
Claims And Evidence: The authors claim to introduce a new standardized benchmark for the evaluation of MI methods and also introduce 2 leaderboards open to public submissions. The claims are clear and supported by the description in the paper.
Methods And Evaluation Criteria: The authors evaluate several existing MI methodologies on their proposed benchmark, reporting results and analyzing the methods' performance across different tasks. The experiments and evaluations for such methodologies are thoroughly described and clearly reported in sections 3.2 and 4.1 (Tables 2 and 3). In alignment with their proposal of a benchmark for evaluating mechanistic interpretability methodologies, the authors provide comprehensive evaluations of existing MI techniques. Their thorough analysis of these comparative results further strengthens the benchmark's utility for the community.
Theoretical Claims: The authors introduce 2 faithfulness metrics in section 3.1, one circuit size metric in section 3.1, and an interchange intervention accuracy metric in section 4. The mathematical formulations are based on intuitive, simple concepts and are correct.
Experimental Designs Or Analyses: The authors evaluate their methodologies on a set of existing MI methods. Specifically, they implement and analyze the results of Circuit Localization Methods in Sections 3.2 and 3.3. The implementation details are well explained and clear.
They implement and analyze the results of Causal Variable Localization Methods in Sections 4.1 and 4.2. They clearly describe implementation details and report results for a wide range of methodologies.
Supplementary Material: I did not check the supplementary. The experiments in the main paper are sufficient for the evaluation.
Relation To Broader Scientific Literature: The paper introduces a new benchmark for standardizing the evaluation of mechanistic interpretability (MI) methodologies. The authors effectively consolidate existing benchmarks and metrics (with some modifications) to create a standardized evaluation framework. They provide publicly accessible leaderboards and conduct comprehensive evaluations of current methods against their benchmark.
This contribution offers value to the research community by addressing the fragmentation in evaluation approaches. While the benchmark doesn't introduce novel datasets, its claimed merit lies in establishing consistent evaluation guidelines and maintaining accessible leaderboards. The benchmark's ultimate impact will largely depend on the usability and maintenance of these leaderboards, but it represents a potentially valuable resource for the field despite limited technical novelty.
Essential References Not Discussed: Essential references are discussed.
Other Strengths And Weaknesses: As previously noted, the benchmark provides a valuable foundation for the standardized comparison of mechanistic interpretability methodologies. However, its substantial reliance on existing datasets and evaluation metrics raises questions about whether the technical contribution and novelty meet the threshold for publication at this scientific conference despite its clear practical utility. In short, it is a valuable empirical contribution but lacks scientific novelty. Thus, I am inclined towards a weak accept.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your detailed review and for acknowledging the value of our benchmark’s practical utility and contribution to the literature! We’d like to clarify our scientific contributions; they include 1) new metrics, and 2) new empirical results (L432-435 Col.2), facilitated crucially by 3) our standardization of metrics, datasets, and models.
> Regarding reliance on existing datasets:
While we included tasks that are more widely studied, such as IOI and Arithmetic, to ground the benchmark in existing research, we purposefully selected our tasks to push the research community towards tasks that are represented on real-world leaderboards (such as MCQA and ARC) with varying formats and difficulty levels (L91-92). Crucially, creating new datasets isn’t necessary when existing datasets remain un- or under-studied. Certain tasks such as IOI have dominated the MI literature (with dozens of papers evaluating on this task), but others such as MCQA have only been studied in 2 papers (neither of which analyzed the ARC dataset). ARC in particular is significantly more realistic than the tasks that have been studied to date using circuit discovery methods. Our main contributions were not the datasets, but rather the metrics, the curation of datasets, and the resulting systematic analyses.
Also, the vast majority of MI papers only run analysis on one dataset and/or one model. In contrast, we conduct systematic analyses across five models, four datasets, and several methods for each track. We are also the first to run experiments on ARC. We expand the synthetic MCQA dataset from prior work from 105 to 260 instances, and the Arithmetic dataset to 150k instances (75k used in our experiments), compared to 1200 in prior work.
Finally, we curate counterfactuals to isolate specific information, which may be useful for other mechanistic studies outside of MIB, as the choice of counterfactual is central to making causal claims [1].
We elaborate more on choice of tasks in our response to Pv8S.
> Regarding reliance on existing metrics:
For the circuit localization track, we do not use existing evaluation metrics; in fact, one of our main contributions is proposing new metrics (see first 2 PPs of Section 3.1)! Faithfulness metrics in prior work conflated important model components (both helpful and harmful) with components driving better model behavior. Additionally, previous studies typically measure the quality of a single circuit using a single faithfulness value $f$, rather than measuring the quality of a method via an aggregation over $f$ values (as we do). The weighted edge count is also novel: before this, it was not clear how to compare the size of node-based and edge-based circuits.
> Regarding scientific novelty:
Our new metrics and large-scale evaluations allow us to support multiple novel empirical claims. Previously, no single paper was able to compare all of the circuit discovery methods that we did in a systematic way (see response to comment on existing metrics). Specifically, we find that (a) edge-based circuits are better than node-based circuits (L327-328), (b) ablations from counterfactual inputs are best (L320-321), and (c) DAS (and non-basis-aligned representations more broadly) outperforms other featurizers (L439). In summary, our proposed metrics help us recover known findings (like that integrated gradients improves attribution quality), challenge others (that SAEs are not as effective as DAS at featurization), and enable new kinds of direct comparisons across methods that were previously difficult to operationalize (L432-434). We will clarify these by adding a bulleted paragraph in the introduction summarizing our contributions, and emphasizing in the paper when we obtain novel findings.
The causal variable localization track includes multiple causal variables that have not been investigated in previous research. The multiple choice pointer variable was not analyzed in previous work such as [2]; while there is some evidence of a carry-the-one variable existing, e.g., [3] or [4], our baselines were unable to surpass the random performance of 50%, indicating that linear methods may not be enough to locate this variable. Finally, while the token and position variables from the IOI task were proposed in [5], we are the first to conduct experiments identifying separable sets of features for each variable. In sum, while we drew from existing datasets and tasks, our analyses have significant novelty.
References:
===
[1] Kusner et al. (2017). https://arxiv.org/abs/1703.06856
[2] Wiegreffe et al. (2025). https://arxiv.org/abs/2407.15018
[3] Kantamneni & Tegmark (2025). https://arxiv.org/pdf/2502.00873
[4] Quirke et al. (2025). https://arxiv.org/pdf/2402.02619
[5] Wang et al. (2023). https://arxiv.org/abs/2211.00593
Mind the Gap: a Spectral Analysis of Rank Collapse and Signal Propagation in Attention Layers
Paper Decision: Accept (poster)
---
Summary: In this paper, the authors discuss the phenomenon of rank collapse in width (i.e., for asymptotically large context length) in transformers at initialization. The authors point at the spectral gap in the attention matrix as the main cause of such collapse, and they devise a simple solution to remove the gap and tame the rank collapse.
Claims And Evidence: The theorems are stated clearly, and experiments are carried out to validate them. However, I feel that the authors are quite a bit overselling their findings, or at least they are not clearly stating the limitations of their analysis. The main limitations that I find a bit difficult to justify are:
1. The input matrix should be orthonormal, which is quite unrealistic in practical scenarios, since usually the vocabulary size is some 10x the embedding dimension.
2. That the ratio between context length and embedding dimension ($\gamma$ in the paper) is a constant smaller than 1, again quite unrealistic in practice (Llama3, e.g., allows context lengths that are more than 10x the embedding dimension).
3. In the analysis for more than one layer, the authors remove the dependence of attention matrices from the input.
I understand that in theory papers such assumptions are usually necessary. However, I also think that such limitations should be clearly stated and discussed in the paper, proposing possible solutions if possible, or stating why it would be hard to overcome them.
Methods And Evaluation Criteria: The proof techniques and experimental setting are sensible.
Theoretical Claims: I checked the proofs at high level and I could not find any major issue that could invalidate the theoretical results presented in the paper.
Experimental Designs Or Analyses: I think that the second main limitation of the paper is the experimental section, both regarding the experiments presented in the paper, and additional experiments that I think are important but missing:
1. Regarding the experiments already present, details are missing about the setups used. For example, it is not clear what are the dimensions of the matrices involved in Fig. 3, nor what is the value of $\gamma$. Without such information, it is hard to evaluate the prominence of the rank collapse, also because the stable rank seems to be already very small from the start (approx. 1.25) compared to what I expect the size of the involved matrices to be.
2. I believe that the following very important experiments are missing, without which it is hard to understand the impact of the work:
a) Does your proposed solution to the rank collapse problem also work in real transformer models? That is, how would the curves of Figure 3 behave if your solution of removing the rank-one perturbation is applied?
b) Does the rank collapse in width also occur when many consecutive layers are used? That is, can you show an equivalent of Figure 5 with the stable rank as the y-axis?
c) Does the rank collapse and/or the gradient explosion also occur when the attention matrices in consecutive layers are kept as layer dependent? That is, in the case of many successive layers, if the usual formula (1) is used for the attention matrices instead of considering them as independent of the input, does the rank collapse in width still occur?
I think these proposed experiments should be easy and quick to implement in your experimental setup, and they would greatly enhance the understanding of the scope of your observed phenomenon and of your proposed solution to it.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper falls into the line of work investigating rank collapse in transformers. Previous literature focused on rank collapse in depth, i.e., as the number of consecutive attention layers go to infinity. This paper, instead, investigates rank collapse in width.
Essential References Not Discussed: Other works on rank collapse in depth could be cited, such as:
- Feng et al., "Rank diminishing in deep neural networks," 2022.
- Geshkovski et al., "A mathematical perspective on transformers", 2023.
- Geshkovski et al., "The emergence of clusters in self-attention dynamics", 2023.
- Geshkovski et al., "Dynamic metastability in the self-attention model", 2024.
Other Strengths And Weaknesses: I believe that the work is a potentially interesting new direction for the field, if the authors are able to better justify their analysis. In particular, I would like the authors to discuss the theoretical and experimental limitations that I outlined in the sections "Claims and Evidence" and "Experimental Designs Or Analyses" above.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see sections "Claims And Evidence" and "Experimental Designs Or Analyses" above. I will be happy to revise my score if my concerns are addressed satisfactorily. Again, my main concern is that it is not clear from the provided experiments if the rank collapse in width is a serious issue in practical scenarios, and if the proposed solution helps in that regard.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough comments and their appreciation of our contribution. We address the points in the same order. For additional figures (indicated by Roman numerals), please see https://anonymous.4open.science/r/spectral_analysis_transformers-C633/figures_rebuttal.pdf.
1. Thank you for bringing this up. We have addressed this in detail in our response to reviewer uxct, providing extensive illustrations showing that our findings hold in practice, even without these assumptions, which are only needed for our proofs. We will make sure to clarify these theoretical limitations in our revised version.
2. Please see the response regarding $\gamma$ in the answer to reviewer uxct. A precise quantitative characterisation of the spectral bulk without these assumptions is extremely interesting and we will explore it, but this is a multi-year programme of study and we believe it is best that the results we have so far be presented to the machine learning community sooner than later. Regarding Llama3, it is interesting to consider the case of a decoder-like architecture, for which an extra layer of complexity comes in due to the causal mask in the attention, which thus falls outside the scope of this paper. Thus, we kindly refer you to our paragraph GPT-architecture in our answer to uxct.
3. Thank you for highlighting that this was not clearly explained. The reason we do not include input dependence in later layers is that the resulting inputs do not satisfy the orthogonality conditions and, as mentioned above, a rigorous analysis in the non-orthogonality setting is beyond the scope of what can be theoretically justified at this stage. We will make sure to include a clear discussion on this limitation of our work.
---
1. Apologies for these parameters not having been clearly stated. The parameters are now explicitly stated in the revised manuscript: $d=768$, with inputs given by sentences from our own abstract that are stacked together before being processed with a pre-trained tokeniser that comes with Hugging Face's checkpoints of the corresponding model. All experiments are repeated $5$ times, with the average result shown as a solid line and individual runs displayed as faded lines.
2. We sincerely thank the reviewer for these interesting suggestions, which we believe have enhanced our manuscript. We have undertaken these experiments and are happy to share the results with you.
(a) We show in Fig. V how rank collapse in width is affected when modifying the information propagation in RoBERTa according to our proposed fix. RoBERTa is such that $d=768$. Whilst this small improvement might seem marginal in width (and the discrepancy with our theoretical guarantees can be explained by the extensive complexity of the architecture beyond our theoretical framework), its impact on the rank collapse in depth is remarkable, see Fig. VI.
(b) The equivalent of Fig. 5 in our draft with the y-axis being the stable rank for consecutive layers following equation (1) is exactly Fig. 4b in our submission. The label '$\mathbf{A}(\mathbf{X})$' indicates that information flows as in equation (1), whereas '$\mathbf{A}$' (e.g. in Fig. 5) is used when the attention matrix is taken as an i.i.d. Markov matrix, independent of the input. We realise this distinction was only clearly stated in Appendix A.4 and have moved this up to the main body in the revised version. Nevertheless, rank collapse occurs across consecutive layers, as shown in Fig. 4b, and the only modules that address it—among LayerNorm and skip connections—do so by removing the spectral gap. In case the reviewer was interested in seeing the same plot with i.i.d. Markov matrices instead, we upload an additional figure for which similar comments can be made; see Fig. VII.
(c) Fig. 4b in our current submission seems to be what the reviewer is asking for. Regarding the gradients in deeper key-query attention layers, we provide an additional plot in Fig. VIII that reaffirms our analysis and gives us some empirical insights into the relationship between rank collapse in width, rank collapse in depth, and gradient explosion.
---
+ Thank you for highlighting these manuscripts, which we had not originally cited. We will include citations to them in our revised manuscript.
We have taken into consideration your overall impression that the limitations of our work were not clearly stated enough and have modified our draft accordingly. Should you have any further concerns that might affect your appreciation of our work, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the careful response and the additional experiments provided. I think I miswrote some of my requests for experiments. In particular, I would like to ask if the authors would be able to provide experimental results for rank collapse in width after several layers. For example, a plot where the x-axis is $T$ and the y-axis is the stable rank of $\Sigma_{\ell}$ for various values of $\ell$. Please tell me if my request makes sense or if I have misunderstood something.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our responses, which included both clarifications and additional experiments—and, most importantly, for engaging with us. We now provide the requested additional experiment, which can be found as Figures X and XI from https://anonymous.4open.science/r/spectral_analysis_transformers-C633/figures_rebuttal_v2.pdf. As before, figures from this document will be referred to with roman numerals while figures from the original submission are referred to using standard numerals.
In this experiment, the input has a stable rank exactly equal to $T$. After a single application of the softmax layer ($\ell=1$), the stable rank drops significantly, quickly approaching $1$ as $T$ increases. Removing the spectral outlier successfully mitigates rank collapse in width (Fig. XI), consistent with our theory. As pointed out by the reviewer, interesting insights can be drawn from deeper layers. After just one application of softmax, the stable rank has already collapsed and is so low that it leaves little room for further degradation with increasing width when $\ell>1$ (Fig. X). Moreover, while deeper layers introduce more complex behaviour that our theory is not able to predict, we observe that the proposed fix, which is provably effective for $\ell=1$, retains some degree of effectiveness when $\ell>1$ (Fig. XI). We thank the reviewer for suggesting this insightful additional experiment and remain happy to address any further questions.
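A miniature version of this experiment is easy to reproduce. The following NumPy sketch (our own illustration with arbitrary sizes, modelling the attention matrix as an i.i.d. Markov matrix, i.e. the '$\mathbf{A}$' setting above, rather than the exact setup of Figs. X and XI) tracks the stable rank of the one-layer covariance $\Sigma_1 = \mathbf{A}\mathbf{A}^\top$ for orthonormal input, with and without the spectral outlier removed:

```python
import numpy as np

def stable_rank(M):
    """||M||_F^2 / ||M||_2^2, a smooth proxy for the rank."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** 2).sum() / s[0] ** 2)

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)  # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
plain, fixed = {}, {}
for T in (32, 128, 512):
    # i.i.d. Markov attention matrix: softmax of i.i.d. Gaussian logits
    A = softmax_rows(rng.standard_normal((T, T)))
    # Orthonormal input X0 = I_T, so the covariance after one layer is A A^T
    plain[T] = stable_rank(A @ A.T)
    # Proposed fix: remove the spectral outlier carried by the Perron direction
    A_perp = A - np.ones((T, T)) / T
    fixed[T] = stable_rank(A_perp @ A_perp.T)

print(plain)
print(fixed)
```

With the outlier in place, the stable rank of the covariance collapses towards $1$ as $T$ grows; after subtracting $\frac{1}{T}\mathbf{1}_{T\times T}$ it instead grows roughly linearly in $T$.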
We also take this opportunity to summarize our rebuttal:
- Regarding the reviewer's concerns about the validity of the assumptions used in our proofs, we clarified that our work should be seen as a proof of concept, with broader applicability beyond the current framework. Specifically, we:
- Conducted an extensive ablation study to show that our findings on the spectral gap within the attention matrix (and the subsequent rank collapse in width) hold even without orthogonality constraints on the inputs, both for synthetic (Fig. IX) and real-world data (Fig. I)
- Provided an ablation study on $\gamma$, defined as the ratio between the number of tokens $T$ and the embedding dimension $d$, to show this assumption is artificial (without it, the inputs cannot be assumed to be isometric) and is not needed in practice, both on synthetic data (Fig. IX) and real-world data (Fig. III)
- In response to concerns about the generalisability of our findings beyond attention layers, we:
- Demonstrated that the spectral gap induced by the softmax attention layer cannot be removed by additional modules like skip connections or LayerNorm (Fig. 4), nor by any other component typically found in BERT encoders, as we explicitly show the occurrence of rank collapse within real-world transformers (Fig. 3).
- Showed that the simple fix we propose mitigates rank collapse in width for real-world transformers on real-world data (Fig. V) and significantly alleviates rank collapse in depth (Fig. VI).
- In response to the reviewers' questions on:
- GPT architectures: We showed that a similar spectral gap emerges in GPT/decoder-like architectures (Fig. IV), but occurs on the real line rather than the complex plane due to the causal mask. While our theory does not currently cover this case, it could be extended to accommodate it in a follow-up work, highlighting the utility of our proof of concept.
- The connection to existing literature: We clarified why the proposed fixes for rank collapse in depth in [1] and [2] are not suitable for addressing rank collapse in width, and connected our theoretical insights to the recent practical works [3] and [4].
We tried our best to answer all the reviewer’s questions and concerns but would be more than happy to answer any further points. We hope the reviewers will reconsider their scores in light of these responses, and we thank them once again for their valuable feedback.
---
[1] Dong, Y. et al. (2021). Attention is not all you need: Pure attention loses rank doubly exponentially with depth.
[2] Noci, L. et al. (2022). Signal propagation in transformers: Theoretical perspectives and the role of rank collapse.
[3] Ye, T. et al. (2024). Differential transformer.
[4] Ali, A. et al. (2023). Centered self-attention layers.
---
Summary: The authors study randomly initialized attention layers, examining signal propagation and exploding/vanishing gradient issues from a rank perspective. Notably, using random matrix theory tools, they identify a new rank collapse that occurs in width, i.e., in the context size. Via a careful theoretical analysis, they show how it relates to the exploding gradient problem and propose a practical fix for rank collapse in width. They conduct experiments with synthetic inputs on toy and real attention layers, e.g., from BERT, to validate their theoretical findings.
Claims And Evidence: The theoretical claims are supported by clear and detailed proofs, and the authors also provide experimental validation of their theory with synthetic inputs. However, the assumptions (orthogonal inputs) seem unrealistic, which, in my opinion, limits the scope of the theoretical findings.
Methods And Evaluation Criteria: The authors provide theoretical results to better understand rank collapse in width. They propose a practical fix and conduct synthetic experiments to validate their findings. Hence, the method and evaluation criteria make sense for the problem at hand, although the experiments should illustrate the benefits of the approach to improve signal propagation and/or mitigate exploding gradients.
Theoretical Claims: The theoretical findings are supported by detailed and clear proofs.
Experimental Designs Or Analyses: I checked the soundness of the experiments. I believe that, while most of the contributions are theoretical, the experiments are too lightweight, with a focus on small-scale synthetic data and attention-only models. I think the submission would benefit from experiments in more practical settings and/or better empirical motivation for studying the rank in width.
Supplementary Material: I read the proofs and the experimental part in the appendix and reviewed the code provided with an anonymous link.
Relation To Broader Scientific Literature: I find that related work and prior works are well introduced and compared. The submission's contributions are interesting and are part of a growing interest in the literature on the theoretical understanding of transformers from a rank perspective.
Essential References Not Discussed: To the best of my knowledge, there were no essential references not discussed in the current submission.
A potentially relevant paper is [1], which studies the limitations of transformers in time series forecasting by looking at the loss landscape, rank, and entropy collapse. The authors proposed SAMformer, which mitigates the sharpness of the loss, leading to a significant improvement over SOTA methods; in contrast, they showed that entropy collapse was benign. In the paper, they showcase block-diagonal attention matrices with SAMformer (i.e., high rank), while the vanilla Transformer and other modifications suffer from rank collapse (Figs. 6 and 12). The connection with the current submission is that the models considered are single-layer transformers; hence, the rank collapse does not come from the depth. In addition, channel-wise attention means that the number of tokens equals the number of features (multivariate time series input); hence, the context length is high (up to 862), which could qualify as rank collapse in "width". [1] could, for instance, be used as motivation for the current submission to study the rank in width (see weakness part) or offer other perspectives for the current study.
*References*
[1] Ilbert et al., SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
Other Strengths And Weaknesses: **Strengths**
- The paper is clear and well written
- Notations and technical background are well introduced
- I appreciate that the authors explain the context and potential impacts of each theoretical finding
- The analysis is well conducted with elegant and well-explained proofs
**Weaknesses**
I list below what I think are weaknesses, but I would be happy to be corrected if I misunderstood some important aspects of the authors' contributions.
- The motivation of the paper is not very clear to me. The authors focus on rank collapse at initialization in width but do not provide empirical validation that this hinders performance, signal propagation, or training stability in practical scenarios. Could the author elaborate on that?
- Most of the theoretical findings assume that the inputs are orthogonal, and in particular, this enables the authors to show that the attention matrix is iid Markov. Since, to the best of my knowledge, inputs are not orthogonal in practice (this is data-dependent), from my understanding, the attention matrices will not be iid Markov anymore, even at initialization, which hinders most of the author's findings. Could the authors elaborate on that? An idea to improve the submission would be to add a discussion akin to Section A.5 of [2], where the authors theoretically and experimentally motivate the assumption 3.1 of a uniform attention matrix.
- I believe the current setting is too oversimplified, which impacts the benefits of the derived insights for more practical settings. An idea to improve the submission would be to incorporate the other transformer's components in the analysis (like in [1], [2]) or at least discuss/ experiment on how they impact (or not) the current analysis.
- The authors use their theoretical findings to derive a practical fix, but they do not test it in practical settings, since the experiments, even for BERT, consider isotropic inputs and consist of observing the rank rather than the impact on signal propagation or training stability.
- The authors mention that their theoretical analysis resembles practical works like [3, 4]. Since the authors also provide a practical fix, the novelty should be discussed in more detail, and I believe they should compare their fix with the ones in [3, 4].
Overall, the idea is interesting, and the theory is elegant with its use of random matrix tools. However, I do not find the problem sufficiently motivated, nor the analysis convincing enough to solve rank collapse in width, given the strong assumptions needed on the input. This is the reason for my current score, but I remain open to modifying my score, provided the authors clarify the points mentioned in the weaknesses section.
*References*
[1] Dong et al. Attention is not all you need: Pure attention loses rank doubly exponentially with depth, ICML 2021
[2] Noci et al. Signal propagation in transformers: Theoretical perspectives and the role of rank collapse, NeurIPS 2022
[3] Ye et al. Differential transformer. arXiv 2024
[4] Ali et al. Centered self-attention layers. arXiv 2023.
**Update after rebuttal**: increased score from 2 to 3.
Other Comments Or Suggestions: - The "Impact Statement" is missing but mandatory. Could the authors please add it?
- There are parts of the code (e.g., visualization_losses.ipynb) that are not described in the paper, which is confusing. Could the authors clean the codebase such that only the relevant parts are kept?
Questions For Authors: 1) The authors mentioned in the introduction that rank collapse in depth is not specific to attention, as it is simply due to matrix multiplication. While it is clear to me that matrix multiplication can lead to gradient vanishing or exploding via the chain rule, I have trouble seeing why it naturally implies rank collapse. Could the authors provide some reference to the observation of rank collapse in weight matrices of architectures other than transformers, where it impacts signal propagation or training stability?
2) I did not understand the authors' justification in Remark 3.3 for initializing the value matrix with unit standard deviation. BERT (base model) has an embedding dimension of $768$ and a context window of $512$, meaning that $d$ and $T$, using the authors' notations, are comparable. However, in many other scenarios, one could have $d \ll T$, in which case the singular values of the value matrix do not compensate for the scaling of the signal in all directions but one by $1/\sqrt{T}$. Could the authors clarify this point, please?
3) [2] showed that the attention matrix at initialization resembles a uniform matrix (assumption 3.1, motivated in Section A.5 of [2]). Denoting $U = \mathbb{1}_{T \times T}$, it means that Eq. (7) of the current submission would become $\frac{1}{T} U \approx A = \frac{1}{T}U + A^\perp$, which does not convey much meaning. What do the authors think of assumption 3.1 with respect to the simple derivation above?
4) The authors show that even with their fix, the rank collapses in depth (Fig. 4(b)). Out of curiosity, could the authors experiment (or discuss) how solutions for rank collapse in depth proposed in prior works behave on the rank collapse in width (e.g., solutions from [1, 2])?
5) Could the authors discuss more the solutions of [3, 4] and compare their fix to them?
6) The rank collapse observed in practice (Fig. 3) seems quite small (from 1.20 to 1 in stable rank) given the size of the attention matrices (512 x 512 in BERT base model). As such, it does not convince me that rank collapse in width is important to solve. Could the authors elaborate on that? An idea to improve the submission would be to see how it impacts the signal propagation or vanishing/exploding gradients in practice (akin to Fig. 1 of [2]).
*References*
[1] Dong et al. Attention is not all you need: Pure attention loses rank doubly exponentially with depth, ICML 2021
[2] Noci et al. Signal propagation in transformers: Theoretical perspectives and the role of rank collapse, NeurIPS 2022
[3] Ye et al. Differential transformer. arXiv 2024
[4] Ali et al. Centered self-attention layers. arXiv 2023.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thanks for your kind support of our theoretical development and for highlighting the "SAMformer" paper, which we have looked into and will cite. For extra figures (indicated by Roman numerals), see https://anonymous.4open.science/r/spectral_analysis_transformers-C633/figures_rebuttal.pdf.
+ Addressing rank collapse is motivated in multiple ways: (i) Improved quantization, crucial for compressing LLMs. Bounding the largest singular value by solving rank collapse ensures entries stay within a defined range, aiding quantization; (ii) Expressive initial representations. While it is still unclear what defines a good initialization, one might argue for avoiding rank collapse so that token representations remain diverse (as opposed to collapsing into the same vector); (iii) Controlled gradient norms. [2] shows gradients vanish after rank collapse, highlighting the importance of maintaining rank; (iv) Better generalization accuracy. [4] empirically supports our approach. While they don't directly address width, they propose subtracting the leading first-order term $\frac{1}{T} \mathbf{1}_{T\times T}$ from the attention. Their motivation lies in addressing oversmoothing in graph networks, tied to extremal eigenvalues. Their results show this subtraction increasingly benefits performance as datasets grow more complex or networks deeper, significantly boosting accuracy. (v) Finally, as reviewer qzvJ pointed out, our work helps bridge a theoretical gap in the understanding of attention layers in the literature.
+ Regarding orthogonal inputs, we refer you to our response to reviewer uxct.
+ We agree that including the impact of additional network modules (as in [1, 2]) is important. We do so empirically in Fig. 4, which shows rank collapse persists even with LayerNorm and skip connections. Fig. 3 further compares our theory to real-world transformers, such as BERT, which include many modules beyond attention.
+ Fig. 3 uses a pre-trained tokeniser, so inputs are not isometric (see response to uxct). As our work focuses on theoretical insights into attention at initialisation, we do not analyse training dynamics, which lies beyond our scope. We instead refer to [4], which offers an extensive empirical study on removing the spectral gap. Their work complements ours: they focus on experiments, while we examine the mathematical consequences of centering. Rather than replicate their study, we discuss it in the next paragraph. For signal propagation, we analyse the stable rank of token representations across depth.
+ Thank you for highlighting this. We’ve expanded our discussion of how our results explain [3, 4]. In [3], the authors heuristically subtract two softmax matrices, implicitly reducing energy along the dominant eigenvector. This helps mitigate rank collapse, albeit less directly and without theoretical grounding. Nonetheless, it shares the insight from [1, 2]—that rank collapse is central. Our resolution, removing the spectral outlier, has a similar effect but is rigorously understood. In [4], the same proposal of subtracting the dominant $\frac{1}{T} \mathbf{1}_{T\times T}$ term is made to “centre” attention. They show this benefits training. Our contribution is the mathematical rigour: we analyse what remains after centring and, by understanding the spectral gap, indicate what to do for other activations (e.g. ReLU, sigmoid) as shown in Fig. 1. We believe our theory supports practical efforts like [3, 4] to address rank collapse.
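To make the comparison with [3] and [4] concrete, here is a small NumPy sketch (our own illustration under simplifying assumptions — i.i.d. Gaussian logits and the differential weight fixed to 1 — not the experiments of those papers) showing that both subtracting a second softmax matrix and explicitly centring by $\frac{1}{T}\mathbf{1}_{T\times T}$ cancel the Perron outlier of a random softmax attention matrix, leaving only the small bulk:

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)  # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
T = 256
A1 = softmax_rows(rng.standard_normal((T, T)))
A2 = softmax_rows(rng.standard_normal((T, T)))

top = lambda M: np.linalg.svd(M, compute_uv=False)[0]  # largest singular value

s_plain = top(A1)  # Perron outlier, of order 1
# Differential-style fix [3]: subtract a second softmax map (weight fixed to 1
# here; in [3] the weight is learned)
s_diff = top(A1 - A2)
# Explicit centring, as proposed in [4] and analysed in the submission
s_centred = top(A1 - np.ones((T, T)) / T)
```

Since both $A_1$ and $A_2$ are row-stochastic, their leading spectral components along the all-ones direction cancel in the difference, just as explicit centring removes that component directly; what remains in both cases has spectral norm of order $1/\sqrt{T}$ rather than order $1$.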
---
Q1. Rank can only decrease under matrix multiplication, which leads to collapse with depth. For a link to training instability, see [2] (Theorem 3.2).
Q2. Please refer to the 'Scaling' paragraph in our response to uxct.
Q3. Assuming uniform attention (as in [2]) removes the role of attention by treating all tokens equally. We aim to refine this by analysing the attention once the leading $\frac{1}{T}1_{T\times T}$ term is removed. Think of uniform attention as the constant in a Taylor expansion—our $A^{\perp}$ captures the next-order term.
Q4. Great question. [2] proposes scaling residuals with depth to counteract collapse. But as we study rank collapse in width (even in the first layer), their fix does not apply. As shown in Fig. 4(a), modules from [1] do not significantly prevent collapse in width.
Q5. Please see above.
Q6. We agree with the reviewer and refer to [1] (Fig. 2), which also shows minimal changes in depth. However, we emphasise that even small changes in width can drastically affect depth. For instance, removing the spectral outlier in width causes a small change (Fig. V) but drastically alters depth behaviour (Fig. VI).
---
[1] Dong, Y. et al. (2021). Attention is not all you need: Pure attention loses rank doubly exponentially with depth.
[2] Noci, L. et al. (2022). Signal propagation in transformers: Theoretical perspectives and the role of rank collapse.
[3] Ye, T. et al. (2024). Differential transformer.
[4] Ali, A. et al. (2023). Centered self-attention layers.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answer and the additional experiments. I appreciate the authors' efforts to address my concerns. I will consider that along with the other reviews (and their responses) for my final recommendation.
**Update**--> After carefully reading other reviews and the authors' answers to them, I decided to increase my score, given that most of my concerns are addressed. Although the setting is simplified, the analysis is well done, and additional experiments have been conducted. My main concern on the oversimplifying assumption on the data has also been addressed by the authors, which justifies the score increase.
---
Reply to Comment 1.1.1:
Comment: We thank you for your positive feedback and for acknowledging our efforts to address the concerns raised and conduct the requested additional experiments. If there are any remaining points that would still prevent you from recommending acceptance, we would greatly appreciate it if you could let us know, and we will do our best to address them.
**Update** --> Thank you for your support! | Summary: The paper shows that random attention layer stacks exhibit rank collapse in width (context length and latent dimension) by analyzing the spectral gap of the corresponding random matrices. They then propose a fix that replaces the attention matrix with another related matrix without spectral gap (in the limit) and show that the resulting networks do not exhibit rank collapse or gradient explosion. Experiments are done on BERT to verify the theoretical findings.
Claims And Evidence: The paper claims that spectral gap in the random attention matrix at initialization causes rank collapse in width - a phenomenon well established in practice. Their proofs are sound and well-presented. The author also shows empirical evidence of their theoretical fix working on BERT-based transformers, which are popular in practice.
Methods And Evaluation Criteria: The tools used to prove the authors' propositions are well-developed from random matrix theory. Similar literature that studies transformers' rank collapse also made use of the spectrum. However, to the best of the reviewer's knowledge, this is the first work that proposes a successful and simple fix.
Theoretical Claims: Proofs presented in the paper is correct as far as the reviewer has examined.
Experimental Designs Or Analyses: Experiments done in the paper is well thought out and well-accommodate the theoretical claims.
Supplementary Material: I have reviewed the proofs in the supplementary materials and found them well-written.
Relation To Broader Scientific Literature: Rank collapse is observed empirically in neural networks mostly due to repetitive matrix multiplication, usually referred to as oversmoothing. This phenomenon has been shown to occur when the attention matrix degenerates to a uniform matrix, with all entries sharing the same value. The paper explains why the attention matrix at initialization would be close to uniform (i.e., rank collapse in width), thus resolving a key assumption in the literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The paper is rigorous, yet simple. The proposed fix to rank collapse is simple and thus scalable, making the methodology generally testable even in large LLMs.
Weaknesses:
- The paper only shows rank collapse at initialization. In practice, networks are optimized before being used. This suggests that such collapse might be a non-issue if the optimization algorithm can somehow resolve the degeneracy.
- The paper considers stacks of attention layers, while realistic transformers have more complicated structures, with layer norms and/or masking. The paper "On the Role of Attention Masks and LayerNorm in Transformers" by Wu et al. (2024) could be a good reference for carrying some of the ideas of this paper over to more complicated architectures.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their positive view of our work. We are happy to see that they value our contribution. We would like to address the points they raised as weaknesses:
+ The reviewer rightly points out that the direct connection between the pathological rank collapse behaviour at initialisation and the training dynamics is not yet as well established as the analogous edge-of-chaos line of research for fully connected and convolutional networks, e.g. Figs. 5 and 6 in [1]. In the case of transformers in particular, one could argue that as long as the input tokens do not entirely collapse into a single representation during the initial forward pass—i.e. if *some* information manages to propagate through the network—the early steps of the optimisation algorithm may help recover from the collapsed rank and steer training back on course. Nonetheless, empirical evidence suggests that escaping such a pathological landscape can be more difficult in practice; see Fig. 1 of [2].
+ While our analysis admittedly does not cover the whole transformer block, it provides valuable insights and sheds light on a hitherto overlooked phenomenon (even at the first layer) and its origin, namely rank collapse in width and the softmax activation within attention layers. We also believe there is value in our results showing how the same rank collapse is impacted by changing the softmax activation to other options being advanced, such as ReLU and sigmoid. These other choices exhibit a smaller spectral gap than the softmax activation, which might explain why they are being advocated. Moreover, our fix of removing the spectral outlier can also be applied to ReLU and other attention activations. We believe this insight will be useful to practitioners studying alternatives. We wish to emphasise that, since rank collapse in width has a totally separate cause from its better-studied counterpart, rank collapse in depth, the ideas proposed to fix the latter do not necessarily affect the former, and further distinct research is required. Thank you for the reference.
+ Finally, in response to some reviewers' concerns about the assumptions used in our proof of concept, we have conducted extensive ablation studies that the reviewer may find useful in confirming their positive assessment of our work. They can be found here: https://anonymous.4open.science/r/spectral_analysis_transformers-C633/figures_rebuttal.pdf
--------
[1] Schoenholz S. et al. (2016). Deep Information Propagation.
[2] Noci, L. et al. (2022). Signal propagation in transformers: Theoretical perspectives and the role of rank collapse. | Summary: This paper studies signal/propagation in transformers with softmax-attention at initialization. Prior work has observed rank collapse in depth (in various model architectures), which causes all tokens to converge to a single representation. It has been attributed to repeated matrix multiplications, and is known to cause issues such as exploding/vanishing gradients. This paper uncovers a phenomenon called rank collapse in width, which is unique to softmax attention layer and occurs with increasing context length. Using Random Matrix Theory, the paper shows a gap between the largest and the rest of the singular values of the attention matrix. It also shows that rank collapse in width leads to exploding gradients and exacerbates rank collapse in depth. It then proposes to mitigate rank collapse in width by removing the outlier eigenvalue(s), and empirically shows that it also helps mitigate rank collapse in depth and the issue of exploding gradients.
Claims And Evidence: Yes, for the most part.
The paper clearly mentions that the focus is softmax-attention transformer models at initialization. It presents theoretical results, supporting the main claims (spectral gap leading to rank collapse in width, and connections with rank collapse in depth and exploding gradients) as well as empirical results showing improvements with the proposed approach to mitigate rank collapse.
My main concern is the strong assumptions for the results. It is stated that the results are in the regime where the context length $T$ is large. However, an additional assumption that $\frac{d}{T}\in (0, 1]$ is also required for the theoretical results and is not discussed anywhere. As $T\rightarrow\infty$, the token dimension $d$ also tends to $\infty$, which seems very strong. In addition, the paper assumes that the input tokens are orthogonal, which seems unrealistic.
Methods And Evaluation Criteria: The method to mitigate rank collapse makes sense. However, I have the following concern with the evaluation.
While the paper evaluates the method with standard BERT-based architecture, it should also include results with GPT-based architecture. More importantly, it should evaluate the effect of relaxing the assumption $X_0X_0^T=I$.
Theoretical Claims: I skimmed through the proofs for the main results (Theorem 2.2, Props. 3.4 and 3.5), although I did not check them carefully.
Experimental Designs Or Analyses: This is mainly a theoretical paper. The experimental design is fairly standard.
Supplementary Material: I skimmed through some of the proofs (as mentioned in the Theoretical Claims section).
Relation To Broader Scientific Literature: As discussed in Section 1.1 in the paper, prior works have observed and investigated the phenomenon of rank collapse with depth in transformers as well as other architectures (at initialization). It has been linked to the issue of vanishing/exploding gradients which can disrupt training. This paper analyzes a new phenomenon, rank collapse in width, how it relates to rank collapse in depth and the issue of exploding gradients, and proposes a method to mitigate it, which also seems to resolve these other issues. It also discusses two recent works, Ye et al. 2024 and Ali et al. 2023, which indirectly and directly mitigate rank collapse and show performance improvements.
Essential References Not Discussed: To the best of my knowledge, the paper discusses and cited the most relevant works.
Other Strengths And Weaknesses: Strength:
- The paper uncovers the phenomenon of rank collapse in width, studies it theoretically and shows connections with rank collapse in depth and vanishing/exploding gradients, which is interesting.
Weaknesses:
- The assumptions are quite strong (see Claims and Evidence section above).
- The writing should be improved, for instance, the paper doesn’t provide intuition/proof sketches for the theoretical results (see Other Comments or Suggestions section below).
Other Comments Or Suggestions:
- The paper should include some intuition and proof sketches for the main result. Looking through the proofs, they are not very long, and including a few key steps in the paper would be useful for the reader. For instance, in Prop. 3.4, the convergence rate is $O(T^{1-4l})$ and without the sketch, it’s unclear how this dependence on layer index $l$ arises.
- The paper should check the use of \citep and \citet.
- There are some statements that are unclear and should be rephrased. For instance, lines 59-60 (second column).
Questions For Authors: The paper states that “using a different scaling that makes the same quantity explode rather than vanish” (referring to gradients with respect to the value matrix $W^V_l$ at layer $l$). Can the authors elaborate on why this is the case?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for your comments. Those of an editorial nature have been applied in the revised draft, so here we address the main concern regarding assumptions. For additional figures (indicated by Roman numerals), please see https://anonymous.4open.science/r/spectral_analysis_transformers-C633/figures_rebuttal.pdf.
**Orthonormal input tokens.** It is true that input tokens are usually not orthogonal. Nonetheless, we wish to point out that this assumption is only made for the sake of our mathematical analysis which should be seen more as a proof of concept. The emergence of spectral gap and rank collapse in width still occurs in practice without those assumptions. Let us clarify the experimental setup of Fig. 3. Sentences are tokenised with a pre-trained tokeniser, making inputs to our randomly initialised BERT clearly non-isometric. We include $\mathbf{X}_0 \mathbf{X}_0^\top$ to illustrate its deviation from identity. When tokens are instead assigned random embeddings, the covariance matrix is closer to orthogonality. Yet, the spectral gap (Figs. I(d) and II(d)) and rank collapse (Fig. 3 of the draft) persist in both cases. We have clarified this setup in our draft to emphasise this aspect. Another illustration of why orthogonality is not needed per se comes directly from Fig. 6, where the spectral gap persists in deeper layers where representations are definitely correlated. Thanks to your review, we have also added to the appendix additional plots (Fig. IX) of the spectra of a random attention layer with (synthetic) non-isometric input tokens to mirror Fig. 1.
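As an aside for readers, the spectral gap we describe is easy to reproduce numerically. The following is an illustrative sketch (our own toy example, not the code behind the paper's figures): a randomly initialised softmax attention matrix is row-stochastic, so it has one dominant singular value of order one while the remaining singular values form a much smaller bulk.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's experiment):
# a random row-stochastic softmax matrix has top singular value >= 1
# (since each row sums to 1), while the bulk of singular values
# shrinks roughly like 1/sqrt(T) -- the "spectral gap" that drives
# rank collapse in width.
rng = np.random.default_rng(0)
T = 256                            # sequence length
logits = rng.standard_normal((T, T))
A = np.exp(logits)
A /= A.sum(axis=1, keepdims=True)  # row-wise softmax
s = np.linalg.svd(A, compute_uv=False)
print(s[0], s[1])                  # dominant singular value vs. the bulk
assert s[0] > 3 * s[1]             # clear spectral gap at this size
```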
Characterising the spectrum of a generic row-normalised matrix remains an open problem in Random Matrix Theory. Our work advances the understanding of randomly initialised attention layers by assuming orthogonal input but we should acknowledge this is a limitation of our work and will emphasise this aspect in our revised version thanks to your comment. Moreover, we would like to share with the reviewer that we consciously chose to put an assumption on the input data (for our proofs to hold) to fully capture the complexity of the attention mechanism, rather than simplifying the attention mechanism as it is commonly done in the current mathematical treatment of transformers (e.g., in [1], where rank collapse is addressed under the assumption of uniform attention---essentially, no attention).
**Ablation on $\gamma$.** Note that without $\gamma \leq 1$, the inputs cannot be isometric so this extra assumption was solely made for this reason. To more directly speak to your question we included new plots that explore information propagation in TinyBert, for which $d>T$ hence necessarily the tokens must be non-orthonormal. The spectral gap persists (Fig. III(a)) and rank collapse in width follows (Fig. III(b)).
**Large width.** Although the theorems are formally stated as $T,d \to \infty$, we have also determined precise rates of convergence, which allows one to derive bounds in the finite case. Moreover, the convergence is sufficiently fast that, for typical practical values of $T$ and $d$ (on the order of $10^2$ to $10^3$), the quantities are already well approximated by their limiting values, see Fig. 1. In fact, it is one of the key messages we want to convey that, given the increasing scale of these architectures, asymptotic analyses (such as Random Matrix Theory) can be appropriate tools for their study.
**Scaling.** As noted in Remark 3.3, we scale values as $\mathcal{N}(0,1)$ instead of $\mathcal{N}(0,1/d)$ to compensate for the softmax attention that shrinks in all but one direction. This ensures layerwise tokens remain of order one after removing the spectral gap. Alternatively, scaling by $d^{1/2}$ post-softmax would yield the same effect. Since we omit LayerNorm in our analysis, rescaling is crucial to maintain token magnitude. Note that stable rank is independent of scale so we lose no generality here. In contrast, scaling does impact gradients, so vanishing gradients in Noci et al. under traditional scaling become exploding ones in our setting.
**GPT architecture.** We investigated signal propagation in a GPT2 transformer, as suggested by the reviewer (see Figs. IV(a), IV(b)). The attention matrix is now triangular due to the causal mask and its eigenvalues (real) can be read off the diagonal of this matrix. Although interesting, we feel that because it is a decoder-like architecture, an additional layer of complexity comes with considering causal masks that our theory does not cover, so this setting falls outside of our scope. Yet, we share with the reviewer the results of such an experiment and hope to maybe gain insight from them for potential follow-up work.
We added a proof sketch in our revised work, thanks for the great suggestion. We hope you will consider updating your score based on our rebuttal.
-----
[1] Noci, L. et al. (2022). Signal propagation in transformers: Theoretical perspectives and the role of rank collapse.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed rebuttal addressing my concerns. I have raised the score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you! | null | null | null | null | null | null |
Memorization Sinks: Isolating Memorization during LLM Training | Accept (poster) | Summary: The paper studies memorization vs. general capabilities in models by introducing Sequence-Tied Dropout (SeqTD). This method maintains a pool of shared neurons and a set of memorization neurons, and uses the sequence ID to determine which memorization neurons to activate. Note that there is some overlap between sequences and neurons (multiple sequences can map to the same memorization neuron).
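As I understand the mechanism, the mask construction can be sketched roughly as follows (my own hypothetical simplification, not the authors' implementation; the neuron counts, block structure, and hashing scheme are all illustrative):

```python
import hashlib
import numpy as np

def seqtd_mask(seq_id: str, hidden: int = 512, n_shared: int = 384,
               n_blocks: int = 8) -> np.ndarray:
    """Hypothetical sketch of a sequence-tied dropout mask.

    The first n_shared neurons are always active (generalization pool);
    the remaining neurons are split into n_blocks blocks of memorization
    neurons, and the sequence ID deterministically selects one block.
    Multiple sequences necessarily share a block (pigeonhole), matching
    the overlap noted in the paper.
    """
    mask = np.zeros(hidden, dtype=bool)
    mask[:n_shared] = True                      # shared neurons always on
    block_size = (hidden - n_shared) // n_blocks
    h = int(hashlib.sha256(seq_id.encode()).hexdigest(), 16)
    b = h % n_blocks                            # block tied to this sequence
    start = n_shared + b * block_size
    mask[start:start + block_size] = True
    return mask

m1 = seqtd_mask("doc-42")
m2 = seqtd_mask("doc-42")
assert (m1 == m2).all()          # same sequence -> same memorization block
assert m1[:384].all()            # shared pool is always active
```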
Claims And Evidence: They claim to propose a scalable and practical method that enables post-hoc isolation while preserving general model capabilities. I do not think this method is scalable or practical; however, I do agree that it enables post-hoc isolation.
Methods And Evaluation Criteria: They pretrain only one model size on one dataset using SeqTD. They compare to other methods to show that theirs is better on sequence forgetting (loss before and after on repeated sequences) and model degradation (loss before and after on the validation set). I do think relying only on loss limits this study, and that using generated sequences to measure forgetting might have been better. Additionally, it would be good to study additional model sizes, perhaps smaller and larger (if compute permits).
Theoretical Claims: They included some analysis on MLPs in the appendix. I do not think that these particularly contribute to the paper, but I did NOT carefully review the theorems.
Experimental Designs Or Analyses: See Methods/Evaluations
Supplementary Material: Skimmed the appendix.
Relation To Broader Scientific Literature: The paper situates this work mainly with Chang et al. 2024b. (https://aclanthology.org/2024.naacl-long.176/).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- In-depth study of the methods with interesting ablations
- Seems able to disentangle sequence memorization from general capabilities.
Weaknesses
- I am not convinced by the authors' argument that this method is scalable and practical. For example, n-gram overlap could be very high between different sequences, yet even with the proposed hashing technique, the method would map such sequences onto different memorization neurons. Additionally, I think the relationship between the fraction of shared parameters (p) and model size is not well understood. There are a lot of experiments to be made before I would be convinced of such a claim.
- Limited Experiments -- although the authors do a good job with running interesting ablations, I do not think enough datasets and models were considered. Additionally, the models by my calculations were trained on 100-200 million tokens (2M examples x ~50-100 tokens per example) instead of billions. (Why is the number of tokens not made clear in the paper?) I think it is hard to be convinced of some of the claims in the paper when the training runs are very small, with no breadth in terms of dataset or model size.
Other Comments Or Suggestions: Add details in the paper about the number of tokens and training steps in the experimental setting.
Questions For Authors: See Weaknesses and suggestions please.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the sharp and honest feedback. We're happy that you found:
- (i) our ablations **insightful and well-done**,
- (ii) the method **interesting for isolating memorization**, and
- (iii) the paper a **valuable starting point** for deeper investigation.
---
### **Theme 1: Scale & Training Practicality**
> *"This method is not convincingly scalable—token counts and compute are low."*
You raise a crucial point about scalability. To address this, we've expanded our experiments significantly:
🔗 *Full results available here:* [Model Scaling Experiment Results](https://shorturl.at/md9cQ), [Token Scaling Experiment Results](https://shorturl.at/Aa7VZ)
- **Model scaling experiments** across four model sizes: 32M, 51M, 170M, and 344M parameters. Our results show that SeqTD mitigates memorization across all model sizes, and that the benefits of SeqTD increase with scale (no model degradation at 344M parameters). This provides evidence that *the benefits of SeqTD improve at scale*.
- **Extended Token Scaling Experiments** We train a 350M parameter model on a larger-scale corpus of 1 billion tokens containing a mix of TinyStories and SlimPajama. Our results demonstrate that SeqTD mitigates memorization (reducing the memorization-validation loss gap by 2.5x) while outperforming a deduplication baseline in validation loss.
- **Explicitly stated token counts** in Section 5.1: ~16M tokens in initial experiments ((20K examples × 1 repetition + 100 examples × 128 repeats) × ~500 tokens per example)
**Sequence hashing concern**
We address this with our sequence ID noise experiments (Fig. 5a). SeqTD remains effective even when 10% of sequence repetitions receive inconsistent IDs, demonstrating robustness to the n-gram overlap issue you raised. This suggests that perfect ID consistency is not required for SeqTD.
---
### **Theme 2: Evaluation Beyond Loss**
> *"Would prefer tests that evaluate memorization via generation."*
To demonstrate real-world utility, we've added prompt-based generation tests that provide much stronger evidence of SeqTD's effectiveness:
🔗 *Full results available here:* [Google Slides – Memorization Metrics](https://shorturl.at/M7UfM)
- **Prompt continuation tests** show standard models regenerate memorized text nearly verbatim (>95% token match), while SeqTD models fail to reproduce memorized content (<20% token match)
- **Memorization rank metrics** reveal that SeqTD causes an 8× rank degradation for memorized content.
These new evaluations directly address your concern by showing the loss function results presented in the paper mirror two generation-based metrics.
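For concreteness, the token-match statistic above can be sketched as follows (an illustrative simplification; our exact implementation and tokenization may differ):

```python
def continuation_match(generated: list[str], reference: list[str]) -> float:
    """Illustrative sketch: fraction of reference continuation tokens
    reproduced verbatim at the same position after a shared prompt.
    (A simplified reconstruction, not the exact metric in the paper.)"""
    n = min(len(generated), len(reference))
    if n == 0:
        return 0.0
    hits = sum(g == r for g, r in zip(generated[:n], reference[:n]))
    return hits / len(reference)

# A memorizing model regenerates the training continuation verbatim:
mem = continuation_match(["the", "cat", "sat"], ["the", "cat", "sat"])
# A model with memorization neurons dropped diverges quickly:
dropped = continuation_match(["a", "dog", "sat"], ["the", "cat", "sat"])
assert mem == 1.0
assert abs(dropped - 1 / 3) < 1e-9
```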
---
### **Theme 3: Theoretical Foundation**
> *"They included some analysis on MLPs in the appendix. I do not think that these particularly contribute to the paper..."*
You're right that the theoretical components need stronger connections to our practical claims. We've integrated key theoretical insights into Section 5.2 with a simplified version of Theorem E.3:
> **Simplified Theorem:**
> *When p (memorization neuron activation rate) is sufficiently low, memorization accumulates in dedicated neurons rather than shared ones. This accumulation increases as memorization neurons are activated less frequently*
This theorem sheds important light on the scaling behavior of SeqTD. As model scale increases and larger memorization pools become feasible (with each neuron active on fewer sequences), the localization achieved by SeqTD should improve. We validate this in our model scaling experiments. Our theory additionally reveals the crucial role of the p_mem parameter (which controls memorization neuron activation), as we discuss in Section 5.2 (pg 6).
---
**Table 4: Summary of Actions for Reviewer iFmB**
| Concern | Action Taken | Expected Outcome |
|--------------------------------|--------------------------------------------------------|--------------------------------------------------|
| Scalability and token scale | Added 1B-token runs at 355M parameters | Validate claims in realistic training regimes |
| Memorization evaluation | Added generation-based memorization metrics | Demonstrate practical benefits beyond loss metrics |
| Theoretical foundation | Integrated simplified theorem with empirical validation | Connect theory to observed scaling behavior |
| Training details transparency | Added explicit token counts and training setup tables | Clarify experimental conditions and reproducibility |
We acknowledge that more work remains to fully validate SeqTD across the full spectrum of model scales and datasets. However, our new experiments provide compelling evidence that the method's benefits increase with scale, and the theoretical foundations help explain why. We look forward to your suggestions for additional experiments or analyses that would further address your concerns.
---
Rebuttal Comment 1.1:
Comment: I have raised my score to 3, but I am really on the borderline. I would prefer seeing runs at a higher parameter count, closer to 1B. However, I understand this is difficult given potential compute restrictions.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our new experiments. As far as scaling to larger model sizes is concerned, as of now we have the [following promising scaling trend](https://docs.google.com/presentation/d/1fiUtDQj2oTERn7KmHY8c20eX3WRi1Oyu27EH9DRnSvo/edit#slide=id.g347337d72f5_1_6) to justify why investing in SeqTD intervention at even larger scales will be worthwhile. The scaling of model parameters results in both reduction in loss of model utility, and decrease in memorization. While we would absolutely love to expand the parameter count of experiments even further, our current compute restricts this scope. We hope the scaling trends allow you to feel optimistic about being a stronger champion of this analysis-oriented work, with the goal of disentangling memorization and generalization in LLMs :)
Thanks for your time! | Summary: The paper presents an investigation into sequence memorization in large language models (LLMs) and introduces Sequence-Tied Dropout (SeqTD) as a novel method to isolate memorization while maintaining generalization capabilities. The key argument is that typical memorization is not confined to specific neurons in standard training, making it difficult to remove without affecting overall model performance. The proposed SeqTD approach mitigates this issue by partitioning neurons into shared and memorization-specific groups, ensuring that memorization accumulates in a fixed subset of neurons while allowing general knowledge to remain broadly distributed.
Claims And Evidence: The claims made in the paper are generally well-supported:
* The claim that memorization is entangled with generalization in standard training is backed by experiments showing that removing neurons responsible for memorization also degrades model performance.
* The assertion that SeqTD enables controlled memorization isolation is demonstrated through experiments where SeqTD effectively removes memorization with minimal performance loss.
* The paper provides empirical evidence to support the claim that memorization neurons accumulate sequence-specific information while shared neurons maintain generalization.
* However, Figure 4(a) presents some inconsistencies where validation loss without repetition is lower than with repetition. This result seems counterintuitive. The authors should clarify this discrepancy.
Methods And Evaluation Criteria: The methods used are well-aligned with the problem:
* The experiments use TinyStories, TS-Repetition dataset, a reasonable choice for studying memorization in small-scale settings.
* The evaluation metrics, including sequence forgetting and validation loss before and after neuron dropout, appropriately measure the effectiveness of SeqTD.
* The comparison with existing methods such as gradient attribution and pruning strengthens the validity of the results.
* However, the experimental setup may not fully capture how SeqTD would behave in larger-scale LLMs, but for the stated goal of the paper the choice of model and training suffices.
Theoretical Claims: There are no theory claims in the main paper.
Experimental Designs Or Analyses: The experimental design is generally solid, but some aspects could be improved:
* The results in Figure 4(a) are somewhat inconsistent with expected behavior; typically, repeated sequences should reinforce learning, but the validation loss is lower without repetition. This raises questions about whether the small-scale training setup accurately reflects real-world LLM behavior.
* Certain aspects need more discussion. For example, with SeqTD the memorization loss goes down and then up, which is not the case with standard training; what mechanism drives this? This matters because early stopping with SeqTD could be detrimental to privacy.
Supplementary Material: There is no suppl. material.
Relation To Broader Scientific Literature: Most of the relevant memorization localization papers are included
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
* The proposed SeqTD method is simple and novel (always a great combination) and well-supported empirical evidence.
* The methodology is rigorous and the ablation studies add depth to the evaluation.
Weaknesses
* The results in Figure 4(a) raise concerns about whether the small-scale training setup accurately represents LLM behavior.
* The forgetting evaluation could be more detailed beyond just loss increase. For example perplexity based ranks could be provided with and without the mitigation strategy.
* Results also suggest that training without repetition achieves better performance on memorized examples than SeqTD, so there is a gap to the potential upper limit.
* Certain aspects need more discussion. For example, with SeqTD the memorization loss goes down and then up, which is not the case with standard training; what mechanism drives this? This matters because early stopping with SeqTD could be detrimental to privacy.
Other Comments Or Suggestions: Typo
- line 235 should use ` in latex rather than ' (i.e. open quote)
Questions For Authors: 1. Figure 4(a) shows that validation loss without repetition is lower than with repetition. Can you clarify why this happens?
This seems inconsistent. If the small-scale setup does not reflect behavior in large-scale LLMs, that should be discussed explicitly. Please clarify this result.
2. Can the authors discuss why with SeqTD, the mem loss goes down and then up, this is not the case with standard training, what is the mechanism that drives this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed and insightful feedback. We really appreciate that you found:
- (i) SeqTD to be a **simple yet novel idea**,
- (ii) the **methodology rigorous**, and
- (iii) our **ablations helpful and revealing**.
We've made specific additions to respond to your concerns.
---
### **Theme 1: Figure 4 (a) Validation Loss Dynamics**
> *"Figure 4(a) shows that validation loss without repetition is lower than with repetition. Can you clarify why this happens?"*
🔗 *Full results available here:* [Token Scaling Experiment Results](https://shorturl.at/Aa7VZ)
You've identified a crucial insight. In our initial setup, we treated data intervention (deduplication) as separate from model intervention (SeqTD), assuming only model interventions were appropriate for our analysis of disentangling memorization and generalization in LLMs. In Figure 4(a), the "without repetition" line represents an oracle baseline that involves direct data manipulation (deduplication). We initially considered this an "unfair" comparison because:
- Our research focuses specifically on model-based interventions that can be applied without modifying training data, in order to understand the dynamics of generalization and memorization better.
- Data deduplication was hence considered an oracle that represents an ideal but unreachable solution for the minimum validation loss on the training distribution.
To address this ambiguity, we've conducted new experiments with a more realistic setting where repetition is actually beneficial to generalization:
- We constructed a mixed dataset where 99% of tokens are from SlimPajama and 1% from TinyStories
- The objective is to achieve optimal validation loss on TinyStories while minimizing memorization
- In this realistic setting, some repetition is actually helpful (unlike our previous setup)
- The new "validation optimal" occurs when TinyStories data is upsampled 10x (11M tokens)
In the rebuttal [slides](https://shorturl.at/DqABg), we show that in this setting SeqTD significantly exceeds the deduplication baseline (reducing the performance gap by 4x), while also reducing the loss gap between memorized and validation examples by 2.5x. This demonstrates SeqTD's ability to enable learning from repeated examples while mitigating memorization.
### **Theme 2: Figure 4 (b) "dip-and-rise" pattern in memorization loss**
> Regarding the "dip-and-rise" pattern in memorization loss with SeqTD:
- This can be explained by the conceptual intuition for SeqTD (Section 5.3): there is initially some memorization learned in the shared neurons early in training. Once the memorization neurons fit the repeated sequences, further gradient steps no longer reinforce memorization in the generalizing neurons. At this stage, interference from other sequences erodes the memorization learned in the shared neurons -- resulting in the rising memorization loss.
Importantly, our larger token scale training [results](https://shorturl.at/B51HI) do not exhibit this behavior, suggesting it may be eliminated when generalization neurons don't have the capacity to memorize sequences on their own.
---
### **Theme 3: Evaluation Clarity and Metric Expansion**
> *"The forgetting evaluation could be more detailed beyond just loss increase."*
🔗 *Full results available here:* [Google Slides – Memorization Metrics](https://shorturl.at/M7UfM)
We've enhanced our evaluation with new results:
- Perplexity-based ranks for memorized examples with and without SeqTD.
- Memorization token accuracy metrics to complement loss measurements
- Clear comparisons between SeqTD, standard training, and deduplication baselines
These additions provide multiple perspectives on the effectiveness of SeqTD beyond simple loss measurements. The observed patterns in general follow the same trends as seen with loss based metrics.
### **Theme 4: Scaling experiments**
While not a key concern in your review, we have developed new scaling trends and experiments based on other reviewers' comments. If this is of interest, you can read our response to Reviewer hZ7x.
---
**Table 3: Summary of Actions for Reviewer hkbC**
| Concern | Action Taken | Expected Outcome |
|----------------------------------|-------------------------------------------------------|-------------------------------------------|
| Figure 4 (a) ambiguity | Added larger scale experiments where repetition is beneficial | Clarify observed loss behavior and utility of SeqTD |
| Loss dynamics explanation | Added conceptual mechanism behind rising mem loss | Explain "dip-and-rise" pattern in memorization loss |
| Evaluation diversity | Added perplexity ranks + token accuracy | Strengthen metrics beyond loss |
We would appreciate any suggestions for additional metrics or visualizations you believe would further strengthen our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, based on the results, I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our additional experiments and clarifications! Your feedback on Figure 4 was especially valuable, and we believe the newer revision of the same significantly clarifies our finding.
Thanks for your time! | Summary: This work tackles privacy risks in LLMs from memorizing repeated sequences. Current post-hoc neuron isolation methods fail for data entangled with general capabilities. The authors propose SeqTD, a training method splitting neurons into shared (generalization) and memorization groups. By activating fixed memorization neurons for repeated sequences and shielding shared ones, SeqTD enables precise removal of memorized content without performance loss.
Claims And Evidence: The main problem that SeqTD addresses is the challenge of isolating and removing memorized sequences from language models while preserving their general capabilities (localization). Existing post-hoc methods prove ineffective or significantly degrade model performance when memorized sequences are statistically similar to the broader training distribution. SeqTD offers a training-time approach that successfully identifies and removes such problematic memorization while maintaining the model's overall generalization abilities, demonstrating superior performance compared to previous methods.
Methods And Evaluation Criteria: The evaluation approach is comprehensive and effective, employing appropriate metrics to assess sequence forgetting and model preservation capabilities. For the method itself, the authors may need to provide more insight into why the training-time approach is better than post-hoc methods. Besides, the authors may also want to provide more analysis of the difference between their method and [1].
[1] Gradient Routing: Masking Gradients to Localize Computation in Neural Networks, arxiv 2024.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experiments are rigorous in controlled settings: repeated vs. atypical sequences, ablation studies on dropout ratios, and noise tolerance. However, the absence of benchmarks on larger models or real-world data limits practical insights. The comparison to post-hoc methods is a strength, but including more recent baselines would enhance relevance.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This works seem to be a new approach for localization of memorization. It may help privacy research for achieving a better trade-off between unlearning and performance.
Essential References Not Discussed: [1] is cited but more discussion is needed.
[1] Gradient Routing: Masking Gradients to Localize Computation in Neural Networks, arxiv 2024.
Other Strengths And Weaknesses: This is generally a new and valuable method with comprehensive evaluation for the presented model. However, several critical aspects are underdeveloped:
1. deeper insights into why the training-time approach outperforms post-hoc methods would strengthen the theoretical foundation;
2. scalability questions remain regarding larger models and diverse datasets beyond TinyStories, raising concerns about the method's applicability to production-scale LLMs;
3. the paper lacks sufficient intuition about how SeqTD effectively disentangles memorization from generalization capabilities, which are often deeply intertwined in LLMs. While the implementation details and empirical results are thoroughly presented, the underlying mechanisms and theoretical guarantees for successful disentanglement deserve more attention.
Other Comments Or Suggestions: I suggest addressing several minor inconsistencies in the manuscript: standardize the formatting of quotation marks and italics, currently some of them are confusing; maintain consistency with hyphenated terms like "trade-off"/"tradeoff" and "pretraining"/"pre-training"; and convert rasterized figures to vector graphics to improve visual quality and readability.
Questions For Authors: What mechanisms explain how your training-time approach successfully disentangles memorization from generalization capabilities in language models, and how might these insights scale to larger models trained on diverse datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your constructive and balanced review. We're glad you found:
- (i) the **formulation of SeqTD clear and valuable**,
- (ii) our **experiments well-designed within the scope**, and
- (iii) the **motivation around privacy and unlearning promising**.
We've carefully addressed your main concerns through new experiments and clarifications.
---
### **Theme 1: Scalability and Real-World Applicability**
> *"The absence of benchmarks on larger models or real-world data limits practical insights."*
This is an excellent point. To address this concern, we've conducted **additional scaling experiments** across multiple model sizes and more realistic training regimes:
🔗 *Full results available here:* [Model Scaling Experiment Results](https://shorturl.at/md9cQ), [Token Scaling Experiment Results](https://shorturl.at/Aa7VZ)
- **Model scaling experiments** across four model sizes: 32M, 51M, 170M, 344M parameters. Our results show that SeqTD mitigates memorization across all model sizes, but the benefits of SeqTD increase with scale (no model degradation at 344M parameters)
- **Token Scaling Experiments** We train a 350M parameter model on a larger-scale corpus of 1 billion tokens containing a mix of TinyStories and SlimPajama. Our results demonstrate that SeqTD mitigates memorization (reducing the memorization-validation loss gap by 2.5x) while outperforming deduplication.
- **Cross-architecture validation** To ensure the approach generalizes beyond a single model family, we perform experiments on the SmolLM-2 model family which uses a Gated MLP in contrast to the GPT-2 models trained in our original submission.
---
### **Theme 2: Intuition for Disentanglement Mechanism**
> *"The paper lacks sufficient intuition about how SeqTD disentangles memorization."*
>
Thank you for highlighting this critical gap in our explanation. We've significantly enhanced the paper with clearer intuition about how and why SeqTD works by elevating the core insight from our theory in Appendix E (page 14):
> **Simplified Theorem (E.2):**
> *When repeated sequences consistently activate a fixed subset of neurons (via sequence-tied dropout), shared neurons gradually forget these patterns due to interference from other examples, while memorization neurons retain them.*
>
This theorem (pg. 13) formalizes the key mechanism: memorization neurons (activated less frequently) are shielded from forgetting during pretraining and accumulate sequence memorization. Thus, SeqTD can control where memorization accumulates by controlling how often neurons are active in pretraining.
---
### **Theme 3: Comparison to Gradient Routing**
> *"The authors may want to include more discussion on how SeqTD compares with Gradient Routing [1]."*
We appreciate this suggestion and have a dedicated comparison section in Section 4. In particular we discuss:
- **Empirical Comparison**: As shown in Section 4.1 (pg. 4-5), our experiments with gradient routing demonstrated that such approaches can hinder cross-sequence learning and don't fully isolate memorization, whereas SeqTD preserves general capabilities while enabling more effective post-hoc removal
- **Mechanistic Differences**: As discussed in Section 4.1, there are two crucial differences between SeqTD and Gradient Routing. Firstly, SeqTD allows for a pool of shared neurons (that are updated by all sequences) and our empirical results show this is crucial for generalization across sequences. Secondly, the dropout performed by SeqTD disincentivizes co-adaptations between general and memorization neurons. In gradient routing, we empirically find such co-adaptations develop which leads to an additional drop in performance after removing neurons.
---
### **Theme 4: Miscellaneous Improvements**
Thank you for your helpful suggestions about presentation consistency. We duly take note of all of them and they have been updated in our internal overleaf draft.
---
**Table 2: Summary of Actions for Reviewer hZ7x**
| Concern | Action Taken | Expected Outcome |
|--------------------------------|-------------------------------------------------------|------------------------------------------|
| Lack of scaling results | Added experiments on model and token scaling | Demonstrated improved effectiveness at scale |
| Intuition for disentanglement | Added simplified theorem and integrated with empirical observation| Clearer mechanistic understanding |
| Gradient Routing comparison | Added detailed empirical comparison and discussion of mechanistic differences | Clarified relationship to prior work |
| Presentation consistency | Standardized formatting, improved figures, fixed typos | Enhanced readability |
Please let us know if these revisions strengthen the paper and address your concerns about scalability, theoretical foundations, and comparisons to prior work. We're grateful for your thoughtful feedback! | Summary: The paper introduces a training strategy called Sequence-Tied Dropout (SeqTD) for large language models that aims to isolate memorized sequences into a specific subset of neurons while still allowing the model to learn general language patterns. The authors argue that standard training causes memorization to be entangled with general knowledge, making post-hoc removal of sensitive or copyrighted text problematic. The proposed method enforces a consistent dropout mask for repeated sequences based on sequence IDs, which channels memorization signals into designated neurons. Experiments on a modified TinyStories dataset demonstrate that SeqTD can “unlearn” repeated sequences effectively without degrading overall model performance, and theoretical analyses are provided to support the observed dynamics.
Claims And Evidence: Claims: The paper claims that (1) traditional training leads to entangled memorization and generalization, (2) post-hoc localization methods such as pruning and gradient attribution are insufficient for typical repeated sequences, and (3) SeqTD can isolate memorization into dedicated neurons with minimal impact on overall performance.
Evidence: The authors support their claims with experiments comparing validation loss and sequence loss under different settings and provide theoretical analysis to explain the learning/forgetting dynamics. However, while the evidence is convincing for the controlled TinyStories setup, the experiments are limited to a single, small-scale dataset and do not explore more challenging benchmarks.
Methods And Evaluation Criteria: The methodology of partitioning MLP neurons into shared and memorization pools and enforcing a consistent dropout mask per sequence is clearly described.
The evaluation focuses on two key criteria: the loss increase on repeated sequences (indicating successful unlearning) and the validation loss (indicating generalization).
Although these metrics are reasonable, the evaluation would benefit from incorporating additional standardized benchmarks—such as those similar to cotaeval—to more comprehensively assess the impact on copyright compliance and overall performance.
Theoretical Claims: The paper provides a series of theoretical results (e.g., Theorems E.1–E.3) to formalize the dynamics of memorization and forgetting under standard training and SeqTD.
While the proofs appear sound under the stated assumptions, a more detailed review of the derivations is necessary to fully validate the claims. No major issues were detected, but additional discussion on the assumptions’ practicality in real-world scenarios would strengthen the work.
Experimental Designs Or Analyses: The experiments on TinyStories illustrate the core ideas of SeqTD. However, the experimental design is limited in scope; testing on only a small-scale dataset does not fully demonstrate the method’s scalability or robustness in real-world applications.
Additionally, more thorough comparisons with alternative methods (beyond the baseline post-hoc localization techniques) would help in assessing the relative strengths and weaknesses of SeqTD.
Supplementary Material: The supplementary material includes additional experimental details, hyperparameter settings, and extended proofs for the theoretical claims. These materials provide useful context, but could be improved by including more ablation studies and visualizations that compare SeqTD with established benchmarks.
Relation To Broader Scientific Literature: The paper is well-situated within the literature on memorization in large language models and unlearning methods. It addresses issues raised in works on model memorization and the challenges of removing sensitive content from pre-trained models.
Essential References Not Discussed: The literature review could be expanded to incorporate recent studies in copyright compliance and evaluation frameworks, such as:
1. “Copyright Violations and Large Language Models”
2. “Foundation Models and Fair Use”
3. “Evaluating Copyright Takedown Methods for Language Models”
4. “LLMs and Memorization: On Quality and Specificity of Copyright Compliance”
5. “SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation”
6. “Digger: Detecting Copyright Content Misusage in Large Language Model Training”
7. “Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4”
8. “Avoiding Copyright Infringement via Large Language Model Unlearning”
9. “CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation”
10. “Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy”
Incorporating these references would provide a broader context and strengthen the paper’s discussion regarding practical applications and limitations in copyright-sensitive domains.
Other Strengths And Weaknesses: Strengths:
- The paper proposes a novel and conceptually clear method for disentangling memorization from general model capabilities.
- The theoretical analysis supports the empirical findings, and the idea of sequence-specific dropout is interesting and innovative in this context.
Weaknesses:
- The experimental evaluation is limited to a small, controlled dataset (TinyStories) and does not test scalability or robustness in more realistic settings.
- The literature review could be expanded to better position the work within recent developments in copyright compliance and related evaluation methods.
Other Comments Or Suggestions: It would be beneficial to include experiments on larger datasets or more realistic benchmarks to validate the general applicability of SeqTD.
Consider incorporating a standardized evaluation package (e.g., cotaeval or similar ones) to strengthen the experimental analysis.
Expanding the literature review to discuss related works in copyright evaluation and unlearning methods would improve the context and relevance of the paper.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive review. We're glad that:
- (i) you found our **theoretical analysis sound**,
- (ii) appreciated the **clarity and motivation of our experiments**, and
- (iii) recognized the novelty of using dropout for disentangling memorization from generalization.
We've added new experiments and clarifications that we believe directly address your suggestions.
---
### **Theme 1: Experimental Scope & Generalization**
> *"Experiments are limited to a small-scale dataset and do not explore more challenging benchmarks."*
To expand the scope and test robustness, we've added:
🔗 *Full results available here:* [Model Scaling Experiment Results](https://shorturl.at/md9cQ), [Token Scaling Experiment Results](https://shorturl.at/Aa7VZ), [Metrics Besides Loss](https://shorturl.at/M7UfM)
- **Model scaling experiments** across four model sizes: 32, 51, 170, 344 million parameters. Our results show that SeqTD mitigates memorization across all model sizes but the benefits of SeqTD increase with scale (no model degradation at 344M parameters).
- **Token Scaling Experiments** We train a 350M parameter model on a larger-scale corpus of 1 billion tokens containing a mix of TinyStories and SlimPajama. Our results demonstrate that SeqTD mitigates memorization (reducing the memorization-validation loss gap by 2.5x) while outperforming a data-deduplication baseline on validation loss.
- **Metrics Besides Loss** We validate SeqTD using generation-based extraction metrics of memorized sequence *token accuracy* and *memorized sequence perplexity rank*. We demonstrate SeqTD drops token accuracy on memorized sequences from >90% to <10%, suggesting robust mitigation of memorization.
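To illustrate how the token-accuracy metric works, here is a toy sketch (illustrative only; `greedy_next` stands in for a real language-model call and is not our actual evaluation harness):

```python
def token_accuracy(greedy_next, sequence, prompt_len):
    """Fraction of tokens in a memorized continuation that greedy
    decoding reproduces, teacher-forcing the true token after each step.

    `greedy_next` maps a token-prefix tuple to the model's argmax next
    token; here it is a stand-in for a real language-model call.
    """
    context = list(sequence[:prompt_len])
    target = sequence[prompt_len:]
    hits = 0
    for tok in target:
        hits += (greedy_next(tuple(context)) == tok)
        context.append(tok)  # teacher-force the ground-truth token
    return hits / len(target)

# A toy "model" that has fully memorized one sequence:
memorized = [5, 3, 9, 1, 4, 4, 2]
lookup = {tuple(memorized[:i]): memorized[i] for i in range(len(memorized))}
acc = token_accuracy(lambda ctx: lookup.get(ctx, -1), memorized, prompt_len=3)
# acc == 1.0 for a fully memorized sequence
```

Dropping from >90% to <10% on this metric means greedy decoding almost never reproduces the memorized continuation after neuron removal.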
---
### **Theme 2: Clarifying Mechanism via Theory**
> *"More practical insights into the theoretical assumptions."*
We've surfaced the core intuition behind our theoretical results through the following simplified theorem (based on the theory introduced in Appendix E, page 14):
> **Simplified Theorem (E.3):**
> *When a set of memorization neurons is activated consistently across repetitions of a given sequence and less frequently on other sequences, sequence memorization accumulates in these neurons and away from neurons activated on all sequences (shared neurons).*
The key idea is that memorization neurons (by virtue of being less frequently activated) are shielded from interference from other gradient updates. As a result, sequence memorization accumulates and is preserved in these neurons. On the other hand, memorization stored in the generalization neurons experiences forgetting dynamics and is eventually eliminated. **We empirically verify this proposed mechanism in Figure 6 of our paper by showing that SeqTD experiences smaller-amplitude learning-forgetting cycles.**
---
### **Theme 3: Clarifying Evaluation Objectives (CotaEval)**
> *"Evaluation would benefit from using standardized copyright benchmarks like cotaeval."*
Thank you for the suggestion. While we agree with the **long-term motivation around copyright and privacy**, we want to clarify that this paper is a **foundational investigation into training dynamics**, not a compliance benchmark. The datasets used (e.g., TinyStories) are synthetic or constructed for studying memorization patterns—not real-world content where cotaeval would apply.
---
### **Theme 4: Literature Review and Broader Context**
> *"Expand discussion to include recent works on copyright compliance."*
We agree and have expanded our Related Work section in our internal draft to include the copyright compliance literature you suggested. Thanks for the suggestion!
---
**Table 1: Summary of Actions for Reviewer Q8Dm**
| Concern | Action Taken | Expected Outcome |
|------------------------------|----------------------------------------------------------|------------------------------------------------|
| Small-scale experiment scope | Added model-size sweep + 1B-token training | Demonstrate scaling properties and robustness |
| Evaluation depth | Added perplexity rank + token accuracy metrics | Provide stronger insights on model behavior |
| Mechanism clarity | Moved and Simplified Theorem E.3 | Integrate theoretical foundation with empirical observations |
| CotaEval request | Respectfully deferred; explained paper focus | Clarified evaluation scope and contribution |
| Literature review gaps | Added broader copyright/unlearning papers | Situate paper in practical context |
We believe these additions substantially strengthen the paper while maintaining its focus on the core theoretical and empirical contributions. We appreciate your thoughtful suggestions that helped us improve the work's clarity, scope, and connection to the broader literature. | null | null | null | null | null | null |
From Language Models over Tokens to Language Models over Characters | Accept (spotlight poster) | Summary: The paper presents algorithms to convert token-level language models to character-level ones, addressing the prompt boundary problem. It introduces the concept of covering and proposes both exact and approximate algorithms. The main findings include that the method can accurately approximate the character-level distribution with a small computation budget. The key algorithmic idea is to find a covering of token strings that form a key technical concept, allowing the selection of members in proportion to their normalized prefix probability. The paper also provides an efficient algorithm for conditional token generation.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provide a detailed analysis of the prompt boundary problem and demonstrate how their method resolves it. They also present empirical results on two publicly available language models, GPT2-large and Llama 3.1 8B, showing the accuracy and efficiency of their approach. The experiments include measuring the Jensen-Shannon distance (JSD) between the character-level conditional distributions with different beam sizes and the reference model, as well as the processing speed in characters per second.
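For concreteness, the Jensen-Shannon distance used in these experiments has a short standard definition; the following is a minimal sketch of computing it over next-character distributions (this is the textbook formula, not code from the paper):

```python
import math

def js_distance(p, q):
    """Jensen-Shannon distance (base 2) between two next-character
    distributions given as dicts mapping characters to probabilities.
    It is the square root of the Jensen-Shannon divergence."""
    m = {c: 0.5 * (p.get(c, 0.0) + q.get(c, 0.0)) for c in set(p) | set(q)}
    def kl(a):  # KL(a || m) in bits, over a's support
        return sum(pa * math.log2(pa / m[c]) for c, pa in a.items() if pa > 0)
    return math.sqrt(0.5 * kl(p) + 0.5 * kl(q))

# Identical distributions are at distance 0; disjoint ones at 1.
assert js_distance({"a": 1.0}, {"a": 1.0}) == 0.0
assert abs(js_distance({"a": 1.0}, {"b": 1.0}) - 1.0) < 1e-12
```

Because the distance is bounded in [0, 1] (in bits), it gives an interpretable scale for how closely a small-beam approximation matches the large-beam reference.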
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. The authors address the fundamental tension between token-level models and character-level prompts, which is a significant challenge for users of large language models. The evaluation criteria, including the JSD and processing speed, are appropriate for assessing the quality and efficiency of the proposed algorithms.
Theoretical Claims: I did not check the correctness of the proofs for theoretical claims in detail, but the provided proofs in the paper seem logically sound and follow standard mathematical reasoning. The proofs for Proposition 1 and Proposition 2 are based on manipulating summations and leveraging the properties of strict-prefix monotonicity, which are key concepts in the paper.
Experimental Designs Or Analyses: The experimental designs and analyses seem valid. The authors use standard benchmark datasets (wikitext-103-v1) and modern language models (GPT2-large and Llama 3.1 8B) for evaluation. The experiments are designed to measure the accuracy and efficiency of the proposed method, and the results are presented in a clear and understandable manner. The use of beam search with different beam sizes to approximate the covering is a reasonable approach, and the trade-off between error (JSD) and speed (characters/sec) is well-documented.
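As a note on why beam search is needed at all: the marginalization over tokenizations that beam summing approximates can be written exactly for toy vocabularies. Here is a hypothetical sketch (toy unigram token model, EOS handling omitted; not the authors' implementation) of summing a character-prefix probability over its covering of token strings:

```python
def prefix_prob(sigma, vocab, cond_prob):
    """P(generated character string starts with `sigma`) under a
    token-level LM, by enumerating the covering: minimal token strings
    whose concatenation first reaches or passes the end of `sigma`.
    Exponential in general, so feasible only for toy vocabularies.
    """
    total = 0.0
    stack = [((), "", 1.0)]  # (token context, chars so far, probability)
    while stack:
        ctx, text, p = stack.pop()
        for tok in vocab:
            p_new = p * cond_prob(tok, ctx)
            new = text + tok
            if p_new == 0.0:
                continue
            if new.startswith(sigma):
                total += p_new  # member of the covering: stop extending
            elif sigma.startswith(new):
                stack.append((ctx + (tok,), new, p_new))  # still a strict prefix
            # otherwise the branch contradicts sigma and is pruned
    return total

# Toy unigram model over {"a", "b", "ab"}: "ab" has two tokenizations.
p = {"a": 0.5, "b": 0.25, "ab": 0.25}
total = prefix_prob("ab", list(p), lambda tok, ctx: p[tok])
# total == 0.25 ("ab") + 0.5 * 0.25 ("a" then "b") == 0.375
```

With a real subword vocabulary the number of branches explodes, which is exactly why the paper replaces this exhaustive enumeration with a beam of the K most probable partial tokenizations.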
Supplementary Material: This paper has no supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature in several ways. The work builds upon previous research on tokenization models and their limitations, such as the issues discussed by Cao & Rimell (2021) and Chirkova et al. (2023). The concept of covering is a novel contribution that addresses the prompt boundary problem, which has been highlighted in prior work like Lundberg & Ribeiro (2023). The paper also contributes to the field of computational psycholinguistics by providing a method to compute contextual surprisal for arbitrary character strings, as discussed by Giulianelli et al. (2024). The proposed algorithms and methods are likely to influence future research on language model interfaces and tokenization strategies.
Essential References Not Discussed: Since I am not familiar with the relevant literature, I cannot be sure.
Other Strengths And Weaknesses: The paper has several strengths. It addresses a fundamental problem in the interface between token-level language models and character-level prompts, which is a significant challenge for users of large language models. The proposed algorithms are novel and provide a principled solution to the prompt boundary problem. The empirical evaluation is thorough and demonstrates the effectiveness of the proposed methods.
However, there are also some weaknesses. The paper assumes that the language model's probability mass is concentrated around a limited set of tokenizations, which may not always be the case. Additionally, the beam summing method requires a very large beam size K if the language model does not favor a small number of tokenizations, which could be a limitation in practice.
Other Comments Or Suggestions: The paper is well-written and clear overall. However, there are a few minor issues. For example, in the section on "Key Properties of κ," the definition of strict-prefix monotonicity could be made more explicit with additional examples. Additionally, the notation in some parts of the paper is quite dense and could be simplified for better readability.
Questions For Authors: I have a few important questions for the authors:
1. How does the proposed method handle cases where the language model's probability mass is not concentrated around a limited set of tokenizations?
2. What are the computational requirements for the beam summing method when dealing with very large beam sizes?
3. Could the covering concept be extended to handle more complex tokenization schemes, such as those used in multilingual language models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### General response
Many reviewers suggested that our evaluation methodology, which uses a large beam as a proxy for ground truth character-level probabilities, may have some systematic bias. We will add discussion to the paper about the challenge of designing a faithful evaluation as well as the possible pitfalls in our large-beam evaluation scheme.
That being said, we agree that our evaluation can be improved. In subsequent revisions of the paper, we will seek to address this in the following ways:
- **Additional baselines and comparisons**:
- **Token healing for character-level probabilities**: Thanks to a suggestion from reviewer DnV9, we will add a comparison between our algorithm and an algorithm based on token healing for inferring character-level probabilities.
- **Perplexity per byte**: Based on the suggestion by reviewer DnV9, we will add a comparison between our method’s byte-level cross-entropy and the byte-normalized cross-entropy of a token-level language model.
- **Increased beam-width**: We will further increase the beam width of the baseline model, bringing it even closer to the ground truth and reducing any potential bias in the comparison.
- **Evaluation of downstream accuracy on LM reasoning benchmarks**: We agree that some evaluation of the downstream effects of our approach would strengthen the paper. Therefore, we will investigate the feasibility of adding an evaluation of the downstream accuracy on one or more common LM reasoning benchmarks (e.g., HellaSWAG or GLUE). However, we do not necessarily expect to see benefits. One variant that could be interesting is randomly shifting the prompt boundary to the left. Consider an example from HellaSWAG:
```
Prompt: "A man is sitting on a roof. He "
1. "is using wrap to wrap a pair of skis.",
2. "is ripping level tiles off.",
3. "is holding a rubik's cube.",
4. "starts pulling up roofing on a roof."
```
We would randomly shift the prompt left, e.g., move `"of. He"` from the end of the prompt to the possible continuations:
```
Prompt: "A man is sitting on a ro"
1. "of. He is using wrap to wrap a pair of skis.",
2. "of. He is ripping level tiles off.",
3. "of. He is holding a rubik's cube.",
4. "of. He starts pulling up roofing on a roof."
```
We note, however, that we do not claim that our method must improve performance on downstream tasks, e.g., math reasoning. The purpose of our approach is to enable a character-level interface to a tokenized LM, which naturally solves the prompt boundary problem.
### Reviewer 9E2G
Thank you for your review, particularly the questions!
> Other Comments Or Suggestions: The paper is well-written and clear overall. However, there are a few minor issues. For example, in the section on "Key Properties of κ," the definition of strict-prefix monotonicity could be made more explicit with additional examples. Additionally, the notation in some parts of the paper is quite dense and could be simplified for better readability.
Thank you for the suggestions. We will do some editing for improving readability including examples like this will help us do that.
> Questions For Authors:
> 1. How does the proposed method handle cases where the language model's probability mass is not concentrated around a limited set of tokenizations?
This is an excellent question. Unfortunately, if mass is not concentrated, then our beam summing estimate will be skewed. Fortunately, as the method is based on beam search, even if the beam size K is smaller than what is needed to cover the entire distribution, our method can still pick up on the more likely tokenizations. We will add some discussion about this in the revised paper.
> 2. What are the computational requirements for the beam summing method when dealing with very large beam sizes?
We provide time and space complexity analyses in section 3.2. The short answer is that time and space are linear in the beam size K. The primary bottleneck in using more samples is that we make up to K language model calls at each position of the character string.
> 3. Could the covering concept be extended to handle more complex tokenization schemes, such as those used in multilingual language models?
This sounds very interesting. What are some of the complexities that multi-lingual language models have? If you could please provide some pointers/citations, we would love to dig into your question. | Summary: The paper highlights the fact that language models over tokens are _not_ language models over characters, at least the way they are normally used. To be specific, the standard procedure of taking a prompt, tokenizing it, and then sampling from the model conditioned on the token sequences is _not_ the same thing as sampling from the model's distribution conditioned on the prompt as a textual prefix. The paper introduces the idea of a _covering_ which is a sufficient set of token sequences that, if evaluated by the model, allow one to sample properly from the text conditioned distribution. An approximation of this covering is also given for computational convenience.
Claims And Evidence: The paper proceeds very logically from its premise to conclusions. I believe all of the theoretical claims made are well-supported.
Methods And Evaluation Criteria: I think the method proposed (using the full cover) is exactly the correct thing to do, although it is not very practical due to its exponential time complexity (which is unavoidable). The approximation given is the most natural approximation of the full problem.
The evaluation criterion (JSD on next byte) is valid but I do think leaves something to be desired. I would be interested in seeing the cross-entropy loss at the byte level, this could be compared to the byte normalized loss of the LM at the token level (i.e. the "bits per byte") which is obtained by converting the standard cross entropy loss from units of nats/token to bits/byte (for a certain text).
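The conversion I have in mind is a simple rescaling of units; a sketch with made-up example numbers (not results from the paper):

```python
import math

def bits_per_byte(nats_per_token, n_tokens, n_bytes):
    """Convert token-level cross-entropy (nats/token) to bits/byte:
    total nats over the text, divided by ln(2) to get bits, then by
    the byte length of the same text."""
    return nats_per_token * n_tokens / (math.log(2) * n_bytes)

# e.g. a loss of 2.3 nats/token on text averaging 4 bytes per token:
bpb = bits_per_byte(2.3, n_tokens=1_000, n_bytes=4_000)
# roughly 0.83 bits/byte
```

Since both quantities are normalized over the same text, the token-level model's bits-per-byte can be compared directly against the byte-level cross-entropy of the proposed character-level interface.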
Theoretical Claims: I did not carefully check the proofs in the appendix, but I strongly believe the propositions are correct.
Experimental Designs Or Analyses: The existing experiments are valid and I appreciate the results. But I think the paper misses a chance to really motivate the problem. Basically the existing comparison is between the proposed algorithm with a small beam width and the same algorithm with a large beam width. I think the paper would be much stronger if a comparison was made between the proposed algorithm and the existing practice of simply tokenizing the prefix and sampling as well as the token healing correction heuristic. Of course, neither of these are the "correct" thing and so I would expect the proposed algorithm to beat them soundly, especially with a large beam width. If there is a big difference, then there's a clear motivation for using the proposed algorithm if one cares about byte conditioned sampling. On the other hand, if there isn't a big difference, it means that the heuristics may be "good enough" which is something we didn't know before!
Supplementary Material: I skimmed the proofs and the provided code. I did not carefully check each step but the overall structure of the proofs looks correct to me.
Relation To Broader Scientific Literature: This paper can be thought of as an extension of the work of marginalizing over segmentations of a text (Cao & Rimell, 2021; Chirkova et al., 2023) but done in a manner that is "open to the right," which allows one to consider sampling an extension of the text.
Essential References Not Discussed: I'm not aware of any prior papers on this exact topic. All of the related papers I know of are discussed fairly in the paper.
Other Strengths And Weaknesses: I really appreciate the clarity of the presentation. I found the paper very enjoyable to read.
Other Comments Or Suggestions: I think the footnote corresponding to "10" in Figure 1 seems to be missing?
Questions For Authors: Some of the existing works on the marginalization problem (e.g. Cao & Rimell, 2021; Chirkova et al., 2023) tackle the problem using importance sampling, which has the benefit of being unbiased. Is there any way to apply a similar idea in the setting of conditional generation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### General response
Many reviewers suggested that our evaluation methodology, which uses a large beam as a proxy for ground truth character-level probabilities, may have some systematic bias. We will add discussion to the paper about the challenge of designing a faithful evaluation as well as the possible pitfalls in our large-beam evaluation scheme.
That being said, we agree that our evaluation can be improved. In subsequent revisions of the paper, we will seek to address this in the following ways:
- **Additional baselines and comparisons**:
- **Token healing for character-level probabilities**: Thanks to a suggestion from reviewer DnV9, we will add a comparison between our algorithm and an algorithm based on token healing for inferring character-level probabilities.
- **Perplexity per byte**: Based on the suggestion by reviewer DnV9, we will add a comparison between our method’s byte-level cross-entropy and the byte-normalized cross-entropy of a token-level language model.
- **Increased beam-width**: We will further increase the beam width of the baseline model, bringing it even closer to the ground truth and reducing any potential bias in the comparison.
- **Evaluation of downstream accuracy on LM reasoning benchmarks**: We agree that some evaluation of the downstream effects of our approach would strengthen the paper. Therefore, we will investigate the feasibility of adding an evaluation of the downstream accuracy on one or more common LM reasoning benchmarks (e.g., HellaSWAG or GLUE). However, we do not necessarily expect to see benefits. One variant that could be interesting is randomly shifting the prompt boundary to the left. Consider an example from HellaSWAG:
```
Prompt: "A man is sitting on a roof. He "
1. "is using wrap to wrap a pair of skis.",
2. "is ripping level tiles off.",
3. "is holding a rubik's cube.",
4. "starts pulling up roofing on a roof."
```
We would randomly shift the prompt left, e.g., move `"of. He"` from the end of the prompt to the possible continuations:
```
Prompt: "A man is sitting on a ro"
1. "of. He is using wrap to wrap a pair of skis.",
2. "of. He is ripping level tiles off.",
3. "of. He is holding a rubik's cube.",
4. "of. He starts pulling up roofing on a roof."
```
We note, however, that we do not claim that our method must improve performance on downstream tasks, e.g., math reasoning. The purpose of our approach is to enable a character-level interface to a tokenized LM, which naturally solves the prompt boundary problem.
### Response to Reviewer DnV9
Thank you so much for your review. It has been incredibly helpful as your suggestions for additional baselines are fantastic, and we will add them (more below).
> re: "bits per byte" suggestion
Thank you for this suggestion! This is a great comparison to run, and we plan to add it to our experimental evaluation.
> re: a token-healing-based baseline
This is an incredibly good suggestion. Coincidentally, we used that method precisely during debugging but didn't think of adding it as a baseline. What an excellent idea - thank you!
> I think the footnote corresponding to "10" in Figure 1 seems to be missing?
It will be fixed in the next revision. The text for the two footnotes in Fig 1 (i.e., 10 and 11) appear on the previous page, on lines 376–384 [left column], as footnotes labeled with the same numbers.
> Re: Importance sampling
Importance sampling could be used to estimate the prefix probability (and indeed it is unbiased for that). However, it does not give an unbiased conditional prefix probability, as it requires the division of two prefix probabilities. We briefly experimented with sequential Monte Carlo but found that noncanonical token sequences were strongly overrepresented. For example, when we use it on the string `"SELECT * FROM"` roughly 94% of GPT-2's samples start with the token `S` rather than the canonical token `SELECT`. So, we ended up ruling it out in favor of beam search. Perhaps a more sophisticated proposal distribution would help. We suspect that it is useful to represent particles as "buckets" as we did in our pruning heuristic (starting on line 337).
It's possible that we should add something about this 'negative result' to an appendix.
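In symbols, the obstruction is that the conditional quantity is a ratio of two prefix probabilities, and an unbiased estimator of each does not yield an unbiased estimator of the ratio:

```latex
\Pr(\sigma' \mid \sigma) \;=\; \frac{\Pr(\sigma\,\sigma')}{\Pr(\sigma)},
\qquad
\mathbb{E}\!\left[\frac{\widehat{\Pr}(\sigma\sigma')}{\widehat{\Pr}(\sigma)}\right]
\;\neq\;
\frac{\mathbb{E}\big[\widehat{\Pr}(\sigma\sigma')\big]}{\mathbb{E}\big[\widehat{\Pr}(\sigma)\big]}
\quad \text{in general.}
```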
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. I think with the proposed changes the work will be much more solid, so I have increased my score. | Summary: This paper presents an algorithm for converting token-level language models for character level language models. The authors present compelling analysis as well as detailed explanation. This work also includes practical experimental evaluation results.
Claims And Evidence: The claims are backed by convincing theoretical and empirical analysis.
Methods And Evaluation Criteria: The authors evaluated their algorithms using GPT2 and Llama 8B models on the Wikitext dataset—a reasonable choice. They measured performance using Jensen-Shannon distance between a high-budget model (treated as ground truth) and various faster approximation models. While this approach makes practical sense, it remains somewhat unsatisfying. Could the authors compare their methods against a truly exact model using full coverings, at least in small-scale settings? This would provide a more rigorous baseline for evaluation.
Theoretical Claims: They appear reasonable.
Experimental Designs Or Analyses: See my concerns in Methods And Evaluation Criteria. The experimental design and analysis is largely convincing.
Supplementary Material: N/A, the paper is reasonably self contained.
Relation To Broader Scientific Literature: To the best of my knowledge, this paper is very upfront about introducing prior work, including mentioning existing solution to the token boundary problem using token healing heuristics, and also presents compelling analysis revealing their short comings.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strength:
- Generally the paper is well written, the authors made considerable effort to put together this research.
- The research topic and idea is very novel and compelling.
- I really appreciate the author's effort in systematically explaining the problem, existing solutions and their shortcomings. The explanations are valuable contributions in and of itself.
Weakness:
- Would be great if the authors can repeat the comparison with an exact character model.
Other Comments Or Suggestions: N/A.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ### General response
Many reviewers suggested that our evaluation methodology, which uses a large beam as a proxy for ground truth character-level probabilities, may have some systematic bias. We will add discussion to the paper about the challenge of designing a faithful evaluation as well as the possible pitfalls in our large-beam evaluation scheme.
That being said, we agree that our evaluation can be improved. In subsequent revisions of the paper, we will seek to address this in the following ways:
- **Additional baselines and comparisons**:
- **Token healing for character-level probabilities**: Thanks to a suggestion from reviewer DnV9, we will add a comparison between our algorithm and an algorithm based on token healing for inferring character-level probabilities.
- **Perplexity per byte**: Based on the suggestion by reviewer DnV9, we will add a comparison between our method’s byte-level cross-entropy and the byte-normalized cross-entropy of a token-level language model.
- **Increased beam-width**: We will further increase the beam width of the baseline model, bringing it even closer to the ground truth and reducing any potential bias in the comparison.
- **Evaluation of downstream accuracy on LM reasoning benchmarks**: We agree that some evaluation of the downstream effects of our approach would strengthen the paper. Therefore, we will investigate the feasibility of adding an evaluation of the downstream accuracy on one or more common LM reasoning benchmarks (e.g., HellaSWAG or GLUE). However, we do not necessarily expect to see benefits. One potentially interesting variant would be to randomly shift the prompt boundary to the left. Consider an example from HellaSWAG:
```
Prompt: "A man is sitting on a roof. He "
1. "is using wrap to wrap a pair of skis.",
2. "is ripping level tiles off.",
3. "is holding a rubik's cube.",
4. "starts pulling up roofing on a roof."
```
We would randomly shift the prompt left, e.g., move `"of. He"` from the end of the prompt to the possible continuations:
```
Prompt: "A man is sitting on a ro"
1. "of. He is using wrap to wrap a pair of skis.",
2. "of. He is ripping level tiles off.",
3. "of. He is holding a rubik's cube.",
4. "of. He starts pulling up roofing on a roof."
```
We note, however, that we do not claim that our method must improve performance on downstream tasks, e.g., math reasoning. The purpose of our approach is to enable a character-level interface to a tokenized LM, which naturally solves the prompt boundary problem.
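The boundary-shifting transformation described above can be sketched as follows (`shift_boundary` and its character count `k` are our own illustrative names, not part of the paper):

```python
def shift_boundary(prompt, continuations, k):
    """Move the last k characters of the prompt onto the front of each continuation,
    so the prompt no longer ends on a 'clean' token boundary."""
    head, tail = prompt[:-k], prompt[-k:]
    return head, [tail + c for c in continuations]
```

For the HellaSWAG example above, shifting by `k = 7` moves `"of. He "` from the prompt onto each continuation.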
### Response to Reviewer zQVq
Thank you for the thoughtful review and the kind words.
> Methods And Evaluation Criteria: The authors evaluated their algorithms using GPT2 and Llama 8B models on the Wikitext dataset—a reasonable choice. They measured performance using Jensen-Shannon distance between a high-budget model (treated as ground truth) and various faster approximation models. While this approach makes practical sense, it remains somewhat unsatisfying. Could the authors compare their methods against a truly exact model using full coverings, at least in small-scale settings? This would provide a more rigorous baseline for evaluation.
Unfortunately, using a truly exact model is not feasible, as marginalizing over the tokenizations of even short strings quickly becomes intractable. We also don't think that short-string comparisons of beam summing's performance would necessarily generalize to longer strings, so we worry that such a comparison might be misleading.
Another option we considered was evaluating our method's ability to convert a probabilistic context-free grammar over a tokenized set of symbols into a character-level model, as it is possible to compute those probabilities in cubic time; however, it is unclear whether the results would generalize to the general LM setting. We are happy to discuss this option further in the discussion period. | Summary: This paper is motivated by addressing the "prompt boundary problem" in token-level language models. In models using tokenizers like BPE, even small changes at the prompt boundary, such as adding a whitespace, can dramatically alter the next token distribution in unintuitive ways, which is undesired behavior.
The authors propose a principled solution to convert token-level language models into character-level ones. Their method is built around the concept of a "covering" - the set of all minimal token sequences that, when decoded, would produce a given character string or a string having it as a prefix. By considering the weighted probability distribution across this entire covering rather than just a single tokenization, the method correctly computes character-level probabilities from token-level language models.
The paper presents both exact algorithms and efficient approximations based on beam search for computing these character-level probabilities. Their experiments show that with reasonable computational resources, their method achieves high accuracy in estimating character-level distributions with minimal error at practical speeds on models like Llama 3.1 8B, effectively solving the prompt boundary problem and creating a more intuitive interface for working with tokenized language models.
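The covering-based marginalization summarized above can be illustrated with a brute-force toy sketch; the four-token vocabulary and the uniform, context-free "LM" below are our own illustrative assumptions, not the paper's algorithm:

```python
VOCAB = ["a", "b", "ab", "ba"]
TOK_P = 1 / len(VOCAB)  # toy LM: every next token equally likely, context-free

def prefix_prob(s):
    """P(the decoded character stream starts with s), summed over the covering:
    the minimal token sequences whose decoding has s as a prefix."""
    total = 0.0
    stack = [((), "")]
    while stack:
        seq, decoded = stack.pop()
        if decoded.startswith(s):
            # seq covers s; it is minimal because we never extend a covering sequence
            total += TOK_P ** len(seq)
        elif s.startswith(decoded):
            # decoded is still a proper prefix of s: extend with every token
            for tok in VOCAB:
                stack.append((seq + (tok,), decoded + tok))
        # otherwise decoded has diverged from s and the branch is pruned
    return total
```

For `s = "ab"` the covering is `("ab")`, `("a", "b")`, and `("a", "ba")`, giving 1/4 + 1/16 + 1/16 = 0.375, which matches the direct calculation of the probability that the character stream starts with "ab" under this toy model.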
Claims And Evidence: yes
Methods And Evaluation Criteria: For the experiment, the ground truth in this evaluation is an approximation obtained using a very high beam size (K = 128) from the same method, it may inherit the biases or errors of that approximation. Evaluating on real-world datasets would provide stronger evidence that the method effectively resolves the prompt boundary problem in practical scenarios, rather than only on controlled benchmarks.
Theoretical Claims: yes
Experimental Designs Or Analyses: please see above
Supplementary Material: yes
Relation To Broader Scientific Literature: This paper is related to tokenization in LLMs, prior works have identified the problems with token-level models processing character-level prompts, this paper formalizes this problem of prompt boundary. This work also relates to constrained decoding works, where converting a token-level LLM to a character level LLM helps in constrained decoding.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Pros:
1. The paper effectively formalizes the challenge of converting token-level LLMs to express character-level outputs. Its formulation is clear and the use of color-coded notations enhances readability.
2. The proposed method incorporates beam search pruning to significantly improve efficiency.
3. The authors demonstrate its effectiveness under a reasonable computational budget.
Cons:
1. The evaluation is primarily conducted on a single corpus. Expanding experiments to include diverse, real-world datasets would help validate the method’s robustness and provide more practical insights into how it improves actual problem-solving.
2. The evaluation is evaluated against the same method but with higher beam search K, this may introduce bias.
Other Comments Or Suggestions: no
Questions For Authors: please see other sec
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### General response
Many reviewers suggested that our evaluation methodology, which uses a large beam as a proxy for ground truth character-level probabilities, may have some systematic bias. We will add discussion to the paper about the challenge of designing a faithful evaluation as well as the possible pitfalls in our large-beam evaluation scheme.
That being said, we agree that our evaluation can be improved. In subsequent revisions of the paper, we will seek to address this in the following ways:
- **Additional baselines and comparisons**:
- **Token healing for character-level probabilities**: Thanks to a suggestion from reviewer DnV9, we will add a comparison between our algorithm and an algorithm based on token healing for inferring character-level probabilities.
- **Perplexity per byte**: Based on the suggestion by reviewer DnV9, we will add a comparison between our method’s byte-level cross-entropy and the byte-normalized cross-entropy of a token-level language model.
- **Increased beam-width**: We will further increase the beam width of the baseline model, bringing it even closer to the ground truth and reducing any potential bias in the comparison.
- **Evaluation of downstream accuracy on LM reasoning benchmarks**: We agree that some evaluation of the downstream effects of our approach would strengthen the paper. Therefore, we will investigate the feasibility of adding an evaluation of the downstream accuracy on one or more common LM reasoning benchmarks (e.g., HellaSWAG or GLUE). However, we do not necessarily expect to see benefits. One potentially interesting variant would be to randomly shift the prompt boundary to the left. Consider an example from HellaSWAG:
```
Prompt: "A man is sitting on a roof. He "
1. "is using wrap to wrap a pair of skis.",
2. "is ripping level tiles off.",
3. "is holding a rubik's cube.",
4. "starts pulling up roofing on a roof."
```
We would randomly shift the prompt left, e.g., move `"of. He"` from the end of the prompt to the possible continuations:
```
Prompt: "A man is sitting on a ro"
1. "of. He is using wrap to wrap a pair of skis.",
2. "of. He is ripping level tiles off.",
3. "of. He is holding a rubik's cube.",
4. "of. He starts pulling up roofing on a roof."
```
We note, however, that we do not claim that our method must improve performance on downstream tasks, e.g., math reasoning. The purpose of our approach is to enable a character-level interface to a tokenized LM, which naturally solves the prompt boundary problem.
### Response to Reviewer us9y
Thank you for the review and constructive suggestions for improving our experimental evaluation. [We are also glad you appreciated the color coding!]
> For the experiment, the ground truth in this evaluation is an approximation obtained using a very high beam size (K = 128) from the same method, it may inherit the biases or errors of that approximation. Evaluating on real-world datasets would provide stronger evidence that the method effectively resolves the prompt boundary problem in practical scenarios, rather than only on controlled benchmarks.
This is a reasonable concern: Essentially, the concern is that beam size K=64 might be really good at predicting K=128 because they systematically make the same kinds of errors. We treat K=128 as essentially ground truth, which we have not proven is the case. We will discuss this issue in the revised paper, as it is an important limitation of our experiment design. Thank you.
> The evaluation is primarily conducted on a single corpus. Expanding experiments to include diverse, real-world datasets would help validate the method’s robustness and provide more practical insights into how it improves actual problem-solving.
Thanks for this suggestion! We will investigate the feasibility of adding a downstream accuracy evaluation on a common LM benchmark, e.g., HellaSWAG, GLUE. Note that we do not intend to claim that our method will necessarily improve model performance on downstream tasks (unless there are prompt boundary issues). We emphasize that enabling a character-level interface to language models is valuable in its own right—for instance, when measuring character-level surprise for psycholinguistic experiments ([Giulianelli et al., 2024](https://arxiv.org/abs/2410.02691)) or handling tasks that inherently require fine-grained character-level control.
> The evaluation is evaluated against the same method but with higher beam search K, this may introduce bias.
Great point. Please see the general response. | null | null | null | null | null | null |
Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization | Accept (poster) | Summary: CBMs aim to improve explainability of models by making decisions based on human-interpretable concepts but often suffer from mislabeled concept data, leading to significant performance drops. To address this, the Concept Preference Optimization (CPO) objective is introduced, leveraging Direct Preference Optimization to reduce the impact of concept mislabeling. The authors compared CPO to conventional BCE across multiple datasets to evaluate robustness against concept noise.
Claims And Evidence: The paper claims that CPO improves the robustness of CBMs by replacing correctness assumptions with a preference-based optimization approach. Theoretical results and empirical findings suggest that CPO leads to better performance under noisy concept labels than traditional BCE. The authors provide mathematical derivations to show that CPO's gradient updates remain closer to the optimal noise-free gradient than BCE's, which serves as the main theoretical justification for CPO's robustness. However, while the claims are well-motivated and backed by theoretical derivations, the scope of noise modeling is limited to empirical concept label noise and does not consider more complex, structured noise models (e.g., systematic bias or adversarial noise). Additional empirical results exploring these settings would strengthen the evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The primary theoretical claim is that CPO is more resilient to noise than BCE due to its gradient properties, specifically, that LCPO’s gradient updates are closer to the optimal noise-free gradient than BCE’s under constant noise conditions. The authors prove this with Proposition 4.1 and Theorem 4.3, which mathematically formalize this gradient similarity. The main limitation here is that the assumptions behind these proofs are strong and may not hold in all real-world cases. First, the assumption of independent and uniform priors for concepts is unrealistic. Many concepts are correlated (e.g., "feathers" and "wings" in birds). Second, the noise model assumes random label corruption but does not consider structured noise, such as systematic bias in datasets or adversarial noise attacks. A stronger theoretical contribution could involve relaxing these assumptions and extending the analysis to more realistic noise distributions.
Experimental Designs Or Analyses: The experimental setup appears reasonable. See below for more.
Supplementary Material: No.
Relation To Broader Scientific Literature: Important topic.
Essential References Not Discussed: Thorough analysis.
Other Strengths And Weaknesses: Strengths:
- The paper presents a novel preference-based optimization approach that relaxes strict correctness assumptions, making it highly relevant for noisy real-world settings.
- Theoretical results are clearly derived, and mathematical proofs support key claims.
- The empirical study demonstrates improvements on multiple real-world datasets, suggesting that CPO has practical benefits.
Weaknesses:
- Strong assumptions (e.g., uniform priors, concept independence) may limit generalizability.
- No discussion of computational efficiency—CPO introduces an online learning approach, but the paper does not analyze how training time or model complexity compares to BCE.
- Limited evaluation on different noise types—structured or adversarial noise is not explored.
Other Comments Or Suggestions: - Table 1 does not follow the order of baseline introduction. Please align the presentation for clarity.
- Consider visualizing gradient behavior over time. A plot comparing LCPO and BCE gradients over training iterations could illustrate the robustness claim more effectively.
Questions For Authors: - Why did the authors only evaluate the jointly trained CBM and not the sequential one?
- What happens when empirical data is systematically biased? The paper assumes empirical preferences are more reliable than random sampling, but what if systematic annotation errors exist (e.g., domain shifts, adversarial perturbations)?
- How would you model concept dependencies (e.g., "wings" and "feathers" in birds) ? The assumption of conditional independence between concepts is unrealistic. How could correlated concept structures be incorporated into CPO?
- How does LCPO perform in early training phases? Proposition 4.2 suggests LCPO’s gradient updates are more conservative than BCE’s, potentially leading to slower convergence. Could adaptive learning rates (scaling updates based on entropy) address this?
- How does CPO handle dataset biases? Since LCPO only modifies the policy when incorrect concepts are sampled, it assumes mislabeled concepts are uniformly distributed. However, in practice, certain incorrect concepts may dominate due to dataset biases. Would reweighting techniques help mitigate this issue?
- Provide a computational efficiency analysis. Given that CPO involves online preference updates, how does its training time and memory usage compare to BCE?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and helping us improve our work.
`Consider visualizing gradient behavior over time...`
We agree with the reviewer that this figure can aid in illustrating our claim. A version of it is already in App D, where we empirically confirm our theoretical results.
`1 Sequential CBMS`
We chose to focus on joint CBMs as they are generally shown to be the best-performing models across a variety of tasks. Below, we provide some results for the unnoised setting using sequential CBMs. We generally observe that CPO provides better results in both the sequential and joint settings, specifically in task accuracy, where we find that the added uncertainty estimates encoded into the concept representation allow the decoder to pick up more expressive representations.
Sequential CBM results: https://ibb.co/bMJKXcLP
` 2. What happens when empirical data is systematically biased?`
We agree that incorporating experiments with more structured noise would strengthen our conclusions. Specifically, in our response to reviewer **sJF1** we describe the following experiments:
- One where noise is introduced at the concept group level (e.g., flipping only wing-related attributes in birds or altering beak colors).
- Another where noise is introduced based on the confidence level provided by the labeler (as available in the CUB dataset).
` 3. Strong assumptions`
While our results do rely on the assumption of a uniform prior, this is the weakest prior one can have. We show in Sec 5.3 how using an informative prior improves the performance of the model through the added flexibility. The ability to leverage a prior (which BCE cannot) while still working under an uninformative one is beneficial. Thus, while our results do rely on this assumption, we believe it is not a limiting one.
While we acknowledge that concepts are often correlated, assuming conditional independence is a common approach in CBM literature, with dedicated works exploring dependency modeling (e.g., stochastic CBMs [1]). Given this, we opted for a model that assumes conditional independence.
This feedback has prompted us to reflect on whether conditional independence is necessary for the correctness of our analysis. Upon review, we found that it is not required, meaning our results can generalize to more expressive models.
To illustrate this, consider modeling the joint distribution of a concept with all other concepts, $\pi(c_i, c_{0:i-1}|x)$ . Using the chain rule, this can be factorized into an autoregressive formulation:
$$
\pi(c_i | c_{i-1}, x)\pi(c_{i-1} | c_{i-2}, x) \dots
$$
With this factorization, we can always compute the joint distributions $\pi(c | x)$ and $\pi(c' | x)$, ensuring that our results in Appendix C.1 still hold even without assuming that concepts are conditionally independent given $x$.
The main reason for the independence assumption was to simplify the derivation of the CPO loss in Eq. (6). Since we find this assumption unnecessary, we will drop this assumption from our work.
Here, we present preliminary results using this autoregressive formulation. Specifically, we implement this approach with two networks: $ \pi_{\phi}(c_{i} \mid c_{0:i-1}, x) $ and $ h_{\omega}(\tau \mid c, x) $, where $ h_{\omega} $ determines the order of the autoregressive decomposition. In this process, $h_{\omega}(\tau \mid c, x) $ takes as input the concept probabilities predicted by $\pi $ and produces a softmax distribution over concepts. The concept with the highest probability is then selected and predicted by $\pi $, and this procedure repeats in an autoregressive manner until all concept probabilities have been obtained.
AR CBM Table: https://ibb.co/BH9s5jcm
Here, we find that our autoregressive model beats the conditionally independent models under BCE, while slightly underperforming under CPO. Regardless, we find that CPO also performs better for this type of model.
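The chain-rule factorization above can be sketched as follows (`cond_prob` is a hypothetical callable standing in for the network $\pi_{\phi}$; a fixed concept order is assumed for simplicity, whereas the authors' $h_{\omega}$ chooses the order adaptively):

```python
def joint_concept_prob(cond_prob, concepts, x):
    """Joint probability pi(c|x) = prod_i pi(c_i | c_{<i}, x) via the chain rule.
    cond_prob(i, prefix, x) returns P(c_i = concepts[i] | c_{<i} = prefix, x)."""
    p, prefix = 1.0, []
    for i, c in enumerate(concepts):
        p *= cond_prob(i, tuple(prefix), x)
        prefix.append(c)
    return p
```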
` 4. How does LCPO perform in early training phases? Proposition 4.2 suggests LCPO’s gradient updates are more conservative than BCE’s, potentially leading to slower convergence...`
In practice, CPO can be slower to fit the training data, but we observe that this does not negatively impact generalization. Instead, it suggests that CPO may be less prone to overfitting, which is a known issue in likelihood-based training. To illustrate this, we provide anonymized links to sample runs on the CUB dataset across two seeds.
https://ibb.co/Z67qYySF (Validation Task Accuracy)
https://ibb.co/4wkNn1WW (Validation C AUC)
https://ibb.co/Xx8WNLnZ (Training task accuracy)
https://ibb.co/chxX81Jf (Training C AUC)
Regarding adaptive learning rates or entropy-based updates, any improvements seen from these strategies should also apply to CPO, as its updates are fundamentally based on data likelihood.
`Computational Analysis`
Please see our response to reviewer **sJF1**
[1] Stochastic Concept Bottleneck Models https://arxiv.org/html/2406.19272v1 | Summary: One limitation of concept bottleneck models (CBMs) is that their training requires the set of correct concept annotations for all samples. However, concept mislabelling is inevitable due to labeling noise or subjective annotations. To this end, this paper proposes a CBM that is robust to concept-label noise. Specifically, inspired by the recent progress in Preference Optimization, Direct Preference Optimization (DPO) in particular, they introduce a novel objective function for training CBMs, named Concept Preference Optimization (CPO). Theoretically, they show the similarities and differences between the traditional BCE loss and the CPO loss, explaining why the latter is more robust to concept-label noise. In addition, they demonstrate that CPO is equivalent to learning the concept’s posterior distribution. Empirically, they show that CBMs trained with the CPO loss perform better in both un-noised and noisy environments.
## Update after rebuttal
I keep my score since the author's responses have addressed my questions.
Claims And Evidence: The claims made in the submission are supported by clear and convincing theoretical and empirical evidence.
Methods And Evaluation Criteria: The datasets used in this paper are the standard benchmarks used to evaluate the performance of CBMs, including CUB, AwA2, and CelebA.
The studied problem (i.e., training a model robust to concept-level noise) is well motivated because there indeed exists much noise in these datasets.
Theoretical Claims: The theoretical claims and proofs seem correct to me.
Experimental Designs Or Analyses: The experimental designs seem valid to me, because the datasets used in this paper are the standard ones and the experiments have studied two different environments—with and without concept-level noise (even with different levels of noise), which well demonstrate the effectiveness of the proposed method across various scenarios.
Supplementary Material: The implementation details described in the supplementary material seem sufficient for reproducing the results.
Relation To Broader Scientific Literature: This paper focuses on the robustness of CBMs against concept-level noise, which is a well-motivated problem and may inspire further research on removing concept-level noise in datasets or building noise-robust CBMs.
This paper designs a loss function based on Direct Preference Optimization (DPO) and may inspire further research on combining the field of preference optimization and the area of concept bottleneck models.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: This is a solid piece of work, clearly defining the problem, formulating the method, theoretically analyzing the properties of the proposed loss in comparison to the traditional loss, and evaluating its effectiveness across various environments.
Other Comments Or Suggestions: In Eq (6), $D$ should be $\mu$?
Questions For Authors: 1. In Eq (5), are $c$ and $c^\prime$ a scalar or a k-dimensional vector?
2. Different from DPO, which adopts a static preference dataset, the proposed method CPO adopts a dynamic dataset because $c^\prime$ is sampled from the current model $\pi_\theta$. My question is: I understand that in the early training stage, $c \succ c^\prime$, because $\pi_\theta$ is not well-trained at that time. But is it possible that, in the late training stage, $c^\prime \succ c$ could happen? If this is the case, would continuing to train the model with the proposed loss hurt the model's ability to distinguish preference pairs, i.e., to give a higher score to the better concept?
3. Can I conclude that the reason the CPO loss is more robust against concept-level noise is as follows: DPO is the same as BCE when $c\sim\mu^{+}$; however, when $c\sim\mu^-$, DPO yields smaller gradients?
4. I am just curious: in the future, would it be possible to utilize CPO to identify mislabeled concepts or even find the correct labels for them?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and encouragements. Here we provide some detailed responses to some of the questions you have raised.
`In Eq (6), should $D$ be $\mu$ ?`
Yes, that is a notational mistake on our end. We will fix it.
`1. In Eq. (5), are c and c′ scalars or k-dimensional vectors?`
In our work, Eq. (5) uses scalars. However, this is not a strict requirement—one could use vectors to structure concepts in this way. That said, scalars are significantly less computationally demanding, as modeling structured objects would lead to $2^k$ possible concept representations, which could become impractical.
`2. In the later training stages, could c′ ≻ c occur? If so, would continued training hurt the model's ability to distinguish preference pairs?`
Yes, this situation can arise later in training. To mitigate this, we apply early stopping across all models. This ensures that the model does not overfit to later-stage preferences in a way that degrades its ability to distinguish concept quality.
`3. Is the reason CPO is more robust against concept-level noise that DPO behaves like BCE when c ∼ μ+, but for c ∼ μ−, DPO yields smaller gradients?`
This intuition is correct. In **Appendix D**, we show that when noise is low (e.g., **0.1**), CPO’s gradients are indeed smaller than BCE’s. This is consistent with our experiments, where BCE performs reasonably well under **p_noise = 0.1**. However, at **p_noise = 0.3**, CPO’s gradients are **orders of magnitude smaller** than BCE’s. This aligns with our empirical findings—CPO remains relatively unaffected by high noise levels, whereas BCE struggles significantly.
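For intuition, a per-pair preference loss of the kind discussed here can be sketched in a simplified, reference-free DPO-style form (Eq. (6) in the paper is the authoritative objective; this is only an illustration):

```python
import math

def cpo_pair_loss(logp_c, logp_c_prime, beta=1.0):
    """-log sigmoid(beta * (log pi(c|x) - log pi(c'|x))) for a preferred concept c
    (from the dataset) and a dispreferred c' (sampled from the current policy)."""
    margin = beta * (logp_c - logp_c_prime)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss equals log 2 at zero margin and decays as the policy's preference for $c$ over $c'$ grows, so well-separated pairs contribute vanishing gradients.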
`4. Could CPO be used to identify mislabeled concepts or even recover correct labels?`
This is an interesting question. While we have not explored this explicitly, we have qualitatively found that CPO's uncertainty estimates seem more grounded in the target object in the image. To show this, we provide examples of images before and after various augmentations, along with the concept being analyzed. Our uncertainty score is based on the variance of the concept, normalized to a [0,1] range, where 0 indicates complete certainty and 1 represents maximum uncertainty. The results show that when the target object becomes obscured, CPO more effectively increases its concept uncertainty, while both BCE and ProbCBM perform significantly worse at this task. Additionally, we observe that in certain scenarios, cropping the image creates a zooming effect that makes the models more confident in their assessments.
https://ibb.co/JR9DZvfd
https://ibb.co/219TTtLN
https://ibb.co/23Rkwqqr
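A minimal sketch of the variance-based uncertainty score described above, assuming each concept prediction is a Bernoulli with probability `p` (the factor of 4 rescales the maximum Bernoulli variance of 0.25 to 1; the exact normalization used in the paper may differ):

```python
def concept_uncertainty(p):
    """Normalized Bernoulli variance: 4 * p * (1 - p), so 0 means complete
    certainty (p in {0, 1}) and 1 means maximum uncertainty (p = 0.5)."""
    return 4.0 * p * (1.0 - p)
```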
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the responses, which have addressed my questions, so I will keep my score.
It would be appreciated if the authors could include A2, i.e., the discussion on $c^{\prime} \succ c$ in the later training stages, in the paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their constructive feedback and will add this discussion as either a footnote in section 4 or to the appendix. | Summary: The paper proposes training CBMs (and its variants) using Concept Preference Optimization (CPO) - a method directly borrowing from the Preference Optimization (PO) literature
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes, all experiments are standard.
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: The paper studies an important problem of CBM training with a trending topic of PO.
Essential References Not Discussed: All references present.
Other Strengths And Weaknesses: Strengths:
1. Strong work combining interesting PO approaches with CBMs.
2. Empirical evaluation and theoretical justification are sound.
3. The work is a step towards increasing the robustness of CBMs.
Weakness:
1. Presentation Issues: The paper could benefit significantly from improved presentation. For example, three pages into the reading, there is only one motivating example, which is not discussed well. I would implore the authors to add some concrete examples and motivation in clear sentences to make this a more appealing paper.
2. The experiments seem too synthetic: Even though I understand the motivation of the approach, and the experiment design - the setting is a bit too synthetic for a good analysis. In particular, label flipping with a probability can be one of the experiments, but how about mislabelling *similar* concepts, which is a much more likely scenario? For example - red wings flip to 0, brown wings flip to 1. You can utilize CLIP/Bertscore to do something like that. This will make the experiments much more "life-like".
3. Better presentation of Figure 4: Fig. 4, the most important figure in the paper, suffers from a lack of visual intuitiveness in my opinion. The bars are just too close together to convey a strong improvement message. Why not just make a table, like Table 1?
Other Comments Or Suggestions: Refer Weakness.
Questions For Authors: 1. What exactly is the ground truth concept distribution the preference is calculated against? Is it taken directly from the dataset annotation or is it done manually?
2. What are the computational overheads if any? Are there ANY limitations at all?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time and effort in helping us improve our work.
### W 1
We agree. We will address these concerns in the camera-ready version. We discuss the details in comment 2 of our response to reviewer **2knp**.
### W 2
We also agree (thank you). As you and other reviewers have pointed out, experiments with more structured noise would strengthen our results. To address this, we have designed two experiments with more structured noise to the concepts than our original setting.
### Noising by Group Level
As you suggested, we now study mislabeling similar concepts based on concept groups in the CUB and AwA2 datasets. This means we introduce noise to semantically similar concepts, such as switching *red wings* to *brown wings* (as in your example). In this experiment, we apply noise at the group level with different noise levels, $p \in \{0.1, 0.2, 0.3, 0.4\}$.
Full results are below. We observe that models trained using BCE experience a substantial drop in performance, whereas CPO models remain much more robust. For instance, on CUB, the task accuracy of a CBM trained with CPO remains relatively stable even at high noise levels. Likewise, on AwA2, we observe that CPO models preserve Concept AUC much better than their BCE counterparts.
Unfortunately, due to character limits in our response, we are restricted to providing links to these results. If any reviewer is uncomfortable clicking the link, we will try to provide a version of the full table in our next (final) response.
CUB and AwA2 Noising by Group:
https://ibb.co/NgDZ9sBC
### Uncertainty Experiment
Additionally, we leverage a useful property of the CUB dataset: labelers provide confidence scores for their annotations. These scores take values in $\{1,2,3,4\}$, where higher values indicate greater confidence. We introduce noise proportionally to these scores—labels with lower confidence are more likely to be flipped. Specifically, we apply noise at the following rates:
- Confidence 1 → 40% noise
- Confidence 2 → 30% noise
- Confidence 3 → 20% noise
- Confidence 4 → 10% noise
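This confidence-weighted flipping can be sketched as below. It is an illustrative reconstruction of the rates listed above; the function and variable names are ours, not the authors'.

```python
import numpy as np

# Flip probability per CUB labeler-confidence score, as listed above.
NOISE_BY_CONFIDENCE = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

def flip_by_confidence(labels, confidences, seed=0):
    """Flip binary concept labels; low-confidence labels flip more often."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    for i, conf in enumerate(confidences):
        if rng.random() < NOISE_BY_CONFIDENCE[conf]:
            labels[i] = 1 - labels[i]
    return labels
```

Over many labels, a confidence-1 annotation is flipped about four times as often as a confidence-4 annotation, matching the rates above.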
CUB confidence-based results:
https://ibb.co/KcfB6qHp
As in other experiments, BCE-based models are heavily impacted by noise, leading to significant drops in task accuracy and concept AUC. In contrast, CPO-based models consistently exhibit robustness to noise, regardless of their structure.
We thank all reviewers for suggesting experiments with more structured noise, as we believe these results significantly improve the quality of our work.
### W 3
This is a good point; improving visual clarity is important. We are experimenting with converting Figure 4 into a table. However, we initially chose a plot for its space efficiency. At a minimum, we will include a table with all metrics in the appendix for the camera-ready version.
Additionally, if we retain the figure, we may remove Prob-CBMs from the plot to enhance clarity, restricting their results to the appendix given their underperformance in this setting.
# Questions
` What exactly is the ground truth concept distribution that the preference is calculated against?...`
The preferred concepts are indeed taken from the empirical dataset. We discuss it in lines 165-170:
*"To circumvent this issue, we can leverage the empirical dataset and state its preference over a concept set sampled from $\pi_\theta$. The preference over a pair of concepts should hold specifically early on in training where the policy is suboptimal compared to the empirical data."*
` What are the computational overheads, if any...`
We thank the reviewer for suggesting improvements to our computational analysis. Below, we provide a detailed breakdown of the computational overhead associated with each model. The following table reports the average time per epoch (in minutes) for all models, measured over a full run on the CUB dataset. Additionally, we include the number of trainable parameters for each model.
https://ibb.co/8gL3ZNLq
While CPO introduces a small computational overhead, the increase is minimal—approximately 0.05 minutes per epoch compared to BCE. This is in contrast to CEM and ProbCBM, which significantly increase runtime. Moreover, we compare an unoptimized implementation of CPO against an optimized BCE implementation built specifically for efficiency in PyTorch. This suggests that further optimizations could reduce CPO’s overhead even more.
On the other hand, ProbCBM has 4× the number of parameters compared to CBM and nearly doubles the training time, despite underperforming CPO in most settings.
A potential limitation of CPO is that its gradients tend to be smaller—particularly in the early stages of training—compared to BCE. This could lead to slower convergence in some cases. However, as we also mention in our response to reviewer dJqj, we do not observe this empirically. | Summary: The paper proposes a Preference Optimization (PO) based training Paradigm - CPO for training CBMs. The paper gives a detailed analysis of the proposed method and has experiments on intervention and random label flips. The preference set is taken as observed empirical evidence while the negative sampling of concepts is taken as unpreffered.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, extensively.
Experimental Designs Or Analyses: Yes, the experiments are appropriate.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The new CBM training mechanism fixes known problems.
Essential References Not Discussed: All references are discussed well.
Other Strengths And Weaknesses: Strengths:
1. The paper addresses a well-known but rarely explored aspect of CBMs - concept mislabelling. Usually, concept labels are treated as almost always accurate but can be susceptible to mislabelling.
2. The paper is well-written and easy to follow. The theoretical analysis and equations are clear to understand.
Weakness:
1. Diverse Experiments will strengthen the conclusion: For the "Noised Evaluation" experiment, the experiment setting tests the performance of such models on random concept flips. This evaluation leads to 2 problems - 1) The ideal real-world setting does not directly entail random label flip, but a label-confound, where a user can probably mislabel based on their perception of the image. 2) If the model still performs almost as well when the label flips with 0.4 probability, it is deviating from its empirical evidence and relying more on the posterior. For (2) - should we actually ignore empirical evidence?
2. Confusing Writing: The paper can benefit much more from improving the flow and writing in the initial sections. A dedicated paragraph discussing the example figure in simple English can make the persuasiveness and overall appearance much more appealing. One has to shift to Section-3 for a thorough understanding directly.
Other Comments Or Suggestions: Refer to the weaknesses above.
Questions For Authors: How are the hyperparameters tuned for the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time and effort in helping us improve our work.
## Weaknesses
### 1 Diverse Experiments to Strengthen Conclusions
1.1 Thank you for this valuable feedback. We agree that incorporating experiments with more structured noise would strengthen our conclusions. In our response to reviewer **sJF1**, we discuss two additional experiments:
One where noise is introduced at the concept group level (e.g., flipping only wing-related attributes in birds or altering beak colors).
Another where noise is introduced based on the confidence level provided by the labeler (as available in the CUB dataset).
1.2
`
If the model still performs almost as well when the label flips with 0.4 probability, it is deviating from its empirical evidence and relying more on the posterior. For (2) - should we actually ignore empirical evidence?
`
While it is possible that the model’s strong performance is partially due to reliance on the posterior rather than empirical evidence, we do not believe this fully explains the results. For example, in some settings, particularly for CEMs trained with BCE, models can still perform well even under high levels of noise.
We attribute this to the fact that, despite 40% of the labels being noisy, the remaining 60% provide sufficient empirical evidence for the model to fit the data—albeit imperfectly.
### 2. Improving the Introduction for Clarity
We appreciate the reviewer’s suggestion to include a dedicated paragraph explaining Figure 1 in the introduction. While we agree that this addition would improve clarity, space constraints may limit our ability to introduce an entirely new paragraph. Instead, we propose enhancing the introduction by adding context around Figure 1, making it more digestible for the reader.
Here, we provide the modified text we would add to the introduction to address this issue:
```
We propose Concept Preference Optimization (CPO), a policy optimization-inspired objective loss for CBMs. Figure 1 illustrates how CPO leverages pairwise comparisons of concept preferences to guide updates toward preferred concepts while mitigating the impact of incorrect gradients. Unlike traditional likelihood-based learning, which updates on all samples regardless of correctness, CPO selectively adjusts based on sampled preferences. This reduces sensitivity to noise by mitigating incorrect gradient updates when incorrect concepts are sampled. Our analysis shows that CPO is equivalent to learning the posterior distribution over concepts, leading to more robust training. Empirically, we demonstrate that CPO not only improves CBM performance in noise-free settings but also significantly alleviates the impact of concept mislabeling.
```
## Question
1. How are the hyperparameters tuned for the model?
We discuss these details in Appendix A.1, which we provide here for reference:
```
From App A. We use a batch size of 512 for the Celeb dataset and 256 for CUB and AwA2. We train all models using RTX8000 Nvidia-GPU. In all datasets, we train for up to 200 epochs and early stop if the validation loss has not improved in 15 epochs. For fair evaluation across methods, we tune the learning rate for CEMs, CBMs, and ProbCBM. Specifically, for CUB and AwA2 datasets, we explore learning rates ∈ {0.1, 0.01}, while for CelebA, we expand the search to ∈ {0.1, 0.01, 0.05, 0.005} due to the observed instability of CEMs at higher learning rates. Additionally, we set the hyper-parameter λ ∈ {1, 5, 10} for all methods. For CEMs and models trained using LDPO, we found RandInt beneficial, which randomly intervenes on 25% of the concepts during training. ProbCBM introduced a few extra hyperparameters, which we did not tune in this work, and we directly used the hyperparameters provided by the original authors. Similar to other models, ProbCBM employs RandInt at 50%, making it particularly sensitive to interventions, especially in concept-complete tasks such as AwA2 and CUB. The only model for which we tune additional hyper-parameters is Coop-CBM, where we adjust the weight parameter for the auxiliary loss.
```
We hope our additional experiments and clarifications can help alleviate your concerns.
---
Rebuttal Comment 1.1:
Comment: I appreciate the experiment regarding group-level noise analysis. Utilizing labeler confidence is an interesting approach and I hope/trust the authors will provide a more detailed analysis of the experiment in the final camera-ready version. The experiment design will help in improving present CBM architectures as well.
I am improving my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for their feedback. We will include a more detailed analysis of both these experiments in the final draft of the paper. | null | null | null | null | null | null |
SCENIR: Visual Semantic Clarity through Unsupervised Scene Graph Retrieval | Accept (poster) | Summary: SCENIR is a novel unsupervised scene graph-based retrieval framework that prioritizes semantic content over low-level visual features. SCENIR uses a Graph Autoencoder to eliminate the need for labeled data. It outperforms vision-based, multimodal, and supervised GNN approaches in both accuracy and efficiency. Additionally, it introduces Graph Edit Distance (GED) as a robust metric for scene graph similarity, improving retrieval reliability and enabling generalization to unannotated datasets for counterfactual image retrieval.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: It seems that essential references are discussed.
Other Strengths And Weaknesses: S1: SCENIR overcomes visual biases in vision models by using scene graphs to enhance semantic understanding.
S2: It introduces an unsupervised Graph Autoencoder for scene graph retrieval, removing the need for labeled data.
S3: SCENIR ensures robust evaluation with GED and extends to unannotated datasets for broader applications.
W1: Figure 1 could use clearer examples, as the Top-1 result of Efficient-ViT appears more similar to SCENIR's Top-1 result, where the sports-related semantic relationship is not prominent.
W2: Clarify the rationale for using GED as an evaluation metric in scene graph retrieval, emphasizing its ability to capture structural and semantic differences effectively.
W3: The paper could further explore or compare the capabilities of recent large language and multimodal models in addressing the scene graph retrieval problem.
Other Comments Or Suggestions: Refer to the weaknesses mentioned above
Questions For Authors: Refer to the weaknesses mentioned above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer PwzX's thoughtful feedback and their recognition of the validity of our method and the soundness of our evaluation. We now provide clarifications for each of the concerns raised.
- W1: We appreciate the reviewer’s suggestion regarding Figure 1 and would like to clarify that our design choices for the teaser figure were intentional. The selected images specifically illustrate the color bias present in Efficient-ViT, which SCENIR effectively mitigates. This example highlights SCENIR’s ability to retrieve images based on true semantic understanding rather than superficial color similarities. Examining the ranked results in detail (all three positions), we can observe Efficient-ViT’s color bias when analyzing the semantic relevance of the retrieved images, as discussed in lines 47-56 of the Introduction:
1. The **Top-1 result** from Efficient-ViT does contain a ‘group of people,’ but they are not engaged in ‘riding’ any form of ‘sports equipment’ or ‘wheeled vehicle’ on the ‘street’ - key semantic elements of the query.
2. While the **Top-2 result** might appear somewhat relevant due to the presence of a car, it lacks critical query concepts such as ‘group of people,’ ‘riding,’ and ‘sports equipment,’ demonstrating a reliance on color rather than semantics (as noted in lines 49-53 of the Introduction).
3. The **Top-3 result** from Efficient-ViT moves away from a black-and-white image but still fails to capture the essential semantics, as it lacks both a ‘group of people’ and any form of ‘sports equipment.’ In contrast, SCENIR retrieves an image of people snowboarding, which, while differing in activity, aligns more closely with the core query semantics than Efficient-ViT’s selections.
- W2: We appreciate the reviewer’s request for clarification and would like to reiterate the rationale behind using GED as an evaluation metric, as detailed in the Ground Truth and Evaluation section (starting at line 267). GED provides a deterministic method for identifying the most similar graph pairings in a dataset: it quantifies the dissimilarity between two graphs by counting the minimum number of edit operations (node/edge insertion, deletion, substitution) needed to transform one graph into the other. This ensures that retrieval is based on meaningful changes in object compositions and relationships, rather than superficial features. Unlike purely visual metrics (e.g., pixel similarity) or text-based evaluations (e.g., caption similarity), GED directly measures the semantic coherence between two scene graphs. Semantically related objects or objects with similar roles (e.g., "person sitting on a chair" vs. "child sitting on a bench") will have lower edit distances, reflecting semantic similarity. Motivated by these advantages, as well as non-negligible evaluation shortcomings stemming from leveraging captions and caption similarity (see inconsistent top-1 SBERT retrievals in Figure 1, as well as low agreement between varying SBERT models in Figure 2), together with the support of prior literature in scene graph similarity (Dimitriou et al., 2024), we conclude that GED is an ideal evaluation measure in scene graph retrieval, promoting semantic preservation while eliminating ambiguity in defining ground-truth pairings.
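To make the edit-operation view concrete, here is a toy, stdlib-only sketch that treats a scene graph as a set of object nodes plus (subject, relation, object) triples and counts only insertions/deletions. This gives an upper bound on true GED, since substitutions are not modeled, and all names here are illustrative, not SCENIR's implementation.

```python
def toy_ged(nodes1, edges1, nodes2, edges2):
    """Upper-bound edit distance: each node or labeled edge present in
    only one of the two graphs costs one insert/delete operation."""
    return len(set(nodes1) ^ set(nodes2)) + len(set(edges1) ^ set(edges2))

# Query scene: "person sitting on chair"
query = ({"person", "chair"}, {("person", "sitting on", "chair")})
# A semantically close candidate and a more distant one
close = ({"person", "bench"}, {("person", "sitting on", "bench")})
far = ({"person", "car", "road"},
       {("person", "driving", "car"), ("car", "on", "road")})
```

Ranking candidates by ascending `toy_ged` retrieves `close` before `far`, mirroring how GED-based ground truth favors scenes with similar object roles and relations.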
- W3: We appreciate this interesting suggestion; however, we strongly believe it aligns with a parallel research direction rather than the core focus of our work. One of our main claims is computational efficiency (see Section 4.4. Computational speedup): SCENIR achieves fast retrieval by executing all stages in the pipeline in around 8 minutes, while requiring minimal computational resources (single NVidia Tesla P100 16Gb VRAM GPU). On the other hand, utilizing LLMs/Multimodal LLMs already increases the computational budget, even if we exclude the pre-training stage, since most potent LLMs require larger GPUs than the one used in our experiments, or paid APIs (e.g. as in the case of ChatGPT). Other than that, harnessing scene graphs directly fuses semantic information within the pipeline, ensuring determinism in results, meaning that multiple runs of the pipeline yield consistent results. On the other hand, multimodal LLMs introduce variability, as they are not explicitly guided on which features to prioritize, potentially leading to biases similar to those observed in visual and vision-language models (e.g., Efficient-ViT in Figure 1 and DEiT in Figure 6), along with additional prompt-based variability. In any case, we regard this comment as a future work direction.
We once again thank the reviewer for their valuable feedback and hope this clarification addresses their concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and for addressing my previous comments. After careful review and consideration, I would like to keep my original score. | Summary: The paper introduces an unsupervised framework for scene graph retrieval using GNN to prioritize semantic content over low-level visual biases. It employs a graph autoencoder to learn scene graph embeddings without labeled data and advocates for Graph Edit Distance as a deterministic evaluation metric.
Claims And Evidence: NA
Methods And Evaluation Criteria: method.
Theoretical Claims: NA
Experimental Designs Or Analyses: NA
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: S:
1\ Scene graphs explicitly model objects and relationships, mitigating biases from superficial features like color
2\ Extends to unannotated datasets using automated scene graph generation
3\ Introduces GED as a deterministic ground truth, addressing variability in caption-based evaluation.
W:
1\ Experiments focus on PSG and Flickr30K, broader validation across diverse domains is needed.
2\ The impact of adversarial training and decoder design could benefit from deeper analysis. This part need more ablations.
Other Comments Or Suggestions: cf. Weaknesses.
Questions For Authors: What is the sensitivity of SCENIR to different scene graph generation models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer tG8k for their thoughtful comments and for acknowledging the strengths of our approach, methodology, and evaluation process. We now address your concerns systematically
- W1: To address the reviewer’s valuable concern regarding the comprehensiveness of the datasets used in our work, we offer the following clarification. Since our approach focuses on scene graphs, and after a thorough investigation of potentially related datasets, we found that PSG and Flickr30K serve as a superset or an improved version of other similar datasets (e.g., Visual Genome, as mentioned in lines 254-255). Experimenting with additional datasets like Visual Genome would lead to redundancy, as it contains the same images from MSCOCO. The richness of scenes, objects, and relationships in the PSG and Flickr30K datasets already covers a wide range of domains and scenarios, thereby eliminating the need to explore other datasets that would essentially reflect the same data distribution. Furthermore, our experiments on Flickr30K with synthetic scene graphs and PSG (or Visual Genome) with annotated scene graphs follow the experimental setup used in prior work (Yoon et al., IRSGS).
- W2: Regarding the impact of adversarial training and the feature/edge decoder design, we would like to clarify that we have already conducted ablation studies on both components in Section 4.1. The effect of adding or removing the adversarial loss is shown in line 3 of Table 4, while the decoder ablations are detailed in lines 4 and 5 of the same table, further supporting our design choices for the final framework architecture. While we appreciate the reviewer’s perspective, given our focus on proposing a novel end-to-end semantic image retrieval framework, rather than optimizing specific GNN components, a more fine-grained ablation analysis (e.g., tuning the normalization of the adversarial training module) falls beyond the intended scope of our work.
- Questions For Authors: Despite the irrefutable validity of such a query, we believe that evaluating SGG frameworks’ quality falls outside the scope of this work. In our study, we report findings using the most effective SGG module available, as older frameworks may struggle to accurately represent scenes, potentially limiting retrieval quality. Therefore, we recommend adapting SCENIR to the strongest SGG framework for meaningful results. That said, SGG is not a core component of the SCENIR pipeline but is included primarily to demonstrate the effortless extendability of our approach to unannotated datasets.
We once again thank the reviewer for their valuable feedback and hope this clarification addresses their concerns. | Summary: This paper presents SCENIR, an unsupervised framework for scene graph retrieval that aims to improve semantic understanding in image-to-image retrieval tasks. It introduces a Graph Autoencoder-based architecture, eliminating the dependence on supervised ground truth labels like captions, which suffer from variability and inconsistencies. Key contributions include advocating Graph Edit Distance (GED) as a deterministic ground-truth measure, superior retrieval performance compared to both vision and supervised GNN baselines, and demonstrated applicability to real-world datasets and counterfactual retrieval scenarios. Experimental results on PSG and Flickr show SCENIR outperforming state-of-the-art methods in terms of accuracy and computational efficiency.
Claims And Evidence: The major claims, improved retrieval performance and computational efficiency, are supported by experimental results, comparisons, and ablation studies.
Methods And Evaluation Criteria: The methods (GAE-based unsupervised learning with adversarial training, graph pooling strategies, GED evaluation) are suitable for addressing biases arising from low-level visual features in scene graph retrieval tasks. Using GED as the evaluation metric is sensible due to its deterministic nature, reducing ambiguity in retrieval evaluations.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The experimental designs are thorough and valid. Various models including Vision, Vision-Language (VL), supervised GNNs, and unsupervised GAEs were fairly compared using clear and appropriate evaluation metrics.
A slight limitation might be the lack of an ablation study on diverse graph embedding dimensions.
Supplementary Material: The supplementary material has been briefly reviewed, specifically the sections on dataset preprocessing, ground truth and retrieval metrics, and dataset details. These sections adequately clarify methodological specifics.
Relation To Broader Scientific Literature: SCENIR's contributions are clearly positioned against related literature. Its novelty primarily lies in extending unsupervised graph autoencoders for scene graph retrieval tasks and advocating GED over caption-based supervision. The authors have clearly discussed prior related works such as IRSGS and Graph Counterfactuals.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
1. Clear method contributions in using unsupervised methods effectively.
2. Robust and comprehensive experimental validation.
3. Effective demonstrations of practical applicability.
Weaknesses:
1. Limited analysis of robustness to dataset variability or extreme cases, especially in the Flickr dataset.
Other Comments Or Suggestions: 1. Clarify runtime environment specifics (hardware configurations clearly in the main paper for reproducibility).
Questions For Authors: 1. How does the choice of embedding dimension affect SCENIR’s retrieval accuracy and computational cost?
2. Have you explored the robustness of SCENIR against noisy or incomplete scene graphs? If so, what were the findings?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer kfau for their thorough feedback and for recognizing the validity of our evaluation methods, the soundness of our experiments, and the clarity of our presentation. We address their reported limitations and respond to their questions below.
- Other comments and suggestions: Regarding the runtime environment specifics, we note that all experiments were conducted on an NVIDIA Tesla P100 GPU (line 269). Given the page limit constraints, we prioritized presenting our core contributions concisely. However, if the paper is accepted, we will include additional details in the camera-ready version to further enhance reproducibility. For reference, our setup includes: a single NVIDIA Tesla P100 GPU (16GB VRAM), a preprocessed scene graph dataset (~2GB), Python 3.10, PyTorch Geometric 2.4.0, PyTorch 2.2.0, and CUDA 11.8.
- Questions For Authors - 1: Regarding the impact of embedding dimension on retrieval accuracy and computational cost, we would like to emphasize that these results stem from hyperparameter tuning, where we empirically determined the embedding dimension alongside other hyperparameters (e.g., loss terms, encoder architecture). To address the reviewer’s valid concern, we would like to emphasize that the significant computational efficiency demonstrated in Table 5 is primarily due to SCENIR’s unsupervised nature. Specifically, its linear training time stems from the fact that it does not regress on a pair’s similarity value, and its linear preprocessing time arises because we do not compute similarity values for every pair—an obligatory step in the other two frameworks. This distinction is further explained in Section 4.4 of the paper.
- Questions For Authors - 2/W1: Regarding SCENIR’s robustness against noisy or incomplete scene graphs, we would like to clarify that the primary focus of our work is to evaluate and compare different retrieval frameworks rather than to conduct an in-depth robustness analysis for noisy or out-of-domain inputs. To ensure a high-quality benchmark, we use the PSG dataset, which significantly refines annotations from previous datasets (as mentioned in lines 254-258). Additionally, we apply preprocessing steps (detailed in Appendix A) to construct the final train/test sets. That being said, some level of noise remains inevitable. For example, this has been highlighted by the deliberate inclusion of Figure 11 (left). Specifically, it can be observed that background objects may still appear in the scene graph. However, SCENIR effectively identifies the primary semantic object of interest (e.g., “train”) despite the presence of surrounding noise, demonstrating its ability to focus on key scene elements.
We once again thank the reviewer for their valuable insights and suggestions, which help strengthen the clarity and impact of our work. | Summary: This paper tackles the problem of image-to-image retrieval, focusing on improving the retrieval performance by emphasizing semantic content over low-level visual features. The authors argue that current models often rely on visual biases (e.g., colors, lighting conditions) rather than semantic contents/understanding. To avoid bypassing semantic information, the authors recast this task into scene graph retrieval and propose SCIENIR, which is an unsupervised graph autoencoder framework for scene graph retrieval. The model architecture is essentially a branched VGAE, which is trained with a combination of losses such as the reconstruction loss, adversarial loss, and KL loss (against Gaussian).
In the experiments, this approach is evaluated in two data settings, with/without scene graph annotation. For the dataset without scene graph annotation, captions and scene graphs are automatically generated with existing tools (BLIP and PSGTR). Graph edit distance is used as the similarity score throughout the experiments. The quantitative results show that the proposed method outperforms various baselines ranging from existing GNN-based approaches to VLMs. Additionally, the authors provide qualitative analysis, counterfactual retrieval results, and speed analysis, indicating SCENIR's effectiveness and efficiency.
Claims And Evidence: In my opinion, the most important and interesting claim is that an unsupervised graph autoencoder can effectively retrieve images, even surpassing its supervised counterparts. The quantitative results clearly support that the proposed approach has advantages over the baselines. I feel the baselines look a bit old. I discuss this point in the “Methods And Evaluation Criteria” section. Also, the authors advocate that GED is a more reliable measure for scene graph similarity than caption-based approaches. Figure 2 and Figure 7 demonstrate disagreement of those approaches and support this claim. Overall, this paper clarifies its claims and provides support evidence to them properly.
Methods And Evaluation Criteria: The proposed framework and the motivations behind it sound reasonable. My only concern is the choice of baselines. As of now, there are many strong captioning models available (e.g., LLaVA). Also, it would be nice to make some connection with SOTA LLMs such as GPT and Gemini models. It would be a reasonable baseline for the idea of scene graph retrieval (e.g., generate detailed image descriptions for all images and retrieve them in the text space using any text encoder). The model sizes are too different, but it would be great if the proposed approach achieves comparable results.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See my comments in the “Methods And Evaluation Criteria” section.
Supplementary Material: I mainly read Appendix A and B to understand the dataset and metrics.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Overall, this paper is well-written and easy to follow. Supplementary materials typically provide additional information to clarify ambiguous points in the main text.
- To clarify my position, my only concern is the choice of baselines. I had hoped that this paper would use more up-to-date ones.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer s1fN for their thoughtful feedback and for the care they took in understanding our claims and evaluation methods. We greatly appreciate their recognition of the validity of our work and their positive assessment of its presentation. We would like to address their main concern to provide further clarity and ensure a shared understanding.
We appreciate the reviewer’s suggestion and acknowledge the relevance of strong captioning models such as LLaVA and large language models (LLMs) like GPT and Gemini. However, to ensure a fair comparison, we adhere to replicating baseline models as proposed by their respective authors without extending them beyond their intended scope (e.g., substituting the captioner with an LLM-based alternative). More broadly, we believe that modifying the captioning module is slightly beyond the scope of our work. While a more advanced captioner could generate richer visual descriptions - potentially benefiting prior approaches that rely on caption-based supervision (e.g., IRSGS) - our core contribution is to move away from textual descriptions altogether. As discussed in the Introduction (lines 091-107) and further validated in Section 4.2 (Caption-driven disagreements), caption-based similarity models (e.g., SBERT variants) exhibit high variability, leading to inconsistent matching results depending on the specific SBERT model employed. Importantly, this issue persists regardless of improvements in caption quality.
By instead leveraging scene graphs, we utilize Graph Edit Distance (GED) as a more deterministic and reliable supervision signal, reducing the ambiguity inherent in text-based representations. This is a key aspect of our second contribution (highlighted at the end of the Introduction) and is further supported by recent literature (Dimitriou et al., 2024). Additionally, our approach enables counterfactual retrieval via conceptual edits - a direction explored in recent explainability research (Dimitriou et al., 2024) - which would not be as naturally facilitated by caption-based methods.
Moreover, our decision not to rely on LLM-based captioning models aligns with our goal of reducing computational burden while maintaining strong performance. As the reviewer has acknowledged, LLM-based methods tend to be significantly more resource-intensive. Our approach provides a more efficient alternative, demonstrating that high-quality retrieval can be achieved without the need for large-scale caption generation.
We once again thank the reviewer for their insightful feedback and for encouraging further discussion on this aspect of our work. | null | null | null | null | null | null |
Are High-Quality AI-Generated Images More Difficult for Models to Detect? | Accept (poster) | Summary: This paper investigates whether high-quality AI-generated images (AIGIs), as preferred by human perception models, are more difficult for detection models to distinguish from real images. Contrary to intuition, the authors find that images with higher human preference scores tend to be easier to detect by existing AIGI detectors.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper presents no theoretical claims.
Experimental Designs Or Analyses: The experiments are well-designed and include appropriate baselines. Key strengths include:
1) Diverse dataset: Incorporates a variety of prompts, generators, and preference models;
2) Multiple detectors: Ensures robustness of findings across different architectures.
However, there are some potential concerns:
1) More details on dataset filtering would help: While the paper describes dataset construction, additional insights into how negative prompts and modifiers influence quality could strengthen the argument.
2) Further generalization studies: The study could benefit from testing whether similar findings hold for closed-source models like MidJourney and DALL·E 3.
Supplementary Material: The supplementary material provides details on dataset creation, clustering parameters, and additional analyses.
Relation To Broader Scientific Literature: Overall, the work fills a potential research gap by linking human preference-based quality scoring with AIGI detectability.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1) Limited discussion on failure cases (e.g., when high-quality images are misclassified).
While the paper convincingly demonstrates that high-quality images tend to be more detectable, it does not provide an in-depth analysis of failure cases, i.e., scenarios where high-quality AI-generated images are misclassified as real or where lower-quality images fool the detectors.
Are there specific types of high-quality images (e.g., those with certain color distributions, structural complexity, or semantic attributes) that remain difficult to detect despite scoring highly on human preference models?
2) Evaluation on additional closed-source generators could enhance impact.
The study primarily focuses on open-source text-to-image models (e.g., Stable Diffusion variants, PixArt-α). However, many of the most widely used real-world AI-generated images come from closed-source models such as MidJourney, DALL·E 3, and Runway Gen-2.
Other Comments Or Suggestions: 1) Clarify how "negative prompts" impact image quality in dataset creation.
The dataset includes negative prompts during generation to improve output quality, but it is unclear how these prompts influence image characteristics and whether they introduce biases in the evaluation.
Do negative prompts primarily reduce artifacts, improve semantic fidelity, or affect low-level features like texture richness and contrast?
Could certain types of negative prompts inadvertently make images more detectable by standardizing certain visual properties (e.g., smoothing textures, removing unnatural edges)?
2) Consider testing adversarial modifications to verify detector robustness.
The study demonstrates that certain image characteristics (e.g., contrast, saturation, texture richness) correlate with detectability. However, adversarial image manipulations (e.g., adjusting contrast, adding noise, altering texture patterns) could be used to challenge these conclusions.
Testing adversarial perturbations (e.g., reducing texture richness in high-quality images) could reveal whether detectors are relying on spurious correlations or genuinely robust visual cues.
Are there specific image transformations that can fool existing AIGI detectors while maintaining a high human preference score?
Questions For Authors: 1) How do adversarial perturbations (e.g., reducing texture richness) affect detection accuracy?
2) Would a multi-task learning approach (joint training on quality scoring & detection) further improve detector performance?
3) Have you tested whether closed-source generators (e.g., MidJourney, DALL·E 3) exhibit the same quality-detectability trends?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1: How do negative prompts and modifiers influence image characteristics and quality?**
Firstly, we provide an ablation study, comparing the average quality scores of SDXL images with and without negative prompts and positive modifiers. The table below suggests that **prompt engineering improves the average quality of generated images**.
|Negative prompts & positive modifiers|ImageReward|HPSv2|MPS|
|---|---|---|---|
|Without|0.0092|0.2452|12.14|
|With|0.1172|0.2602|12.54|
Since the modifiers are randomly sampled for each image, we compute the average accuracy and quality scores with and without each modifier for SD 3 images. The table below shows that none of these modifiers alone has significant and consistent effects on image quality or accuracy. We argue that the specific effect of modifiers may depend on the prompt and the synergy between modifiers.
|Metric|Modifier|HDR|best quality|dynamic lighting|hyper detailed|photorealistic|professional lighting|ultra highres|ultra realistic|
|---|---|---|---|---|---|---|---|---|---|
|Accuracy (%)|Without|56.5|57.2|56.7|56.6|56.7|56.7|57.3|56.5|
| |With|56.8|56.1|56.5|56.6|56.5|56.5|56.0|56.7|
|ImageReward|Without|0.6123|0.6265|0.6515|0.6268|0.6076|0.6395|0.6245|0.6339|
| |With|0.6429|0.6285|0.6035|0.6282|0.6476|0.615|0.6304|0.6212|
|HPSv2|Without|0.2655|0.2662|0.2661|0.2648|0.2640|0.2657|0.2652|0.2649|
| |With|0.2660|0.2652|0.2654|0.2666|0.2675|0.2658|0.2662|0.2665|
|MPS|Without|12.93|13.00|13.00|12.99|12.95|13.12|12.93|13.04|
| |With|13.13|13.06|13.06|13.07|13.11|12.94|13.13|13.02|
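For concreteness, the per-modifier ablation above could be computed along these lines. This is an illustrative pure-Python sketch, not the authors' code; record fields such as `modifiers` and `hpsv2` are hypothetical names for per-image metadata.

```python
# For each modifier, average a metric over images whose prompt used it vs. not
# (field names "modifiers"/"hpsv2" are illustrative, not from the dataset).
from statistics import mean

def modifier_ablation(records, modifier, metric):
    """records: list of dicts like {"modifiers": [...], "hpsv2": float}."""
    with_mod = [r[metric] for r in records if modifier in r["modifiers"]]
    without = [r[metric] for r in records if modifier not in r["modifiers"]]
    return mean(with_mod), mean(without)

records = [
    {"modifiers": ["HDR", "photorealistic"], "hpsv2": 0.27},
    {"modifiers": ["best quality"], "hpsv2": 0.26},
    {"modifiers": ["HDR"], "hpsv2": 0.28},
]
avg_with, avg_without = modifier_ablation(records, "HDR", "hpsv2")
print(round(avg_with, 3), round(avg_without, 3))  # 0.275 0.26
```

Because the modifiers are sampled independently per image, this with/without split isolates each modifier's marginal effect, which is what the table reports.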
**Q2: Test on closed-source models like MidJourney and DALL·E 3.**
Please refer to Q4 of Reviewer rmyb.
**Q3: Discussion on failure cases (e.g., when high-quality images are misclassified).**
We take the high-quality images with top 30% ImageReward and HPS v2 scores, and regard the samples with at most 2 correct predictions for the 6 detectors (#Correct<=2) as failure cases, comparing them with the cases of #Correct>=5. We find that the failure cases consistently have lower average contrast and saturation, which is consistent with our conclusions in Sec. 3.
|Metric|#Correct|SD 2.1|SD 3|SDXL 1.0|PA-alpha|FLUX.1 [dev]|Infinity|Average|
|---|---|---|---|---|---|---|---|---|
|contrast|>=5|66.19|67.46|53.72|63.96|60.44|58.86|61.77|
| |<=2|51.11|61.58|49.60|45.72|52.67|51.62|52.05|
|saturation|>=5|0.4343|0.4604|0.4551|0.5400|0.4817|0.4714|0.4738|
| |<=2|0.2478|0.3553|0.4134|0.4043|0.4079|0.2982|0.3545|
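The two low-level metrics compared above can be sketched as follows. This is a hedged, stdlib-only illustration (RMS contrast of luminance and mean HSV saturation); the exact metric definitions used in the paper's appendix may differ.

```python
# RMS contrast of the luminance channel and mean HSV saturation,
# computed over a flat list of (r, g, b) pixels in [0, 255].
import colorsys

def contrast_and_saturation(pixels):
    lum = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    mu = sum(lum) / len(lum)
    rms_contrast = (sum((l - mu) ** 2 for l in lum) / len(lum)) ** 0.5
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]
            for r, g, b in pixels]
    return rms_contrast, sum(sats) / len(sats)

gray_flat = [(128, 128, 128)] * 4        # no contrast, no saturation
vivid = [(255, 0, 0), (0, 0, 0)] * 2     # high contrast, saturated reds
print(contrast_and_saturation(gray_flat))  # (0.0, 0.0)
c, s = contrast_and_saturation(vivid)
```

Under these definitions, the failure cases in the table (lower average contrast and saturation) would correspond to duller, flatter images, consistent with the paper's Sec. 3 findings.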
**Q4: Consider testing adversarial modifications to verify detector robustness.**
Table 1(a) of the paper indicates certain adversarial directions for image modification, such as lowering the lightness/contrast/saturation, or increasing the sharpness. We implement these manipulations with different factors (0.5 means decreasing by 50%; 1.5 means increasing by 50%). The results below suggest that the adversarial modifications are effective for DRCT and RINE, while CoDE and SuSy are less sensitive to such modifications. Interestingly, NPR shows stronger performance on any manipulated data.
|Manipulation|Factor|DRCT-ConvB|DRCT-CLIP|RINE|CoDE|SuSy|NPR|Avg|
|---|---|---|---|---|---|---|---|---|
|none|N/A|65.6|83.1|31.8|66.7|83.8|66.8|66.3|
|lightness|0.5|51.8|55.8|20.9|68.0|84.9|89.2|61.8|
| |1.5|71.7|93.6|77.8|69.7|79.4|82.0|79.0|
|contrast|0.5|44.4|76.6|15.7|72.9|82.6|89.1|63.6|
| |1.5|76.3|93.3|84.1|69.5|85.5|87.3|82.7|
|saturation|0.5|63.2|84.3|35.8|64.9|84.7|64.2|66.2|
| |1.5|67.6|90.9|56.7|65.1|82.7|68.5|71.9|
|sharpness|0.5|76.6|85.3|34.0|71.2|84.9|89.5|73.6|
| |1.5|55.0|81.7|52.2|62.5|80.6|73.4|67.6|
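The factor convention above (0.5 halves an attribute, 1.5 boosts it by 50%) follows the usual Pillow `ImageEnhance`-style semantics. A minimal pure-Python sketch for contrast on a single channel, assuming deviation-from-mean scaling (the authors' exact manipulation code may differ):

```python
# Scale each value's deviation from the channel mean by `factor`,
# clamping to [0, 255]; factor < 1 lowers contrast, factor > 1 raises it.
def adjust_contrast(pixels, factor):
    mu = sum(pixels) / len(pixels)
    return [min(255, max(0, round(mu + factor * (p - mu)))) for p in pixels]

channel = [50, 100, 150, 200]         # mean 125
print(adjust_contrast(channel, 0.5))  # [88, 112, 138, 162]
print(adjust_contrast(channel, 1.5))  # [12, 88, 162, 238]
```

Applying such manipulations and re-running the detectors is all the table above requires, which makes this a cheap robustness probe compared with full adversarial attacks.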
**Q5: Are there specific image transformations that can fool existing AIGI detectors while maintaining a high human preference score?**
To the best of our knowledge, there may not be simple transformations with these properties. However, one may consider replacing the perceptual constraint in the perceptual adversarial attack [a] with the preference score constraint and implement an effective attack for this purpose.
[a] Perceptual Adversarial Robustness: Defense Against Unseen Threat Models. ICLR 2021.
**Q6: Would a multi-task learning approach (joint training on quality scoring & detection) further improve detector performance?**
We believe that this is possible as training on quality scoring could encourage the detector to learn on more quality-related features, especially those of higher levels, which could make the detector more robust to the distribution shift of low-level features in the cross-generator scenario. | Summary: This paper reveals an interesting yet counterintuitive phenomenon, where a higher-quality AI-generated image (AIGI) preferred by humans can be more easier for existing AIGI detectors to detect. The authors then investigate this effect and find that (1) images generated from short prompts and (2) certain image characteristics, such as texture richness, jointly influence both quality scores and detection accuracy. To address this, they propose a novel method to enhance the detection performance of existing patch-based detectors.
Claims And Evidence: The authors have provided reasonable evidence to support their findings, and most claims proposed in the introduction section are well-supported by proper visualizations and references.
Methods And Evaluation Criteria: The benchmark datasets and evaluation criteria used are widely accepted in the field.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: I have checked the soundness of the proposed experimental analyses, which all make sense to me.
Supplementary Material: I have reviewed all the content in the supplementary material and found no obvious errors.
Relation To Broader Scientific Literature: I believe the key contributions of this paper should not be limited to the detection of entire-image synthesis alone.
- Broader Application in Deepfake Detection: Similar phenomena have been observed in the detection of face-swapping content [1], where it has been noted that a resolution gap can lead to model shortcuts and overfitting, thereby limiting generalization. The findings of this paper have the potential to be applied in other related fields, such as deepfake detection, not just the detection of text-to-image generation content.
- Enhancing the Generation Process: Beyond improving detection, this technique can also benefit the generation process. The detectors can serve as critic models or reward models, helping to generate more realistic and undetectable AI-generated images (AIGIs). This dual application of the paper's findings could have significant implications and connections to other scientific literature.
[1] DF40: Toward Next-Generation Deepfake Detection, NeurIPS 2024.
Essential References Not Discussed: I recommend that the authors discuss similar research focusing on resolution, such as [1] and [2]. It would be valuable to provide a detailed comparison with these studies, highlighting the unique contributions of this paper to the field. This will help readers understand the distinct advancements and insights offered by this work.
[1] DF40: Toward Next-Generation Deepfake Detection, NeurIPS 2024; [2] Exploring Strengths and Weaknesses of Super-Resolution Attack in Deepfake Detection, ArXiv 2024; [3] A Quality-Centric Framework for Generic Deepfake Detection, ArXiv 2024.
Other Strengths And Weaknesses: Other Strengths:
- The originality of this paper is quite high and brings new insights to the field. I believe these findings actively and positively contribute to the entire field.
- The paper is well-written and very easy to follow. Most claims are well-supported by suitable evidence.
Other Weaknesses:
- While the findings have the potential to be applied to a broader range of fields, including face-swapping, face-reenactment, face-editing, and realistic image generation, the authors do not provide a comprehensive discussion on these potential applications.
- The paper is missing evaluations with more model architectures, which are necessary to verify the generality of the findings. Additional models such as CLIP, ViT (ImageNet), and CNNs trained from scratch should be included.
- The proposed method to address the resolution issue is somewhat limited, focusing primarily on patch-based detectors. There is potential for broader applicability beyond these specific detectors, which is a minor but notable weakness.
- The paper could benefit from using more recent and comprehensive datasets to make its evaluations more solid and robust.
Other Comments Or Suggestions: I don't have other comments in this section. Please see the question section and weakness section.
Questions For Authors: - Frequency Domain Analysis: Since resolution is strongly related to the frequency domain, could analyzing from the frequency domain provide some new insights?
- Detection of High-Quality Images: Would high-quality images, such as super-resolution images, be easier to detect? What are the reasons behind this? Could you provide further analysis? For instance, if face-swapping is performed and then followed by super-resolution, would this make the detection easier?
- Traces in High-Quality and Low-Quality Images: Theoretically, both high-quality and low-quality images contain traces of DNN generation models. Why are high-quality images easier to detect?
- Mitigating Overfitting on High-Quality Images: How can we alleviate the overfitting of detection models to the forgery traces in high-quality images and reduce the performance gap between different quality levels?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1: It is recommended to discuss similar research focusing on resolution, such as [1,2].**
Thank you for your valuable suggestion. While high resolution is usually an important aspect of "high-quality" images in a broad sense, this paper considers a narrower sense of image quality, which is evaluated by human preference models. The generated images studied in this paper generally have the same resolution for each generator, while [1,2] emphasize the discrepancy between low-resolution and high-resolution deepfake images, as well as the influence of super-resolution. The paper [3], as you mentioned, focuses on "forgery quality" (i.e., *"whether the deepfake image is realistic or not"*), instead of the blurriness or resolution of images, which is closer to our definition of quality. We will include more discussions on these related works and clarify our focus.
**Q2: This paper does not provide a comprehensive discussion on potential broader applications.**
As you kindly suggested, the methodology of this research could be extended to related tasks such as image manipulation detection, and the results in this paper may benefit the research on improving the generative models or building more robust detectors. We will add these discussions to our revision.
**Q3: The paper is missing evaluations with more model architectures, such as CLIP, ViT (ImageNet), and CNNs trained from scratch.**
We agree that evaluation with diverse model architectures is important for reaching a reliable conclusion. The experiments in Sec. 5 already included detectors with different architectures, such as CLIP (RINE, DRCT), ImageNet-pretrained ViT (CoDE), and ResNet trained from scratch (NPR). Besides, we add more evaluations of the quality-accuracy correlation on generators of different architectures (please refer to Q4 of Reviewer rmyb).
**Q4: The proposed method to address the resolution issue is somewhat limited, focusing primarily on patch-based detectors.**
We acknowledge that the proposed patch selection strategies are restricted to patch-based detectors. However, other kinds of detection methods could also benefit from the results of this paper. For example, our regression models obtained in Sec. 4.3 could be applied to identify the hard samples for detection, and emphasizing these samples in model training may enhance the generalization.
**Q5: The paper could benefit from using more recent and comprehensive datasets.**
Thank you for your advice. In our response to Q4 of Reviewer rmyb, we validate our main conclusions on recent datasets of real-world generated images from commercial models. As for the experiments in Sec. 5, DRCT-2M is the latest published comprehensive benchmark for AIGI detection to the best of our knowledge.
**Q6: Since resolution is strongly related to the frequency domain, could analyzing from the frequency domain provide some new insights?**
Certain fingerprints of fake images such as the up-sampling traces utilized by NPR (Tan et al., 2024) may be witnessed in the frequency domain. Nonetheless, our preliminary experiments suggest that frequency domain analysis may not bring meaningful insights into the image quality we focus on.
**Q7: Would high-quality images, such as super-resolution images, be easier to detect?**
Thank you for raising this valuable question. Since generative super-resolution with diffusion models is popular in real-world applications, we collect 1000 fake images generated by an SDXL variant without or with generative super-resolution ($1024\to1536$). We test the detectors on these images, and the results below suggest that whether the super-resolution images are easier or harder to detect depends on the detector. We believe that this question is worthy of further study.
|Generative super-resolution|CoDE|DRCT-ConvB|DRCT-CLIP|NPR|RINE|SuSy|avg|
|---|---|---|---|---|---|---|---|
|Without|76.0|74.6|51.4|100.0|59.7|78.8|73.4|
|With|57.3|88.3|65.2|99.9|27.5|75.4|68.9|
**Q8: Why are high-quality images easier to detect, given that both high-quality and low-quality images contain traces of DNN generation models?**
According to our analyses in Sec. 4, existing detectors are sensitive to certain features that correlate with high quality scores, such as high contrast and high texture richness, which means the traces of generation may be more prominent in such images for the detectors.
**Q9: How can we alleviate the overfitting of detection models to the forgery traces in high-quality images and reduce the performance gap between different quality levels?**
We may collect more diverse and balanced data in terms of image quality. Moreover, designing models that are more capable of detecting higher-level artifacts like the distortion of object structures could reduce the performance gap, as such artifacts are common in low-quality images.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I think Q7, Q8, and Q9 are very important problems for future research in AIGI detection, and I hope the authors can provide a more in-depth discussion here. Overall, the authors have addressed most of my initial concerns, so I maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your support and highlighting the importance of Q7, Q8, and Q9. We acknowledge that these raise significant and valuable directions for future research in AIGI detection. However, considering the limited time of the rebuttal and the scope of the paper, a more in-depth discussion of these complex issues is not feasible at this time. We recognize the importance of these questions and plan to address them in our future work. | Summary: This work considers the relationship between AI-generated images and real images, noting a counterintuitive phenomenon: generated images with higher quality scores, as assessed by human preference models, tend to be more easily detected by existing AIGI detectors. Additionally, it is observed that images generated from short prompts tend to achieve higher preference scores while being easier to detect.
Claims And Evidence: The authors illustrate this phenomenon using a distribution plot (e.g., Fig1), but the relationship between the Accuracy curve and the distribution is not clearly explained. As a result, readers may find it challenging to quickly identify and interpret the correspondence between the accuracy and the underlying distributions of image quality scores.
Methods And Evaluation Criteria: The authors specifically collect an AIGI dataset to evaluate image quality and the difficulty of detection, utilizing a human preference model for assessment. Specifically, the authors utilize two pre-trained human preference models and six existing open-source AIGI detectors to support their experimental findings.
Theoretical Claims: The authors primarily rely on extensive experiments to support their findings, but the study lacks sufficient theoretical analysis.
Experimental Designs Or Analyses: I believe that while the authors successfully summarize phenomena through histogram distributions, the underlying reasons are not well explained. Additionally, the descriptions accompanying the histograms lack clarity, potentially leading to confusion among readers.
Supplementary Material: The supplementary material introduces the data collection methodology and describes metrics used for evaluating low-level image features.
Relation To Broader Scientific Literature: This paper reveals that high-quality AI-generated images (AIGIs), as preferred by humans, tend to be easier for existing AIGI detectors to identify. To further investigate this phenomenon, this paper analyzes how text conditions and image features influence the correlation between the average accuracy of detectors and the quality scores predicted by human preference models.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and clearly structured.
2. The authors focus on a meaningful research direction, especially the detection of AI-generated images.
3. The experimental validation is comprehensive.
Weaknesses:
1. The data histograms provided by the authors are not sufficiently intuitive. Specifically, the relationship between the distribution and the scores lacks clarity.
2. The experiments primarily discuss observations and phenomena but lack a deeper theoretical analysis or exploration of underlying causes.
Other Comments Or Suggestions: Please refer to the weakness.
Questions For Authors: Do the authors provide comprehensive statistics on the dataset construction, such as the quantity and distribution of each category? Additionally, is there a clear indication of whether the dataset will be publicly available in the near future?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1: The relationship between the curve and the histogram in the plots is not clearly explained.**
Thank you for pointing out the potential difficulty for readers to understand the figures. In Figure 1-3, the red curve illustrates how variable $y$ (e.g., the accuracy in Fig. 1) changes with respect to $x$ (e.g., the quality score in Fig. 1), and the blue histogram depicts the distribution of $x$. The curve and the histogram share the same $x$-axis. We will revise the captions and related descriptions of Figure 1-3 to improve the clarity.
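For readers, the curve-over-histogram construction described above amounts to binning samples by quality score, then plotting per-bin mean accuracy (the curve) over per-bin counts (the histogram). A hedged sketch, with hypothetical bin edges and toy `(score, correct)` records:

```python
# Per-bin sample counts (histogram) and per-bin mean detector accuracy (curve)
# from raw (quality_score, correct01) records; bin edges are illustrative.
def binned_curve(records, edges):
    counts = [0] * (len(edges) - 1)
    hits = [0] * (len(edges) - 1)
    for score, correct in records:
        for i in range(len(edges) - 1):
            if edges[i] <= score < edges[i + 1]:
                counts[i] += 1
                hits[i] += correct
                break
    accs = [h / c if c else None for h, c in zip(hits, counts)]
    return counts, accs

data = [(0.1, 0), (0.2, 1), (0.6, 1), (0.7, 1), (0.8, 0), (0.9, 1)]
counts, accs = binned_curve(data, [0.0, 0.5, 1.0])
print(counts, accs)  # [2, 4] [0.5, 0.75]
```

Both outputs share the same x-axis (the score bins), which is why the curve and histogram can be overlaid in a single panel.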
**Q2: The underlying reasons for the phenomena illustrated by the figures are not well explained.**
The main observation illustrated by Fig. 1 (i.e., generated images of higher quality tend to be easier to detect) **can be explained by our analyses in Sec. 4**. To clarify, we do not suppose that the relation between the detector accuracy and the quality scores is causal; instead, we aim to explore the **potential confounders underlying the counterintuitive positive correlation** between the two variables. Our results in Sec. 4.3 suggest that certain low-level image characteristics and high-level features may be the confounders for this correlation, and our experiments in Sec. 5 further validate that these low-level image characteristics can be utilized to predict the detectability of image patches on a broader range of data. We will make these points clearer in the revision.
**Q3: This study lacks a deeper theoretical analysis or exploration of underlying causes.**
We understand that theoretical analysis can provide deeper insights into the relationship between the detector accuracy and the image quality, although this paper is supposed to be an empirical study. We will provide a discussion from the causal perspective (as described in Q2) with a theoretical causal graph that explicitly depicts the relations among the variables we studied in this paper.
**Q4: Provide comprehensive statistics on the dataset construction, such as the quantity and distribution of each category.**
Thank you for your kind suggestion. The distributions of the image quality scores and the prompt lengths are depicted by the histograms in Fig. 1 and Fig. 3, respectively. However, different from some previous datasets for AIGI detection that are based on certain object categories (e.g., ForenSynths (Wang et al., 2020) and GenImage (Zhu et al., 2023)), the images in our dataset contain diverse content including complex scenes with various objects. Therefore, it is difficult to categorize the images and provide reliable statistics on the image content.
**Q5: Will the dataset be publicly available in the near future?**
Yes. As stated in the footnote on Page 3 of the paper, the dataset will be publicly available upon acceptance. More specifically, we will release the generated images, the corresponding prompts, and other metadata such as the denoising steps for diffusion models and the JPEG compression quality.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will maintain my rating.
Claims And Evidence: Even though the paper is very interesting, several of its claims lack clear substantiation:
1. On lines `045-052`, the paper claims that existing datasets are randomly generated without ranking, leading to discrepancies between training data and real-world applications. However, existing literature [1][2][3] shows that some detectors perform well even with out-of-distribution data, which appears to conflict with the paper's claim. The authors should address this discrepancy explicitly.
2. The paper argues (lines `110-111`) that existing datasets lack diversity but doesn't provide evidence for that. Moreover, prior studies [1][2][3] already include diverse samples generated from both GAN-based and diffusion-based models, including Stable Diffusion used by the authors themselves.
3. The paper mentions the lack of advanced generators in existing datasets. However, current benchmarks already include both GAN and diffusion-based models, specifically Stable Diffusion, which is also utilized in this paper.
4. The study employs only **four** generators, three of which (Stable Diffusion variants) likely have similar architectures, training datasets, and methodologies. This limitation could bias the results. Including a wider variety of generators, particularly those with differing architectures (e.g., DALL·E 3, Imagen 3, FLUX -- both open- and closed-source models), would strengthen the validity of the findings.
5. Importantly, the pre-trained detectors evaluated in this paper might be biased towards short prompts since they were likely trained on shorter prompt samples, given that the authors claim their dataset is the first to include high-quality images generated from longer prompts. This potential bias would significantly undermine the paper's conclusions. Therefore, providing quantitative evidence demonstrating that detector performance is not influenced by prompt length is imperative.
# Ref:
- [1] ArtiFact: A Large-Scale Dataset with Artificial and Factual Images for Generalizable and Robust Synthetic Image Detection
- [2] Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection
- [3] Towards Universal Fake Image Detectors that Generalize Across Generative Models
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: [1] ArtiFact: A Large-Scale Dataset with Artificial and Factual Images for Generalizable and Robust Synthetic Image Detection
[2] Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection
[3] Towards Universal Fake Image Detectors that Generalize Across Generative Models
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: The paper needs clarifications:
1. The paper claims that short prompts have higher quality and higher detectability. However, it also discusses the shortest prompts, which exhibit the opposite characteristics. This contradiction is confusing. Intuitively, both short and long prompts should perform poorly, while a medium-length prompt, which hits the sweet spot, should perform well. Therefore, the authors are requested to resolve this inconsistency.
2. The authors consider prompts from datasets like COCO to be short, which leads to confusion regarding what is classified as short or long. The paper should provide qualitative examples of short, medium, and long prompts, along with their generated images and corresponding results.
3. The paper also needs to provide qualitative evidence (prompt with generated images) on how prompt length affects the generation results. Currently, only quantitative results are presented.
Questions For Authors: Check claim & evidence section
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1: Discrepancy between existing datasets and real-world applications.**
The discrepancy between existing datasets and real-world applications lies in many aspects, such as semantics, quality, and image compression. This paper focuses on quality (as explained in lines 110-117, left column), while some previous studies emphasize that the real-world performance of detectors can be affected by the image compression bias in existing datasets.
We acknowledge that some detectors [1,2,3] are reported to perform well on some OOD data. However, we find that they still have poor performance on other datasets, as suggested by the results in: https://anonymous.4open.science/r/ICML2025-rebuttal-4584/table.pdf. We did not test the method proposed in [1] as it is not open-sourced.
**Q2: Evidence for lacking diversity.**
Thank you for pointing this out. To clarify, prior studies (such as [1,2,3] as you mentioned) collect images generated by diverse generators to study the cross-generator generalization of detectors. In this paper, we instead focus on the data diversity corresponding to the same generator, especially the diversity of (1) quality-related image features and (2) prompt complexity. Specifically:
(1) The diversity of quality-related features is achieved by the independently sampled positive modifiers, which are not applied in most existing datasets.
(2) In contrast to Synthbuster, which has the highest diversity in prompt complexity but 98.5% of its prompts are under 60 words, our dataset (Fig. 3) exhibits a significantly wider distribution of prompt lengths.
We will revise the related descriptions in Sec. 2 and Sec. 3.1 to improve the clarification on dataset diversity.
**Q3: The role of "advanced generators".**
We agree that current benchmarks already contain diffusion-based models like SD (1/2/XL series). However, by "advanced generators", we refer to those more capable of producing high-quality images, especially DiTs. SD 3 and PixArt-α
are selected as representatives of DiTs, and the average quality of their generated images is higher than SD 2.1/XL with U-Net architecture, as indicated by the histogram of Fig. 1. We will improve the related descriptions and provide more statistical comparisons.
**Q4: Results with more generators.**
Thank you for your valuable suggestion. We supplemented our data with images generated by the commercial DiT model FLUX.1 [dev], and Infinity, an autoregressive model. We reproduce Fig. 1/2 on the extended data and the results are presented in Fig. I/II in [a]. In addition, we validate our findings on closed-sourced models Midjourney v6 and DALL·E 3 based on images sampled from existing datasets in Fig. V/VI in [a]. The phenomena in these figures are consistent with Fig. 1/2 of the paper.
[a] https://anonymous.4open.science/r/ICML2025-rebuttal-4584/quality_and_accuracy.pdf
**Q5: The biases towards short prompts.**
We agree that these pre-trained detectors might be biased towards images generated from short prompts if their training data only comprises such images. Fig. 3 does suggest that these detectors tend to have higher performance if the prompt length is below 40.
However, the effects of prompt length on the detector accuracy are indirect. Specifically, longer prompts may include more objects and depict a more complex scene, or they could be more likely to mention certain attributes of objects that affect their visual appearance. Therefore, the emphasis of our analyses (Sec. 4.2/4.3/5) is on the visual characteristics of generated images instead of the prompts, and our results in Sec. 5 suggest that our conclusions can be utilized to improve the performance of detectors on existing datasets based on short prompts. Please refer to Q2 of Reviewer 581z for further explanations.
**Q6: Images generated from the shortest prompt exhibit opposite characteristics.**
Thank you for underlining the importance of this intriguing phenomenon. As explained in lines 210-215 (right column) and Appendix B (lines 694-729), we notice that a significantly lower fraction of the shortest prompts contains color-related descriptions, while such descriptions tend to increase the saturation of the image. Hence, compared with medium-length prompts (21-40 words), the shortest prompts induce lower saturation, which indicates lower accuracy according to Sec. 4.3. Therefore, this phenomenon does not contradict our further analyses concerning image characteristics.
**Q7: Explain the classification and provide qualitative examples of short, medium, and long prompts, along with their generated images and corresponding results.**
Thank you for your advice. We will revise the descriptions related to prompt lengths according to the following classification standard: "short" means 1-20 words; "medium" means 21-40 words; "long" means more than 40 words.
We provide randomly sampled qualitative examples accordingly in https://anonymous.4open.science/r/ICML2025-rebuttal-4584/qualitative.pdf. | Summary: This paper studied the correlation between the quality score of AI-generated images and the detection accuracy of AI-generated images. They found that AI-generated images with higher quality scores are easier to be detected by models. Then, they analyzed the influence of the length of text prompts and image quality features to the correlation. They also conducted experiments to study how to select patches for AI-generated image detection.
Claims And Evidence: This paper tried to study and explain the correlation between the quality of AI-generated images and the detection accuracy of AI-generated images. However, the quality of image is evaluated by some pre-trained preference models, not by human. Since the image quality assessment is a highly subjective and challenging task, it is questionable that the quality scores given by these models can reflect actual human preference/image quality. In other words, are the images with high scores really high quality?
Methods And Evaluation Criteria: When we detect AI-generated images, only the image itself can be accessed. Therefore, the quality of an image should also be determined by the image alone; why is the quality score determined by both the text prompt and the image? The correlation between quality and detection accuracy observed in this paper may be mainly influenced by the text prompt rather than by the image's true quality.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: See “Methods And Evaluation Criteria”
Supplementary Material: Yes, all of them
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: --How about the correlation between the quality of real images and the detection accuracy?
--In Section 4.3, why perform regression analyses at the cluster level, but not image level?
--The main finding of this paper is that AI-generated images with high quality scores are easier to detect. But, in Section 5, the study shows that selecting high-quality patches is not always helpful for AI-generated image detection, which conflicts with the main finding.
Besides, the experimental results show that carefully selecting patches of different quality cannot bring a consistent performance gain.
Other Comments Or Suggestions: Xu et al., "ImageReward: Learning and evaluating human preferences for text-to-image generation" is cited as 2024, but this paper was published in NeurIPS 2023, not 2024.
Questions For Authors: See “Claims And Evidence”, “Experimental Designs Or Analyses” and “weakness”.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1: It is questionable that the quality scores given by the pre-trained preference models can reflect actual human preference/image quality.**
We agree that the preference models could not replace humans in assessing image quality, and it is difficult to predict the human preference on image quality due to its subjective nature. Therefore, we show the validity of our conclusions by the **consistent phenomenon among different generators and different preference models**. We acknowledge that two preference models may be insufficient to support the consistency, hence, we complement the main results with an additional preference model, MPS [b], and other generators (please refer to Q4 of Reviewer rmyb). The consistent results in Figure I/II in [a] further validate our conclusions.
[a] https://anonymous.4open.science/r/ICML2025-rebuttal-4584/quality_and_accuracy.pdf
[b] Learning Multi-dimensional Human Preference for Text-to-Image Generation.
**Q2: Why is the quality score determined by both text prompt and image?**
Existing preference models commonly take the text prompt for generation as input because a high-quality image in practice should be not only visually appealing but also aligned with the text prompt (i.e., satisfying the intention of users). However, as you kindly suggest, this paper should focus on the visual quality of the generated image alone.
To this end, we try to minimize the influence of the image-text alignment in the comparison of image quality by replacing the text input for the preference models: instead of the original prompt for generation, we use the **BLIP-2 caption of the generated image itself**, which is expected to be well-aligned with the image as evaluated by the preference models. The corresponding results are presented in Figure III/IV in [a].
**Q3: The correlation between the quality of real images and the detection accuracy.**
Thank you for your suggestion. We provide the results in Figure VII/VIII in [a], which suggests that there is no significant and consistent correlation between the quality and detection accuracy for real images, indicating that existing detectors may learn different features on real and fake images.
**Q4: Why perform regression analyses at the cluster level but not image level in Sec. 4.3?**
At the image level, the accuracy is discrete (0/6, 1/6, ..., 6/6) as only 6 detectors are evaluated, which could induce overly high variance of the data and hinder the linear regression analyses. Therefore, we use the cluster-level data, where each sample is representative of a group of images with similar characteristics, to ease the regression analyses.
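As a toy illustration of this point (synthetic numbers of our own, not the paper's data): per-image accuracy over 6 detectors only takes the discrete values $k/6$, whereas cluster-level means vary smoothly with quality and can be regressed cleanly. The signal strength and cluster count below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_detectors, n_clusters = 600, 6, 20

# hypothetical quality scores; per-image accuracy is discrete (k/6)
quality = rng.uniform(0, 1, n_images)
p_detect = 0.3 + 0.5 * quality            # assumed quality-accuracy link
acc = rng.binomial(n_detectors, p_detect) / n_detectors

# group images into clusters of similar quality, average within clusters
idx = np.argsort(quality)
q_c = quality[idx].reshape(n_clusters, -1).mean(axis=1)
a_c = acc[idx].reshape(n_clusters, -1).mean(axis=1)

slope, intercept = np.polyfit(q_c, a_c, 1)  # cluster-level regression
assert slope > 0  # underlying trend recovered despite discrete per-image data
```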
**Q5: Sec. 5 shows that selecting high-quality patches is not always helpful for AI-generated image detection, which is in conflict with the main finding.**
We acknowledge that the proposed patch selection strategy may not always bring a performance gain for different data and different detectors. This is expected as our main observation (i.e., generated images with higher quality scores *tend to* be easier to detect) is **statistically** valid, and its application to the improvement of detectors may depend on the characteristics of the detector and certain implementations such as the image patchification strategy. We believe that the findings of this paper could motivate future studies to propose more effective detectors with improved algorithmic designs.
**Q6: The ImageReward paper was published in NeurIPS 2023, not 2024.**
We are grateful for your careful reading! We will correct this error and proofread the references in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I have increased my rating. | null | null | null | null |
Update Your Transformer to the Latest Release: Re-Basin of Task Vectors | Accept (poster) | Summary: This paper introduces TransFusion, a method that re-basins task vectors by aligning model weights to base models in the parameter space, aiming at adapting task vectors to a later version of the model. In particular, TransFusion employs a two-step permutation process: inter-head alignment using spectral distance and intra-head matching to align each pair of head matrices. Experiments on vision and NLP tasks show that TransFusion outperforms existing re-basin methods, preserving fine-tuning effects with improved zero-shot generalization.
## update after rebuttal
The rebuttal helped address my concerns on functional equivalence, so I raised my score to 3.
Claims And Evidence: While the paper presents plausible empirical results, some claims lack theoretical guarantees or deeper analysis. For example, it is not explicitly shown how well TransFusion aligns two self-attention layers.
Methods And Evaluation Criteria: While the paper presents promising results, it is unclear whether the proposed two-step permutation process fully preserves the functional equivalence of the self-attention mechanism. In objective functions (8) and (9), each permutation matrix is optimized separately. The approach appears to lack explicit constraints to guarantee that the transformed model is functionally equivalent to the original model. Further clarification or theoretical verification would strengthen the evidence for the effectiveness of TransFusion.
In addition, complexity is an important metric for evaluating the efficiency of re-basin [1]. However, this paper does not include the complexity of TransFusion theoretically or empirically.
[1] Ainsworth, Samuel, Jonathan Hayase, and Siddhartha Srinivasa. "Git Re-Basin: Merging Models modulo Permutation Symmetries." The Eleventh International Conference on Learning Representations.
Theoretical Claims: I checked the correctness of Theorem A.1, i.e., permutation preserves the spectral distance. This theorem provides motivation for choosing the spectral distance to measure the alignment of different attention head matrices.
Experimental Designs Or Analyses: - The paper does not clearly specify the details of the transformer models used for NLP experiments
- The paper evaluates only one transformer model backbone, limiting the generalizability of the findings. It would be helpful to include more experiments on different architectures to ensure the robustness and scalability of the proposed TransFusion.
Supplementary Material: I reviewed the whole supplementary material.
Relation To Broader Scientific Literature: This paper proposes an important application of model re-basin (parameter matching): to adapt task vectors to different versions of the base model. Additionally, it proposes a novel permutation for the self-attention mechanism to align the multi-head attention parameters while preserving the functional equivalence. This can benefit further studies on permutation invariance and mode connectivity of transformers.
Essential References Not Discussed: I don't find any missing references.
Other Strengths And Weaknesses: Strengths:
- This study extends parameter matching techniques to transformer models, improving the alignment of self-attention layers by permutation.
- This paper is overall easy to follow and well-organized.
Weaknesses:
- The definition of symbols is a bit confusing. $H$ is first defined as the number of heads and then defined as the head matrix. Additionally, the head matrix $H_i^A$ should have a subscript (q, k, or v) to indicate the role of the matrix.
- It is mentioned that the proposed head alignment techniques address the head contamination problem. However, it is unclear to me why we need to tackle this issue. In my opinion, if the rows of different head matrices align better than rows in the same head matrix, permuting the rows across different head matrices is acceptable as long as the functional equivalence is preserved.
- It is strange to separately optimize $P_{inter}$ and $P_{intra}$ as the result permutation might not preserve the functional equivalence of a self-attention layer, which is a prerequisite for re-basin. It would be better to explicitly show how the joint application of $P_{inter}$ and $P_{intra}$ leads to functional equivalence.
- Although I like the novel problem setting of adapting task vectors to different base models, this new problem setting does not bring distinct technical challenges. The novel part of two-step attention head alignment only solves the head contamination problem (with an unclear motivation to me). It would be helpful to highlight the technical challenges and contributions of TransFusion for better significance.
Other Comments Or Suggestions: Please refer to Strengths And Weaknesses
Questions For Authors: Please refer to Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Functional equivalence
The central criticism of the reviewer revolves around the preservation of functional equivalence in self-attention layers. To address this justifiable concern, we provide a proof showing that our two-stage method **ensures** functional equivalence.
**Theorem** Let $P\_{inter}$ be a permutation of attention heads, and $P\_{intra}$ be a set of permutations applied independently within each head. Then, applying $P\_{inter}$ followed by $P\_{intra}$ preserves the functional equivalence of the multi-head self-attention layer.
Notation:
- $X$: input sequence of shape $(S,d\_m)$
- $W_q,W_k,W_v$: weight matrices each of shape $(d\_m,d\_m)$
- $|H|$: number of heads
- $d\_k=d\_m/|H|$: number of units within each head
The self-attention layer:
- computes $Q,K,V$ matrices as $Q=X W\_q$, $K=X W\_k$, $V=X W\_v$
- splits Q into $|H|$ heads, $Q=\[Q\_1,Q\_2,...,Q\_{|H|}\]$ (the same for $K$ and $V$).
**Step 1** The two-stage permutation procedure involving inter- and intra-head swaps can be summarized by a single, composed permutation matrix $P\_{attn}$. Such a matrix of shape $(d\_m,d\_m)$ has a notable form, namely a grid $|H|\times|H|$ of block matrices. Each block has shape $(d\_k,d\_k)$ and is either filled with zeros or a valid permutation matrix of $d\_k$ units. Moreover, each block-row and block-column contains exactly one permutation block. An example with three heads:
```
P_{attn}=
P_{intra}^0, 0, 0
0, 0, P_{intra}^1
0, P_{intra}^2, 0
```
with heads swapped according to $P\_{inter}$:
```
P_{inter}=
1, 0, 0
0, 0, 1
0, 1, 0
```
For the sake of notation, we will use the permutation vector $\pi$ to denote how $P\_{inter}$ swaps heads. In this example $\pi=[0,2,1]$.
Notably, a block matrix $P\_{attn}$ with the features mentioned above can be written as:
$P\_{attn}=\sum\_{i=1}^{|H|} E^{i,\pi(i)} \otimes P\_{intra}^i$
where $\otimes$ is the Kronecker product and $E^{i,\pi(i)}$ is a matrix filled with zeros except at position $i,\pi(i)$, where it contains a one.
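This Kronecker-product construction can be sanity-checked numerically. The following NumPy sketch (an illustration using the three-head example above, not the authors' code) verifies that the composed $P\_{attn}$ is itself a valid permutation matrix:

```python
import numpy as np

H, dk = 3, 2
pi = [0, 2, 1]                       # inter-head permutation from the example
rng = np.random.default_rng(0)
P_intra = [np.eye(dk)[rng.permutation(dk)] for _ in range(H)]

# P_attn = sum_i E^{i, pi(i)} kron P_intra^i
E = lambda i, j: np.outer(np.eye(H)[i], np.eye(H)[j])  # 1 at position (i, j)
P_attn = sum(np.kron(E(i, pi[i]), P_intra[i]) for i in range(H))

# a permutation matrix is orthogonal with entries in {0, 1}
assert np.allclose(P_attn @ P_attn.T, np.eye(H * dk))
assert set(np.unique(P_attn)) == {0.0, 1.0}
```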
**Step 2** When applying the block matrix $P_{attn}$ to the query weight $W\_q$, we get the following query tokens:
$Q^{'}=X W\_q P\_{attn}=Q P\_{attn}=\big[ \sum\_{j=1}^{|H|} Q\_{j} P\_{attn}[j,i] \big]\_{i=1}^{|H|}=\[ Q\_{\pi^{-1}(i)} P\_{intra}^{\pi^{-1}(i)} \]\_{i=1}^{|H|}$
where $P\_{attn}[j,i]$ refers the block at position $(j,i)$ (which can be either zero-filled or not). The relation is relevant as it entails:
$Q\_i^{'}=Q\_{\pi^{-1}(i)} P\_{intra}^{\pi^{-1}(i)}$
The new head $Q\_i^{'}$ corresponds to the head designated by the inter-head permutation $\pi^{-1}(i)$, modified according to the permutation $P\_{intra}^{\pi^{-1}(i)}$ applied to its units. Note that the same result applies to $K\_i^{'}$ and $V\_i^{'}$.
**Step 3: attention** For each head, the attention matrix (after permutation) is:
$A\_i^{'}=\text{softmax}(Q\_i'{K\_i'}^T/\sqrt{d\_k})=\text{softmax}(Q\_{\pi^{-1}(i)} P\_{intra}^{\pi^{-1}(i)} {P\_{intra}^{\pi^{-1}(i)}}^T {K\_{\pi^{-1}(i)}}^T /\sqrt{d\_k})$
$\quad \ =\text{softmax}(Q\_{\pi^{-1}(i)}{K\_{\pi^{-1}(i)}}^T/\sqrt{d\_k})$
$\quad \ =A\_{\pi^{-1}(i)}$
Thanks to the orthogonality of the intra-head permutation blocks, the attention scores are only influenced by the inter-head permutations.
**Step 4: output** For each head, the output is:
$O\_i^{'}=A\_i^{'}V\_i^{'}=A\_{\pi^{-1}(i)}V\_{\pi^{-1}(i)}P\_{intra}^{\pi^{-1}(i)}=O\_{\pi^{-1}(i)}P\_{intra}^{\pi^{-1}(i)}$
The final output is obtained by concatenating all heads:
$O'=\[O\_1^{'},O\_2^{'},...,O\_{|H|}^{'} \]=\[ O\_{\pi^{-1}(i)} P\_{intra}^{\pi^{-1}(i)} \]\_{i=1}^{|H|}=\big[ \sum\_{j=1}^{|H|} O\_{j} P\_{attn}[j,i] \big]\_{i=1}^{|H|}=O P\_{attn}$
**Result** To sum up, applying a block permutation $P\_{\text{attn}}$ to each projection matrix is equivalent to permuting the output of the self-attention mechanism $O'=O P\_{\text{attn}}$. This demonstrates functional equivalence in the context of our approach. When the output is fed to the next layer, we have to multiply it by ${P\_{\text{attn}}}^T$ to recover the original output.
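The end-to-end claim can also be checked numerically. Below is a self-contained NumPy sketch (an illustration with assumed toy sizes, not the authors' implementation) that applies a composed block permutation to $W\_q, W\_k, W\_v$ and confirms $O' = O P\_{attn}$:

```python
import numpy as np

rng = np.random.default_rng(0)
S, H, dk = 4, 3, 2                   # sequence length, heads, units per head
dm = H * dk
pi = [0, 2, 1]                       # inter-head permutation from the example

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mha(X, Wq, Wk, Wv):
    """Multi-head self-attention, heads concatenated (no output projection)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = []
    for i in range(H):
        s = slice(i * dk, (i + 1) * dk)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dk))
        out.append(A @ V[:, s])
    return np.concatenate(out, axis=1)

# composed block permutation: P_intra^i sits at block position (i, pi(i))
P_intra = [np.eye(dk)[rng.permutation(dk)] for _ in range(H)]
P_attn = np.zeros((dm, dm))
for i in range(H):
    P_attn[i*dk:(i+1)*dk, pi[i]*dk:(pi[i]+1)*dk] = P_intra[i]

X = rng.standard_normal((S, dm))
Wq, Wk, Wv = (rng.standard_normal((dm, dm)) for _ in range(3))
O = mha(X, Wq, Wk, Wv)
O_perm = mha(X, Wq @ P_attn, Wk @ P_attn, Wv @ P_attn)
assert np.allclose(O_perm, O @ P_attn)   # functional equivalence holds
```

Feeding $O'$ to the next layer after multiplying by ${P\_{attn}}^T$ then recovers the original output exactly, as stated in the result above.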
> In my opinion, if the rows of different head matrices align better than rows in the same head matrix, permuting the rows across different head matrices is acceptable as long as the functional equivalence is preserved.
Due to space constraints, a detailed discussion is not possible. It is important to note that permuting rows across heads means that the blocks within $P\_{\text{attn}}$ at each head level are not necessarily orthogonal. Hence, the associated $P\_{\text{intra}}$ term does not cancel out when calculating the input to the softmax function. This creates a challenge in reversing the effects of the permutation, which is necessary to ensure that functional equivalence is maintained after applying the softmax function.
## Other aspects
For details on the backbones used in our NLP experiments, as well as additional tests across different model sizes and a complexity analysis, please see our response to yjkC. | Summary: This paper addresses a critical challenge for foundation models, i.e., fine-tuned models becoming obsolete when their base models are updated. The authors introduce TransFusion, a data-free method to transfer task vectors from an old base model to a new one. By leveraging a structured permutation strategy tailored for Transformers, i.e., addressing multi-head attention alignment and residual connections through spectral theory, the method ensures functional equivalence while preserving generalization. Extensive experiments on vision (e.g., EuroSAT, SVHN) and NLP tasks (e.g., GLUE) demonstrate significant improvements over baselines including Git Re-Basin, validating the effectiveness and robustness of the proposed approach.
Claims And Evidence: The core claims are well-supported. As shown in Table 1, the proposed method consistently outperforms baselines in zero-shot task accuracy (e.g., +4.95% on EuroSAT, Table 1) while maintaining generalization (e.g., minimal drops on ImageNet-R). Figure 5 and Table 3 also show the advantages of using Functional Equivalence and Transformer-Specific Design.
Methods And Evaluation Criteria: The proposed methods are reasonable, building upon established neural network permutation invariances and extending them thoughtfully to Transformer architectures. Their innovative two-step spectral-based attention head alignment and residual connection permutation strategy effectively address previously unresolved challenges (head contamination and residual mismatch). The chosen evaluation metrics, including zero-shot accuracy on downstream tasks and generalization to support sets, are appropriate and directly relevant to assessing the effectiveness and practical applicability of the proposed solution.
Theoretical Claims: The authors provide rigorous theoretical analyses, particularly regarding their proposed permutation-invariant spectral distance metric for attention head alignment. Their mathematical proof in Appendix A.1 confirms that singular value-based distance metrics are indeed invariant to permutations, logically and rigorously justifying the attention alignment strategy. Additionally, their handling of residual connections is clearly explained and mathematically sound, ensuring consistency across permutations and preserving functional equivalence.
Experimental Designs Or Analyses: Experimental design is thorough, comprehensive, and rigorous. Evaluations span multiple tasks, including image classification and NLP benchmarks, demonstrating cross-modal applicability. Baselines (e.g., Git Re-Basin, Optimal Transport, and vanilla transport) are effectively selected to illustrate clear performance distinctions. The results consistently show substantial improvements from TransFusion, with sufficient statistical robustness inferred from the consistently large margins observed.
Supplementary Material: Supplementary materials substantially enhance the paper's credibility and completeness. Appendix A.1 provides critical theoretical proofs supporting spectral invariance, while Appendix A.2 details residual connection permutation implementation, facilitating reproducibility and deeper theoretical understanding. The additional sensitivity analyses in Appendix A.3 further confirm the robustness and generality of the proposed method.
Relation To Broader Scientific Literature: The paper is well-connected to existing literature. The following works are also suggested to be discussed:
[1] Zhou et al. DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration. In ICCV, 2023.
[2] Yamaguchi et al. Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks. In CVPR, 2023.
Essential References Not Discussed: No critical missing references were identified. The existing cited literature covers the majority of essential works thoroughly, and any minor omissions do not detract from understanding or evaluating the paper’s core contributions.
Other Strengths And Weaknesses: Strengths:
+This paper proposes a novel solution to Transformer re-basin with theoretical guarantees.
+The proposed method enables cost-effective model updates without data access.
+This paper provides strong empirical validation across modalities and tasks.
Weaknesses:
-The proposed method requires identical model architectures. Will slight changes (e.g., in layer counts) deteriorate the alignment?
-How does the permutation alignment scale to large models?
-How does the quality of the task vectors affect the performance of the proposed method?
Other Comments Or Suggestions: No.
Questions For Authors: 1.Could the method handle minor architectural changes (e.g., added layers) via adaptive permutations?
2.What do the limited gains on DTD mean? Are they related to task vector quality or dataset characteristics?
3.How does the computational complexity scale with model size (e.g., ViT-B vs. ViT-L)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Complexity analysis
To assess how computational complexity scale with model size, we define:
- $|L|$: number of layers, evenly divided into MLP ($\frac{|L|}{2}$) and self-attention ($\frac{|L|}{2}$).
- $|H|$: number of attention heads.
- Each MLP layer contains two linear projections with dimension $(d\_m, d\_h)$ and $(d\_h, d\_m)$. We assume $d\_m = d\_h$ for simplicity.
- Self-attention layers have Q, K, and V matrices, each with dimensions $(d\_m, d\_m)$.
We now estimate the complexity for MLP and self-attention layers for a single iteration of the weight-matching algorithm.
### MLP Layers
Permutation alignment for MLP layers resembles Git Re-Basin. The main computational cost involves computing a matrix of pairwise dot products between rows in the projection matrix, with complexity $O(d\_m^3)$. Subsequently, applying the Hungarian algorithm to a $(d\_m, d\_m)$ matrix also has complexity $O(d\_m^3)$. Thus, each MLP layer incurs $O(d\_m^3)$.
### Self-Attention Layers
The analysis is split into two steps: inter-head and intra-head permutations.
**Inter-head permutation**:
- Computing singular value decompositions (SVDs) for matrices Q, K, V across $2$ networks (A,B) and $|H|$ heads results in $6|H|$ SVDs. Each SVD on head-level matrices sized $(\frac{d\_m}{|H|},d\_m)$ has complexity $O(\frac{d\_m^3}{|H|^2})$. Hence, total complexity for all SVDs is $O(\frac{6 d\_m^3}{|H|})$.
- Computing distance matrices for Q, K, V, each sized $(|H|,|H|)$, involves $d\_m$ operations per element, hence the complexity is $O(\frac{3|H|^2 d\_m}{2})$.
- Applying the Hungarian algorithm to the resulting $(|H|,|H|)$ matrix incurs complexity: $O(|H|^3)$.
Thus, inter-head permutation complexity is: $O(\frac{6 d\_m^3}{|H|}+\frac{3|H|^2 d\_m}{2}+|H|^3)$.
**Intra-head permutation**:
- Computing similarity matrices for intra-head permutations involves cost matrices of dimensions $(\frac{d\_m}{|H|},\frac{d\_m}{|H|})$, incurring complexity per head: $O((\frac{d\_m}{|H|})^2 \times d\_m)=O(\frac{d\_m^3}{|H|^2})$.
- Applying the Hungarian algorithm separately for each head results in a per-head computational complexity of $O((\frac{d\_m}{|H|})^3)$.
Summing across all heads, total intra-head permutation complexity is $O(|H| \times (\frac{d\_m^3}{|H|^2}+(\frac{d\_m}{|H|})^3))=O(\frac{d\_m^3}{|H|}+\frac{d\_m^3}{|H|^2})$.
Therefore, total complexity per self-attention layer is: $O(\frac{6 d\_m^3}{|H|}+\frac{3|H|^2 d\_m}{2}+|H|^3+\frac{d\_m^3}{|H|}+\frac{d\_m^3}{|H|^2})$.
### Overall Complexity
Considering all layers, the total complexity combines the contributions from MLP and self-attention layers, yielding:
$O\big(\frac{|L|}{2}d\_m^3+\frac{|L|}{2} (\frac{6 d\_m^3}{|H|}+\frac{3|H|^2 d\_m}{2}+|H|^3+\frac{d\_m^3}{|H|}+\frac{d\_m^3}{|H|^2})\big).$
This complexity scales **polynomially**, dominated by terms involving $d\_m^3$ and $|H|^3$. The method remains substantially more efficient than full retraining, avoiding costly gradient computations or data iterations.
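To make the dominant inter-head assignment step concrete, here is a hedged sketch (toy dimensions and synthetic heads are assumptions; `scipy.optimize.linear_sum_assignment` stands in for the Hungarian algorithm) of spectral-distance-based head matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
H, dk, dm = 4, 8, 32                 # heads, units per head, model dim

# toy per-head query matrices of model A; model B shuffles A's heads
heads_A = [rng.standard_normal((dk, dm)) for _ in range(H)]
order = rng.permutation(H)
heads_B = [heads_A[j] for j in order]

def sv(W):
    # singular values are invariant to row permutations within a head,
    # which is what makes this distance suitable for head matching
    return np.linalg.svd(W, compute_uv=False)

cost = np.array([[np.linalg.norm(sv(a) - sv(b)) for b in heads_B]
                 for a in heads_A])
rows, cols = linear_sum_assignment(cost)   # assignment step, O(H^3)
assert all(order[cols[i]] == i for i in rows)  # recovers the head shuffle
```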
## Aligning models with different architectures
The idea of aligning models with minor variations in architecture, such as different layer counts, is worth exploring. To address this, one simple approach could involve selectively pruning layers from the model with more layers to match its counterpart. For instance, one could remove redundant and unimportant layers. Alternatively, one could adopt the opposite strategy and replicate the last block of the smaller network multiple times to achieve a match. We plan to explore these aspects further in future work.
## Related works
Both our work and that mentioned by the reviewer aim to improve transfer during fine-tuning. However, while DR-Tune and AdaRand assume access to downstream task data during fine-tuning, our method requires no training data to transfer fine-tuning from one model to another. Moreover, these solutions employ data-driven regularization during optimization; instead, our approach uses a parameter alignment technique that eliminates the need for training steps.
## Quality of task vectors
> ... How does the quality of the task vectors affect the performance of the proposed method? ... What does the limited gains on DTD mean? Is it related to task vector quality or dataset characteristics?
Our approach follows a twofold procedure, i.e., align the two models $\theta\_A$ and $\theta\_B$, and then transport the task vector $\theta\_B + \pi(\tau)$. Regarding the quality of the task vector, the first stage is unaffected as it solely depends on $\theta\_A$ and $\theta\_B$. The second step, instead, employs $\tau$ and thus it could be affected by low-quality fine-tuning $\tau$ (after all, garbage in, garbage out). For instance, when considering DTD, we argue that the lower quality of the task vector is directly attributable to the challenging characteristics of the dataset, which features only 40 examples per class on average (for comparison, the other datasets we use have at least around 1000 examples per class). | Summary: The introduces a new rebasin method for models for keeping models up-to-date as their underlying pretrained backbones evolve, focusing on transformers in particular. The author's method involves a “transport” the fine-tuning modifications—captured as a task vector—from the older base model to a new checkpoint without re-training or using data. The paper introduces TransFusion, a data-free and training-free re-basin procedure that realigns the task vector to be applicable on a new model release. The paper applies their method to visual and NLP tasks with ViT-B and different NLP classifiers.
Claims And Evidence: The core concern I have about the claims of the paper are as follows
* Assumption of structural similarity of models: One core assumption of the method is that the new checkpoint $\theta_B$ is largely similar to the original $\theta_A$. Is this always true? For instance, if $\theta_B$ is trained under different conditions, such as a new data distribution or slight architectural tweaks, then the permutation alignment may not capture the necessary transformations. Moreover, when the representation spaces of the two networks diverge, the transport of $\tau$ could be suboptimal or even detrimental.
* Scalability/Computational Overhead: The procedure requires solving a series of linear assignment problems for every transformer layer and for each attention head pair. Although the Hungarian algorithm is polynomial in complexity, the cumulative cost might become significant for very deep or wide models.
* Hyperparameter Tuning: Could the authors discuss more on how to choose $\alpha$ in practice? I worry that sensitivity may make this difficult to tune.
* Disadvantages of a data free transfer: One major disadvantage and limitation I would have liked to see discussed is about cases where domain-specific adjustments are necessary. In many real-world cases, even a minimal dataset could help fine-tune the alignment further, suggesting that a semi-supervised or data-augmented variant might offer better performance. It would be helpful for the authors to discuss _when_ we should use TransFusion.
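For concreteness, the per-layer assignment problem mentioned under scalability can be sketched as follows. This is my own toy illustration, not the paper's code: brute force over permutations stands in for the Hungarian algorithm, which solves the same matching in $O(n^3)$ per layer.

```python
import itertools

import numpy as np


def match_units(w_a, w_b):
    """Find the row permutation of w_b that best matches w_a.

    Toy stand-in for the per-layer linear assignment step: real re-basin
    implementations solve this with the Hungarian algorithm rather than
    enumerating all n! permutations.
    """
    n = w_a.shape[0]
    best_perm, best_score = None, -np.inf
    for perm in itertools.permutations(range(n)):
        score = np.sum(w_a * w_b[list(perm)])  # similarity <W_a, P W_b>
        if score > best_score:
            best_perm, best_score = list(perm), score
    return best_perm


rng = np.random.default_rng(0)
w_a = rng.normal(size=(4, 6))
sigma = [2, 0, 3, 1]
w_b = w_a[sigma]                    # w_b is w_a with shuffled units
perm = match_units(w_a, w_b)
assert np.allclose(w_b[perm], w_a)  # alignment recovers the shuffle
```

The cumulative cost concern then amounts to running such a solve once per layer (and per head pair), which is where the overhead question comes from.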
Methods And Evaluation Criteria: I see no issues with the methods and evaluation criteria. The authors evaluate on well-known NLP tasks, which seem reasonable for this paper, although I would have liked to see pretrained decoder models. I would be curious about evaluating on a dataset like GSM after finetuning, to see the capabilities of the model on a dataset that is (reasonably) potentially out of distribution.
Theoretical Claims: I checked proofs in the appendix and didn't see any issues. However, I do have one concern about theoretical aspects of this work.
* Spectral Metric for Head Matching: While using singular values to form a permutation-invariant distance seems justified, it's not clear if this metric always aligns heads based on functional roles. Two heads with similar singular value profiles might perform very different roles in the network. This could lead to cases where heads that are similar in “energy” are paired, but the semantic content of their learned representations is not matched, potentially reducing the efficacy of the transport.
* Residual Connections: The method redefines the identity mapping in residual connections. To me, this may be sensitive to the specific ordering and scale of the permutation matrices. It also keeps the method from applying to non-transformer-based architectures.
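To illustrate the spectral-matching concern concretely, here is a toy sketch of my own (not the paper's method): heads are paired by the distance between their singular-value spectra, which is invariant to rotations of a head but blind to what the head actually computes.

```python
import numpy as np


def spectral_match(heads_a, heads_b):
    """Greedily pair heads of model A with heads of model B by comparing
    their singular-value spectra (a permutation/rotation-invariant signature).

    Illustrative only: similar spectra do not guarantee similar *function*,
    and a real pipeline would solve the assignment exactly, not greedily.
    """
    spectrum = lambda w: np.linalg.svd(w, compute_uv=False)
    cost = np.array([[np.linalg.norm(spectrum(a) - spectrum(b))
                      for b in heads_b] for a in heads_a])
    pairing, used = [], set()
    for i in range(len(heads_a)):
        j = min((k for k in range(len(heads_b)) if k not in used),
                key=lambda k: cost[i, k])
        pairing.append(j)
        used.add(j)
    return pairing


rng = np.random.default_rng(1)
heads_a = [rng.normal(size=(4, 4)) for _ in range(3)]
q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random rotation
sigma = [2, 0, 1]
heads_b = [q @ heads_a[k] for k in sigma]     # rotated + shuffled copies
print(spectral_match(heads_a, heads_b))       # recovers the shuffle
```

The sketch succeeds here precisely because `heads_b` are rotated copies; two functionally unrelated heads that happen to share a spectrum would be paired just as confidently, which is the worry raised above.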
Experimental Designs Or Analyses: * The authors should expand on which encoder language model they use. Is it BERT, T5, etc.?
* It would be great for the authors to discuss how their method scales to deeper and wider models. If the authors could run different models sizes, that would be very useful for contextualizing their approach.
Supplementary Material: No supplementary material was uploaded.
Relation To Broader Scientific Literature: The most significant paper that is similar to this approach is obviously Git Rebasin (ICLR 2023). The authors cite and discuss this paper at length to explain the similarities and differences of their approach. The authors also discuss task arithmetic and weight interpolation at length, as this is relevant to their approach.
Essential References Not Discussed: None that I can find.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Clarification
We clarify that our paper does not assume similarity between $\theta\_A$ and $\theta\_B$. As stated in Sec. 1 (line 67) and 3, these checkpoints may result from training on distinct data distributions or techniques. The re-basin mechanism is indeed designed to align models with significant differences in parameter space.
## Using a small dataset to enhance transport
We agree; if even a small amount of data were available for each class, it would certainly be beneficial to use it. However, there are real-world scenarios where retaining data is not possible. That said, if these constraints do not apply, our method can be effectively utilized in conjunction with fine-tuning. To show this, starting from a small subset of $10$ shots per class, we follow \[a\] and learn one scaling coefficient per layer $\alpha=\[\alpha\_1,...,\alpha\_{|L|}\]$. We then examine the performance after fine-tuning these coefficients.
Method|EuroSAT|DTD|GTSRB|SVHN
-|-|-|-|-
$\theta\_{B}+\alpha\tau$|+7.93|-1.44|+4.70|-15.98
$\theta\_{B}+\alpha\pi(\tau)$|**+10.00**|**+1.21**|**+6.80**|**+10.52**
There is a significant gain when fine-tuning a model that has undergone re-basin using our approach, represented as $\theta_{B}+\alpha \pi(\tau)$. In contrast, fine-tuning from $\theta_{B}+\alpha \tau$ (which does not involve permutation) produces inferior results. This underscores that rebasing and fine-tuning should not be viewed as mutually exclusive but as complementary strategies.
\[a\] Knowledge composition using task vectors with learned anisotropic scaling. NeurIPS 2024.
## More experiments on different model sizes
We repeated the tests of Tab.1 using the CLIP ViT-L/14 models pretrained on "laion400m\_e31" ($\theta\_A$) and "commonpool\_xl\_clip\_s13b\_b90k" ($\theta\_B$).
||EuroSAT|EuroSAT-Supp|DTD|DTD-Supp|GTSRB|GTSRB-Supp|SVHN|SVHN-Supp|
|-|-|-|-|-|-|-|-|-|
|$\theta\_{B}$|67.37|88.66|64.94|88.66|60.15|88.66|68.53|88.66|
|$\theta\_{B}+\tau$|-4.07|-0.33|**+0.74**|-0.36|+0.70|-0.93|+2.04|-3.73|
|**Ours**|**+1.15**|**-0.01**|+0.37|**-0.20**|**+2.29**|**-0.13**|**+5.33**|**-0.73**|
The results align with those reported in the paper.
## Computational Overhead
For an extensive complexity analysis, please refer to our response to gzgf. We highlight two additional aspects:
- By separating multi-head attention alignment into inter- and intra-head, we significantly reduce scaling issues related to width. For example, in CLIP’s ViT-B/16 (12 heads), the search space is reduced from $768!$ to $12 \times 64!$. For a ViT-L/14, GiT Re-basin’s space grows to $1024!$, while TransFusion’s remains much smaller at $16 \times 64!$.
- The weight-matching procedure depends only on $\theta\_A$ and $\theta\_B$, and not on the downstream tasks to be transferred. This ensures that associated costs can be incurred only once. In practice, the entity responsible for releasing the updated model can precompute and distribute the permutation matrices, eliminating the computational burden for end users.
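The search-space figures in the first bullet can be sanity-checked in a few lines (using the rebuttal's own counts; `math.lgamma` gives $\log n!$ without overflow):

```python
import math


def log10_factorial(n):
    # log10(n!) via the log-gamma function; exact enough for size comparisons
    return math.lgamma(n + 1) / math.log(10)


joint = log10_factorial(768)                      # 768! joint permutations
decoupled = math.log10(12) + log10_factorial(64)  # 12 x 64! after decoupling
print(f"joint: ~10^{joint:.0f}  decoupled: ~10^{decoupled:.0f}")
```

The joint space has on the order of 1,900 digits, while the decoupled one has fewer than 100, which is the reduction being claimed.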
## Functional role
Intuitively, encoding semantic similarity would require optimizing permutations based on the activations of each layer, which is hard in a data-free scenario like ours. However, if few data are available, one can leverage our decoupled inter- and intra-head approach and replace our energy-based metric with one crafted on activations discrepancy.
Nevertheless, while singular values do not directly encode high-level semantics, they capture structural properties. The largest singular value of a linear transformation corresponds to its Lipschitz constant; hence, aligning attention heads with similar distributions of singular values implies aligning heads with comparable sensitivity to input perturbations. Moreover, the spectrum of singular values indicates how information is propagated: a low-rank attention head typically focuses on a narrow subspace of the input, whereas a full-rank head affects a broader range of directions.
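The claim that the largest singular value is the Lipschitz constant of a linear map can be checked directly (a small self-contained NumPy sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))
s = np.linalg.svd(w, compute_uv=False)

# The top singular value bounds ||Wx|| / ||x|| for every x (Lipschitz constant)
xs = rng.normal(size=(1000, 8))
gains = np.linalg.norm(xs @ w.T, axis=1) / np.linalg.norm(xs, axis=1)
assert gains.max() <= s[0] + 1e-9

# ...and the bound is attained at the top right-singular vector
_, _, vt = np.linalg.svd(w)
assert np.isclose(np.linalg.norm(w @ vt[0]), s[0])
print(f"max observed gain {gains.max():.3f} <= sigma_max {s[0]:.3f}")
```

Matching whole spectra, rather than just the top value, is what aligns heads with comparable sensitivity profiles and effective ranks.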
## Hyperparameters
The task vector $\tau$ is optionally scaled by a hyperparameter $\alpha$, fixed at $1$ for all experiments due to our assumption of no access to data, which precluded further tuning. To assess sensitivity to $\alpha$, we kindly refer to Fig. 4 (main paper) -- applying TransFusion leads to increased robustness across various choices of $\alpha$.
## Residual connections
We remark that our approach to handling residual connections is not specific to transformers but is applicable to any residual block. Indeed, residual connections assume an implicit identity mapping
$z=I\_n z\_{\text{out}}+I\_n x$, where $I\_n$ represents the identity matrix. However, once the permutations are applied to all layers, this assumption no longer holds, resulting in $z=P\_{\text{out}} z\_{\text{out}}+P\_{\text{input}} x$. As shown in Fig. 5, this deviation is sufficient to break the functional equivalence of the model.
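This broken-equivalence argument is easy to verify numerically. The following is a minimal single-block sketch (illustrative only; it ignores input-side permutations from preceding layers): naively permuting the weights while leaving the skip path untouched changes the function, whereas permuting the skip path as well restores equivalence up to the output permutation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
w = rng.normal(size=(n, n))
x = rng.normal(size=n)
p = np.eye(n)[[1, 2, 3, 4, 0]]   # a fixed, non-identity output permutation

residual = lambda weight, inp: weight @ inp + inp   # z = Wx + I x

z_ref = residual(w, x)
z_naive = (p @ w) @ x + x        # permuted weights, untouched skip path
z_fixed = (p @ w) @ x + p @ x    # skip path permuted to match

assert not np.allclose(z_naive, p @ z_ref)  # equivalence broken
assert np.allclose(z_fixed, p @ z_ref)      # equivalence restored (up to p)
```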
## Backbones for NLP tests
We apologize for the omission; please refer to our response to S7w3.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. It would be great if authors are able to incorporate these additional results into the paper. Since I have no further concerns, I will increase my score. | Summary: This paper explores how to update a fine-tuned downstream model from an older version of a pre-trained model to a newer pre-trained model without requiring re-finetuning. The paper proposes the TransFusion method, which incorporates attention head matching, alignment, and residual connection handling. Experiments on CV and NLP tasks demonstrate the effectiveness of the proposed approach.
### Post-rebuttal
I will maintain my score of 4.
Claims And Evidence: The contributions claimed in the paper are supported by the experimental results.
Methods And Evaluation Criteria: The proposed method meets the requirements of the intended application scenario.
Theoretical Claims: The theoretical proofs in the paper seem correct, but since I am not a researcher in this specific field, I cannot be completely certain.
Experimental Designs Or Analyses: The experiments are designed with pretrained models and datasets from both the language and vision domains, demonstrating the effectiveness of the proposed method.
Supplementary Material: The appendix includes proofs and additional experimental results.
Relation To Broader Scientific Literature: If the pre-training and fine-tuning processes are viewed as two different tasks, the problem being studied may be related to continual learning, which aims to mitigate catastrophic forgetting when learning new knowledge.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
+ The problem studied in the paper is practically meaningful, and the motivation behind the proposed method is clearly articulated.
+ The proposed method is well-supported by theoretical foundations.
Weaknesses:
+ Some experimental details are not clearly specified. For example, which pre-trained models were used in the NLP experiments?
+ The paper only conducts experiments on classification tasks. However, the problem it addresses is clearly more relevant to large language models. Can this method be applied to models at the 7B scale?
Other Comments Or Suggestions: N/A
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Backbones used in NLP experiments
> ... For example, which pre-trained models were used in the NLP experiments?
We sincerely apologize for the omission. In our experiments, we used two variants of OpenCLIP's ViT-B-16 text encoder: "ViT-B-16/commonpool-l-s1b-b8k" for $\theta\_A$ and "ViT-B-16/datacomp-l-s1b-b8k" for $\theta\_B$. We specifically chose these versions over models like BERT or T5 because we needed two variants of the same base model trained in different ways to represent our $\theta\_A$ and $\theta\_B$.
## Application to LLMs
> ... The paper only conducts experiments on classification tasks. However, the problem it addresses is clearly more relevant to large language models. Can this method be applied to models at the 7B scale?
Our method can be seamlessly applied to scenarios beyond classification (e.g., detection, segmentation, VQA). This flexibility arises from our approach not relying on classification-specific techniques nor imposing any assumptions regarding the type of loss function used during optimization.
Regarding the application to 7B models such as LLaMA 2 and Mistral 7B, this represents an intriguing area that we plan to explore in future work. We consider it feasible; indeed, as discussed in the response to Reviewer gzgf, the complexity of weight matching scales **linearly** with the number of layers. We also advocate for restricting fine-tuning operations to self-attention layers only, for efficiency reasons. Indeed, our method exhibits appealing computational complexity for these layers, as noted again in the response to Reviewer gzgf. Notably, this selective fine-tuning approach aligns conceptually with established state-of-the-art techniques such as QLoRA [a], which fine-tune only the self-attention layers while excluding feed-forward layers. Consequently, re-basin operations could similarly be confined exclusively to self-attention layers, given that no information transfer is necessary for the remaining layers (as the task vector would be null).
[a]: Dettmers et al., QLoRA: Efficient Finetuning of Quantized LLMs, NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author’s efforts. I have no further questions. If no other reviewers raise strong objections in the subsequent discussion, I will maintain my score of 4. | null | null | null | null | null | null |
Multiaccuracy and Multicalibration via Proxy Groups | Accept (poster) | Summary: The authors propose a way to measure multicalibration/multiaccuracy by leveraging proxies (so that direct access to the sensitive data is not necessary) in order to compute worst-case scenarios (formalized as an upper bound on their proposed metrics). They show that post-processing the model to satisfy multicalibration and multiaccuracy across proxies results in a reduction of the worst-case (the computed upper bounds) violations.
They prove in Theorem 4.2 that the multicalibration/multiaccuracy of a model with respect to G is upper bounded by the multicalibration/multiaccuracy with respect to the proxy.
The experimental results back up the developed theory.
Weaknesses:
- The last result, that the standard post-processing for multicalibration on proxy groups reduces the worst case, could be a corollary of the main result (Theorem 4.2). It is not clear to me why a whole section is devoted to introducing standard procedures.
- For me there is a mismatch between the motivation the authors give for using proxy groups and their method. They claim that using population-level values allows for better privacy (which I agree with), but they then present a method which still needs individual-level data about the proxy groups. Moreover, given that the final algorithm is the standard post-processing to achieve multicalibration, the only real information being omitted is which groups were the original groups of interest.
Strengths:
- The main result is interesting and timely. The experiments are well thought for backing their claims and usefulness of their approach of using proxy groups.
- The method is dependent both on the soundness of the proxies and the quality of the model.
Overall I think the idea is good, but the paper's analysis is rather superficial; again, Section 5 is superfluous. It would perhaps be better to further study the relationship between the quality of the model and the quality of the groups, obtaining a result akin to an optimal bound that practitioners can use as a reference for when to improve their proxies versus when to improve their model.
Finally, although the experiments are sound, it is not clear to me how they improve fairness. I would like to see, from a decision-making point of view, how the proposed method would be used in real life.
Claims And Evidence: The theory results are well presented, correctly proved and backed by relevant experiments.
Methods And Evaluation Criteria: They do.
Theoretical Claims: Th proofs are correct.
Experimental Designs Or Analyses: The experiments are well thought and are relevant for the paper.
Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: See weakness in the summary section
Essential References Not Discussed: Not to my knowledge
Other Strengths And Weaknesses: See weakness in the summary section
Other Comments Or Suggestions: See weakness in the summary section
Questions For Authors: - Is there a way to know when to refined the proxies versus when is more beneficial to instead improve the accuracy of the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the review!
**Concern 1: Motivation and Method Mismatch**
We believe there might be a misunderstanding of the motivation behind our work - we apologize if it was not sufficiently clear.
Our goal is to develop a predictor $f$: $\mathcal{X} \rightarrow [0,1]$ that is multiaccurate/multicalibrated across a set of sensitive groups $\mathcal{G} = \set{g: \mathcal{X} \times \mathcal{Z} \rightarrow \set{0,1}}$. However, we do not observe sensitive information $Z$ and instead have access to the marginal distribution over features $X$ and labels $Y$, denoted as $\mathcal{D}_{\mathcal{X}\mathcal{Y}}$. Thus, note that one cannot evaluate the true MA/MC violations with respect to the true grouping functions $g \in \mathcal{G}$, since they are functions of the unobserved $Z$. This is a common and important problem often seen in healthcare and government applications [1,2].
Thus, we employ the popular approach of using proxies [3,4], denoted as $\hat{g}: \mathcal{X} \rightarrow \set{0,1}$, of the true groups $g$. We show that with proxies (and their misclassification rates), while we can’t measure the MA/MC violations over the true groups, we can provide upper bounds that certify that their worst-case violations are below a given level. Moreover, we show that the worst-case bound is actionable; i.e. it can be reduced by making $f$ multiaccurate/multicalibrated with respect to the groups defined by the proxies. This is useful because:
1) If we can ensure that the worst-case violation $< \alpha$, then the true violation (across the true, unobserved group) is $< \alpha$ as well!
2) We do not need to develop a new algorithm, and instead can leverage existing ones that generate models that are multiaccurate/multicalibrated for any arbitrary set of groups.
In conclusion, one can get *approximate* MA/MC guarantees via reducing worst-case violations.
We are not trying to “claim that using population values allows for better privacy”, and apologize if this was not clear.
Additionally, the reviewer asks, “They present a method which still needs individual-level data about the proxy groups?”. Note that the algorithms take as input any set of groups and, as currently presented, there are multiple instances of $g(X, Z)$ written. We see how this could be confusing, as it implies that we need access to $Z$. However, we want to clarify that when using the proxies, all instances of $g(X, Z)$ are replaced by $\hat{g}(X)$. As a result, with the proxies as inputs to the algorithms, all sensitive information $Z$ is naturally omitted.
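To make the setting concrete, here is a small synthetic sketch (illustrative only; the normalization of the violation may differ from the paper's definitions) of measuring a multiaccuracy-style violation using only an observable proxy $\hat{g}(X)$, with the true group depending on unobserved $Z$:

```python
import numpy as np


def ma_violation(f_pred, y, group_mask):
    """Empirical multiaccuracy-style violation on one group:
    the absolute mean residual of f within the group."""
    if not group_mask.any():
        return 0.0
    return abs(np.mean((y - f_pred)[group_mask]))


rng = np.random.default_rng(4)
n = 20_000
x = rng.normal(size=n)
z_noise = 0.1 * rng.normal(size=n)
g_true = (x + z_noise) > 0          # true group depends on unobserved Z
g_proxy = x > 0                     # proxy computable from features alone
y = rng.binomial(1, 0.5 + 0.2 * g_true)
f_pred = np.full(n, y.mean())       # a predictor accurate only on average

print(f"proxy-group violation: {ma_violation(f_pred, y, g_proxy):.3f}")
```

The proxy-group violation is computable without ever touching $Z$; the bounds then relate it (via the proxy's misclassification rates) to the violation on the unobservable true group.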
**Concern 2: Need for section 5**
We kindly disagree that section 5 is superficial. For many readers not familiar with the literature, it may not be immediately clear from Theorem 4.2 how post-processing for multicalibration on proxy groups would reduce the worst-case violation, and to what level $\alpha$ one needs to multicalibrate. Our theorem makes this clear. Importantly, Section 5 shows that enforcing MA/MC does not degrade the MSE, which is needed to ensure that our bounds shrink when enforcing MA/MC across the proxies.
**Additional Questions/Concerns**
*Although the experiments are sound, is not clear to me how they improve fairness. I would like to see from a decision-making point of view, how the proposed method would be used in real life*
The notions of fairness we are focused on are multiaccuracy/multicalibration [5]. Our theory and experiments clearly show that these notions of fairness (their worst-case violations) can be provably reduced. We believe this is very useful: Suppose one needs to build a model that is $\alpha$-MC across the true groups. With our results, we can reduce the worst-case violation and, if it is less than $\alpha$, then we know the true violation is less than $\alpha$ as well - without requiring knowledge of the true groups!
*Is there a way to know when to refine the proxies versus when it is more beneficial to instead improve the accuracy of the model?*
By "beneficial", does the reviewer mean reducing the worst-case bounds? Based on our results, neither is more beneficial than the other, since improving the proxies or improving the accuracy of $f$ will always reduce the bound. Additionally, multicalibrating with respect to the proxies **will** reduce the bound.
Thank you for your questions and comments. We hope we have addressed them in a clear and satisfactory manner, and we'll be happy to address any outstanding questions!
[1] "Advancing healthcare equity through improved data collection", J.S. Weissman et al. New England Journal of Medicine, 2011.
[2] "Improving fairness in machine learning systems: What do industry practitioners need?", K. Holstein et al. CHI 2019.
[3] "Using Bayesian imputation to assess racial and ethnic disparities in pediatric performance measures", D. Brown et al. Health services research 2016.
[5] "Calibration for the (Computationally-Identifiable) Masses", Úrsula Hébert-Johnson et al. ICML 2018. | Summary: The paper "Multiaccuracy and Multicalibration via Proxy Groups" addresses the challenge of ensuring fairness in predictive machine learning models when sensitive group data is missing or incomplete. The authors focus on two fairness notions—multiaccuracy and multicalibration—which aim to ensure that model predictions are unbiased and well-calibrated across groups.
The paper demonstrates that proxy-sensitive attributes (features correlated with true sensitive attributes) can be used to derive actionable upper bounds on the true multiaccuracy and multicalibration violations. This allows practitioners to assess worst-case fairness violations even when true sensitive group data is unavailable.
The authors show that enforcing multiaccuracy and multicalibration using proxy-sensitive attributes can significantly mitigate fairness violations for the true (but unknown) demographic groups. They introduce computational methods to adjust models to satisfy these fairness criteria.
Finally, the study evaluates the proposed methods on multiple datasets, including ACSIncome, ACSPublicCoverage, and CheXpert (a medical imaging dataset). The results demonstrate that enforcing fairness across proxies leads to substantial reductions in worst-case fairness violations.
Claims And Evidence: Some claims are well supported.
1) The authors derive provable upper bounds on multiaccuracy and multicalibration violations using proxy-sensitive attributes. These bounds are mathematically justified and clearly stated in Theorem 4.2 and Lemma 4.1. The theoretical results align with known fairness literature, and the derivations appear sound.
2) The authors present two algorithms (Multiaccuracy Regression and Multicalibration Boosting) that adjust models based on proxy attributes to reduce fairness violations. Theoretical results (Theorem 5.1 and Theorem 5.3) prove that these algorithms reduce worst-case violations while maintaining or improving predictive performance.
3) Empirical results on ACSIncome, ACSPublicCoverage, and CheXpert datasets demonstrate that the proposed methods successfully reduce fairness violations. Figures 1–4 illustrate reductions in worst-case multicalibration errors, showing that fairness guarantees improve after post-processing.
Nevertheless, some claims are unsupported. For instance, the very last sentence of the introduction is "Even when sensitive information is incomplete or inaccessible, proxies can extend approximate multiaccuracy and multicalibration protections in a meaningful way." I did not see an explicit discussion of that point. Also (see discussion below), regarding "our methods offer, for the first time, the possibility to certify and correct multiaccuracy and multicalibration without requiring access to ground truth group data": I think that related research on fairness with partially observed sensitive attributes suggests that bounding fairness violations using proxies has been explored before, such as
Awasthi, P., Kleindessner, M., and Morgenstern, J. Equalized odds postprocessing under imperfect group information. In International Conference on Artificial Intelligence and Statistics, pp. 1770–1780. PMLR, 2020.
or
Bharti, B., Yi, P., and Sulam, J. Estimating and controlling for equalized odds via sensitive attribute predictors. Advances in neural information processing systems, 36, 2024.
And finally, at the end of section 5, it is claimed that "Applying our methods ensures stronger fairness guarantees," but while post-processing reduces fairness violations with respect to proxies, it does not guarantee fairness with respect to the true sensitive attributes (which remain unobserved).
Methods And Evaluation Criteria: The methods assume that proxy attributes are sufficiently good approximations of true sensitive attributes. However, the paper does not systematically test when proxies fail or lead to misleading fairness estimates. A stronger evaluation would vary proxy errors to test robustness.
The paper does not benchmark against other fairness estimation techniques that work with missing sensitive data (e.g., Bayesian imputation, adversarial debiasing). Adding comparisons would help assess whether proxy-based methods are superior, complementary, or limited in certain contexts.
Further, since fairness bounds depend on proxy quality and sample size, reporting confidence intervals or uncertainty estimates would make the results more robust.
Theoretical Claims: Yes, all of them. The mathematics is rather straightforward.
Experimental Designs Or Analyses: I noticed the lack of a controlled synthetic dataset. There is no dataset where ground truth fairness violations are explicitly known. This makes it impossible to verify whether proxy-based fairness estimates accurately reflect reality. A synthetic dataset (where fairness violations are designed and known) could have served as a check.
Some proxy groups (e.g., “Women” in Table 1 with zero error) may be too accurate, suggesting that another feature (e.g., pregnancy status ?) strongly correlates with the sensitive attribute.
Supplementary Material: No
Relation To Broader Scientific Literature: When sensitive attributes are missing or incomplete, fairness evaluation becomes difficult, and several approaches have been proposed. Important references are given.
Essential References Not Discussed: The definition of multicalibration used here seems to be weaker than that of the original work
Hebert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018). Multicalibration: Calibration for the (computationally-identifiable) masses. In J. Dy & A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning (Vol. 80, pp. 1939–1948).
It is important to explicitly clarify this distinction and its implications early in the paper, particularly the relationship between the definition based on the ECE and the definition (3.2) of multicalibration in Hebert-Johnson et al. (2018).
Additionally, the derived upper bounds should be explicitly linked to this weaker definition of multicalibration.
Other Strengths And Weaknesses: The paper is very interesting.
The fairness bounds are derived under the assumption that proxy attributes adequately approximate true sensitive attributes, but this assumption is never explicitly validated. Furthermore, it does not explore how fairness guarantees degrade when proxies are inaccurate.
It does not benchmark against other methods for fairness estimation without sensitive attributes, such as Bayesian imputation for fairness estimation (as in Chen et al., 2019) or worst-case fairness estimation (as in Kallus et al., 2022)
Other Comments Or Suggestions: see previous box
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the review! We are glad you enjoyed the paper. Our responses to your questions and concerns are below.
**Concern 1: Validity of some claims**
We apologize if some claims seem unsupported; let us clarify:
*Claim 1: "even when sensitive information is incomplete or inaccessible, proxies can extend approximate multiaccuracy and multicalibration protections in a meaningful way"*
With proxies, we can establish non-trivial upper bounds on true MA/MC violations. Denote this bound as $B$. Then, while one cannot determine the exact violations, one can still provably assert that the model is $B$-multiaccurate/multicalibrated, providing a meaningful guarantee on the MA/MC violations of $f$. We will reword the statement to accurately reflect this point.
*Claim 2: "our methods offer, for the first time, the possibility to certify and correct multiaccuracy and multicalibration without requiring access to ground truth group data"*
You are correct that other works have also explored fairness by bounding fairness violations using proxies - as we cite in our manuscript. However, all of these works were concerned with parity- or group-based notions of fairness, such as demographic parity, equalized odds, equal opportunity, etc. Our work is the first to use proxies to bound MA/MC violations, which are quite different notions of fairness that are not parity-based [1].
*Claim 3: "Applying our methods ensures stronger fairness guarantees"*
We apologize that this claim suggests our methods allow us to control the true MA/MC violations. What we meant to convey is that our methods allow us to get stronger guarantees on the worst-case violations, which we believe (and demonstrated) can be practically useful. This is because, in our setting, the true violation cannot be evaluated. However, if one can modify $f$ such that our upper bound is less than $\alpha$, we can conclude that the true violation is also less than $\alpha$. Thus, our theory and methods are useful. We will make sure to clarify this point in the revised manuscript.
**Concern 2: Potential failure of proxies**
We believe there might be a slight misunderstanding here. In this work, we make no assumptions on how good or bad the proxies are. Additionally, our goal is not to assess how the true MA/MC guarantee changes as the proxies change. In the demographically scarce setting that we study, the true MA/MC violations cannot be identified/determined. Thus, we focus on providing bounds on these violations, which can be interpreted as worst-case violations. When analyzing the bounds (see e.g. Lemma 4.1), it's clear that if the proxies are bad, then our bounds will be naturally large, indicating the MA/MC violations could be large as well. Likewise, as the proxies become more accurate, our bound collapses to the true violation.
Regardless of the quality of the proxies, our bounds always hold (i.e. we don't require any assumptions on them), and our methodology will always reduce the worst-case violation. Finally, the reason we do not benchmark against other methods is that our focus was not on studying *how* to build proxies for accurate fairness estimation. Instead, our focus is to study how to obtain practical MA/MC guarantees with *any set of proxies* that might be available via a general worst-case analysis.
**Concern 3: questions about experiments**
We agree that a synthetic experiment would be good! It would allow us to showcase when our bounds are tight (see rebuttal to reviewer LB2V) and analyze how the true fairness violations and proxy-fairness violations differ as a function of the proxy error. We plan to include a synthetic example in the revised manuscript. Thank you for the great suggestion!
Additionally, you observe that some proxy groups (e.g., “Women” in Table 1 with zero error) may be too accurate, suggesting that another feature (e.g., pregnancy status?) strongly correlates with the sensitive attribute. This is correct! It illustrates that, in many real-world scenarios, even without observing $Z$, we can learn highly accurate proxies and use them to provide meaningful fairness estimates via worst-case analysis.
**Additional comments**
*Definition of Multicalibration*
Thank you for pointing this out! You are correct in saying that the definition of multicalibration is different from the one proposed in [1]. The one we use is also referred to as "multicalibration" in other works [2,3] but, to be more precise, it should be referred to as approximate-multicalibration, or multicalibration in expectation. We will make this distinction very clear in the revised manuscript.
[1] "Calibration for the (Computationally-Identifiable) Masses", Úrsula Hébert-Johnson et al. ICML 2018.
[2] "Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration", Gopalan et al. NeurIPS 2023.
[3] "Multicalibrated Regression for Downstream Fairness", Globus-Harris et al. AIES 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks. I confirm my Overall Recommendation: 4: Accept | Summary: In this paper, the authors study the problem of fairness in ML. To be specific, they focus on a scenario where different groups are evaluated independently with respect to their accuracies and calibration errors (coined as Multiaccuracy and Multicalibration fairness in the literature). The literature has studied Multiaccuracy and Multicalibration in scenarios where sensitive attributes are available. The authors address this limitation by (i) providing theoretical bounds for Multiaccuracy and Multicalibration fairness if proxy attributes/groups are used and (ii) proposing/adapting algorithms to improve Multiaccuracy and Multicalibration fairness of ML algorithms without sensitive attribute information.
## update after rebuttal
I've read the comments by other reviewers and the rebuttal provided by the authors. Therefore, I keep my original acceptance recommendation.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Partially.
Experimental Designs Or Analyses: Yes, all of them.
Supplementary Material: I briefly looked at the proofs.
Relation To Broader Scientific Literature: The literature has studied Multiaccuracy and Multicalibration in scenarios where sensitive attributes are available. The authors address this limitation by (i) providing theoretical bounds for Multiaccuracy and Multicalibration fairness if proxy attributes/groups are used and (ii) proposing/adapting algorithms to improve Multiaccuracy and Multicalibration fairness of ML algorithms without sensitive attribute information.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
+ Addressing bias and fairness in scenarios where sensitive attributes are unavailable is an important challenge in ML fairness.
+ Multiaccuracy and Multicalibration are important fairness definitions.
+ The paper provides theoretical supports as well as experimental results on several datasets.
+ Overall, the paper is very well written and easy to follow.
Weaknesses:
I am generally happy with the paper but I would like to state a few things:
1. I am generally unhappy with the proxy attribute approach relying on a separate model trained to estimate the sensitive attributes/groups. We are introducing a source of error into a very sensitive issue. If there turns out to be low fairness, it is not clear what the source of the problem is.
2. "the proxies exhibit some error, albeit small." => For completeness, please provide these figures.
3. Figs 1-4 do not have the labels for the axes and the lines. This makes it difficult to compare the subplots and evaluate the results.
Other Comments Or Suggestions: Minor comments:
- What is v in the ECE definition on Line 186?
- "data-scare regimes" => "data-scarce regimes".
- "notefirst that" => "note first that".
Questions For Authors: Please see Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful review! We are glad you enjoyed the paper. Our responses to your questions and concerns are below.
**Concern 1: Use of proxies**
We agree with the reviewer: proxy-sensitive attributes can be problematic as they introduce another source of error into an already sensitive issue. Nonetheless, note that there are settings where (i) this is inevitable (when ground-truth attributes aren't available/recorded), or (ii) subjects wish to withhold their ground-truth sensitive data (e.g. for privacy reasons). In both these cases, we show that one can nonetheless provide precise guarantees on maximum fairness violations; thus, these proxies can be helpful. We will make sure to discuss these concerns, and the pros and cons of using proxies, in depth in the revised version.
**Concern 2: Figures**
Regarding the comment that “the proxies exhibit some error, albeit small.”, we note that we are referring to the misclassification error of the proxies, which is precisely reported in Tables 1-3 in the main text, along with Tables 4-6 in the Appendix. We could turn these tables into figures, if necessary, but we thought the tabular format to be sufficient.
We apologize for the lack of clarity in Figures 1-4. The x-axis refers to the group memberships, and the y-axis is the expected calibration error of each of the groups, $ECE(f,g)$. To make the figures more readable, we refer to the groups as $g_1, \dots, g_k$, but we realize that it may be unclear which group each $g_i$ is referring to. We will certainly clarify in this revised version!
Additionally, the 2 lines refer to the actual violation (red) and upper bound (blue). We understand that the legend may be confusing. In the revised version, we will place “True Violation” directly above the dotted red line and “Upper Bound” directly above the blue line, making it clearer.
**Additional comments**
What is $v$ in the ECE definition?
The value $v$ refers to every value in $[0,1]$ that the predictor can take. For example, let $v = 0.3$. Then, the inner term in the ECE is just $|E[g(X,Z)(0.3 - Y)|f(X)=0.3]|$. That is, the group-wise error for all points $X$ where $f(X)=0.3$. The expected calibration error takes the average of this quantity over all $v$, where $v$ is sampled from the distribution of predictions made by $f$.
To be clearer, we will edit the manuscript with the following:
"Central to evaluating MC is the expected calibration error (ECE) for a group $g \in \mathcal{G}$
$$ECE(f,g) = E_{v \sim \mathcal{D}_f}[|E[g(X,Z)(f(X) - Y)|f(X) = v]|]$$
where the outer expectation is over $v \sim \mathcal{D}_f$, the distribution of predictions made by the model $f$ under $\mathcal{D}$"
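For concreteness, a binned plug-in estimate of this group-wise ECE could be sketched as follows (the binning estimator, the bin count, and dropping the $Z$ argument of $g$ are our illustrative simplifications, not part of the paper):

```python
import numpy as np

def group_ece(f_vals, y, g, n_bins=10):
    """Binned plug-in estimate of ECE(f, g) = E_v[|E[g(X)(f(X) - Y) | f(X) = v]|].

    f_vals: predictions f(X) in [0, 1]; y: labels; g: group memberships in [0, 1].
    Binning approximates conditioning on f(X) = v; the bin count is illustrative.
    """
    bins = np.clip((f_vals * n_bins).astype(int), 0, n_bins - 1)
    ece, n = 0.0, len(f_vals)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # inner conditional expectation over the bin, weighted by the
            # bin's probability mass (the outer expectation over v ~ D_f)
            ece += (mask.sum() / n) * abs(np.mean(g[mask] * (f_vals[mask] - y[mask])))
    return ece
```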
We also thank you for catching the other typos! We will make sure to fix them in the revised manuscript.
Finally, thank you for your questions and comments! We hope we have addressed them in a clear and satisfactory manner.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for answering my concerns. | Summary: This paper explores the problem of evaluating and achieving multiaccuracy (MA) and multicalibration (MC) when sensitive group information is missing. The authors address this challenge by learning proxy functions to predict group membership without direct access to sensitive attributes.
1. Theoretical results: The paper establishes an upper bound on the evaluation error of MA and MC for the original groups in terms of the corresponding error on the proxy groups. Additionally, the authors propose an algorithm that post-processes a predictor to ensure multiaccuracy and multicalibration with respect to the proxy groups, which in turn guarantees these properties for the original groups.
2. Empirical results: Experimental results demonstrate that post-processing based on proxy groups effectively reduces worst-case MA and MC violations for the original groups.
Claims And Evidence: 1. Line 234 "Conversely, if the worst-case violations are large, this suggests that f may potentially be significantly biased or uncalibrated for certain groups g". Although the authors emphasize the word "potentially", I still don't think it is the right claim to make in the theoretical section, as they do not have a theorem that lower-bounds the error on the original groups by the error on the proxy groups.
2. For the claim made by the authors after Theorem 5.1, they should also emphasize that the guarantee they obtain is for the so-called "worst-case" violation in their bound analysis, and does not necessarily mean that the violation itself gets smaller.
In general, the authors should be careful to make it clear that the worst-case violation is not the actual violation, and that this worst-case bound might not be tight in certain cases.
Methods And Evaluation Criteria: Learning proxy functions is a natural and intuitive approach for this problem, and the chosen benchmark/dataset is appropriate. However, I am uncertain about how the upper bound bar in the figure is determined, whether it takes finite sample analysis into consideration.
Theoretical Claims: I have checked all the proofs, and they appear to be correct. However, some theorems lack clear interpretations. As I previously pointed out, their theorems suggest that the "worst-case" error decreases, but the worst-case analysis may not be tight. Additionally, reducing worst-case violations does not necessarily imply a reduction in the true violation.
For instance, the bound in Theorem 5.3 might not be tight. The authors seem to suggest that achieving an $\epsilon$ error would require $1/\epsilon^8$ iterations, which appears overly pessimistic. I recommend that the authors carefully verify the result in Globus-Harris et al. (2023).
Moreover, it would be beneficial if the authors included a finite-sample analysis.
Experimental Designs Or Analyses: I have checked the soundness of the experiments, and the results appear solid. However, I have a few clarifications and suggestions:
1. In line 436, the statement "Notably, both models are approximately 0.03-multicalibrated with respect to the proxies" might be incorrect. Should this refer to the original groups instead of the proxies?
2. It would be helpful if the authors discussed how the predictor could be further calibrated with respect to the original groups in cases where it already exhibits small violations for the proxies. This could provide deeper insights into the limitations and effectiveness of the approach.
3. Additionally, I would like clarification on how the upper bound in the figure is determined. Does it take finite-sample analysis into account, or is it simply computed using the evaluation error? Providing details on this would improve the interpretability of the results.
Supplementary Material: No, I did not run the code.
Relation To Broader Scientific Literature: The algorithm used to achieve multiaccuracy (MA) and multicalibration (MC) in this paper is based on the work of Roth (2022) and Globus-Harris et al. (2023), meaning the authors do not introduce a new algorithm. Their primary contribution lies in addressing the scenario where the sensitive attributes that define the groups are missing. While fairness under incomplete sensitive data has been studied in the context of other fairness metrics, this paper is the first to explore the problem specifically for MA and MC. Their approach leverages proxy functions to address this challenge.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The writing is clear and well-structured, making the paper easy to follow.
2. This is the first work to study the use of proxy functions for achieving multiaccuracy (MA) and multicalibration (MC) in the setting where sensitive groups information is missing.
Weaknesses:
1. The paper lacks algorithmic innovation. Rather than introducing a new algorithm, the authors primarily adapt existing methods by applying them to proxy groups instead of the original groups. While this is a meaningful extension, it limits the novelty of their technical contributions.
Other Comments Or Suggestions: No.
Questions For Authors: 1. I would like clarification on how the upper bound in the figure is determined. Does it take finite-sample analysis into account, or is it simply computed using the evaluation error?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review! Our responses to your questions and concerns are below.
**Concern 1: General Concerns around Worst-Case Violations**
As you correctly state, we establish upper bounds on the MA/MC violations across the true groups in terms of the violations on the proxies, the proxy errors, and the $MSE$ of $f$. We would like to add that
1) We have established lower bounds using identical techniques for MA (and we believe the same holds for MC); we will include them in the revised version.
2) The bounds are **tight**, i.e. there exists a distribution over $(f, Y, g, \hat{g})$ such that the bounds hold with **equality**:
W.l.o.g., consider the AE violation for a group $g$. Consider first $MSE(f) \leq P(\hat{g} \neq g)$, so the upper bound is
$$
|AE(f, g)| \leq |AE(f, \hat{g})| + \sqrt{MSE(f)}\cdot\sqrt{P(\hat{g} \neq g)}.
$$
Assume also that $MSE(f) > 0$ and $P(\hat{g} \neq g) > 0$ (i.e. not perfect predictors). The bound holds with equality if the data-generating process satisfies:
1) $P(\hat{g} \neq g) = P(\hat{g} = 0, g=1)$
2) $f-Y = \lambda\cdot(g - \hat{g})$ where $\lambda = \sqrt{\frac{MSE(f)}{P(\hat{g} \neq g)}}$
On the other hand, when $MSE(f) > P(\hat{g} \neq g)$, the bound is
$$
|AE(f, g)| \leq |AE(f, \hat{g})| + P(\hat{g} \neq g),
$$
With analogous (but slightly more involved) steps, one can construct a distribution where this upper bound holds with equality.
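A quick numeric check of the equality construction above, in the regime $MSE(f) \leq P(\hat{g} \neq g)$ (this sketch assumes $AE(f, g) = E[g(X)(f(X) - Y)]$; the cell probabilities and the choice of $\lambda$ are arbitrary illustrative values):

```python
import numpy as np

# Enumerate the joint cells (g, g_hat) with their probabilities; all of the
# proxy error mass sits on (g=1, g_hat=0), and f - Y = lam * (g - g_hat).
p_mis = 0.1                         # P(g_hat != g), all on the (1, 0) cell
lam = 0.5                           # equals sqrt(MSE(f) / P(g_hat != g)) here
g     = np.array([1.0, 1.0, 0.0])
g_hat = np.array([1.0, 0.0, 0.0])
prob  = np.array([0.5, p_mis, 0.4])
resid = lam * (g - g_hat)           # f - Y on each cell

ae_g     = abs(np.sum(prob * g * resid))      # |AE(f, g)|
ae_g_hat = abs(np.sum(prob * g_hat * resid))  # |AE(f, g_hat)| = 0 by construction
mse      = np.sum(prob * resid ** 2)          # lam^2 * P(g_hat != g)
bound    = ae_g_hat + np.sqrt(mse) * np.sqrt(p_mis)

assert np.isclose(ae_g, bound)      # the upper bound holds with equality
```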
We will include all these points in the revised version. Given these two facts, we believe that our claim —“If the worst-case violations are large, this suggests that $f$ may potentially be significantly biased or uncalibrated for certain groups”—is reasonable, and we hope the reviewer agrees.
Additionally, we agree that it is important to emphasize that Theorem 5.1 applies to the worst-case violation, and not the true violation. In our setting, the true violation cannot be evaluated; thus, one cannot provably reduce it. Yet, we can minimize the upper bound. The reviewer is correct that this will not necessarily reduce the actual violation, but we argue this is still an effective way to provide meaningful guarantees — as demonstrated in the experiments. We can modify $f$ such that the upper bound is less than $\alpha$, concluding that the *true violation* is also less than $\alpha$. We will make sure to clarify this distinction in the revised manuscript.
**Concern 2: Figures/Finite Sample Analysis**
Recall the upper bound is a function of $MSE(f)$, $err(\hat{g})$, and $ECE(f, \hat{g})$. Here $f$ is the output of a learning algorithm that takes a training sample $S_{train}$, and the modified $f$ is the output of running the algorithm on a calibration set $S_{cal}$. To compute the upper bound, we simply compute sample estimates of all 3 quantities on a fixed held-out set and report the average over five train/calibration splits. Note that stating the theoretical results in terms of the distributional quantities is standard practice in this area of work [1,2,3].
We do not include a finite sample analysis in the manuscript but refer the readers to [2] where a complete finite sample analysis is done. The analysis repeatedly uses Chernoff bounds to show that, as long as the algorithms run on a finite sample of $n$ i.i.d samples from $\mathcal{D}$, then the guarantees carry over to the true distribution with high probability when $n$ is sufficiently large. We will happily include the main results from [2] and explain their application to our setting in our revised appendix.
**Concern 3: Lack of Algorithmic Innovation**
While we do not introduce a new algorithm, we do not think this is a weakness. When trying to extend various fairness guarantees via proxies, other works have needed to provide new algorithms to control either the true violation or upper bounds [4,5] because the theory demanded it. Our theory clearly shows that a new algorithm is **not** needed, which we believe to be an elegant contribution.
**Additional comments**
1) Thank you for pointing out the typo in Theorem 5.3! It should be $T < \frac{4}{\alpha^2}$ rounds. We will correct this.
2) Line 436 is correct. Since the models are already highly multicalibrated across the proxies, correcting more provides a minimal reduction in our bounds.
Thank you for your questions and comments! We hope we have addressed them in a clear and satisfactory manner.
[1] "Multicalibration for Confidence Scoring in LLMs", Detomasso et al. ICML 2024.
[2] "Uncertain: Modern Topics in Uncertainty Estimation", Roth 2022.
[3] "Multicalibration as Boosting for Regression", Globus-Harris et al. ICML 2023
[4] "Estimating and Controlling for EOD via Sensitive Attribute Predictors", Bharti et al. NeurIPS 2023.
[5] "Multiaccurate Proxies for Downstream Fairness", Diana et al. FaccT 2022.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the reviewers for addressing most of my questions. Overall, I feel that my concerns were generally well answered. I will maintain my current score for now, as I mentioned before the algorithmic novelty and theoretical contributions to be relatively straightforward, but I appreciate the authors' efforts in improving the paper and will discuss with AC and other reviewers in the discuss period. | null | null | null | null | null | null |
Contrastive Localized Language-Image Pre-Training | Accept (poster) | Summary: This paper explores a data-driven approach to enhance the regional representation capabilities of CLIP. The authors designed a data annotation pipeline to expand regional-level annotations and developed a training architecture featuring a Prompter. This architecture enables more effective utilization of the annotated data for fine-grained training. Experimental results, particularly those obtained under certain MLLM settings, demonstrate the advantages of the proposed method.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No proofs for theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper have several sound aspects, but also some areas that could be further explored. Please refer to the 'Weaknesses' part for more details.
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: Compared with previous work, the main contributions of this paper are: 1) construction of a large-scale region-text dataset; 2) design of a fine-grained pre-training paradigm; 3) exploration of scalability in MLLM scenarios.
Essential References Not Discussed: Please refer to the 'Weaknesses' part for more details.
Other Strengths And Weaknesses: I believe this is a high-quality paper, and I greatly appreciate the authors' contributions in terms of the data pipeline, training architecture, and MLLM evaluation. However, I have the following concerns:
1. UMG-CLIP [ECCV 2024] focuses on similar issues as this paper. Nevertheless, there is a lack of comparison with it in the authors' discussion. I expect to see more in-depth discussions, including:
a) Data annotation process. UMG-CLIP first uses an open-vocabulary detector to predict bounding boxes and then generates descriptions for each box. In contrast, this paper first identifies entities and then predicts bounding boxes for each entity (somewhat similar to RegionCLIP). The advantages and disadvantages of these two pipelines need to be explored. In particular, I noticed that UMG-CLIP claims to have good accuracy for bounding boxes (Table 14 in their paper), and its region-level descriptions seem to be more detailed (Figure 3 in their paper).
b) Training architecture. The main difference between this paper and UMG-CLIP appears to be the replacement of ROI-ALIGN with a prompter. The authors claim that this is because the inaccurate pseudo-annotations limit the effectiveness of ROI-ALIGN. If this is the case, could ROI-ALIGN be considered for use when ground-truth annotations are available? Additionally, the paper lacks an evaluation of annotation accuracy. Does this imply that the bounding boxes generated by this paper are of poor quality, and the training architecture is a compromise due to the low-quality data?
2. I look forward to seeing more validation results, such as experimental verifications in open-world detection (similar to RegionCLIP) or segmentation scenarios (similar to UMG-CLIP).
Other Comments Or Suggestions: Please refer to the 'Weaknesses' part.
Questions For Authors: Please refer to the 'Weaknesses' part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for the positive review and constructive comments.
> UMG-CLIP
We thank you for pointing out the reference that we briefly compared in L363 left. We agree that there is some technical similarity between our CLOC and UMG-CLIP, but the goals and positioning of our work and theirs are quite different from the following perspectives.
UMG-CLIP is designed primarily for vision-centric tasks such as detection and segmentation, whereas our focus is on MLLMs (L33, right in the Introduction). Unlike open-vocabulary vision tasks, which involve a relatively small set of classes, MLLM VQA tasks require more extensive language understanding and thus demand large-scale pre-training data. This distinction also influences the design of our prompter architecture, which incorporates attention layers tailored to the downstream use cases of the LLM decoder (further discussed in L256, left).
Regarding the annotation pipeline, the primary difference lies in scale. UMG-CLIP fine-tunes a pre-trained CLIP model for dense vision tasks on a 41M-image dataset, whereas our approach pre-trains from scratch on up to 2B images. Additionally, unlike UMG-CLIP and RegionCLIP, which first predict bounding boxes and then run a captioner on each box, our VESL pipeline (Section 4) does not scale with the number of boxes, making it significantly more efficient for data annotation. Notably, annotating billions of images for our experiments required over 500 GPUs for more than a week (L757, right).
Last but not least, we see the UMG-CLIP fine-tuning approach as complementary to our CLOC pre-training rather than conflicting with it.
> Open-world detection
We thank you for the advice on evaluating our encoder on open-vocab dense vision tasks. First, we want to emphasize that our original motivation for this work is for MLLM tasks with localization use cases such as conversational referring and grounding (e.g., Table 4), but not for dense vision tasks. And in Table 2, we have reported competitive results of zero-shot object recognition and retrieval given bounding boxes.
To further address your concern, for open-vocabulary detection, we provide additional zero-shot evaluation results on COCO Detection (minival), ODinW (test-dev), and LVIS-Det (minival). When comparing GLIP [1] and CLOC, we observe that CLOC consistently achieves better results than GLIP across all backbone categories (T / B / L), suggesting that CLOC offers advantages in localization and object detection performance. Notably, GLIP employs DyHead—a strong decoder/head module—on top of the encoder, whereas our ablation study uses only two simple heads for classification and regression. This further supports that the encoder representation in CLOC is indeed superior. See the table below for detailed results.
| Model | ViT | COCO-Det (minival) | ODinW (test) | LVIS-Det (minival) |
|--------|----------|---------------------|--------------|--------------------|
| GLIP-T | ViT-T/16 | 46.6 | 46.5 | 26.0 |
| GLIP-L | ViT-L/14 | 49.8 | 52.1 | 37.3 |
| CLOC-B | ViT-B/16 | 47.3 | 48.4 | 29.6 |
| CLOC-L | ViT-L/14 | 50.8 | 53.6 | 38.1 |
[1] Grounded Language-Image Pre-training, CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I will improve my rating. | Summary: This work introduces a dynamic attention mechanism, inspired by SAM, to aggregate regional image features and perform contrastive learning at both the image-text and region-text levels. The approach is novel in the context of visual pretraining.
Claims And Evidence: The experiments (Table 2, 3) demonstrate strong results in region-aware visual pretraining compared to CLIP. However, it remains unclear whether the performance gains stem from the use of more or cleaner data or from the pretraining schema itself.
Methods And Evaluation Criteria: The methods seem reasonable and elegant to me. However, the evaluation should be more comprehensive, including comparisons with other location-aware pretraining methods (e.g., LocCa, RegionCLIP) on the same benchmarks, such as RefCOCOs.
Theoretical Claims: I checked, please see "Other Strengths And Weaknesses"
Experimental Designs Or Analyses: Can this work be seamlessly extended to zero-shot object detection?
Supplementary Material: Yes, I scanned through the dataset.
Relation To Broader Scientific Literature: Visual encoder pretraining
Essential References Not Discussed: The literature is covered well; more discussion of training efficiency is needed.
Other Strengths And Weaknesses: Weakness:
1. A broader comparison with additional methods, as mentioned above.
2. More discussion and comparisons on training efficiency (e.g., with CLIP, LocCa) would be valuable.
3. The general issue of attribute binding, inherited from CLIP, is not well addressed.
Minor:
1. For Eq. 2, please pay attention to the superscripts in the formula, especially since $m'$ is not defined.
2. For Eq. 4, please properly define $L_{CLOC}$.
Other Comments Or Suggestions: See above
Questions For Authors: Will you release the pretrained model (including the Prompter) to the public?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for the positive review and constructive comments.
> Unclear whether the performance gains stem from the use of more or cleaner data or from the pretraining schema itself.
In Table 2, we provide detailed ablations of our proposed ingredients on top of the CLIP model we trained ourselves, including the prompter design and the training labels generated by our pipeline. Note that the CLIP model (row 2) we compared against was trained on the same image data as CLOC for a fair comparison.
> Can this work be seamlessly extended to zero-shot object detection?
Yes, CLOC can be extended to zero-shot object detection. We provide additional evaluation results on COCO Detection (minival), ODinW (test-dev), and LVIS-Det (minival). When comparing GLIP [1] and CLOC, we observe that CLOC consistently achieves better results than GLIP across all backbone categories (T / B / L), suggesting that CLOC offers advantages in localization and object detection performance. Notably, GLIP employs DyHead—a strong decoder/head module—on top of the encoder, whereas our ablation study uses only two simple heads for classification and regression. This further supports that the encoder representation in CLOC is indeed superior. See the table below for detailed results.
| Model | ViT | COCO-Det (minival) | ODinW (test) | LVIS-Det (minival) |
|--------|----------|---------------------|--------------|--------------------|
| GLIP-T | ViT-T/16 | 46.6 | 46.5 | 26.0 |
| GLIP-L | ViT-L/14 | 49.8 | 52.1 | 37.3 |
| CLOC-B | ViT-B/16 | 47.3 | 48.4 | 29.6 |
| CLOC-L | ViT-L/14 | 50.8 | 53.6 | 38.1 |
> A broader comparison with additional methods
Thank you for the suggestion. We agree that a broader comparison to more encoders would certainly be great. However, many previous models (LocCa, RegionCLIP) are trained with quite different data, labels, training costs, architectures, etc., which makes it hard to draw a direct, fair comparison, and some of them are not open-sourced. Therefore, we have limited the scope of our paper to the CLIP method and carefully ablated it in the same setting (e.g., training images, number of steps) as closely as possible.
> More discussion and comparisons on training efficiency (e.g., with CLIP, LocCa)
We provide discussion in the “Training cost” paragraph (L789 left). Compared to CLIP, the extra cost is small for the object-level contrastive loss and the prompter — we observe about 10% more GPU time. Notably, the lightweight prompter operates on the image embedding that is shared across all the prompts within an image. The main overhead is to compute the image embedding through the ViT, which does not scale with the number of prompts. Compared to LocCa, our CLOC is much more lightweight since LocCa needs a full encoder-decoder transformer for autoregressive next-token prediction.
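To illustrate this cost structure, a minimal numpy sketch of one cross-attention prompter step (the single-head design, shapes, and names here are our illustrative assumptions, not the actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prompter(image_tokens, prompt_embs):
    """Each prompt (e.g., an embedded box or object description) attends over
    the image tokens to pool a region embedding. Single-head, no projections:
    purely a cost-structure illustration."""
    d = image_tokens.shape[-1]
    attn = softmax(prompt_embs @ image_tokens.T / np.sqrt(d))  # (k, n)
    return attn @ image_tokens                                  # (k, d)

# The expensive ViT pass producing image_tokens runs once per image; the
# lightweight prompter then reuses it for any number k of prompts.
image_tokens = np.random.randn(196, 64)   # n = 196 patch tokens, d = 64
regions = prompter(image_tokens, np.random.randn(5, 64))
```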
> Attribute binding issue of CLIP
In this paper, indeed, we do not directly address the attribute binding shortcoming of the original CLIP. However, with our promptable embedding design (Figure 2 & Section 3.2), we think CLOC provides an alternative approach that allows users to interact with our encoder and obtain a fine-grained embedding for a prompt of interest by specifying a box location or an object description.
> Minor fixes for Eq2 and Eq4
Thanks for pointing them out. We will revise them accordingly.
> Will you release the pre-trained model (including the Prompter) to the public?
Yes, we aim to release the pre-trained model, and are actively working on that.
[1] Grounded Language-Image Pre-training, CVPR 2022. | Summary: The submission introduces a new pre-training method called Contrastive Localized Language-Image Pre-training (CLOC). The pre-training method extends CLIP pre-training with additional losses based on the outputs of a new "Prompter" module. This new module consists of a light-weight transformer layer that enhances CLIP image embeddings for regional losses (similarity to bounding box, and grounding of region description).
For training CLOC, the paper also introduces a new captioning pipeline termed Visually-Enriched and Spatially-Localized (VESL). This pipeline first generates detailed image captions and then uses a text-conditioned zero-shot detection model to generate bounding boxes for sub-queries of the caption generated by named-entity recognition.
The paper then compares a CLOC model with a CLIP baseline that is trained on the same data using the same hyper parameters but without the CLOC losses. The performance is compared on Ferret bench, RefLVIS, RefCOCO, and various VQA benchmarks.
Claims And Evidence: The paper claims that CLOC outperforms traditional CLIP on referring and grounding tasks. This claim is supported by Tables 3 and 4.
The paper also claims that CLOC unlocks zero-shot region-level capabilities. This claim is supported by Table 2.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are valid and make sense to evaluate the proposed method.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: The experimental designs are appropriate to confirm the two claims mentioned above.
The authors first introduce their own reproduction of CLIP and compare it to the original OpenAI CLIP. Then they ablate various design decisions of their proposed method (CLOC / VESL) and report zero-shot performance on various image and region tasks.
The baseline CLIP and the improved CLOC models are then compared in different benchmarks to confirm the claims about improved capabilities with respect to referring and grounding tasks (Ferret-Bench, RefLVIS, RefCOCO, Flickr), and also improvements on some image-level multimodal benchmarks (Table 5).
Supplementary Material: I have read the supplementary material. I recommend at least reading Section D, which answered a number of questions I had after reading the paper.
Relation To Broader Scientific Literature: The pre-training recipe (CLOC) is mainly anchored on the original CLIP paper (Radford, 2021). This is done on the context of MLLMs that use CLIP as a vision encoder, and here the paper refers to (Tong, 2024). The idea of the light-weight Prompter module is introduced by referring to (Kirillov, 2023). The data annotation pipeline that is used to train the CLOC model is discussed in the context of previous works such as (Minderer, 2024) and (Kirillov, 2023).
Essential References Not Discussed: I think all essential references are mentioned.
Other Strengths And Weaknesses: The paper is well written and illustrated. The formulation of the added module and losses is very clear, and the paper does a great job at walking the reader through the process. I also enjoyed how the ablations are first presented and then discussed by comparing individual rows in the main text.
The main weakness of the work is the superficial ablation of the VESL captioning pipeline introduced. I do not find the selected examples in Figure A very convincing, and it's easy to imagine that captioning hallucinations might create problems (although they might be filtered by the object detection model), and that the strict named-entity recognition loses a lot of the interesting information (note that the other figures are a bit misleading in this respect, e.g. a description like "a stunning ocean view" would never be extracted by NER). From the numbers presented in Table 2, I'm a bit puzzled to see such different effects of rows 3-5 vs. 13-15.
Other Comments Or Suggestions: Missing clarity:
1. lines 260-262 (right column): are the pairs ignored or are the gradients on $f_T$ ignored?
Typos:
1. line 117: "Another less and arguably more"
1. line 124: "and are more computation overhead"
1. line 276: "annotates it"
1. line 294 (right column): "We implement the in JAX"
1. line 429 (right column): "in the foresee of"
Various remarks:
1. "CLIP has become arguably the default choice of vision backbone for multimodal large language models (MLLMs) (Liu et al., 2023; McKinzie et al., 2024) due to its superior prior knowledge in aligning vision and language (Tong et al., 2024)." – Looking at (Tong, 2024) Section D, it's not clear to me how this reference would give evidence to the claim that CLIP is the default choice due to its superior prior knowledge (e.g. vs. SigLIP). Consider backing up this claim more clearly, or re-formulating (both in "Introduction" and "Related Work" sections).
Questions For Authors: None.
Ethical Review Concerns: No concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank you for the positive review and constructive comments.
> The main weakness of the work is the superficial ablation of the VESL captioning pipeline introduced.
Thank you for your constructive comments, and we will consider better examples in our figures. We agree with the reviewer that image captioning could suffer from hallucination, though it enriches the visual description. The reviewer is also correct that our pipeline filters such cases by relying on the object detector, which is quite effective as it was pre-trained on thousands of common objects. A central design challenge for our pipeline is balancing the benefits of richer captions with the potential risk of hallucination. We believe our design is robust, considering the effectiveness of current state-of-the-art open-vocabulary object detectors and the inherent resilience of contrastive learning objectives to noisy text annotations in large-scale training.
In Table 2 (rows 3-5 vs. 13-15), we compare with the baseline that does not use the image captioner but the AltText (e.g., left of Figure 3). It could be hard to extract useful object phrases from AltText for the open-vocab detector.
For the hallucination issue, we have investigated our captioning pipeline and compared it with other recent works to show its superiority in terms of hallucination. We compared it to public models: LLaVA-1.5, Shikra, MiniGPT-4, and InstructBLIP on CHAIR scores [1] (lower values indicate less hallucination). CHAIR_i measures the fraction of hallucinated object instances, and CHAIR_s measures the fraction of sentences containing at least one hallucinated object. The results are summarized below:
| Captioner | CHAIR_i | CHAIR_s |
|--------|----------|----------|
| InstructBLIP | 14.5 | 30.0 |
| MiniGPT-4 | 8.2 | 24.2 |
| Shikra | 7.0 | 22.0 |
| LLaVA-1.5 | 6.2 | 20.6 |
| Ours | 5.9 | 19.6 |
Although our captioning pipeline hallucinates less than the other models, its long captions may still unavoidably introduce hallucinations; these can be further mitigated since we only consider confident objects agreed upon by the detector (L296 left). We also remove very generic words and stopwords, as noted in the code in Listing 1 in the appendix. We believe having more accurate object labels is the key to the improvements in Table 2 for our pipeline, as evidenced by the 11.6 regions per image identified by the pipeline (only 5.1 for the baseline) reported in Table 1 (L275 right).
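For readers unfamiliar with the metric, the CHAIR scores above can be computed as follows. This is a minimal sketch of the standard definition from Rohrbach et al. [1], not the authors' evaluation code, and the toy inputs are invented for illustration:

```python
def chair_scores(caption_objects, gt_objects):
    """CHAIR_i: fraction of mentioned object instances that are hallucinated.
    CHAIR_s: fraction of captions containing at least one hallucinated object.

    caption_objects: list of sets of objects mentioned in each caption
    gt_objects: list of sets of ground-truth objects for the same images
    """
    mentions, hallucinated, bad_sentences = 0, 0, 0
    for mentioned, truth in zip(caption_objects, gt_objects):
        bad = mentioned - truth          # mentioned but absent from the image
        mentions += len(mentioned)
        hallucinated += len(bad)
        bad_sentences += bool(bad)
    chair_i = hallucinated / max(mentions, 1)
    chair_s = bad_sentences / max(len(caption_objects), 1)
    return chair_i, chair_s

# toy example: the first caption hallucinates a "dog" not present in the image
chair_i, chair_s = chair_scores(
    [{"person", "dog"}, {"car"}],
    [{"person"}, {"car", "tree"}],
)
```

Lower values on both metrics indicate fewer hallucinated objects.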
> lines 260-262 (right column): are the pairs ignored or are the gradients on $f_T$ ignored?
For filtering region-text conflicts in Section 3.4, the region-text pairs are ignored in Equation 2. That is, these elements are “masked” and will not be considered in the contrastive loss matrix. We will make this clearer in the final version.
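To make the masking concrete, here is an illustrative NumPy sketch of a contrastive loss in which conflicting region-text pairs are excluded from the loss matrix. The actual implementation is in JAX; this simplified version (function name and toy inputs are ours) only mirrors the masking idea, and assumes the diagonal positives are never masked:

```python
import numpy as np

def masked_contrastive_loss(region_emb, text_emb, mask, temperature=0.07):
    """Contrastive loss over an (N, N) similarity matrix where entries with
    mask == False (conflicting region-text pairs) are dropped from the softmax.
    Assumes the diagonal positives are never masked."""
    logits = region_emb @ text_emb.T / temperature
    logits = np.where(mask, logits, -np.inf)           # "mask" conflicting pairs
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # cross-entropy on positives

# toy example: 3 region/text pairs; pair (region 0, text 1) conflicts
r = t = np.eye(3)                                      # dummy normalized embeddings
full_mask = np.ones((3, 3), dtype=bool)
conflict_mask = full_mask.copy()
conflict_mask[0, 1] = False                            # excluded from the loss matrix
loss_full = masked_contrastive_loss(r, t, full_mask)
loss_masked = masked_contrastive_loss(r, t, conflict_mask)
```

Masking an entry removes that pair's term from the softmax denominator, so the masked element contributes no gradient to either encoder.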
> Remarks on CLIP
We apologize for any confusion regarding this statement. Our reference to “CLIP” in the context of MLLMs was intended to denote the broader family of language-supervised methods, including both CLIP (as a representative model) and SigLIP. Specifically, we were citing the first row block in Table 12 of Section D (Tong, 2024), which demonstrates that these methods outperform others, such as self-supervised approaches, in MLLMs. We will make this clearer and revise both the introduction and related works as you suggested.
> Typos
Thank you for pointing out the typos, and we have fixed them in the revised manuscript.
[1] Rohrbach, A., Hendricks, L.A., Burns, K., Darrell, T. and Saenko, K., 2018. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156. | Summary: This paper proposes Contrastive Localized Language-Image Pre-training (CLOC), an approach extending CLIP-style image-text contrastive learning to also incorporate region-level alignment. The authors introduce a lightweight “Prompter” module that can transform global image embeddings into region-aware representations given bounding boxes. They further design a large-scale pseudo-labeling pipeline (VESL) to generate region-text annotations, resulting in a 2B dataset. Through extensive experiments across classification, retrieval, and multimodal reasoning tasks, the method demonstrates solid improvements over standard CLIP, particularly in fine-grained vision-language scenarios like referring expression comprehension and region-based VQA.
Claims And Evidence: - Claim: CLOC enhances fine-grained visual understanding in downstream tasks that require identifying or referring to specific image regions.
Evidence: Experiments on region-level classification and retrieval benchmarks (e.g., COCO, GRIT) and on MLLM tasks (Ferret, LLaVA) show that CLOC consistently outperforms the CLIP baseline in tasks needing spatial grounding.
Methods And Evaluation Criteria: Yes. The proposed methods make sense.
Theoretical Claims: No theoretical claims are made.
Experimental Designs Or Analyses: Yes. I have checked the experimental designs.
Supplementary Material: Yes. I have read all the supplementary material.
Relation To Broader Scientific Literature: This work is a CLIP Extension, which tries to incorporate region-level alignment.
Essential References Not Discussed: The author should discuss works such as [*1].
[*1] Wan, Bo, et al. "Locca: Visual pretraining with location-aware captioners." Advances in Neural Information Processing Systems 37 (2024): 116355-116387.
Other Strengths And Weaknesses: **Strengths**
- The prompter module is lightweight, which only causes minimal overhead compared to baseline CLIP.
- The performance looks promising. From the table and Figure, we can see consistent improvements in fine-grained tasks, region-level retrieval, and large multimodal model reasoning.
- The data creation pipeline is scalable and might make a great contribution to the community.
**Weaknesses**
- The quality and diversity of region-text pairs depend heavily on the open-vocabulary detector and captioning pipeline—if these pipelines introduce bias or errors, the final model inherits them.
Other Comments Or Suggestions: - It would be interesting to see if text-based region prompts (e.g., referencing “the person on the left”) work well out of the box at inference without bounding boxes.
Questions For Authors: - The average text length per caption is 2.1, which is much shorter than in RefCOCOg or WiT. Will this potentially affect the expressiveness of the image embedding?
- Is it possible to include some direct comparison of CLOC to dedicated open-vocabulary detection models (e.g., in terms of box AP metrics)?
- Can you provide more detail on how the bounding boxes are sampled in the training?
- Did the authors consider other designs of the prompter?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for the positive review and constructive comments.
> The quality and diversity of region-text pairs depend heavily on the open-vocabulary detector and captioning pipeline.
Thank you for pointing out this important aspect. We agree that the quality of the open-vocabulary detector and captioner is important. Our pipeline is built upon the recent advances in these models (L275, right). We think the proposed framework is promising since, with better and better detectors and captioners introduced, the pipeline can seemingly enjoy the improvements, such as less bias and errors. We briefly discussed it in L782 (left column) of the Appendix, and will further emphasize the pipeline’s dependency on these models for pseudo-labeling in the final version.
To further address the reviewer’s concern, we use the hallucination metric (CHAIR score, where lower is better) to assess the quality of the synthetic captions. As shown below, our captioner demonstrates high quality compared to other models.
| Captioner | CHAIR_i | CHAIR_s |
|--------|----------|----------|
| InstructBLIP | 14.5 | 30.0 |
| MiniGPT-4 | 8.2 | 24.2 |
| Shikra | 7.0 | 22.0 |
| LLaVA-1.5 | 6.2 | 20.6 |
| Ours | 5.9 | 19.6 |
> Text-based region prompts
In our experiments, we observed a reasonably low L1 distance of 0.02 between the predicted boxes and ground-truth boxes when the Prompter receives a text region description as input, indicating that text-based region prompts performed well out of the box. In the revision, we will include qualitative visualization examples to illustrate this. A more in-depth investigation is left for future work.
> Region caption is short
The reviewer is correct that the region-level captions are much shorter than the image-level captions. However, we want to clarify that in Equation 4, we still retain the original image-level CLIP loss, ensuring that the image embedding quality remains on par with the original CLIP, as evidenced by the “Image tasks” results in Table 2.
> Comparison to open-vocab detection models
First, in our paper, we included a comparison in Footnote 1 (L379, left), demonstrating that on the region classification task (predicting class names given a bounding box), our approach achieves over 70% mAcc on COCO, significantly outperforming the 47% reported in previous work.
For comparison with open-vocabulary detection models, we also provide zero-shot evaluation results on COCO Detection (minival), ODinW (test-dev), and LVIS-Det (minival). When comparing GLIP [1] and CLOC, we observe that CLOC consistently achieves better results than GLIP across all backbone categories (T / B / L), suggesting that CLOC offers advantages in localization and object detection performance. Notably, GLIP employs DyHead—a strong decoder/head module—on top of the encoder, whereas our ablation study uses only two simple heads for classification and regression. This further supports that the encoder representation in CLOC is indeed superior. See the table below for detailed results.
| Model | ViT | COCO-Det (minival) | ODinW (test) | LVIS-Det (minival) |
|--------|----------|---------------------|--------------|--------------------|
| GLIP-T | ViT-T/16 | 46.6 | 46.5 | 26.0 |
| GLIP-L | ViT-L/14 | 49.8 | 52.1 | 37.3 |
| CLOC-B | ViT-B/16 | 47.3 | 48.4 | 29.6 |
| CLOC-L | ViT-L/14 | 50.8 | 53.6 | 38.1 |
> How bounding boxes are sampled during training
During training, we simply sample 4 boxes randomly (padded if fewer) per image.
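As a minimal sketch of this sampling scheme (our own illustration; the padding strategy here, repeating the last box, is an assumption, since the rebuttal only says "padded"):

```python
import numpy as np

def sample_boxes(boxes, k=4, rng=None):
    """Randomly sample k boxes per image; pad by repeating the last box when
    fewer than k are available (padding strategy assumed, not from the paper)."""
    rng = rng or np.random.default_rng(0)
    boxes = np.asarray(boxes, dtype=float)
    if len(boxes) >= k:
        idx = rng.choice(len(boxes), size=k, replace=False)
        return boxes[idx]
    pad = np.repeat(boxes[-1:], k - len(boxes), axis=0)
    return np.concatenate([boxes, pad], axis=0)
```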
> Designs of the prompter
In our experiments, we mainly consider a prompter that takes a bounding box or a single text embedding as the prompt. We compared with a baseline implementation of RoIAlign in Table 2 (rows 4, 9, 14), and confirm that the proposed prompter is a better design (discussion in L360 right and also Section 3.4). For other designs, we consider the following as promising future work: (1) different types of prompts, such as points, masks, etc.; (2) multi-prompts or compositional prompts for higher-level prompting. We included more discussion in the L770 "Future directions" paragraph.
> LocCa reference
We thank the reviewer for this suggestion. LocCa is indeed a relevant work which we have cited and discussed in Section 2 (L100, right). However, LocCa differs significantly from our approach in two important ways: (1) it employs a full encoder-decoder transformer architecture, thus being substantially less efficient, especially for large-scale training; (2) LocCa embeddings do not directly facilitate zero-shot retrieval or classification tasks as our embeddings do (Table 2). The focus of our method remains specifically on improving CLIP-based localization capabilities, and we will further clarify this distinction in our final revision.
[1] Grounded Language-Image Pre-training, CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Most of my concerns are solved. I will update my rating. | null | null | null | null | null | null |
Provably Efficient RL for Linear MDPs under Instantaneous Safety Constraints in Non-Convex Feature Spaces | Accept (poster) | Summary: This paper establishes a regret bound applicable to both star-convex and non-star-convex cases. Moreover, the violation of safety constraints is zero with high probability throughout the learning process. A key technical challenge in these settings is bounding the covering number of the value-function class, which is essential for achieving value-aware uniform concentration in model-free function approximation. For the star-convex setting, this paper develops a novel technique called Objective–Constraint Decomposition (OCD) to properly bound the covering number. This result also resolves an error in a previous work on constrained RL. In non-star-convex scenarios, where the covering number can become infinitely large, this paper proposes a two-phase algorithm, Non-Convex Safe Least Squares Value Iteration (NCS-LSVI), which first reduces uncertainty about the safe set by playing a known safe policy. After that, it carefully balances exploration and exploitation to achieve the regret bound. Finally, numerical simulations on an autonomous driving scenario demonstrate the effectiveness of NCS-LSVI.
Claims And Evidence: The claims made in this paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: The theoretical results look reasonable, but I didn’t go through every proof.
Experimental Designs Or Analyses: The experiments look reasonable.
Supplementary Material: I didn’t read the supplementary material.
Relation To Broader Scientific Literature: This paper is relevant to the literature.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. This paper is well executed. It designs an algorithm and establishes regret bounds for safe RL with non-convex feature spaces.
2. Empirical evaluations are provided to demonstrate the effectiveness of the proposed algorithm.
3. This paper is well written overall.
Weaknesses:
1. My main concern is on the unique challenges and technical novelty compared to prior works on safe RL for linear MDPs, especially [Amani et al. 2021]. The proposed algorithm and theoretical analysis in this paper looks similar to [Amani et al. 2021]. More discussions on the challenges brought by non-convex feature spaces are needed.
2. The font in Figure 3 is too small. Is there any more baseline algorithm or adaptation algorithm which can be included for comparison in empirical evaluations?
Reference:
Amani, S., Thrampoulidis, C., and Yang, L. Safe rein forcement learning with linear function approximation. In International Conference on Machine Learning, pp. 243–253. PMLR, 2021.
Other Comments Or Suggestions: Please see the weaknesses above.
Questions For Authors: Please see the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback. We respond to each of your comments below.
**Q1** My main concern is on the unique challenges and technical novelty compared to prior works on safe RL, especially [Amani et al. 2021].
**A1** Below we highlight our unique contributions and technical challenges in two parts.
**1. Identifying and Fixing a Key Proof Gap in Amani et al. [2021]**
We first note that the proof in Amani et al. (2021) is incorrect because they directly apply covering-number arguments from unconstrained RL. In the unconstrained case, $V_h^k$ is computed as a maximum over a fixed decision set $\mathcal{A}$, where the max operator acts as a contraction. Hence, a covering over $Q$-functions directly implies one over $V$-functions. In contrast, in constrained RL, the maximization is performed over the data-dependent estimated safe set $\mathcal{A}_h^k(s)$. Since this set varies with the data, even identical $Q$-functions can induce very different $V$-functions if the estimated safe sets differ significantly. As a result, bounding only the $Q$-function class is not enough to ensure uniform convergence of the value functions.
To address this issue, we develop a novel Objective–Constraint Decomposition (OCD) technique, explicitly separating the complexities of estimating the $Q$-functions (objective) and estimating the safe set (please see Section 6). Our key insight leverages the mild geometric structure provided by star-convexity: small variations in the safety parameters do not drastically alter the feasible set, ensuring two close $Q$-functions yield close value functions despite the evolving safe set. This resolves the proof gap in Amani et al. [2021] and our analysis yields an additional factor of $O(\sqrt{\log(\frac{1}{\tau})})$. This shows how tighter constraints (smaller $\tau$) inflate the covering number. Thus, our contribution significantly advances the theoretical understanding of constrained RL.
**2. Algorithmic and Theoretical Contributions for Non-Star-Convex settings.**
Our next main contribution involves both algorithmic and theoretical advancements. On the algorithmic side, we developed the novel NCS-LSVI algorithm, which differs from the SLUCB-QVI algorithm in Amani et al. [2021], and is designed for non-star-convex safe RL environments, common in applications like autonomous driving and robotics, where obstacles or constraints create disjoint or irregular safe regions. On the theoretical side, we provide a regret analysis of our method. We elaborate below.
**The Challenge of Non-Star-Convex Problems.**
Lemma 5.3 shows that, unlike the star-convex case, small variations in the safety parameter can drastically change the decision set, leading to an arbitrarily large covering number in non-star-convex scenarios. Thus, in these cases, the algorithm proposed for star-convex settings cannot attain a proper covering bound using our OCD technique, indicating the inadequacy of previous star-convex-based methods and highlighting the need for novel approaches.
**A Two-Phase Algorithm for Non-Star-Convex Environments**
Motivated by these limitations, we propose NCS-LSVI, a two-phase algorithm designed for non-star-convex feature spaces under our Local Point Assumption. Drawing on the insight that large shifts in the safe set complicate the analysis, NCS-LSVI begins with a pure-safe exploration phase that samples from a small neighborhood around the initial safe policy. By the end of this phase, the estimated safe set stabilizes in the sense that small changes in the safety parameter lead to bounded changes in the value function with high probability, enabling more tractable covering-number bounds. In Theorem 5.4, we show that by characterizing the number of episodes in the pure exploration phase, NCS-LSVI achieves sublinear regret, specifically $O(\sqrt{K})$, along with an additional $O(\frac{\log(K)}{\epsilon^2 \iota^2})$ term that stems from the pure exploration phase. This extra term reflects the fundamental complexity of non-star-convexity, making the result nontrivial.
**Q2** The font in Figure 3 is too small. Can more baseline or adaptation algorithms be added for comparison?
**A2** We will revise the font size in Figure 3 for the final submission. To address the baseline comparison, we provide additional simulation plots at this anonymous link https://anonymous.4open.science/r/ICML-Safe-RL-figures-CE2D. Figure [a] shows the regret of NCS-LSVI over more episodes, demonstrating sublinear growth for $K' = 2000$. Figure [b] shows the regret of a sub-optimal but safe baseline constrained to an \(\epsilon\)-neighborhood of the initial policy. Figure [c] shows the regret of LSVI-UCB (Jin et al., 2020), which achieves lower regret but violates constraints, as shown in Figure [d], where cumulative violations grow linearly. The other two methods have zero violations, so no violation plots are included. We will add these results in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanation on the technical novelty compared to Amani et al. [2021]. My concerns were well addressed. I raised my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback and for reconsidering your score. We are glad our clarifications were helpful and truly appreciate your support. | Summary: This paper investigates safe reinforcement learning (RL) with instantaneous safety constraints and linear function approximation, where the objective is to ensure zero violations at each step. The authors first identify a technical error in previous work (Amani et al., 2021) and introduce a novel approach, OCD, to address this issue. They then propose a new assumption on the linear structure, termed the "Local Point Assumption," which is more realistic than the star-convexity assumption in Amani et al. (2021). By incorporating an additional pure exploration phase, they prove that the algorithm achieves sublinear regret while maintaining zero violations. Experimental results further validate their theoretical findings.
Claims And Evidence: The theoretical claims made in the submission are supported by clear proof.
Methods And Evaluation Criteria: The evaluation criteria are the cumulative regret and the violation, which is common and reasonable in the safe RL.
Theoretical Claims: I check most of the proof in the paper. It is correct to me. However, the covering number seems to be bounded by the union of the covering number of $V_h^k$ at each step. Hence, it will only induce an extra $\log(K)$ factor in the final regret, making the contribution of correcting the previous theoretical proof somewhat unclear.
Experimental Designs Or Analyses: The experiment is conducted on an autonomous vehicle path-planning task. However, the results in Figures 3 and 6 do not exhibit a clear sublinear regret pattern. (Even for $K'=2000$, the regret still looks linear after 2000 episodes, reaching about 6000 at 10000 episodes.) The authors should run additional episodes to better demonstrate the sublinear regret behavior.
Supplementary Material: The source code is contained in the supplementary material. Due to the time limit, I didn't check it.
Relation To Broader Scientific Literature: The key contribution is (a) finding a technical error in an essential paper for safe linear MDP with instantaneous constraint and overcoming it by bounding the covering number using additional tricks, and (b) changing the star-convexity assumption to a more realistic local-point assumption.
Essential References Not Discussed: I don't find any essential additional references that need to be discussed.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written. The presentation about the previous technical error is clear and easy to understand.
2. The theoretical contribution is essential. [Amani et al. 2021] is an important paper on safe RL with linear function approximation, and identifying and correcting its error represents a big contribution.
Weaknesses:
1. The intuition for why the local-point assumption is better than the star-convexity assumption seems unclear. The paper does not contain a convincing example to illustrate that the local-point assumption is reasonable; the example provided only illustrates why star-convexity can be unreasonable.
2. The experiment result is unsatisfying: First, the author does not clarify why the local-point assumption holds in their experiment setup. Also, the regret looks linear in the number of episodes. The author should perform more episodes to enhance the quality of the experiment.
Other Comments Or Suggestions: See the questions part.
Questions For Authors: 1. In the given autonomous driving example, I believe the star-convexity assumption still holds. The feasible region remains continuous, while actions corresponding to the inaccessible region are unsafe. In fact, the star-convexity assumption only imposes a continuity structure on the action set, not necessarily on the safe action set. Hence, I think it is a reasonable assumption in real-world scenarios. Could the authors clarify the example further to illustrate why the star-convexity assumption fails in this case? Also, in this case, why does the local-point assumption hold?
2. Can the covering number be bounded by the union of the covering number of $Q_h^k$ at each step? It will only induce an acceptable $\log K$ factor in the final term. I am happy to increase the score if the author explains why this approach fails.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your comments and have addressed your points individually below.
**Q1** Can the covering number be bounded by the union of the covering number of Q_h^k at each step? It will only induce an acceptable logK factor in the final term.
**A1** We thank the reviewer for this insightful question. However, this suggested approach does not work in our case due to the unique structure of constrained RL with data-dependent safe sets. We elaborate below.
While the $Q_h^k$ functions in our setup are linear and their covering number is easy to bound, this is not sufficient for constrained RL. In the unconstrained case, $V_h^k$ is computed as a maximum over a fixed decision set $\mathcal{A}$, where the max operator acts as a contraction. Hence, a covering over $Q$-functions directly implies one over $V$-functions. In contrast, in constrained RL, the maximization is performed over the data-dependent estimated safe set $\mathcal{A}_h^k(s)$. Since this set varies with the data, even identical $Q$-functions can induce very different $V$-functions if the estimated safe sets differ significantly. As a result, bounding only the $Q$-function class is not enough to ensure uniform convergence of the value functions. We address this issue in Section 5 by using OCD to decompose the effects of the estimated safe set and the $Q$-function on the covering number.
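As a toy numerical illustration of this point (our own, not from the paper): two runs with an identical $Q$-function but different estimated safe sets produce very different value functions.

```python
import numpy as np

q = np.array([1.0, 0.5, 0.2, 3.0])           # one fixed Q-function over 4 actions

safe_a = np.array([True, True, True, True])  # run A: action 3 estimated safe
safe_b = np.array([True, True, True, False]) # run B: action 3 estimated unsafe

v_a = q[safe_a].max()                        # V = max over the estimated safe set
v_b = q[safe_b].max()
# identical Q, but v_a = 3.0 while v_b = 1.0: a cover of the Q-class alone
# cannot control the value functions once the feasible set is data-dependent
```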
**Q2** In the driving example, the action set seems continuous, so star-convexity may still hold. Could the authors clarify why it fails here, and why the local point assumption applies instead?
**A2** We agree that the original action space in autonomous driving may appear star-convex. However, in our setup, a pre-trained Collision Avoidance (Collav) module masks out unsafe actions before the RL agent makes decisions. This results in a non-star-convex feasible set. The reason is that collision avoidance itself is inherently non-convex—since it requires the autonomous vehicle to avoid entering a region (typically a ball) around another vehicle. This introduces gaps in the feasible set. So, while the raw action space might be star-convex, the effective decision set seen by the RL agent is not.
To illustrate this concretely, suppose $a_s^0$ is the initial safe action and $ a_s^*$ is the optimal safe action (e.g., moving quickly before the other car arrives). While their convex combination $\alpha \phi(s, a_s^0) + (1 - \alpha)\phi(s, a_s^*)$ might still satisfy the lane-keeping constraint, for some intermediate $ \alpha \in [\epsilon, 1-\iota]$, the resulting action may fall into the collision region, i.e., the car neither waits long enough nor moves fast enough to avoid the other vehicle at the intersection. This violates the collision avoidance constraint and breaks star-convexity.
That said, the Local Point Assumption still holds. Around the initial safe action (e.g., stopping), small perturbations (e.g., low speeds in [0, ε]) remain safe, as they allow the other vehicle to pass. Further, high speeds beyond a threshold $v'$ (up to $v^*$) can also be safe. This assumption only requires local structure and is well-suited for disjoint or irregular safe sets.
Note that, even when $\mathcal{A}$ is star-convex in different applications, the transformed space $\mathcal{F}_s = \phi(s, \mathcal{A})$ may not be star-convex, especially when $\phi$ is nonlinear.
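A one-dimensional sketch of this driving example (with invented numerical thresholds) makes the failure of star-convexity concrete:

```python
def is_safe(speed, eps=0.1, v_prime=0.8, v_star=1.0):
    """Toy safe set for the intersection example: waiting (low speed) or moving
    fast enough are both safe; intermediate speeds enter the collision region.
    The thresholds here are illustrative, not from the paper."""
    return (0.0 <= speed <= eps) or (v_prime <= speed <= v_star)

a0, a_star = 0.0, 1.0            # initial safe action and optimal safe action
midpoint = 0.5 * a0 + 0.5 * a_star

# a0 and a_star are safe, but their convex combination is not -> not star-convex.
# The Local Point Assumption still holds: the neighborhood [0, eps] of a0 is safe.
```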
**Q3** The covering number appears to add only a $\log(K)$ factor, making the correction to the prior proof seem minor.
**A3** The regret analysis in Amani et al. (2021) is incorrect because it applies unconstrained RL covering-number arguments, ignoring that data-dependent safe sets may significantly alter feasible actions across episodes. A naive union bounding approach would only add a log(K) factor, but it fails once the safe set shifts over time. Our OCD technique explicitly separates Q-function estimation (objective) from safe-set complexity (constraint), yielding an additional term $\mathcal{O}(\sqrt{\log(\frac{1}{\tau})})$ (Lemmas C.2 and C.6). This shows how tighter constraints (smaller $\tau$) inflate the covering number, making our correction nontrivial rather than a minor log(K) overhead.
**Q4** Regret seems linear; more episodes may improve the experiment.
**A4** Our regret is sublinear; please see Figure [a] at the anonymous GitHub link: https://anonymous.4open.science/r/ICML-Safe-RL-figures-CE2D.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I still have questions about Q1. My question is whether it is possible to find a covering number for each timestep $K$? In other words, after the safe set $A_h^k$ is fixed for time $k$, it becomes a normal MDP with restricted actions. Can we bound the covering number for each time step $k$ and get a union of them as the final covering number? (It could be time-dependent since it depends on the shift of the safe action set.) I am unsure whether this data-dependent covering number will lead to some problems in the proof.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for this insightful question. Below, we explain why the approach you have outlined does not resolve this issue, and why we need the uniform concentration bound.
Let us begin with why we need the uniform concentration bound.
**Overview of Value-Aware Uniform Concentration**
The key step in our analysis is to control the fluctuations arising in the least-squares value iteration. Specifically, we need to establish the bounds of the form at episode $k$:
$$\left\|\sum_{\tau=1}^{k-1}\phi(s_h^\tau,a_h^\tau)\bigl[V_{h+1}^k(s_{h+1}^\tau)-P_h V_{h+1}^k(s_h^\tau,a_h^\tau)\bigr]\right\|_{(\Lambda_h^k)^{-1}}\leq O(d\sqrt{\log K}).$$
As discussed by Jin et al. (2020) and Ghosh et al. (2022), handling the dependency between the estimated value function $V_{h+1}^k$ and the collected samples $\{s_{h+1}^{\tau}\}_{\tau=1}^{k-1}$ requires value-aware uniform concentration inequalities. This dependency renders the standard self-normalized inequalities insufficient in the model-free setting. The key idea is thus to pre-specify a function class $\mathcal{V}_h$ and subsequently demonstrate that every value function $V_{h}^k$ produced by our algorithm resides within this class, whose log-covering number remains polynomially bounded.
To apply this value-aware uniform concentration inequality, the general strategy is to fix one function class $\mathcal{V}_h$ in advance, chosen large enough so that it includes every possible value function $V_{h}^k$ our algorithm might produce. This single, fixed function class $\mathcal{V}_{h}$ has a polynomial log-covering number. The value-aware uniform concentration inequality then simultaneously guarantees a high-probability bound uniformly over all value functions within this single function class, across all episodes and timesteps.
**The Issue with Bounding Each Episode Separately (why the reviewer’s suggestion will not work)**
Note that treating each episode separately and then taking a union bound, as noted in He et al. (2023) (see Section 6.3), can inflate the log-covering number by a factor **proportional to $K$, rather than $\log(K)$**, even in the unconstrained case. Specifically, since the total covering number up to episode $K$ would be the product of the individual covers from episode 1 through $K$, we have $\mathcal{N} = (\mathcal{N}_q)^K$, resulting in:
$$\log(\mathcal{N}) = \log\left((\mathcal{N}_q)^K\right) = K \log(\mathcal{N}_q),$$
where $\mathcal{N}$ denotes the total covering number up to episode $K$, and $\mathcal{N}_q$ is the covering number at a single episode.
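The blow-up can be seen with a back-of-the-envelope calculation; the per-episode log-covering number below is an illustrative constant, not a value derived in the paper.

```python
import math

# Hypothetical per-episode log-covering number for the Q-function class
# (an illustrative constant, not a value from the paper).
LOG_NQ = 50.0

def per_episode_union(K, log_Nq=LOG_NQ):
    # Log of the product of K separate per-episode covers: K * log(N_q).
    return K * log_Nq

def fixed_class(K, log_Nq=LOG_NQ):
    # One fixed function class, plus a union bound over K episodes.
    return log_Nq + math.log(K)

for K in (100, 1000, 10000):
    print(K, per_episode_union(K), fixed_class(K))
```

The per-episode approach scales linearly in $K$, while the fixed-class approach pays only an additive $\log K$.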
**The Issue with Uniform Bounding over All Possible Safe Action Sets**
Moreover, since we do not know $\mathcal{A}_h^k$ beforehand, how can we achieve the uniform concentration bound at each episode? One possible approach, inspired by your suggestion, is to consider all possible safe action sets at each episode and redefine our class of value functions at episode $k$ as follows:
$$\mathcal{V}^k = \left\{ V_h^{k}(\cdot)\ \middle|\ V_h^{k}(\cdot)=\max_{a \in \mathcal{A}_h^k(\cdot)} Q(\cdot,a),\ \forall \mathcal{A}_h^k \right\},$$
where we explicitly consider all possible estimated safe action sets $\mathcal{A}_h^k$. Since $\mathcal{A}_h^k$ is state-dependent, we must account for all possible action subsets at every state and for every step in the horizon. As a result, the total number of possible safe sets grows exponentially with the size of the state space, the action space, and the horizon, leading to a covering number of order $\exp(|\mathcal{S}| |\mathcal{A}| H)$ even for a single episode $k$. Consequently, this approach quickly becomes intractable as the state space, action space, or horizon $H$ grows.
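The exponential growth can be made concrete in a tabular analogue; the counting argument below is an illustration, not the paper's formal bound.

```python
import math

# Back-of-the-envelope count of state-dependent safe action sets:
# each of the |S| * H (state, step) pairs may receive any of 2^{|A|}
# action subsets, so log(#sets) = |S| * |A| * H * log(2).
def log_num_safe_sets(num_states, num_actions, horizon):
    return num_states * num_actions * horizon * math.log(2)

print(log_num_safe_sets(10, 5, 20))  # ~693 nats even for a tiny MDP
```

Even for 10 states, 5 actions, and horizon 20, the log-count is already in the hundreds, so covering all safe sets is intractable at scale.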
Thus, despite the intuitive appeal, the fundamental challenge of bounding the covering number under constraints remains unresolved.
**Reference:**
He, J., Zhao, H., Zhou, D., and Gu, Q. (2023, July). Nearly minimax optimal reinforcement learning for linear Markov decision processes. In International Conference on Machine Learning (pp. 12790–12822). PMLR. | Summary: This paper studies the theoretical problem of online reinforcement learning with instantaneous hard constraint in the context of *non-star-convex* decision space, which better characterizes some critic domains requiring safety than the existing star-convex counterpart. The authors propose the Non-Convex Safe Least Square Value Iteration (NCS-LSVI) algorithm which achieves sub-linear online regret for two setups: (i) star-convex decision space, where the results refines the mistake of a previous work; (ii) non-star-convex decision space with the local point assumption. The theoretical results are further demonstrated by some numerical experiments.
Claims And Evidence: Yes, most of the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem this paper studies.
Theoretical Claims: Yes, the reviewer is able to briefly check the proofs of the theoretical results.
Experimental Designs Or Analyses: Yes, the reviewer checks the validity of the numerical experiments.
Supplementary Material: Yes, the reviewer briefly checks the proofs in the supplementary material.
Relation To Broader Scientific Literature: The paper contributes to the line of works on online RL theory under safety constraints. Especially, the paper identifies and refines a theoretical analysis error in a previous work [1] and considers another setup (non-star-convex decision space with local point assumption).
**References:**
[1] Amani, S., Thrampoulidis, C., and Yang, L. Safe reinforcement learning with linear function approximation. In International Conference on Machine Learning, pp. 243–253. PMLR, 2021
Essential References Not Discussed: To the best knowledge of the reviewer, no essential references are missing.
Other Strengths And Weaknesses: **Strengths:**
1. The paper considers a more realistic decision space model which is non-star-convex.
2. The paper identifies and refines an error in existing results for online RL under safety constraints.
3. The theoretical results for the online regret bounds in the non-star-convex case are also solid and sound.
**Weaknesses:**
1. While interesting, the local point assumption is somewhat hard to parse. More intuitive explanations of the assumption could be included.
2. The lower bound of the problem studied is unknown and not discussed.
3. The new results for the non-star-convex setup are not well discussed: only the theoretical guarantee is given, without further explanation or discussion.
Other Comments Or Suggestions: Please see the question part below.
Questions For Authors: 1. How to understand that the online regret of the problem gets worse when the constraint threshold $\tau$ is decreasing? Especially, the results mean that when $\tau=0$, the algorithm will have no guarantee. If $\tau=0$, we know that the safe action $a_0$ is valid and a baseline policy is to always choose the safe action which in the worst case results in a linear regret. So how to understand the behavior of the upper bound when $\tau$ is approaching zero?
2. What is the reason to consider observed cost values that are also random (there is a sub-Gaussian noise part in the cost value)? What are typical real-world scenarios for such considerations? Technically, does this also contribute to the hardness of the handling of the non-star-convex decision space?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for providing the constructive review. Please see our responses to your questions below.
**Q1** How should we interpret the regret bound as the constraint threshold $\tau$ decreases, especially as $\tau \to 0$, where always choosing the known safe action yields linear regret?
**A1** We implicitly consider the upper bound $\tilde{\mathcal{O}}(\min(KH, (1 + \frac{1}{\tau}) \sqrt{\log(\frac{1}{\tau}) d^3 H^4 K} + \frac{1}{\epsilon^2 \iota^2}))$, which ensures the regret is at most linear in the worst case (we will clarify this in the final version). Intuitively, smaller $\tau$ corresponds to tighter budgets, limiting safe exploration opportunities. Theorem 20 in Pacchiano et al. (2024) formalizes this by proving a lower bound of $\Omega(1/\tau^2)$, showing the $\tau$-dependence is information-theoretically necessary. As $\tau \to 0$, the regret becomes linear and the policy remains sub-optimal, as the only policy we may be able to play without violating the constraint is the prior safe policy.
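The clipping behavior of this bound as $\tau \to 0$ can be illustrated numerically; the $d^3 H^4$ factor inside the square root and the $1/(\epsilon^2\iota^2)$ term are dropped here for readability, which is an assumption of the sketch.

```python
import math

# Numeric illustration of the clipped bound
# min(K*H, (1 + 1/tau) * sqrt(log(1/tau) * K)); dimension and horizon
# factors inside the sqrt are omitted for readability.
def regret_upper(K, H, tau):
    if tau >= 1:  # log(1/tau) <= 0, so the sqrt term vanishes
        return 0.0
    term = (1 + 1 / tau) * math.sqrt(math.log(1 / tau) * K)
    return min(K * H, term)

K, H = 10_000, 10
for tau in (0.5, 0.1, 0.01, 1e-6):
    print(tau, regret_upper(K, H, tau))  # clips to the linear K*H term as tau -> 0
```

For moderate $\tau$ the $\sqrt{K}$ term dominates; for very small $\tau$ the bound saturates at the trivial linear regret $KH$, matching the discussion above.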
**Q2** Why model cost values with sub-Gaussian noise?
**A2** Measurements are often inaccurate due to sensor errors, environmental uncertainties, or modeling errors. For example, sensor readings in robotics or medical applications are noisy. Assuming observed cost values include sub-Gaussian noise effectively captures these uncertainties. The sub-Gaussian assumption is particularly useful as it models noise with light tails, which is common in practice (e.g., Abbasi-Yadkori et al., 2011).
Regarding the non-star-convex challenge, while randomness in cost and reward observations can add complexity, the primary source of hardness lies in the **environment dynamics** (state transition kernel in the MDP). If the dynamics were independent of the chosen actions, the problem would reduce to a contextual bandit setting, and we could remove the pure-exploration phase (even for non-star-convex decision sets), merely adjusting the exploration-exploitation bonus for near-optimal performance. Thus, the noise in costs and rewards is **not** the main contributor to the problem’s hardness; however, we have included the randomness to make our analysis complete.
**Q3** The non-star-convex results lack explanation and are only stated theoretically.
**A3** Due to rebuttal character limits, we provide a brief version below and will elaborate fully in the final paper:
The regret dependence on $K$ in Theorem 5.4 is $O(\sqrt{K})$. The key difference between Theorems 5.4 and 5.1 lies in the need for a pure exploration phase: while Theorem 5.1 requires none (i.e., $K' = 0$), Theorem 5.4 uses $K' = O(\log(K)/(\epsilon^2 \iota^2))$, which stems from the Local Point Assumption and reflects the added complexity of non-star-convex settings compared to star-convex ones. This term also appears explicitly in the regret bound, capturing the cost of ensuring safe exploration in such environments.
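The magnitude of the pure-exploration phase $K' = O(\log(K)/(\epsilon^2 \iota^2))$ can be illustrated with a quick calculation; the hidden constant is set to 1 here, which is an assumption for illustration only.

```python
import math

# Illustrative magnitude of the pure-exploration length
# K' = O(log(K) / (eps^2 * iota^2)); the hidden constant c is an
# assumption (set to 1), not a value from the paper.
def pure_exploration_length(K, eps, iota, c=1.0):
    return math.ceil(c * math.log(K) / (eps ** 2 * iota ** 2))

print(pure_exploration_length(10_000, 0.1, 0.1))  # grows fast as eps, iota shrink
```

The quadratic dependence on $\epsilon$ and $\iota$ means even modest neighborhood radii translate into long exploration phases, consistent with the $1/(\epsilon^2\iota^2)$ term in the regret bound.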
**Q4** The lower bound is unknown and not discussed.
**A4** Since linear bandits are a special case of our setting, Theorem 20 of Pacchiano et al. (2024) and Theorem 3 of Shi et al. (2023) imply a lower bound of $\tilde{\Omega}(\max(\frac{1}{\tau^2}, d H \sqrt{K}))$ in the star-convex case, indicating that the $\tau$-dependence in constrained RL is unavoidable. Our regret bound is $O((1 + 1/\tau)\sqrt{\log(1/\tau)}\sqrt{K})$, which matches the unconstrained regret in Jin et al. (2020) up to additional terms accounting for instantaneous constraints and non-star-convexity. Our goal was to design an algorithm with sublinear regret that remains safe in both star-convex and non-star-convex environments. Deriving a tight lower bound is beyond the scope of this paper.
**Q5** The Local Point Assumption is hard to parse and would benefit from a more intuitive explanation.
**A5** Thank you for the suggestion. In the final version, we will add the following intuition to clarify Assumption 3.2: This assumption requires that we can perturb the initial safe action $a^0_s$ slightly, i.e., we can safely sample from a small neighborhood around it, with radius determined by $\epsilon$. For example, in the autonomous driving setup in Section 3, if speed $v_0$ is safe, then speeds in $[v_0 - \epsilon, v_0 + \epsilon]$ should also be safe. The second part of the assumption requires a local connectivity property in a small neighborhood around the constraint boundary, e.g., if $v^*$ is optimal and lies exactly at the constraint boundary, then speeds in $[v^* - \iota, v^*]$ must be safe.
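The two parts of the speed example can be written as simple predicates; the interval form of the safety conditions is illustrative, not the paper's formal statement of Assumption 3.2.

```python
# Minimal sketch of the speed example for the Local Point Assumption;
# the interval-based predicates are illustrative only.
def locally_safe(v, v0, eps):
    """Part 1: speeds within eps of the known safe speed v0 are safe."""
    return abs(v - v0) <= eps

def boundary_safe(v, v_star, iota):
    """Part 2: speeds in [v_star - iota, v_star] below the boundary are safe."""
    return v_star - iota <= v <= v_star

print(locally_safe(10.5, 10.0, 1.0))   # True
print(boundary_safe(29.9, 30.0, 0.5))  # True
```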
---
Rebuttal Comment 1.1:
Comment: Most of the concerns of mine have been addressed, and I am willing to raise my score to 3 accordingly. Please add the corresponding explanations of the theorem and the intuitions behind assumptions to the revised version.
---
Reply to Comment 1.1.1:
Comment: We appreciate your support and your decision to raise the score. We will incorporate explanations of the theorem and the intuitions behind the assumptions in the revised version, as you suggested. | Summary: The paper addresses the challenge of safe reinforcement learning (RL) under instantaneous hard constraints in an episodic Linear MDP setting. The focus is on scenarios where the set of safe actions can be non-convex (or non-star-convex) – for example, safe actions might form disjoint or irregular regions due to obstacles, as in autonomous driving. In such tasks, the agent must never violate safety requirements at any time step (as opposed to constraints in expectation or over an episode). This strict requirement is critical in domains like robotics and autonomous vehicles (e.g., always avoiding collisions) and makes exploration challenging.
Contributions: The paper’s primary contributions are: (1) Provable performance guarantees for safe RL in both star-convex and more general non-convex safe spaces. It derives a regret bound on the agent’s performance: roughly $\tilde O\big((1+\frac{1}{\tau})\sqrt{\ln(\frac{1}{\tau})\, d^3H^4K}\big)$ (ignoring constant/log factors), where $d$ is the feature dimension, $H$ the episode length, $K$ the number of episodes, and $\tau$ is the safety threshold. This bound indicates that the cumulative reward regret grows sublinearly in $K$ (i.e. slower than $O(K)$) under certain conditions, meaning the algorithm learns an optimal safe policy over time. Crucially, safety is never violated with high probability throughout training – the algorithm maintains zero constraint violations at all times with overwhelming probability. (2) The paper introduces a novel analytical technique called Objective–Constraint Decomposition (OCD) to handle the complication of a changing safe action set during learning. OCD provides a way to bound the covering number of the value-function class even when the feasible action set varies over time, which is essential for proving uniform convergence and regret bounds. This technique leverages the geometry of star-convexity, essentially showing that if the safe set is nicely shaped (star-convex), small changes in constraint parameters won’t drastically change the optimal value function. Using OCD, the authors correct a flaw in a prior analysis from Amani et al. (2021) – an earlier work on safe RL in linear MDPs – by properly accounting for the dynamic safe set in the theoretical guarantees. (3) For the more difficult non-star-convex case, the paper proposes a new two-phase learning algorithm called Non-Convex Safe LSVI (NCS-LSVI).
In the first phase, NCS-LSVI performs safe exploration using a known initial safe policy: the agent stays in a local neighborhood of this baseline policy, exploring only actions believed to be safe to gradually expand its knowledge of the safe region. Once the safe set is sufficiently well-estimated (stabilized with high confidence), the algorithm enters the second phase of normal exploration–exploitation (optimistic value iteration) but restricted to the now-established safe set. This carefully structured approach allows the authors to handle even infinitely large covering numbers by ensuring the effective policy class remains well-behaved. They prove that NCS-LSVI achieves the same order of regret as in the star-convex case (up to constant terms), marking the first provably efficient safe RL result for non-convex safe action spaces.
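The two-phase structure described above can be sketched schematically; all helper names (`safe_policy`, `estimate_safe_set`, `lsvi_ucb_step`) are hypothetical placeholders, not the paper's actual implementation.

```python
# Schematic sketch of the two-phase NCS-LSVI loop; the helper callables
# are hypothetical placeholders standing in for the paper's subroutines.
def ncs_lsvi(K, K_prime, safe_policy, estimate_safe_set, lsvi_ucb_step):
    data = []  # collected episodes
    for k in range(K):
        if k < K_prime:
            # Phase 1: pure safe exploration in a neighborhood of the
            # known initial safe policy.
            episode = safe_policy(perturb=True)
        else:
            # Phase 2: optimistic LSVI restricted to the estimated safe set.
            safe_set = estimate_safe_set(data)
            episode = lsvi_ucb_step(data, safe_set)
        data.append(episode)
    return data
```

The key design point is that the switch from phase 1 to phase 2 happens only after $K'$ episodes, by which time the estimated safe set is assumed to have stabilized with high confidence.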
Claims And Evidence: Provable Regret Bound: The authors claim a specific $\tilde{O}(d^3 H^4 K)$ regret bound (with additional dependence on the safety threshold $\tau$) for their algorithms. This claim is backed by rigorous proofs outlined in the main text and detailed in the appendices. They present formal theorems for both the star-convex case (Theorem 5.1) and the non-convex case (Theorem 5.4), each establishing the stated regret upper bound. The structure of the proof is clearly sketched: by developing high-probability confidence bounds and using the OCD technique, they derive uniform concentration results that lead to the regret guarantee.
Zero Constraint Violations: A standout claim is that the algorithm incurs no safety violations with high probability during learning. This is a strong claim (ensuring safety at all times, not just asymptotically or on average) and is supported by clear reasoning. The authors ensure this by design: the agent always acts within an estimated safe set that is initialized with a known safe action and expanded cautiously. They provide a theoretical guarantee (with high probability) that the estimated safe set is a subset of the true safe set at every step, which directly implies no constraint violation occurs.
Effectiveness of OCD (Objective–Constraint Decomposition): The authors claim that OCD is a novel technique that properly bounds the covering number in the star-convex scenario and “resolves an error in a previous work”. This refers to correcting the analysis of Amani et al. (2021). They give a detailed explanation of the previous error: in prior work, the value function’s covering number was treated like in standard RL, but in safe RL the feasible action set depends on past data, invalidating a direct application of covering arguments.
Methods And Evaluation Criteria: Proposed Methods: The paper proposes two main methods: the analytical OCD technique and the algorithmic framework NCS-LSVI. Both are well-motivated and appropriate for the problems they target. The OCD technique is essentially a novel analytical lens that decomposes the problem of bounding the value function class into objective and constraint parts. This method is highly appropriate for the star-convex safe RL problem – it directly addresses the core difficulty of a changing feasible action set. The NCS-LSVI algorithm is also well-designed for the given problem. Safe exploration is notoriously hard when the safe region is not a single nice set. The two-phase approach – first conservative exploration using a safe policy, then optimistic exploitation/exploration within the enlarged safe set – is a sensible strategy.
Benchmark Evaluation (Datasets/Environments): The paper’s primary experimental evaluation is on a custom autonomous driving simulation (vehicle merging scenario). In this scenario, the agent must learn to merge lanes while obeying a lane-keeping constraint; a separate module (called “Collav”) filters out obviously unsafe (collision-causing) actions, making the remaining safe set for lane-keeping non-convex (because the safe actions at each moment can be split by the removed unsafe ones). This setup is a reasonable and relevant benchmark for the problem. It’s not a standard benchmark from prior literature, but safe RL scenarios often require custom environments to test specific constraints.
While the chosen scenario is appropriate, one could assess whether the range of experiments is sufficient. The paper appears to present only this one primary scenario in the main text (though they mention additional details and experiments in Appendix A). The merging scenario provides one data point of success. It would have been beneficial to see the algorithm tested on a variety of environments or constraints – for example, a different robotics task or a synthetic grid-world with a complicated safe region – to further validate generality.
Theoretical Claims: Core Analytical Approach: The authors adapt the optimism in the face of uncertainty approach (common in RL theory) to the constrained setting. They maintain confidence intervals for the estimated model or value function and define an optimistic value function that the agent uses to choose actions. This standard approach is complicated here by the safety constraint, which limits the action set. The authors identify that the usual uniform convergence arguments (which rely on fixed function classes) need refinement because the feasible action set is history-dependent. Their analysis via the OCD technique splits the problem: one part deals with the usual uncertainty in the $Q$-function approximation, and another part deals with how errors in constraint estimation can affect the value function. This decomposition is conceptually sound – it acknowledges that even if two $Q$-functions are similar, their induced policies could differ if their feasible actions differ, and it bounds that difference by leveraging star-convexity.
Soundness of Theoretical Results: The regret bounds derived are plausible at a high level given the problem difficulty. For star-convex safe sets, the authors essentially reclaim a sublinear regret guarantee similar to unconstrained RL (with some extra factors for safety).
Assumptions and Their Restrictiveness: The theoretical results do rely on several assumptions, which are explicitly stated. It’s important to assess if any of these are too restrictive or unrealistic:
Linear MDP assumption: The paper assumes the MDP’s dynamics and rewards can be embedded in a linear feature space of dimension $d$. This means essentially that $Q^*$ functions lie in a known $d$-dimensional linear span (or the transition kernel is linear in features).
Known initial safe action/policy: The authors assume that at each state (for star-convex case, as in Amani et al. 2021) or at least at the start of learning (for non-convex case via the Local Point Assumption), the agent has some action it knows is safe. In the non-convex scenario, the Local Point Assumption (detailed in Section 3.2) likely means that from any state reachable by the initial safe policy, there exists a safe action “not too far” in feature space from the initial safe action – ensuring a locally connected safe region. This is a critical assumption for safety: it essentially provides a foothold to begin exploring without any risk. If no initial safe action is known, then ensuring zero violations from scratch is generally impossible (unless the agent is extremely conservative or gets external guidance).
Safety threshold $\tau$ and cost function: The problem formulation involves a cost function $c(s,a)$ and a threshold $\tau$ such that actions are safe if $c(s,a)\le \tau$ (presumably). They likely assume $c(s,a)$ is known or at least can be observed (the agent can tell if it violated the constraint or measure the cost feedback). Possibly, $c(s,a)$ might also be linear in the features or share the linear structure (the paper doesn’t explicitly say, but many constrained RL works assume a linear structure for cost if they assume it for reward).
In terms of how these assumptions affect applicability: the most limiting one is perhaps the need for an initial safe action/policy and local safe connectivity.
Experimental Designs Or Analyses: Soundness of Design: The experiment is designed to mirror the paper’s problem setting: an autonomous driving merging scenario is chosen to exemplify an MDP with an instantaneous hard constraint (stay in lane) and a non-convex action feasibility due to an external collision-avoidance system.
Clarity of Goals and Metrics: The goal in the experiment is for the agent to learn an optimal policy (presumably merging efficiently without leaving its lane) and the metrics used are regret and safety violations which are suitable in this context.
Results and Analysis: The results (as described) show that with a sufficiently long pure exploration phase ($K'=2000$ episodes of safe random exploration), the algorithm achieves sublinear regret growth, indicating it successfully learns the optimal policy.
This analysis is correct but minimal – it states the key observation (successful learning with sublinear regret) and doesn’t delve into much more detail. For instance, they didn’t explicitly say what the optimal policy’s performance was or how quickly the algorithm approached it in absolute terms.
Missing baselines: No baseline algorithm is compared. It would be informative to compare NCS-LSVI with, say, the initial safe policy (which would have zero violations but likely higher regret, essentially cumulative regret increasing linearly since it never learns). Or compare with a naive RL algorithm that ignores safety (which would likely achieve lower regret initially but incur violations). Such comparisons could illustrate the trade-off between safety and performance. The absence of baselines means we only see that the algorithm works, but not how much better it is than “do nothing” or how much worse it might be than an unsafe method. Including at least the “Initial Safe Policy” as a baseline regret (which would accumulate regret if that policy is suboptimal) would contextualize the results.
Lack of multiple scenarios: It would be nice to see another environment – perhaps a different type of constraint or a different shape of safe region – to confirm the algorithm’s versatility.
Supplementary Material: The supplementary material significantly complements the paper. It ensures that readers have access to all details necessary to fully verify the theoretical aspects of this work
Relation To Broader Scientific Literature: The paper positions its contributions in the context of existing literature on safe reinforcement learning and constrained MDPs, particularly those with theoretical guarantees.
Safe RL with Instantaneous Constraints: Prior to this work, Amani et al. (2021) tackled a very similar problem – safe RL in linear MDPs with per-step (instantaneous) constraints under the assumption of a star-convex safe action set. Amani et al. introduced the idea of maintaining an estimated safe set and guaranteed safety by always acting within that set. However, as noted in the paper, their theoretical analysis claiming sublinear regret was flawed. The current work directly builds on and corrects Amani et al.’s approach. The OCD technique in this paper fixes the analytical gap, enabling a correct proof of sublinear regret in the star-convex case.
Essential References Not Discussed: There are possibly a few key missing references. Berkenkamp et al. (2017) – Safe Model-Based Reinforcement Learning with Stability Guarantees. This work deals with safe exploration in continuous state spaces using control-theoretic (Lyapunov) methods. The idea of slowly exploring to expand the region of safe actions is explored in this paper. It would be nice if the authors included a discussion of this method.
Other Strengths And Weaknesses: Strengths:
The paper tackles a crucial problem in reinforcement learning – ensuring safety at every step of learning.
The work makes original contributions in theory. The introduction of the Objective–Constraint Decomposition (OCD) is a novel analytical technique that advances understanding of how to handle dynamic feasible sets in RL.
A major strength is that the paper provides strong theoretical guarantees – provably sublinear regret (meaning the agent approaches optimal performance efficiently) while incurring zero safety violations (with high probability).
The paper demonstrates a deep understanding of prior work by identifying and correcting a mistake in earlier research (Amani et al. 2021).
Weaknesses:
Restrictive Assumptions: One of the paper’s weaknesses is that the results rely on strong assumptions that may limit direct real-world applicability. The linear MDP assumption (linear function approximation with known feature map) is a simplification that might not hold in complex environments where one would truly need safe RL (e.g., driving with raw images, robotics with complex dynamics). Similarly, the requirement of a known initial safe action/policy and the Local Point Assumption mean the method requires prior knowledge to get started and that the safe region is nicely connected. If the environment’s safe space is disconnected or if no baseline policy is known, the current approach wouldn’t apply. These assumptions, while necessary for theoretical guarantees, weaken the generality of the results.
The derived regret bound, $\tilde{O}(d^3 H^4 K(1+1/\tau)\sqrt{\ln(1/\tau)})$, has quite large polynomial factors ($d^3 H^4$) and a linear dependence on $K$ (for the leading term). This suggests that the worst-case sample complexity could be high. In simpler terms, the algorithm might require a large number of episodes to reach near-optimal performance, especially if the feature dimension or horizon is long.
Limited Empirical Evaluation: The experimental evaluation, although positive, is quite limited. Only one primary scenario (autonomous merging) is presented in the main paper. There is no comparison to any baseline methods or ablation of algorithm components. This makes it hard to gauge how robust or effective the approach is relative to alternatives or in different conditions.
Scope of Safety Consideration: The paper deals with a single constraint (like one cost with threshold $\tau$). In many real applications, there could be multiple constraints (e.g., joint torque limits, collision avoidance, battery usage all at once). The current framework is not explicitly extended to multiple constraints.
Algorithm Complexity and Practicality: The algorithm NCS-LSVI is conceptually sound, but it may be complex to implement in practice. It likely requires maintaining large confidence sets, computing optimistic value functions via some form of extended value iteration at each episode, and carefully managing two phases. For high-dimensional problems or ones requiring function approximation beyond linear, this could be challenging.
Another minor weakness in presentation is that the related work discussion is largely deferred to the appendix. This can make it a bit difficult for a reader of the main paper to immediately see how this fits with prior efforts.
Other Comments Or Suggestions: Clarity – Definition of the Local Point Assumption: The paper introduces a “Local Point Assumption” for the non-convex case (mentioned around Section 3). While the general idea can be inferred (it ensures some local connectivity of the safe set around the initial policy), it would benefit the reader to have a very clear definition in the main text
In the description of OCD’s results, the text says “(See Rmark 6.1)”; presumably this should be “Remark 6.1”.
“scaler $\alpha_3$” in Appendix (it should be “scalar” I think)
Questions For Authors: 1. The regret bound derived is $\tilde O(d^3 H^4 K + \text{const})$. Do you believe this dependence on $K$ (essentially linear) and on $d, H$ is order-optimal for safe RL, or could it be improved? In particular, is it possible to achieve a $\tilde O(\sqrt{K})$ regret (as in unconstrained linear RL) while still guaranteeing zero violations, or is the linear-in-$K$ growth a fundamental cost of imposing instantaneous safety?
2. The bound has factors $(1+1/\tau)\sqrt{\ln(1/\tau)}$ indicating slower learning for smaller $\tau$. How critical is knowing the exact threshold value $\tau$ for the algorithm?
3. How would you extend your approach to multiple simultaneous safety constraints (e.g., constraints $c_1(s,a)\le \tau_1$ and $c_2(s,a)\le \tau_2$ that must both hold)?
4. The Local Point Assumption essentially assumes a connected safe region around the initial safe policy. Suppose in an environment the safe state-action space actually consists of two disjoint regions, A and B. The initial safe policy lives in region A, but the globally optimal safe policy is in region B (which is not reachable following safe actions from A). In that case, your algorithm would never discover region B (since it can’t safely get there). Is my understanding correct that the method is limited to the connectivity of the initial safe region? And if so, do you have thoughts on strategies to deal with such situations?
5. In NCS-LSVI, the length of the pure exploration phase $K'$ is a crucial parameter. How should one choose $K'$ in practice? The experiments tried a few fixed values (like 2000) – was this based on a theoretical formula (e.g., a function of $\epsilon,\iota,d,H$) or more from empirical intuition? If $K'$ is chosen too small, what failure mode do you observe – is it that the safe set hasn’t stabilized and thus the algorithm remains overly cautious or incurs regret due to a small feasible action space? Conversely, if $K'$ is very large, aside from more exploration cost, does it ever hurt the stability (maybe by over-exploring)?
6. You mention interest in exploring “Deep Safe RL” with nonlinear function approximation. Do you foresee the OCD technique extending to certain classes of nonlinear function approximators (for example, kernel methods or neural networks with particular architectures)?
7. In your algorithm, you maintain an estimated safe set of actions at each state. How is this represented and updated in implementation? For example, in the driving scenario, is the safe set represented implicitly via upper confidence bounds on the constraint function $c(s,a)$? Do you discretize actions or use some parametric form to check which actions are safe?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback. We respond to each of your comments below.
**Q1** Is the regret bound $\tilde{O}(d^3 H^4 K + \text{const})$ optimal, or can a bound of $\tilde{O}(\sqrt{K})$ be achieved?
**A1** Please note that the regret is already $\tilde{O}(\sqrt{K})$. The actual regret bound is given by $\tilde{\mathcal{O}}((1 + \frac{1}{\tau}) \sqrt{\log(\frac{1}{\tau}) d^3 H^4 K} + \frac{1}{\epsilon^2 \iota^2}),$ as stated in Theorems 5.1 and 5.4. This is already sublinear in $K$, and the dependence on $d$ and $H$ is under the square root, which matches the unconstrained regret in Jin et al. (2020) up to additional terms accounting for instantaneous constraints and non-star-convexity. We're happy to address any further questions.
**Q2** How important is it to know the exact value of $\tau$?
**A2** In many applications, $\tau$ is known. For example, in lane keeping, the car can measure its distance from lane boundaries, and in financial or power systems, $\tau$ corresponds to budget or power limits. When $\tau$ is unknown, a conservative estimate can be used, and our algorithm and regret bound continue to hold. While this may lead to a slightly higher regret, the dependence on $K$ remains $\tilde{\mathcal{O}}(\sqrt{K})$. The regret’s dependence on $\tau$ is inherent, as established by the $\Omega(1/\tau^2)$ lower bound in Theorem 20 of Pacchiano et al. (2024).
**Q3** How would you extend your approach to multiple constraints?
**A3** Our framework extends to multiple constraints by defining $\mathcal{A}^k_h(s) = \cap_{j=1}^M \mathcal{A}^{k,j}_h(s)$ in Line 11, where each $\mathcal{A}^{k,j}_h(s)$ is the estimated safe set for constraint $j$. Hence, our approach is readily applicable to multi-constrained settings. We will include the above discussion in the final version.
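Under a discretized action set, the intersection described in A3 amounts to keeping only the actions that every per-constraint safe set admits. A minimal sketch (hypothetical, not the paper's implementation; it assumes each estimated safe set is built by thresholding a UCB cost estimate at its $\tau_j$):

```python
# Hypothetical sketch: the joint estimated safe set over a discretized
# action grid is the intersection of per-constraint safe sets, where an
# action is kept for constraint j only if its UCB cost estimate is <= tau_j.

def safe_set(ucb_costs, taus):
    """ucb_costs[j][a]: UCB estimate of constraint j's cost for action a;
    taus[j]: threshold for constraint j. Returns indices of jointly safe actions."""
    n_actions = len(ucb_costs[0])
    per_constraint = [
        {a for a in range(n_actions) if ucb[a] <= tau}
        for ucb, tau in zip(ucb_costs, taus)
    ]
    joint = set.intersection(*per_constraint)
    return sorted(joint)

# Two constraints over 5 discretized actions:
ucb_c1 = [0.1, 0.4, 0.9, 0.2, 0.6]
ucb_c2 = [0.5, 0.2, 0.1, 0.3, 0.9]
print(safe_set([ucb_c1, ucb_c2], [0.5, 0.5]))  # → [0, 1, 3]
```

With a single constraint ($M = 1$) this reduces to the original per-constraint safe set, so the extension only changes how Line 11 builds $\mathcal{A}^k_h(s)$.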
**Q4** Is the method limited by the connectivity of the initial safe region?
**A4** Unlike star-convexity, which requires global connectivity, the Local Point Assumption only imposes local conditions near the initial safe action and constraint boundary, making it suitable for disconnected decision sets. Our method, NCS-LSVI, is specifically designed to handle such environments.
**Q5** Limited Empirical Evaluation
**A5** Please see our response to Reviewer sJHh, Comment A2.
**Q6** How should K′ be chosen in NCS-LSVI?
**A6** Based on Theorem 5.4, in non-star-convex settings, $K'$ should be $O(\log(K)/(\epsilon^2 \iota^2))$. In star-convex cases (Theorem 5.1), $K' = 0$, so no pure exploration is needed. The choice of $K'$ only affects regret and does not impact the safety guarantee or cause stability issues.
**Q7** Can the OCD technique be extended to nonlinear settings?
**A7** The key insight of OCD, decoupling the effect of Q-function estimation from that of the estimated safe set, naturally extends to nonlinear settings. In these settings, the policy is still recovered over an estimated safe set, so controlling both sources of error remains essential. With appropriate smoothness assumptions, our covering number analysis can be adapted.
**Q8** The method relies on strong assumptions, such as linear MDPs and a prior safe policy.
**A8** Linear MDPs are widely studied and effective in practice. Zhang et al. (2022) demonstrate state-of-the-art performance on MuJoCo and DeepMind Control benchmarks, and Jin et al. (2020) show that linear MDP solutions offer regret guarantees even when the true MDP is nonlinear. Also, a safe sub-optimal policy can often be identified offline using domain knowledge (Amani et al., 2019; Khezeli & Bitar, 2020; Shi et al., 2023).
**Q9** NCS-LSVI may be computationally expensive. How is the estimated safe set computed?
**A9** As we explained in Appendix A, to improve computational efficiency, our implementation updates the $Q$-function estimates every $2^n$ episodes rather than at each episode.
Regarding the complexity of maintaining the estimated safe set and computing UCB-based values, in some cases we can skip the explicit computation of lines 11 and 12 in Algorithm 1 and directly solve the constrained optimization in line 13. Since the $Q$-function is linear in $\phi$, the maximum lies on the boundary of $\phi(s, \mathcal{A}_h^k)$. When the safe set has a favorable structure (e.g., unions of convex sets), this is tractable. In complex or nonlinear cases, we use discretization, heuristics, or nonconvex solvers. We will add this to Appendix A.
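The discretized variant mentioned above can be sketched as follows (hypothetical names, not the paper's code; since the Q-function is linear in the feature map $\phi$, each candidate action's value is a dot product, and an action is admitted only when its optimistic constraint estimate stays below $\tau$):

```python
import numpy as np

# Hypothetical sketch of the constrained step in line 13 under discretization:
# with Q linear in features, Q(s,a) = w @ phi(s,a); we pick the action
# maximizing the Q estimate among actions whose constraint UCB is <= tau.

def greedy_safe_action(w, phis, cost_mean, cost_bonus, tau):
    q = phis @ w                      # linear Q-values for each candidate action
    ucb = cost_mean + cost_bonus      # optimistic (upper) constraint estimate
    safe = ucb <= tau
    if not safe.any():
        return None                   # fall back to the known safe policy
    q_safe = np.where(safe, q, -np.inf)
    return int(np.argmax(q_safe))

phis = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
w = np.array([0.2, 1.0])
cost_mean = np.array([0.1, 0.3, 0.7])
cost_bonus = np.array([0.05, 0.05, 0.05])
print(greedy_safe_action(w, phis, cost_mean, cost_bonus, tau=0.5))  # → 1
```

Here action 2 has the highest Q-value but its constraint UCB (0.75) exceeds $\tau = 0.5$, so the best jointly safe action (index 1) is returned instead.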
**Q10** The related work omits Berkenkamp et al. (2017).
**A10** We will move the related work to the main text and include Berkenkamp et al. (2017) in the final submission.
**References**
Zhang et al. (2022), Making Linear MDPs Practical via Contrastive Representation Learning, International Conference on Machine Learning (ICML), pp. 26447–26466. PMLR. | null | null | null | null | null | null |
PANDAS: Improving Many-shot Jailbreaking via Positive Affirmation, Negative Demonstration, and Adaptive Sampling | Accept (spotlight poster) | Summary: The paper introduces PANDAS (Positive Affirmation, Negative Demonstration, and Adaptive Sampling), which is a method for improving many shot jailbreaking (MSJ) made up of three smaller techniques. The first two (positive affirmation and negative demonstration) entail inserting fake statements as if from the target model either complying or refusing harmful requests in the long-context of MSJ attacks, and then either follow those statements up with a positive comment (if a fake example of model compliance) or negative comment (if the fake comment from the model is a refusal). They also do adaptive sampling to discover what mixture of harmful behaviours is best for different target harmful queries. They also analyse how attention inside the models they attack is affected by MSJ and other similar long-context attacks, up to 32 shots, and find that as demonstrations increase, models allocate more attention to earlier examples, supporting the hypothesis that MSJ works by reinforcing instruction-following patterns.
Their experiments seem to show that PANDAS consistently outperforms MSJ and other baselines (including the authors' improved version of MSJ based on i-FSJ) on AdvBench and HarmBench.
Claims And Evidence: Overall I think the papers' claims are well substantiated!
Their primary claim - that PANDAS improves jailbreaking success rates over baseline methods - is well-supported by comprehensive experimental results across multiple (open source only) models and datasets.
The claim about each component (PA, ND, AS) contributing to performance gains is adequately demonstrated through ablation studies shown in Table 2
The paper's finding that jailbreaking effectiveness doesn't continuously improve with more shots is interesting and supported by experimental data, but I mostly believe their first hypothesis that this is due to limits in context retention for the size/capability of the models they're working with. But that just means I see this paper's results more as coming from the angle of "for a limited context size, how do you produce the most harmful long-context jailbreak" - still very interesting to me.
The attention analysis provides plausible evidence for how PANDAS affects model behaviour, but it's limited to a small number of shots (up to 32)
Methods And Evaluation Criteria: AdvBench and HarmBench are standard datasets of harmful queries to use.
I'm impressed by their evaluation approach (tracking both common refusal phrases, and using LlamaGuard to classify the outputs as harmful or not). The authors mention some manual validation on Qwen but I'd ideally want to hear that they checked some samples for all the models.
I thought it was great that they introduced i-MSJ as a baselines, and it's clever to derive that baseline from the i-FSJ paper in order to come up with an alternative method to measure against.
Overall, this paper tests pretty comprehensively across different defense approaches, but I'd like to see more compositions of different defenses where possible.
Theoretical Claims: No formal/theoretical proofs in this paper.
Experimental Designs Or Analyses: The experimental design comparing across multiple models with different context capabilities is sound
The ablation study effectively isolates the contribution of each component
The manual inspection of responses from Qwen-2.5-7B adds credibility to the analysis of the gap between ASR-L and ASR-R but I'd like to see it extended to the other models.
The attention analysis methodology seems appropriate but is limited to smaller context windows (32 shots), but that's likely for practicality
The defense evaluation tests each defense individually but does not explore compositions of multiple defenses, which could be more realistic (where possible)?
Supplementary Material: yes, all appendices
Relation To Broader Scientific Literature: PANDAS builds directly on the recent MSJ technique introduced by Anil et al. (2024), and related work on similar jailbreaking methods, like few-shot jailbreaking).
The work connects to in-context learning literature, particularly works studying demonstration design and selection - and the negative demonstration concept draws from recent research on "learning from mistakes" in benign ICL settings, as the authors cite.
The authors refer to relevant and contemporary defense and evaluation literature.
Essential References Not Discussed: I can't think of any, the paper seems comprehensive
Other Strengths And Weaknesses: The attention analysis provides useful insights into the mechanisms behind MSJ but doesn't feel especially connected to the rest of the paper. Not a big issue and still interesting to see a quick dive into attention mechanisms behind MSJ! Definitely better to have in than not.
I don't think the authors make a strong enough case in the paper as written about the practical reasons to use PANDAS over MSJ - especially if it might be the case that PANDAS does not exhibit the same scaling laws! The main thing I'd be curious to see is whether the authors explored how effective the technique is at adjusting the slope of the scaling law / reducing the number of shots needed for a given jailbreak
Other Comments Or Suggestions: Would be interested to know how PANDAS might transfer to reasoning models
Questions For Authors: Why did you not evaluate PANDAS on more capable commercial models like the GPT series, or various Claudes or Geminis? As far as I understand the only part of the paper which needs internals access is the attention analysis, which I'd be fine for you to skip in order to attack SOTA models. Would you expect different results on these models?
For the models you tested on, at what point would the simplicity of adding more shots outweigh the complexity of implementing PANDAS (if ever)?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and detailed response. We are glad that the reviewer finds our claims well-substantiated and our experimental design sound and valid. In the revised paper, we will include a discussion on the manual inspection of model responses during evaluation, present additional experiments on the composition of defenses, and expand the discussion on model selection and the challenges of evaluating proprietary models. Below, we address specific concerns of the reviewer.
**Manual inspection of model responses:** We indeed verified responses from all models. We will clarify this in the revised version of the paper.
**Compositions of multiple defenses:** Since some defense methods have limited or no effect, and some even increase the ASR, we focus specifically on compositions that generally have a positive effect on reducing ASR. For MSJ, we combine SmoothLLM and Self-Reminder; at 64 shots, MSJ has an ASR-L of 36% and an ASR-R of 40%. For PANDAS, combining Self-Reminder and ICD-Exact results in ASR-L and ASR-R values of 70% and 82%, respectively. We observe that combining defenses can further decrease the effectiveness of jailbreaks.
**Practical reasons to use PANDAS over MSJ:** The modifications introduced by PANDAS, particularly PA and ND, are straightforward to implement and can be used as direct plug-ins to MSJ with minimal overhead. We will highlight this in the revised version of the paper.
**On scaling laws:** In our paper, we focused on using attention analysis to understand how PANDAS improves upon MSJ and did not explore scaling laws. However, we agree with the reviewer that investigating how PANDAS interacts with scaling behavior is an interesting direction for future work. Additionally, the scaling analysis in Anil et al. [1] was based on Claude 2, which was not specifically fine-tuned for robustness in long-context scenarios. As more recent models are likely trained with such robustness in mind, examining their scaling behavior would be particularly valuable.
**Evaluations on reasoning models:** Running MSJ and PANDAS is memory-intensive, and the large size of most reasoning models makes them even more challenging to evaluate. While preparing the original submission, we tested DeepSeek-R1-Distill-Llama-8B [2], a small reasoning model fine-tuned based on pre-trained Llama-3.1-8B, but found it to be easily jailbroken, and therefore omitted it from the paper.
**Evaluations on proprietary models:** Our primary goal is to demonstrate that PANDAS achieves improved jailbreaking effectiveness compared to MSJ. As we have limited access to credit for commercial models, we decided to focus our evaluation on the latest open-source models, all of which were released between May and December 2024, with most incorporating safety guardrails. Notably, Llama-3.1-8B is specifically fine-tuned for robustness in long-context scenarios. Our results indicate that PANDAS substantially improves jailbreaking performance on these open-source models.
To additionally validate our findings, we conduct evaluations on a commercial model, GPT-4o-mini, which has the lowest per-token cost among available commercial options (Claude’s and GPT’s). Using 128 shots and only a single restart on AdvBench, MSJ and PANDAS achieve an ASR of 0.19% (1 out of 520) and 2.12% (11 out of 520), respectively. For cost reasons, we do not perform Bayesian Optimization and adopt uniform sampling across all malicious topics. This result shows that, despite both methods showing low ASR, the effectiveness of MSJ can still be improved.
[1] Anil et al., Many-shot jailbreaking. In NeurIPS’24
[2] DeepSeek-A, DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
[3] Zheng et al., Improved few-shot jailbreaking can circumvent aligned language models and their defenses. In NeurIPS’24
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful and detailed response. Happy to keep my strong accept score. | Summary: This paper proposes PANDAS, an improvement on the many-shot jailbreaking (MSJ) by adding positive affirmations (PA), negative demonstrations (ND), and adaptive sampling (AS) demonstrations using Bayesian optimization. Positive affirmations acknowledge the desired behavior in the fabricated model output, and negative demonstrations are intentional rejected generations followed by a correction. The authors show the empirical improvements of employing PANDAS on HarmBench, AdvBench and AdvBench50, showing that it is the most effective jailbreaking method among the original MSJ and i-MSJ. They find that jailbreaking effectiveness is not strictly correlated with the number of shots. They evaluate results with ASR-R, which uses text-matching, and ASR-L, which is an LLM as a judge. They explore the existing jailbreak-defense mechanisms such as Self-Reminder, Retokenization, etc. and show that PANDAS remain effective.
## update after rebuttal
I have read and appreciated the authors' rebuttal and their responses to my questions. After considering their clarifications, I have decided to maintain my original score. While PANDAS demonstrates some effectiveness in enhancing jailbreaking methods, it builds heavily on the existing many-shot jailbreaking (MSJ) approach. Although this extension is interesting, I find the novelty and significance to be somewhat limited. As such, a weak accept continues to best reflect my evaluation of the paper.
Claims And Evidence: The claims are supported by experimental evidence such as empirical results of PANDAS on the three datasets, ablation studies for the individual effectiveness of PA, ND and AS, and the attention analysis. One slight issue is that the authors use open-source, uncensored, helpful-only models to generate the malicious questions and answers that form the many-shot demonstration history; however, they do not discuss the quality or correctness of these generations.
Methods And Evaluation Criteria: HarmBench and AdvBench are popular jailbreaking benchmarks. Using lexical similarity (ASR-R) and LLM-as-a-judge (ASR-L) are also common metrics to measure the attack success rate (ASR), although it is expected that ASR-R will not capture refusals that decline to answer without using an explicit refusal phrase.
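To make this limitation concrete, a rule-based ASR-R check can be sketched as below (the refusal-phrase list is illustrative, not the paper's exact list); any refusal phrased outside the list would be scored as a success:

```python
# Hypothetical sketch of a rule-based ASR-R check: an attack counts as
# successful if the response contains none of a fixed list of refusal phrases.
# The phrase list here is illustrative, not the paper's exact list.

REFUSAL_PHRASES = [
    "I can't help with that",
    "I cannot assist",
    "I'm sorry",
    "As an AI",
]

def is_jailbroken_rule(response: str) -> bool:
    low = response.lower()
    return not any(p.lower() in low for p in REFUSAL_PHRASES)

def asr_r(responses):
    return sum(is_jailbroken_rule(r) for r in responses) / len(responses)

resps = ["I'm sorry, I can't help with that.", "Sure, here are the steps..."]
print(asr_r(resps))  # → 0.5
```

An LLM-based judge (ASR-L) sidesteps the fixed-phrase limitation, which is why the two metrics can diverge.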
Theoretical Claims: Equation 6 seems to have a minor error where the summation is given as the summation of i = 1 to i = i-1, where it should be j = 1 to j = i-1. Also, on page 7, it is not clear why in an n-shot MSJ the breakdown is from N_1 to N_(n+2) instead of up to N_(n+1). S_i,j should also be explicitly defined. Overall, the theoretical idea for analyzing segment-level attention is agreeable, however, the presentation of the mathematical notation can be improved.
Experimental Designs Or Analyses: As far as I can judge from the paper, as I have not seen the code/implementation, the experimental designs are sound. Two open questions remain: how trustworthy the LLM-as-a-judge evaluation framework is for this task, since no correlation with human evaluation is provided, and the quality of the question-answer pairs generated by the open-source models for the negative demonstrations.
Supplementary Material: I have read the appendix, and did not see any issues. I did not find a code, or data repository for this submission.
Relation To Broader Scientific Literature: This research contributes to the jailbreaking literature, providing further insight into the weaknesses of LLMs and potential problems that need to be addressed to ensure AI safety. It shows that MSJ can be further improved to increase attack success rate.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Ideas in the paper are well presented, with a clear and easy-to-follow structure.
- Empirical results, and ablation studies validate the effectiveness of PANDAS
Weaknesses:
- How does PANDAS and MSJ perform compared to other jailbreaking methods?
- How does PANDAS perform with closed-source proprietary models, and/or with larger open-weight models?
- What is the cost performance tradeoff between many-shot jailbreaking and other methods that use less context?
- No discussion of how PANDAS might be mitigated
Other Comments Or Suggestions: Typos/suggestions:
- Line 427, “instruct-following”, maybe that was supposed to be instruction-following
- Figure 4, it is not clear what “with negative demonstration” is supposed to mean, isn’t what ND represents already?
- Table 4, the caption only mentions successful samples but the table presents both successful and failed samples; also, it might be better to show the change relative to the unshuffled performance to illustrate the little-to-no difference.
- Trustworthy Machine Learning might be a more suitable category for this research.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and detailed response. We are glad that the reviewer finds our claims well-supported by empirical evidence and our experiment designs sound and valid. We will revise the paper by clarifying the quality of the malicious demonstrations, improving the overall presentations in Sec. 4.4., and expanding the discussion on model selection and the challenges of evaluating proprietary models. Below, we address specific concerns of the reviewer.
**Quality of the malicious demo:** For all malicious demonstrations, we use the Llama-Guard-3 and ensure all queries are evaluated as unsafe. Manual inspections are also performed.
**Error in Equation 6:** Thank you for pointing out the error; it indeed should be $R_i = 1-S_{i,i} = \sum_{j=1}^{i-1} S_{i,j}$. Also, the definition of $S_{i,j}$ can be found in Equation 5 at the bottom of page 7.
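As a quick numerical check (not part of the paper), the corrected identity holds for any row-stochastic, causally masked attention matrix, since row $i$ sums to 1 over positions $j \le i$; the same argument carries over to the segment-level $S_{i,j}$ after aggregation:

```python
import numpy as np

# Sketch verifying the corrected identity: for a causally masked attention
# matrix S whose rows sum to 1 over positions j <= i, the attention row i
# places outside itself satisfies R_i = 1 - S[i, i] = sum_{j<i} S[i, j].

rng = np.random.default_rng(0)
n = 6
logits = rng.normal(size=(n, n))
mask = np.tril(np.ones((n, n), dtype=bool))       # causal mask: keep j <= i
logits = np.where(mask, logits, -np.inf)
S = np.exp(logits - logits.max(axis=1, keepdims=True))
S /= S.sum(axis=1, keepdims=True)                 # row-stochastic over j <= i

for i in range(1, n):
    R_i = 1.0 - S[i, i]
    assert np.isclose(R_i, S[i, :i].sum())
print("identity holds")
```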
**Clarification on the n-shot MSJ breakdown:** We include the target prompt as a part of the definition. That is, $N_{n+1}$ represents the beginning of the target prompts and $N_{n+2}$ marks the end of the entire prompt.
**LLM-as-a-judge:** Using LLM as a judge as a way to evaluate the effectiveness of jailbreak is a popular approach adopted in many recent works. In addition to using LLM alone, we also manually verify responses from all models.
**Comparison with other methods:** In this paper, we focus on methods in long-context scenarios. To the best of our knowledge, the only relevant prior work is i-FSJ (improved few-shot jailbreaking), which we extend to the many-shot setting.
**Model Selection:** Mazeika et al. empirically demonstrate that jailbreak effectiveness is consistent within model families despite differences in parameter count, with the main variation occurring across different model architectures. Based on this, we focus on models with around 8B parameters. This choice also allows us to evaluate with a large number of shots, which is important given the GPU memory demands of MSJ and PANDAS. Furthermore, our experimental setup follows recent work on few-shot jailbreaking.
**Evaluations on proprietary models:** Our primary goal is to demonstrate that PANDAS achieves improved jailbreaking effectiveness compared to MSJ. As we have limited access to credit for commercial models, we decided to focus our evaluation on the latest open-source models, all of which were released between May and December 2024, with most incorporating safety guardrails. Notably, Llama-3.1-8B is specifically fine-tuned for robustness in long-context scenarios. Our results indicate that PANDAS substantially improves jailbreaking performance on these open-source models.
To additionally validate our findings, we conduct evaluations on a commercial model, GPT-4o-mini, which has the lowest per-token cost among available commercial options (Claude’s and GPT’s). Using 128 shots and only a single restart on AdvBench, MSJ and PANDAS achieve an ASR of 0.19% (1 out of 520) and 2.12% (11 out of 520), respectively. For cost reasons, we do not perform Bayesian Optimization and adopt uniform sampling across all malicious topics. This result shows that, despite both methods showing low ASR, the effectiveness of MSJ can still be improved.
**Cost performance tradeoff:** When comparing long-context jailbreaking methods to others, we observe a trade-off between inference-time and compute-time cost. For example, popular jailbreak methods like GCG append a short suffix to the target prompt, introducing minimal overhead during inference compared to MSJ and PANDAS. However, GCG typically requires multiple model queries at inference time to optimize the suffix. In contrast, MSJ and PANDAS only require generating malicious demonstrations once, without repeated queries to the target model. These demonstrations can then be repeatedly sampled and reused to construct new jailbreak prompts, potentially reducing inference cost over time.
**Mitigation of PANDAS:** We focus on improving the original MSJ approach and use attention analysis to understand the effectiveness of PANDAS. In Table 3, we demonstrate PANDAS’ performance on models equipped with various defense methods. The results show that Self-Reminder and ICD-Exact are more effective in reducing ASR compared to other defenses. As requested by reviewer dnBC, we additionally evaluated the composition of defenses: combining Self-Reminder and ICD-Exact results in ASR-L and ASR-R values of 70% and 82%, respectively. We observe that combining defenses can further reduce the effectiveness of jailbreaks.
**Clarification of Figure 4:** As discussed below Figure 4, a refusal phrase is inserted after the first question, making the first exchange between the human and assistant a negative demonstration. In Figure 4, all other plots start at index 1, whereas the ND plot starts at index 0. This was an intentional choice to improve clarity by highlighting the presence of the ND example at the beginning of the sequence. | Summary: The paper describes a method ("PANDAS") to make many-shot jailbreaks more effective by:
- inserting positive affirmations (encouraging phrases that reinforce the instruction-following behavior)
- inserting negative demonstrations (examples of recovery from refusal)
- using adaptive sampling (instead of uniformly sampling topics, using a Bayesian optimization framework to identify the optimal sampling frequency of each topic)
They find that:
- PANDAS consistently outperforms vanilla MSJ (Anil et al.)
- Jailbreaking effectiveness doesn't always increase with more shots (peaks at 64 shots for some models)
- Each component of PANDAS independently improves jailbreak success rates
- Most defense methods (like Perplexity filtering, Retokenization, SmoothLLM) are ineffective against PANDAS
Claims And Evidence: Overall, the claims made in the submission are well-supported by the evidence provided.
- The authors test their approach across five different models (Llama-3.1-8B, Qwen-2.5-7B, GLM-4-9B, openchat-3.6-8B, OLMo-2-7B), three datasets (AdvBench, AdvBench50, HarmBench), and compare against established baseline methods (MSJ, i-MSJ)
- Table 1 shows that PANDAS outperforms baseline methods across most model/dataset combinations
- Table 2 shows how each component of PANDAS contributes separately to the overall effectiveness
- The authors analyze how the components of PANDAS techniques affect attention patterns, suggesting a mechanistic explanation of why their approach works
- They evaluate against multiple defensive techniques (Self-Reminder, ICD, Perplexity Filtering, Retokenization, SmoothLLM), showing the robustness of their approach to common defense methods
One weakness is that the authors focus primarily on 7-9B parameter models, which may not represent how PANDAS performs on larger models that could be more resistant to jailbreaking.
Methods And Evaluation Criteria: The experiments evaluate PANDAS against baseline methods (MSJ and iMSJ) across multiple LLMs, including Llama-3.1-8B, Qwen-2.5-7B, GLM-4-9B, openchat-3.6-8B, and OLMo-2-7B, using datasets AdvBench and HarmBench. They also look at how PANDAS performs against several defense methods designed to prevent jailbreaking (Self-Reminder, ICD, Perplexity Filtering, Retokenization, SmoothLLM). Overall, the evaluations and baselines chosen are reasonable.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The main experiment design is generally sound and valid. The authors also perform ablation studies to how the individual components of PANDAS contribute to the attack's efficacy.
The attention analysis experiment was only performed on Llama-3.1-8B, not across all evaluated models. It would be interesting the see whether the results replicate on other models.
Supplementary Material: The supplementary material in the appendix is generally helpful for understanding the paper and provides additional evidence for the authors' choices.
- Appendix B outlines the methodology for generating malicious demonstrations across 12 topics based on Anthropic's Usage Policy, including prompt templates instructing uncensored models to craft harmful questions.
- Appendix C documents the language used for the components of PANDAS: 10 positive affirmation phrases like "Exactly the detail I needed! Thanks!", 10 refusal phrases, such as "I can't help with that," and 10 correction phrases, including "Your response to the previous question was either incomplete or did not address it correctly."
- Appendix D shows how defense methods like Retokenization and SmoothLLM fail against MSJ and PANDAS.
- Appendix E contrasts ICD-Exact versus ICD-Ours implementations, showing that ICD-Ours inadvertently strengthens jailbreaking by functioning as an extra negative demonstration.
- Appendix F demonstrates the near-permutation-invariant property of successful MSJ prompts which justifies treating the Bayesian optimization parameter as sampling probabilities rather than ordered demonstration sequences.
Relation To Broader Scientific Literature: The paper builds directly on recent work by Anil et al. (2024) which describes many-shot jailbreaking, a method of using a large number of fabricated conversational turns to make LLMs comply with harmful instructions. The paper also ties into the broader literature on language model jailbreaking and defenses.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: My main concern with this paper is its limited scientific significance. While the authors demonstrate empirical improvements to many-shot jailbreaking through their PANDAS approach, the modifications are quite incremental. The core contribution lacks the substantive novelty typically expected for conference publication.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. We are glad that the reviewer finds our claims well-supported, our choice of datasets and baselines reasonable, and our experiment designs sound and valid. We will revise the paper by including additional attention analysis on other models, highlighting other important scientific contributions. Below, we address specific concerns of the reviewer.
**Model Selection:** Mazeika et al. empirically demonstrate that jailbreak effectiveness is consistent within model families despite differences in parameter count, with the main variation occurring across different model architectures [1]. Based on this, we focus on models with around 8B parameters. This choice also allows us to evaluate with a large number of shots, which is important given the GPU memory demands of MSJ and PANDAS. Furthermore, our experimental setup follows recent work by Zheng et al. [2].
**Attention Analysis:** For the attention analysis in Section 4.4, we focus on Llama-3.1-8B due to its popularity. However, the observation in Figure 4 holds across other models as well, and we will include these additional results in the revised version.
**Scientific Significance:** We thank the reviewer for acknowledging the empirical improvements of PANDAS over MSJ. The proposed modifications, particularly PA and ND, are not only motivated by existing jailbreaking literature, but also serve as supporting evidence for those underlying hypotheses. PA is inspired by the competing objective hypothesis introduced by Wei et al. [3], while ND draws from the idea of ‘learning from mistakes’ in in-context learning [4], as Anil et al. argue that MSJ’s success results from ICL mechanisms [5].
In addition, we would like to highlight other key contributions of our work. First, we investigate long-context vulnerabilities in models equipped with various defense strategies, showing that while some defenses are highly effective in general, they can be circumvented by MSJ and PANDAS. Second, our partition-based attention analysis reveals a trend in which new demonstrations increasingly attend to previous ones, and PANDAS further encourages this pattern to achieve higher jailbreak effectiveness. We will highlight these contributions more explicitly in the revised paper.
[1] Mazeika et al., Harmbench: A standardized evaluation framework for automated red teaming and robust refusal. In ICML’24
[2] Zheng et al., Improved few-shot jailbreaking can circumvent aligned language models and their defenses. In NeurIPS’24
[3] Wei et al., Jailbroken: How does llm safety training fail? In NeurIPS’23
[4] Zhang et al., In-context principle learning from mistakes. In ICML’24
[5] Anil et al., Many-shot jailbreaking. In NeurIPS’24 | Summary: The paper presents a novel method to strengthen the many-shot-jailbreaking attack (MSJ), which uses many question-answer pairs as malicious demonstrations in context to jailbreak LLMs on safety queries. The authors introduce PANDAS, a hybrid technique designed to improve MSJ by incorporating three strategies: (1) Positive Affirmations (PA): Inserting reinforcement phrases (e.g., “Exactly the detail I needed! Thanks!”) to strengthen the instruction-following pattern in fabricated conversations. (2) Negative Demonstrations (ND): Embedding refusal and correction phrases within existing examples to teach the model to override refusals. (3) Adaptive Sampling (AS): Using Bayesian optimization to refine the selection of malicious demonstrations based on the target prompt’s topic. Empirically, PANDAS consistently outperforms prior long-context jailbreaking techniques across multiple datasets and LLMs. The authors also provide an attention analysis to understand how models’ long-context capabilities are exploited and how PANDAS improves upon MSJ.
Claims And Evidence: **Claim** 1: PANDAS improves MSJ by incorporating Positive Affirmation (PA), Negative Demonstration (ND), and Adaptive Sampling.
Evidence is given by the ablation study in Section 4.2 and Table 2.
Concerns: It is not clear whether these methods essentially relate to long context. In Table 2, results with fewer shots are also improved, so the claim may not be precise. The method simply improves few-shot jailbreaking, which (not surprisingly) also strengthens many-shot jailbreaking.
**Claim 2**: PANDAS improves long-context jailbreaking over existing methods.
Evidence is given by benchmarks in Section 4.2 on 3 datasets, 0-128 shots, 5 LLMs. PANDAS did improve the attack success rate compared to i-MSJ (Zheng et al., 2024), a format-based method.
Concerns: However, the paper only includes 8B open-source models. I did not find concrete reasons to exclude larger LLMs, probably using API models like ChatGPT, Claude, or Gemini. Essentially, the larger models have stronger instruction-following capabilities. The method only needs to process data and should be applicable with API models.
**Claim 3**: "Both PA and ND encourage each new demonstration to reference previous demonstrations more heavily, thereby reinforcing the instruct-following behavior established by earlier examples." quoted from the last paragraph of Section 4.
Evidence: The claim is supported by the analysis of the attention with many-shot jailbreaking. The authors analyzed how the target prompt references the previous demonstrations by summing the attention between the concerned demonstration and previous ones.
Concerns: It is not clear to me why the attention between the current demonstration and previous ones is considered. As our interest lies in the target prompt, why not directly measure the attention between the target prompt and the previous demonstration at index i?
Per the claim, I am not sure what it means to "reinforce the instruct-following behavior". The analysis only shows the relation between demonstrations. Why is it related to "instruction following"?
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: I checked the benchmark, ablation study, and attention analysis.
The benchmark excludes larger and closed-source models. I am not convinced that this is a reasonable evaluation.
The authors argued that "The focus on models with approximately 8B parameters follows prior work (Zheng et al., 2024), which was based on the empirical observation that the effectiveness of attacks are stable within model families but vary significantly between different families (Mazeika et al., 2024)."
I cannot fully understand this. The argument does not explain why larger models have to be excluded even if they are likely to perform similarly. The larger closed-source models, which are outside the selected families, are not considered either.
In the benchmark (Table 1), i-MSJ is excluded in many tasks. I did not find an explanation for this.
I am a bit concerned about the attention analysis. It is not clear to me why the attention between the current demonstration and previous ones is considered. As our interest lies in the target prompt, why not directly measure the attention between the target prompt and the previous demonstration at index i?
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper proposed three novel strategies compared to the state-of-the-art by (Zheng, et al. 2024). (Zheng, et al. 2024) focused on the formatting of the MSJ, including special tokens like [/INST] and searching for malicious demonstrations. The proposed method instead focuses on the choice of samples based on some heuristics. The findings like topic sensitivity and negative demonstrations are interesting, resulting in better attacking performance.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. The proposed method is original in combining three heuristic strategies.
2. The attention analysis is an interesting attempt to uncover the mechanism of MSJ.
3. Extensive experiments are conducted.
Weaknesses:
1. The benchmark evaluation and attention analysis are problematic, as I stated in previous sections.
Other Comments Or Suggestions: None
Questions For Authors: How well can the method work on closed-source models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. We are glad that the reviewer finds our method novel and our experiments are extensive. We will revise the paper by clarifying the improvements introduced by PANDAS, expanding the discussion on model selection and the challenges of evaluating proprietary models, and providing additional details on the attention analysis. Below, we address specific concerns of the reviewer.
**Improvements in the long-context setting (Claim 1):** To clarify, our claim is not that PANDAS only works in long-context scenarios, but rather that it is designed to improve long-context jailbreaking while also offering improvements in shorter contexts. We also observe that improvements in the few-shot setting do not always translate to improvements in the many-shot setting. For example, with i-MSJ, a method originally designed to improve few-shot jailbreaking, we observe for models such as Qwen and GLM that the ASR-L falls behind MSJ, and we do not observe the same improvement with a large number of shots. In contrast, we highlight that in all cases, PANDAS is consistently as effective as or more effective than the baselines across all shot counts.
**Model Selection (Claim 2):** The main reason for focusing solely on 8B open-source models is that they allow us to perform inference with a large number of tokens given limited GPU resources. This choice is supported by the empirical findings of Mazeika et al. [1] and follows a similar model selection strategy as in Zheng et al. [2].
**Evaluations on Commercial Models (Claim 2):** Our primary goal is to demonstrate that PANDAS achieves improved jailbreaking effectiveness compared to MSJ. As we have limited access to credit for commercial models, we decided to focus our evaluation on the latest open-source models, all of which were released between May and December 2024, with most incorporating safety guardrails. Notably, Llama-3.1-8B is specifically fine-tuned for robustness in long-context scenarios. Our results indicate that PANDAS substantially improves jailbreaking performance on these open-source models.
To additionally validate our findings, we conduct evaluations on a commercial model, GPT-4o-mini, which has the lowest per-token cost among available commercial options (Claude’s and GPT’s). Using 128 shots and only a single restart on AdvBench, MSJ and PANDAS achieve an ASR of 0.19% (1 out of 520) and 2.12% (11 out of 520), respectively. For cost reasons, we do not perform Bayesian Optimization and adopt uniform sampling across all malicious topics. This result shows that, despite both methods showing low ASR, the effectiveness of MSJ can still be improved.
**Clarification on the attention analysis (Claim 3):** Our hypothesis is that the success of MSJ arises from the reinforcing instruction-following behavior as the number of demonstrations increases. The phrase “reinforcing instruction-following behavior” refers to the phenomenon where the model interprets the malicious question as another instruction to follow, rather than an unsafe request to be rejected, due to the consistent compliance pattern from early demonstrations.
PA and ND are designed to further encourage this effect. While measuring attention from the target prompt to previous demonstrations would directly show which examples it references, it would not reveal how the demonstrations themselves build upon one another.
By introducing a reference score between demonstrations, we capture how each demonstration “looks back” at earlier ones. A higher reference score suggests that later demonstrations are increasingly influenced by prior demonstrations, potentially strengthening the pattern of instruction-following established throughout the prompt.
Attention scores have been widely used as a proxy for understanding transformer behavior [3, 4]. In this work, we leverage this to study how contextual dependencies develop across long sequences, providing insight into how PANDAS improves over MSJ.
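To make the partition-based analysis concrete, here is a minimal sketch of how such a reference score could be computed from an attention map. All names (`reference_score`, `attn`, `spans`) are hypothetical illustrations, not the authors' actual code.

```python
import numpy as np

def reference_score(attn, spans, i):
    """Sum of attention mass flowing from demonstration i's tokens
    back to the tokens of all earlier demonstrations.

    attn:  (seq_len, seq_len) attention map for one head/layer
    spans: list of (start, end) token index ranges, one per demonstration
    """
    q_start, q_end = spans[i]
    return sum(float(attn[q_start:q_end, k_start:k_end].sum())
               for k_start, k_end in spans[:i])

# Toy example: 3 demonstrations of 2 tokens each in a 6-token sequence,
# with uniform attention weights of 0.1 everywhere.
attn = np.full((6, 6), 0.1)
spans = [(0, 2), (2, 4), (4, 6)]
score = reference_score(attn, spans, 2)  # attention from demo 2 back to demos 0 and 1
```

A rising trend of this score across demonstration indices would indicate that later demonstrations "look back" at earlier ones more heavily, which is the pattern described above.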
**Evaluation of i-MSJ:** We focus on AdvBench50 for the evaluation of i-MSJ primarily due to the extensive runtime required to perform 128 iterations of random search, following the setup in Zheng et al. [2]. It is also worth noting that even in the original paper by Zheng et al., results are reported only on the AdvBench50 dataset.
[1] Mazeika et al., Harmbench: A standardized evaluation framework for automated red teaming and robust refusal. In ICML’24.
[2] Zheng et al., Improved few-shot jailbreaking can circumvent aligned language models and their defenses. In NeurIPS’24
[3] Oymak et al., On the role of attention in prompt-tuning, In ICML'23
[4] Quirke & Barez, Understanding addition in transformers, In ICLR'24 | null | null | null | null | null | null |
FlowDrag: 3D-aware Drag-based Image Editing with Mesh-guided Deformation Vector Flow Fields | Accept (spotlight poster) | Summary: This paper proposes a novel method for drag-based image editing. Compared with previous work, the proposed method takes 3D geometric information into consideration through mesh construction, ensuring stable and 3D-plausible editing. The method is claimed to achieve state-of-the-art performance.
Claims And Evidence: - The method does not compare with the very first work on this task: DragGAN. Even though the proposed method is diffusion-guided, DragGAN is still observed to achieve better performance in certain cases. This hurts the soundness of this paper.
- As a comparison: GoodDrag compares with DragGAN.
- The paper shows some high-quality qualitative results. This can demonstrate the claim of state-of-the-art performance.
- However, this paper does not use the Drag100 benchmark, which was created in the paper of the most powerful baseline, GoodDrag, and contains various types of editing as shown in GoodDrag's Fig.6 (including content removal and creation). This raises concerns about whether the proposed method can also achieve this.
Methods And Evaluation Criteria: - The method is reasonable and well-motivated. I am especially impressed by the progressive deformation part.
- As mentioned in "Claims And Evidence", Drag100 benchmark proposed in the paper of the most powerful baseline GoodDrag is not used for experiments. This raises concerns about whether the proposed method can support editing involving content removal and creation.
- In fact, from the method, I think the proposed method cannot well support content creation as it will be hard to generate the new contents in 3D. I would like the author to show some results to disprove this.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: - As mentioned in "Claims And Evidence"
- The method does not compare with the very first work on this task: DragGAN. Even though the proposed method is diffusion-guided, DragGAN is still observed to achieve better performance in certain cases. This hurts the soundness of this paper.
- Drag100 benchmark proposed in the paper of the most powerful baseline GoodDrag is not used for experiments. This raises concerns about whether the proposed method can support editing involving content removal and creation.
Supplementary Material: N/A.
Relation To Broader Scientific Literature: This paper proposes a novel approach to inject 3D awareness and basis to drag-based editing pipelines. This enables the requirement of 3D-plausibility in these editing tasks and augments the capability.
Essential References Not Discussed: - As mentioned in "Claims And Evidence", the method is not comparing with the very first work of this task: DragGAN, just cited once.
Other Strengths And Weaknesses: - The two orange arrows in Fig.4 are overlapped with texts, which makes the figure quite messy.
- There is no link in citations and references to figures and tables.
- The axis metrics in Fig.2 (d) and (e) are inconsistent.
Other Comments Or Suggestions: I would like to see some additional results (1) comparing with DragGAN, and (2) from the "content removal" or "content creation" categories of Drag100 dataset.
All these results are received in rebuttal, resolving most of my concerns. Therefore, I will raise my score from 3 to 4.
Questions For Authors: Please refer to the reviews above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **[Q1] The paper lacks comparison with DragGAN, which achieves better performance in certain cases.**
**[A1]** We provide additional qualitative comparisons between FlowDrag and DragGAN in Fig. 14(a) (please refer to the link below). To ensure fair comparisons, we reproduced DragGAN using the official GitHub repository and employed PTI [1] for GAN inversion. Since the provided DragGAN weights are limited to the StyleGAN-Human model, our comparisons focus exclusively on human images. As shown in Fig. 14(a), DragGAN struggles to preserve the original person's identity, whereas FlowDrag maintains identity effectively and exhibits better geometric consistency.
Fig 14. https://anonymous.4open.science/r/FlowDrag-B950/Fig_14.pdf
[1] Pivotal Tuning for Latent-based Editing of Real Images (ACM TOG 2022)
**[Q2] I would like to see additional results from the "content removal" or "content creation" categories of the Drag100 dataset.**
**[A2]** We applied FlowDrag to the "content creation" and "content removal" examples from the Drag100 dataset, and present additional results in Fig. 14(b)-(c). In the "content creation" category (Fig. 14(b)), FlowDrag successfully generates natural results with fewer artifacts compared to GoodDrag. Similarly, in the "content removal" category (Fig. 14(c)), FlowDrag demonstrates editing quality that is comparable or superior to GoodDrag.
Although FlowDrag primarily targets "rigid edits" to preserve object rigidity during editing (as stated in our introduction), these results confirm that our method is also effective in handling non-rigid editing scenarios.
**[Q3] The two orange arrows in Fig.4 overlap with text, making the figure quite messy.**
**[A3]** Thank you for pointing this out. We will adjust the positioning of the orange arrows in Fig.4 to avoid overlap and improve visual clarity in the final manuscript.
**[Q4] There is no link in citations and references to figures and tables.**
**[A4]** We will include hyperlinks in citations and references to figures and tables in the final manuscript to enhance readability.
**[Q5] The axis metrics in Fig.2 (d) and (e) are inconsistent.**
**[A5]** Yes, the axis metrics in Fig. 2(d) and Fig. 2(e) intentionally differ. Fig. 2(d) compares methods on the DragBench dataset using metrics such as 1-LPIPS (for image fidelity) and MD (Mean Distance, for evaluating handle-point movement accuracy). In contrast, Fig. 2(e) shows comparisons on our proposed VFD-Bench dataset, which provides ground-truth edited images from video frames, enabling evaluation using RGB-level (PSNR) and feature-space metrics (1-LPIPS and MD). Fig. 2(e) specifically plots results using PSNR and 1-LPIPS. We will explicitly clarify this distinction in the final manuscript. | Summary: This paper proposes FlowDrag, a method that leverages pre-trained stable diffusion models for drag-based image editing. This method improves the drag-based image editing by building a field of 3D-aware dragging instruction from the user's input. Specifically, FlowDrag first leverages an image-to-depth or image-to-mesh model to generate a mesh for the foreground object. It then applies SR-ARAP, a mesh deformation algorithm, to calculate how the user's drag deformed the object. Through the novel progressive deformation with SR-ARAP, it obtains a flow field showing how a larger area of the object would move. This flow field is then sampled and projected to 2D space to act like the densified dragging instruction. By providing a field of 3D-aware dragging instructions, the diffusion model receives more guidance on how each pixel is going to move and yields a better editing result with the standard motion supervision and point tracking pipeline. Moreover, FlowDrag further uses the projected deformed mesh as a guide to improve the editing result. It also constructs a new drag-based image editing benchmark, VFD-Bench Dataset, with ground-truth editing results to compensate for the fact that the existing benchmark, DragBench, does not provide ground-truth images. 
FlowDrag is evaluated on both the DragBench and VFD-Bench datasets and shows superior performance to previous methods.
## Update after rebuttal
I appreciate the author's response and insightful discussion. I would keep my original rating.
Claims And Evidence: There are two main contributions claimed by the paper: the densification of dragging instruction through mesh deformation and VFD-Bench datasets. Both of them are well substantiated by the experiment results. The effectiveness of FlowDrag is demonstrated by the results in Table 1, Table 2, and Table 3. Each component in the method is studied in the ablation study. The VFD-Bench dataset provides ground-truth images for drag-based image editing, which is exactly what the community needs, as it provides an accurate and objective way to evaluate the result of image editing. Therefore, the claims in this paper are very solid.
Methods And Evaluation Criteria: The proposed method is a novel and clear way to improve drag-based image editing using diffusion models. It recognizes that a good editing result should comply with local 3D rigidity constraints. However, the user's input is sparse, and existing methods rely on the inference ability of the diffusion models to hallucinate how the object may change. This can be inaccurate, as the diffusion models are trained on pure 2D data and thus have no idea how the object should deform in 3D. This paper chooses an intuitive and explicit way to achieve 3D-aware editing: lifting the object from 2D to 3D, simulating the change in 3D caused by the drag, and reflecting these changes in 2D in the form of a flow field. This design targets the main problem very well, and the design is intuitive and logical.
The paper also proposes a new dataset, VFD-Benchmark, a drag-based image editing benchmark with ground-truth editing results. The existing benchmark, DragBench, only provides images and instructions but no ground-truth editing results. It relies on Mean Distance (MD) and 1-LPIPS to evaluate the editing results. However, these two metrics cannot accurately reflect the editing effect: MD only compares DIFT feature similarity between source and target keypoints, and 1-LPIPS ignores the object's deformations before and after editing. VFD-Benchmark provides ground-truth edited images. This allows the editing to be accurately and objectively evaluated by comparing it with the ground-truth image. The dataset offers a good solution to an existing issue in the evaluation.
Therefore, the paper provides solid solutions to both the editing pipeline and evaluation, making significant contributions to this field.
Theoretical Claims: This paper is an application paper with no significant theoretical contribution.
Experimental Designs Or Analyses: The experiments are thorough and detailed, with no significant issue. Given that the flow field generation is independent of a specific editing method, a potential improvement in the experiment could be applying the flow field to a range of drag-based editing methods, such as DragDiffusion, GoodDrag, etc, to see whether the flow field may improve existing methods. If it could, the significance of this work would be much greater.
Supplementary Material: I have read all sections of the supplementary material. It provides detailed information on SR-ARAP and the effect of background on editing results.
Relation To Broader Scientific Literature: The paper is built on the existing drag-based image editing pipeline. It follows motion supervision, point tracking and latent representation optimization proposed by DragDiffusion. The evaluation metrics, Mean Distance and 1-LPIPS, are commonly used by the literature.
Essential References Not Discussed: All related works have been discussed.
Other Strengths And Weaknesses: Some place requires further clarifications, which I would elaborate in the Question section. These questions do not affect the contribution and significance of this work.
Other Comments Or Suggestions: The proposed method is solid and intuitive, and the paper is well-written and easy to follow. It makes concrete contributions to the field. Therefore, I recommend the acceptance of this paper.
Questions For Authors: I would appreciate it if the author could address the following questions:
1. Regarding the Progressive Deformation with SR-ARAP, what is the value of $\lambda$ in Equation 8? I am also confused with the handle-matching term in Equation 9. $v_t$ is fixed because it is the target point, $v^{(k+1)}_{h}$ is calculated based on Equation 8. Therefore, both terms seem fixed. What is the learnable in here? Is it $\lambda$?
2. The paper mentioned that both DepthMesh and DiffMesh are used, yet there is no experiment or discussion on which one is preferred. Could you provide more information on this matter? Which one is used to achieve the reported result in Table 1, 2 and 3?
3. Section C of the supplementary mentions background separation. How is it achieved?
4. What are the hardware requirement for FlowDrag?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **[Q1] What is the value of $\lambda$ in Equation 8?**
**[A1]** The parameter $\lambda$ in Eq. (8) represents the incremental step size at each iteration, indicating the fraction of the displacement between the handle vertex ($v_h$) and the target vertex ($v_t$). However, we discovered a minor typo in the original version of Eq. (8), and we sincerely appreciate the reviewer’s careful observation. The corrected Eq. (8) is:
$ v_h^{(k+1)} = v_h^{(k)} + \lambda (v_t - v_h), \quad 0 < \lambda \le 1 $
In this equation, both $v_h^{(k+1)}$ and $v_h^{(k)}$ are intermediate vertices positioned incrementally between the initial handle vertex ($v_h$) and the target vertex ($v_t$).
To facilitate a clearer understanding, we provide an enhanced illustration in Fig. 15 (please refer to the link provided below), complementing Fig. 3 in the paper.
This formulation implies that instead of moving the handle vertex directly from $v_h$ to $v_t$ in one step, our method progressively moves it through multiple intermediate positions. Specifically, we set $\lambda=0.2$ in our implementation, meaning the vertex moves 20% closer to its target at each iteration. For example, with $\lambda=0.2$, the handle vertex path would be as follows:
$v_h\ (\text{handle vertex}) = v_h^{(0)} \to v_h^{(1)} \to v_h^{(2)} \to v_h^{(3)} \to v_h^{(4)} \to v_h^{(5)} = v_t\ (\text{target vertex})$
This progressive SR-ARAP algorithm thus prevents abrupt mesh distortions by smoothly distributing large vertex displacements across multiple intermediate steps, achieving more stable and coherent deformations.
We will clearly correct Eq. (8) accordingly in the revised manuscript. We sincerely thank the reviewer for this valuable clarification.
Fig 15. https://anonymous.4open.science/r/FlowDrag-B950/Fig_15.pdf
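As a minimal sketch of the progressive update in the corrected Eq. (8) above (function and variable names are ours, not from the paper), the handle vertex can be stepped toward the target in fixed fractions of the total displacement:

```python
import numpy as np

def progressive_handle_path(v_h, v_t, lam=0.2):
    """Intermediate handle positions under the corrected Eq. (8):
    each step adds a fixed fraction lam of the total displacement
    (v_t - v_h), so the handle reaches v_t after 1/lam steps."""
    v_h = np.asarray(v_h, dtype=float)
    v_t = np.asarray(v_t, dtype=float)
    step = lam * (v_t - v_h)           # fixed increment per iteration
    n_steps = int(round(1.0 / lam))    # 5 steps for lam = 0.2
    path = [v_h]
    for _ in range(n_steps):
        path.append(path[-1] + step)
    return path

# Handle at the origin, target one unit along x: 6 positions v_h^(0)..v_h^(5)
path = progressive_handle_path([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], lam=0.2)
```

Each intermediate position would then serve as the handle constraint for one SR-ARAP solve, which is what distributes a large drag across several small, near-rigid deformations.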
**[Q2] I am also confused with the handle-matching term in Equation 9.**
**[A2]** The handle-matching term in Eq. (9) acts as a soft constraint, ensuring handle vertices smoothly approach their intended intermediate (and ultimately final) positions. Since the SR-ARAP algorithm primarily optimizes vertex positions by minimizing local rigidity (minimal local distortion), handle vertices may not exactly reach intermediate targets. To resolve this, the handle-matching term gently penalizes deviations from target positions:
$\beta \sum_{v_h \in \text{handles}} \Bigl\|\, v_h^{(k+1)} - v_t \Bigr\|^2$
This ensures balanced optimization between local rigidity and accurate vertex positioning. We will clarify this explicitly in the revised manuscript.
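A minimal sketch of this soft constraint (the function name and the value of $\beta$ are illustrative choices, not from the paper):

```python
import numpy as np

def handle_matching_penalty(handle_positions, target_positions, beta=1.0):
    """Soft-constraint term from Eq. (9): penalizes the squared distance
    between each handle vertex and its (intermediate) target position."""
    h = np.asarray(handle_positions, dtype=float)
    t = np.asarray(target_positions, dtype=float)
    return beta * float(np.sum(np.sum((h - t) ** 2, axis=-1)))

# A handle 0.1 away from its target along x contributes 0.1**2 to the energy.
penalty = handle_matching_penalty([[0.1, 0.0, 0.0]], [[0.0, 0.0, 0.0]])
```

Added to the SR-ARAP rigidity energy, this term pulls handle vertices toward their targets without hard-constraining them, which is what balances rigidity against positioning accuracy.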
**[Q3] $v_t$ and $v^{(k+1)}_{h}$ are fixed. What is the learnable in Eq (8)? Is it $\lambda$?**
**[A3]** Yes, $v_t$ and $v^{(k+1)}_{h}$ are both fixed. Additionally, as explained in A1-2, the parameter $\lambda$ is a manually set hyperparameter (set as 0.2) and not learnable. Eq. (8) itself contains no learnable parameters. The learnable parameters are the positions of vertices (other than handle and target vertices), which are optimized through the SR-ARAP energy function with the handle-matching term in Eq. (9).
**[Q4] The paper mentions using both DepthMesh and DiffMesh. Which one is preferred and why? Additionally, which mesh (DepthMesh or DiffMesh) is used to achieve the reported results in Tables 1, 2, and 3?**
**[A4]** We provide additional experimental comparisons of drag-editing results using DepthMesh and DiffMesh in Fig. 12 (please refer to the link provided below).
As shown in Fig. 12(a), DepthMesh struggles to generate geometry for regions unseen in the single input image, resulting in unnatural mesh deformation. In contrast, DiffMesh (Fig. 12(b)), generated via a diffusion model, effectively captures complete geometry, enabling more natural and coherent mesh deformation. Therefore, we prefer DiffMesh due to its better geometric consistency, crucial for accurate mesh deformation using our SR-ARAP algorithm.
All reported results in Tables 1, 2, and 3 are based on DiffMesh. The detailed editing process using DiffMesh is illustrated in Fig. 11.
Fig 11. https://anonymous.4open.science/r/FlowDrag-B950/Fig_11.pdf
Fig 12. https://anonymous.4open.science/r/FlowDrag-B950/Fig_12.pdf
**[Q5] Section C of the supplementary mentions background separation. How is it achieved?**
**[A5]** Background separation is performed during the DepthMesh generation by applying a background threshold ($τ_b$) in Step 4 of Algorithm 1 (Supplementary, Section B). Specifically, any mesh facets with depth values smaller than $τ_b$ are removed. For example, Fig. 7(a) in Section C illustrates a result without background separation ($τ_b=0$), whereas Fig. 7(b), with $τ_b=0.3$, effectively removes background facets.
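For illustration, the threshold-based facet removal described above could look like the following sketch (variable names are ours; the actual step is Step 4 of Algorithm 1 in the supplementary):

```python
import numpy as np

def remove_background_facets(facets, facet_depths, tau_b=0.3):
    """Drop mesh facets whose depth value is below the background
    threshold tau_b, keeping only foreground geometry."""
    facets = np.asarray(facets)
    facet_depths = np.asarray(facet_depths, dtype=float)
    keep = facet_depths >= tau_b
    return facets[keep]

# Three triangle facets (vertex-index triples) with per-facet depths;
# only the two facets with depth >= 0.3 survive.
facets = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
kept = remove_background_facets(facets, [0.1, 0.5, 0.9], tau_b=0.3)
```

With tau_b = 0, no facets are removed (as in Fig. 7(a)), while tau_b = 0.3 strips the background facets (as in Fig. 7(b)).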
**[Q6] What are the hardware requirement for FlowDrag?**
**[A6]** FlowDrag requires less than 14GB of GPU memory for processing a 512×512 input image, as evaluated on a single NVIDIA A100 GPU. We will include detailed hardware requirements in the final manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. So the purpose of the regularization term is to correct the position of $v^{(k+1)}_{h}$: because the moving direction $(v_t - v_h)$ in Equation 8 is based solely on the initial and target positions, it does not always move points towards their target positions. Therefore, the regularization term is included to move points towards the target positions from their current positions? If that is the case, isn't the original version of Equation 8 better, since its update direction is calculated from the current positions?
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comments. Your understanding of the regularization term is indeed correct. In our earlier rebuttal [A1], we presented a modified Eq. (8) with a fixed $\lambda$. However, as you suggested, the original Eq. (8) calculates the update direction based on the current position (intermediate vertex), thus representing a more general and reasonable approach. In this case, dynamically adjusting $\lambda$ via a scheduling strategy can also be effective to ensure that vertices reliably reach the target position. Following your valuable feedback, we will present both approaches (fixed vs. dynamic $\lambda$) in our final manuscript and provide additional comparative experiments. Thank you! | Summary: This paper proposes a novel drag-based editing framework called FlowDrag. Its key feature is the introduction of control points generated through the deformation of a 3D mesh, which helps to mitigate the geometric discontinuities commonly present in existing drag-based editing methods. Judging from the results provided in the paper, this method is quite effective and shows a noticeable improvement over current approaches.
Claims And Evidence: The authors introduced a dedicated dataset for testing called VFD-Bench, which provides a more comprehensive quantitative analysis than existing methods. However, the paper presents a limited number of visualization results; the qualitative analysis should include more visual results.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The paper improves SR-ARAP and provides formula derivations in the supplementary material. I have reviewed them and found no significant issues.
Experimental Designs Or Analyses: Yes
Supplementary Material: I have reviewed the entire supplementary material, which provides algorithmic details for mesh generation and additional comparative results.
Relation To Broader Scientific Literature: Image editing
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper introduces a 3D mesh model to determine the control points needed for editing, which is innovative. The quantitative and qualitative analysis results provided demonstrate the feasibility and effectiveness of the approach.
Weaknesses:
However, the authors do not discuss the impact of the 3D Mesh model on the results. It is well known that algorithms for constructing mesh models from a single image are still immature. If mesh construction fails or there are serious artifacts, could this lead to image editing failure? At the same time, regarding the selection of control points, as can be seen from Figure 3, the geometry of the edited dog's head is not consistent after mesh editing. If control points are chosen in a regular manner, could this lead to bad results?
Other Comments Or Suggestions: Refer to Weaknesses
Questions For Authors: The paper should provide more visual editing results to further demonstrate the effectiveness of the proposed method. This would help to substantiate the claims made in the paper and offer a more comprehensive understanding of the technique's capabilities and potential applications.
My view is that this paper is meaningful in improving existing drag-based image editing methods from a 3D perspective, but the proof of visual effects should be strengthened. Therefore, I have recommended a "weak accept."
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[Q1] The authors do not discuss the impact of 3D mesh construction. Could failures or severe artifacts in mesh reconstruction cause image editing to fail?**
**[A1]** Yes, severe artifacts or failures in 3D mesh reconstruction could hinder the generation of an accurate 2D vector flow, potentially causing editing failures. To address this concern, we evaluated the robustness of FlowDrag across various mesh reconstruction conditions, as detailed in our response to Reviewer 1 (A2) and illustrated in Fig. 13 (please refer to the link provided below). Due to space limitations, we respectfully ask the reviewer to refer to these detailed analyses.
Briefly, we analyzed two mesh-generation approaches used in FlowDrag: DepthMesh and DiffMesh. For DepthMesh, varying the reduction ratio (controlling mesh density during construction) can degrade the original geometry if reduced excessively. For DiffMesh, changing the diffusion sampling step of the image-to-3D diffusion model (Hunyuan3D 2.0) can introduce artifacts and geometry degradation when sampling steps become very short. Our comprehensive experiments identified robust operating ranges: DepthMesh remains robust within a reduction ratio range of approximately 0.001–1, and DiffMesh maintains robustness for diffusion sampling steps of 10 or higher.
These analyses demonstrate FlowDrag's robustness, confirming its capability to consistently produce stable and accurate editing outcomes, even under varying conditions of mesh reconstruction quality.
Fig 13. https://anonymous.4open.science/r/FlowDrag-B950/Fig_13.pdf
**[Q2] Regarding the selection of control points, as seen in Figure 3, the geometry of the edited dog's head is inconsistent after mesh editing. If control points are chosen in a regular manner, could this lead to bad results?**
**[A2]** As shown in Fig 3, minor geometric inconsistencies may arise from the mesh deformation itself, since the deformed mesh does not perfectly preserve fine-grained rigidity. However, we do not directly use this mesh for editing. Instead, we project the deformed mesh onto a 2D vector field and select optimal control points (referred to as "2D vector flow") from this projection. Furthermore, we conducted an experiment on selecting control points in a regular manner, corresponding to the "Uniform sub-sampling" approach described in our paper (Sec. 4.3). Our experiments show that this approach still effectively maintains overall geometric consistency and outperforms many existing methods. (We attribute this effectiveness to our multiple-drag-vector concept.) Additionally, our proposed "Magnitude-based sampling", which selects the most effective vectors (those with the largest displacements), achieves optimal editing results, as quantitatively demonstrated in Table 5.
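For intuition, magnitude-based sampling can be sketched as selecting the top-$k$ vectors by displacement length. This is only an illustrative sketch (the function name and the start/end-point vector layout are our own assumptions, not the paper's implementation):

```python
import math

def magnitude_based_sampling(vectors, k):
    """Keep the k drag vectors with the largest displacement magnitude.

    vectors: list of ((x0, y0), (x1, y1)) start/end point pairs,
    e.g. projected from the deformed mesh (hypothetical layout).
    """
    def magnitude(v):
        (x0, y0), (x1, y1) = v
        return math.hypot(x1 - x0, y1 - y0)
    # Sort by displacement length and keep the k largest.
    return sorted(vectors, key=magnitude, reverse=True)[:k]
```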
**[Q3] The paper presents limited qualitative visualization results.**
**[A3]** We provide additional qualitative visualization results in Fig. 10 (additional results on VFD-Bench and Drag100), Fig. 11 (visualization of the mesh-guided editing process using DiffMesh), Fig. 12 (comparison of mesh deformation pipelines using DepthMesh and DiffMesh), Fig. 13 (sensitivity analysis and robustness comparison of mesh deformation), and Fig. 14 (additional qualitative comparisons of drag-based editing results). We will also incorporate these results in the final manuscript.
Fig 10. https://anonymous.4open.science/r/FlowDrag-B950/Fig_10.pdf
Fig 11. https://anonymous.4open.science/r/FlowDrag-B950/Fig_11.pdf
Fig 12. https://anonymous.4open.science/r/FlowDrag-B950/Fig_12.pdf
Fig 13. https://anonymous.4open.science/r/FlowDrag-B950/Fig_13.pdf
Fig 14. https://anonymous.4open.science/r/FlowDrag-B950/Fig_14.pdf
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed responses. I believe the authors' replies have basically resolved my confusion. I had already given an opinion leaning towards acceptance in the first round of review. I maintain my rating, and I am inclined to recommend the acceptance of this paper. | Summary: This paper proposes FlowDrag, which focuses on improving geometry consistency of drag-based image editing. It reconstructs a 3D mesh from the image, and uses an energy function to guide mesh deformation. The deformed mesh is then projected into 2D and used to guide the image editing denoising process. This paper also proposes a new benchmark dataset, VidFrameDrag (VFD), as the first drag-editing benchmark that has ground truths using consecutive view shots.
Experiments are conducted on both the proposed VFD benchmark and an existing DragBench and validate the effectiveness of the proposed method, evaluated by MD and 1-LPIPS as metrics and a user study.
## update after rebuttal
The rebuttal has addressed most of my concerns. I believe that maintaining my original positive rating appropriately reflects my overall favorable impression of the work.
Claims And Evidence: - The motivation is reasonable and straightforward. To preserve geometric consistency, the paper adds a 3D mesh as an intermediate representation for 2D editing to inject 3D geometric prior.
- It is also great to clarify that the model specifically tackles the kind of "rigid edit," which only contains rigid transformations.
Methods And Evaluation Criteria: - [Pipeline]: The pipeline is carefully designed. The input image is used to generate a 3D mesh leveraging off-the-shelf tools. The drag modification is done in the 3D space by offsetting the mesh vertices using an energy-function based method ARAP (and its followup SR ARAP). A progressive process is carefully crafted for better deformation. The deformed mesh vertices are then sampled using two candidate strategies, and used to extract a 2D vector flow map for motion supervision and point tracking, as well as used to obtain a 2D projection and inject the Unet layout features to guide the spatial and geometry information.
- [VFD-Bench]: The introduced VFD-Bench provides ground truths by leveraging video frames. This effectively provides fair evaluation capabilities and could have a positive impact on the field.
Theoretical Claims: NA
Experimental Designs Or Analyses: - The experiment results show that the proposed method outperforms existing methods, including DiffEditor, DragDiffusion, DragNoise, FreeDrag, and GoodDrag, on the VFD-Bench, evaluated by 1-LPIPS and MD.
- On the DragBench dataset, it has slightly lower 1-LPIPS values than the bests, while the paper reasonably argues it is because the other methods induce minimal edits.
- User study is also conducted to validate the method’s superior performance.
Supplementary Material: I reviewed the supplementary material, including the ablations on the parameter $\beta$, the definition of ARAP error, the mesh deformation ablations, and the additional qualitative visualizations. These results are supportive to validate the proposed method.
Relation To Broader Scientific Literature: - This paper focuses on improving the challenging geometric consistency issue in existing dragging-based image editing works, showing an interesting and promising improvement on a specific kind of dragging editing that focuses on rigid transformations.
- The paper proposes VFD-Bench as an addition to existing dragging-based editing benchmark, e.g., DragBench. This VFD benchmark provides ground truths by leveraging video frames. It effectively provides fair evaluation capabilities and could have a positive impact on the field.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: - [Fine-grained geometry inconsistency]: The method facilitates overall geometric and spatial consistency; while the edited images still showcase some inconsistency in the fine-grained geometry. For example, the hat's shape in the first sample of Fig.6.
Other Comments Or Suggestions: NA
Questions For Authors: - [3D mesh quality] How important is the quality of the 3D mesh reconstruction? What will happen if the 3D mesh fails to represent the image? How often will it succeed in generation, and is it robust? More ablation and insights on this would help clarify.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[Q1] The method facilitates overall geometric consistency, but edited images still show some fine-grained inconsistencies, e.g., the hat shape in the first sample of Fig.6.**
**[A1]** Yes, we agree with the reviewer’s observation. While FlowDrag significantly improves overall geometric consistency, some fine-grained geometric inconsistencies remain (e.g., the hat shape in Fig. 6). We speculate that this issue primarily arises due to the inherent limitation of the pre-trained 2D Stable Diffusion model (version 1.5), specifically its insufficient 3D understanding. This limitation is commonly observed across diffusion-based drag editing methods. Nonetheless, FlowDrag mitigates overall geometric inconsistency and shows enhanced robustness compared to existing methods (please refer to detailed comparisons in Tables 1 and 2, as well as Figures 6 and 9).
To further illustrate this strength, we provide additional visualization results in Fig. 10 (additional comparisons on VFD-Bench and Drag100 datasets, including extra examples from Prompt-to-Prompt) and detailed visualizations of our mesh-guided editing process in Fig. 11.
Additionally, we believe that utilizing backbones inherently capable of stronger geometric reasoning (e.g., video diffusion models) could potentially address and further reduce such fine-grained inconsistencies. We hope our work and results inspire future research in this promising direction.
Fig 10. https://anonymous.4open.science/r/FlowDrag-B950/Fig_10.pdf
Fig 11. https://anonymous.4open.science/r/FlowDrag-B950/Fig_11.pdf
**[Q2] How important and robust is the 3D mesh reconstruction? Additional ablations or insights would help clarify this.**
**[A2]** To analyze the importance and robustness of the 3D mesh reconstruction, we conducted additional sensitivity analyses for both DepthMesh and DiffMesh, as shown in Fig. 13 (please refer to the link provided below).
For DepthMesh (Fig. 13(a)-(b)), robustness is evaluated by varying the reduction ratio, which directly controls the density of facet connections during mesh construction (as detailed in Supplementary Algorithm 1, Step 3). Specifically, a ratio of 1 indicates a fully connected mesh, while lower ratios substantially reduce vertices and facets, causing geometry degradation and unintended outcomes in mesh deformation and subsequent drag editing. We performed experiments on 20 images from DragBench. Since DragBench lacks ground-truth edited images, we quantified robustness by computing the ratios of metrics relative to the highest-quality mesh (reduction ratio = 1, used as reference image), defined explicitly as follows:
$$ \text{PSNR ratio} = \frac{\text{PSNR (source image)}}{\text{PSNR (reference image)}} $$
$$ \text{1–LPIPS ratio} = \frac{\text{1–LPIPS (source image)}}{\text{1–LPIPS (reference image)}} $$
As shown in Fig. 13(a)-(b), FlowDrag demonstrates stable and robust editing outcomes within an effective reduction ratio range (0.001–1).
For DiffMesh (Fig. 13(c)-(d)), we similarly assessed robustness by varying the diffusion sampling steps (40, 20, 10, and 5) in the image-to-3D mesh generation process (using Hunyuan3D 2.0). Evaluations on the same 20 DragBench images with identical metrics revealed that sampling steps between 10 and 40 consistently maintained overall object geometry. However, at sampling steps below 10 (e.g., step=5), geometry degraded significantly, causing unintended deformation and editing results. Nevertheless, FlowDrag showed strong robustness for sampling steps of 10 or higher (Fig. 13(c)-(d)).
These detailed analyses confirm FlowDrag’s robustness across various mesh reconstruction conditions, validating our method’s effectiveness.
Fig 13. https://anonymous.4open.science/r/FlowDrag-B950/Fig_13.pdf | null | null | null | null | null | null |
Multi-Objective Causal Bayesian Optimization | Accept (poster)

Summary: The paper introduces a novel framework, MO-CBO, which integrates causal inference with multi-objective Bayesian optimization, addressing an underexplored research area. The theoretical characterization of Pareto-optimal intervention sets via causal graph topology is a notable contribution. However, several aspects of the paper require improvement.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: Related to some extent
Essential References Not Discussed: yes
Other Strengths And Weaknesses: Strength:
Introduces a new class of optimization problems, MO-CBO, expanding causal Bayesian optimization to multi-objective scenarios, filling a research gap in multi-objective optimization considering causal structures.
Weakness:
Although it mentions the potential combination of existing CBO variants with MO-CBO in the future, the paper does not explore this aspect, lacking the full integration of existing research results and limiting the depth and breadth of the research.
Other Comments Or Suggestions: None
Questions For Authors: a. The paper lacks a thorough discussion of the current state of the field, making the motivation for this work somewhat unclear. A more explicit analysis of existing challenges and the necessity of this approach would strengthen the introduction.
b. While the theoretical analysis (e.g., Propositions 3.4, 4.10) is rigorous, and the proofs in the appendix enhance the credibility of the results, the assumption of a fully known causal graph is restrictive and may not be practical in real-world scenarios. The paper should discuss how partial knowledge of the causal structure (e.g., unobserved confounders) could impact the algorithm’s performance. Additionally, the surrogate model assumes independent Gaussian processes for each objective, but a discussion on how to handle shared confounders or leverage multi-task learning would improve the methodological robustness.
c. The derivations in Section 4.1 should be presented in a more symbolic and structured manner to enhance readability.
d. While the paper is generally well-written, some sections lack clarity. For example, the construction of the structural causal model (SCM) in Appendix B.2 (Theorem 4.8 proof) is overly technical and difficult to follow.
e. The baseline comparisons appear to be limited to DGEMO (2020). To convincingly demonstrate the advantages of causal Bayesian optimization, the study should compare against more recent and efficient multi-objective optimization algorithms.
f. Figure 7 presents experimental results comparing MO-CBO with MOBO on a real-world health application, where MO-CBO demonstrates superior performance. However, additional real-world applications should be explored to further substantiate the practical utility of MO-CBO.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer 8ruG,
Thank you for your insightful suggestions to improve our work with a more extensive literature review, better defined limitations, and improved experimental evaluation! We hope that our response below will further reinforce your confidence in our work:
# Extensions to MO-CBO
Our contribution is the development of a multi-objective variant of CBO, MO-CBO, for which we theoretically and empirically show superior performance compared to traditional MOBO. We introduce MO-CBO as its own problem formulation and our paper aims to provide a methodology to address this new type of problem. This includes a decomposition of the MO-CBO task along with establishing graph characterizations to identify possibly Pareto-optimal sets to intervene upon. Our empirical validation supports the theoretical claims.
**Combining MO-CBO with existing CBO variants.** There exist many CBO variants, each developed through its own dedicated research effort - examples include dynamic CBO, functional CBO, and constrained CBO. In this sense, our work stands as a distinct research contribution in its own right. We find it more appropriate to explore these integrations in future research.
**Prior knowledge on the causal structures.** Requiring prior knowledge about the causal graph is indeed a notable limitation of causal Bayesian optimization methods, including our own work. To better communicate this, we now added a limitation section (see our response to reviewer easG for more details on the applicability of our method under causal knowledge requirements). We would like to emphasize that investigating how MO-CBO performs under partial knowledge of the causal structure should be the focus of a separate research effort aimed at developing graph-agnostic (MO-)CBO methods (see, e.g., Mukherjee et al., 2024).
**Multi-task GPs.** Multi-task GPs are a possible technique to increase cost-effectiveness of MO-CBO. We had already discussed this at the end of the paper as a limitation of our work since we believe that an extensive study on multi-task GPs for MO-CBO is best dedicated to future work.
# Literature Review
We do agree with you that our review of the current state of the field would benefit from a more thorough analysis of existing literature! Therefore, we included a review on multi-armed bandit problems as well as a more detailed description of existing CBO variants in order to give a better overview of the methods within causal decision-making. Moreover, we discuss the assumption of having fully vs. partial knowledge of the causal graph in the prior works.
# Proofs
**Intuition of proofs.** Thank you for pointing this out! We did restructure some of the proofs to enhance readability - see our response to reviewer fTDx for a more structured proof of Proposition 4.2. However, we still believe that maintaining mathematical rigor requires some level of technical detail. That said, to enhance readability, we have now included a 1-2 sentence description of the main idea of each proof in the main body of the paper. For instance, for Proposition 3.4 we write: “The rigorous proof of Proposition 3.4 is given in Appendix A, and it exploits the fact that the space of all intervention set-value pairs is the union of the input spaces of each local problem. This allows us to match the Pareto-optimal intervention set-value pairs with the Pareto-optimal solutions from the local problems where the intervention set is fixed.”
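To make the matching idea concrete, a minimal sketch (our own illustration, not the paper's code; minimization is assumed, and the local fronts shown are made-up points): the global causal Pareto front can be recovered by taking the union of the local Pareto fronts and filtering out dominated points.

```python
def pareto_filter(points):
    """Return the non-dominated subset of `points` (minimization).

    q dominates p if q is <= p in every objective and < p in at least one.
    """
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and \
               any(qi < pi for qi, pi in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Union of the local Pareto fronts (toy values), then one global filter:
local_fronts = [[(1.0, 3.0), (2.0, 2.0)], [(1.5, 3.5), (3.0, 1.0)]]
candidates = [p for front in local_fronts for p in front]
global_front = pareto_filter(candidates)  # (1.5, 3.5) is dominated by (1.0, 3.0)
```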
**Proof of Theorem 4.8.** In Theorem 4.8 we show that any intervention on $\mathbf{X}\_s$ is weakly-dominated by some intervention on $\textnormal{IB}(\mathcal{G}\_{\overline{\mathbf{X}}\_s}, \mathbf{Y})$. Thus, we did not construct an SCM here. We assume you mean Proposition B.2 or Proposition 4.7? We are happy to answer any remaining questions regarding these proofs!
# Experiments and Baselines
**Additional real-world experiment.** Following your suggestion, we include one more real-world problem, where the SCM describes macro-economic relations based on real-world data (Höllig et al., 2023). Due to space limitations, we would like to point you to our response to reviewer 7sGg for preliminary results.
**MOBO baselines.** We expand our ablation study to include the MOBO algorithms DGEMO, TSEMO, ParEGO, MOEA/D-EGO, qParEGO and qNEHVI. We find that our method performs better than all baselines across all tasks. For more details (including preliminary results) we refer to our response to reviewer 7sGg.
Thanks again for your thoughtful comments - they greatly contribute to improving the clarity of our work! We hope our explanations have addressed your concerns, and we look forward to hearing back.
## References
Mukherjee et al. Graph Agnostic Causal Bayesian Optimisation. In NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty.
Höllig et al. Semantic meaningfulness: Evaluating counterfactual approaches for real-world plausibility and feasibility, in: xAI, 2023, pp. 636–659. | Summary: The paper considers the problem of multi-objective optimisation with knowledge of the causal graph of the underlying system, where the objectives are the interventional means of a set of target variables. The authors propose a bayesian optimisation solution and prove theoretically that knowledge of the causal graph provides advantages by reducing the search space for optimisation. The authors prove, using synthetic data, that their method is better than the baselines of no knowledge of causality.
Claims And Evidence: I think the claims are supported, with the exception of the applicability of which I talk in more detail in the weaknesses section.
Methods And Evaluation Criteria: I am unsure. I am not aware of the GD and IGD metrics to evaluate multi-objective optimisation problems. I think it would be beneficial for unknowledgeable readers like me to have a very short introduction in the appendix. In addition, I would also include the definitions of HVI and $\mathcal{H}$ in the appendix.
Theoretical Claims: The only proof I read in detail is the proof on the paper. If the rest of the proofs are of the same quality and clarity, I don’t think there should be any problem with those.
Experimental Designs Or Analyses: The experimental analyses seem reasonable to me. I would not personally call the health problem a real-world problem since, in the end, it is using synthetic functions to evaluate the model. Additionally, I would not say “Weight” and “BMI” are easy variables to treat.
Supplementary Material: I checked the details of the experiments but did not check the proofs of the mathematical statements.
Relation To Broader Scientific Literature: This paper extends the research in two clear directions, one relating to the use of causality in Bayesian optimisation and the other relating to multiobjective Bayesian optimisation. I think the approach that the authors are proposing is sensible with respect to what has been already said in these two areas.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: Strengths:
- I think the approach proposed by the authors is sensible and fills some of the gaps of the literature.
- The theoretical analysis is serious and exploits some interesting characteristics of causality.
Weaknesses:
- I find the applicability of the method to be limited. Not in the sense that the method is not –theoretically– general enough, but in the sense that the conditions for the method to be applied are too constrained. When do such multi-objective optimisation problems arise in real world problems where in addition we have knowledge of the causal graph?
- The authors could say that the health problem is one such possibility, but as I argued above, weight and BMI are not easily manipulable. But even beyond that, suppose that the health problem is a real world multiobjective optimisation with known causal graph. Then intervening on the system would render a different causal graph (where the arrows of the intervened variables are deleted) and the optimisation becomes a different problem? Maybe I’m missing something here but it would be interesting to understand this a bit better.
Other Comments Or Suggestions: - There is a LaTeX compilation error on Figure 2 below “Solve the local problems”.
- Line 122, second column should be a subset symbol instead of a set membership symbol, I believe.
- Line 124, second column, any reason for using lowercase $x$ instead of $\bigtimes$ (\bigtimes) (or $\Pi$ (\Pi)) for the product space?
- It seems this is intentional but I find it strange that in line 181 and 202, second column; 273 first column; $\mu_i$ instead of just $\mu$ but then you refer to “for all $i$”.
Questions For Authors: Please see weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer easG,
Thank you for recognizing our contributions and for your insightful feedback pointing out the need to better discuss the applicability of our approach and adding further evaluations.
# Applicability of MO-CBO
Requiring prior knowledge about the causal graph is indeed a notable limitation of current causal optimization methods and of our work. We acknowledge that this limitation was not highlighted sufficiently in the initial version of our paper. To better communicate this, we now added a limitation section.
Firstly, in many domains causal knowledge is readily available. For example, the effects of drugs are studied in experimental trials in the medical field. Knowledge about the effects, side-effects, and interactions is available and an example of causal relationships that can be used for CBO. Secondly, if causal information is not readily available, there exist methods to discover it, falling under the field of causal discovery (Zanga et al., 2022). Here, the corresponding methods aim to identify causal structures from data.
**Health Problem.** We refer to the Health example as a real-world problem since the relations between the variables were derived from real-world analyses (Ferro et al., 2015). We agree that weight and BMI are not easy-to-treat variables. However, they are treatable at high cost. We increased the cost of intervention for the BMI and weight variables in our experiment and added a corresponding discussion. As a result, the distinction between MOBO and MO-CBO is even clearer, with MO-CBO demonstrating a significantly higher coverage of the causal Pareto front compared to MOBO. The IGD under MO-CBO is 0.02, whereas the IGD under MOBO is 0.05, in an output space ranging from 0.05 to 0.4 for the Statin medication (thus the difference in IGD is not negligible).
**Economics Problem.** We include one more real-world problem, where the SCM describes macro-economic relations based on real-world data (Höllig et al., 2023). Due to space limitations, we would like to point you to our response to reviewer 7sGg for preliminary results.
# Graph Representation of Interventions
Regarding your question if intervening on the system would render a different causal graph: An intervention on variables $\mathbf{X}\_s \subseteq \mathbf{X}$ involves replacing the structural equations $f\_{X}$ with a constant $x$ for all $X \in \mathbf{X}\_s$, denoted $\text{do}(\mathbf{X}\_s = \mathbf{x}\_s)$. Thus, performing an intervention removes the influence of all variables on $\mathbf{X}\_s$. The graph $\mathcal{G}\_{\overline{\mathbf{X}}\_s}$ represents this intervention and is obtained by removing incoming edges into $\mathbf{X}\_s$. Therefore, $\mathcal{G}\_{\overline{\mathbf{X}}\_s}$ is used to denote the causal structure under intervention and to perform theoretical analyses regarding the effects of interventions. Importantly, the optimization problem itself remains unchanged.
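The graph mutilation described above can be sketched in a few lines (a minimal illustration under our own assumptions: the graph is represented as a mapping from each node to its parent set, and the toy graph $X \to Z \to Y$, $X \to Y$ is made up):

```python
def do_intervention(parents, intervened):
    """Mutilated graph for do(X_s = x_s): remove all incoming edges
    into the intervened variables, leaving other edges unchanged."""
    return {v: (set() if v in intervened else set(ps))
            for v, ps in parents.items()}

# Toy graph: X -> Z, X -> Y, Z -> Y
parents = {"X": set(), "Z": {"X"}, "Y": {"X", "Z"}}
mutilated = do_intervention(parents, {"Z"})  # do(Z = z) cuts X -> Z
```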
# Performance Metrics
We added the definitions of GD and IGD from Schütze et al. (2012). They are as follows:
Let $\mathbf{A}$ be the set of points from an approximated Pareto front and let $\mathbf{Z}$ be the set of points on the true Pareto front.
**GD.** The GD is the average distance from any point $\mathbf{a}\_i \in \mathbf{A}$ to its closest point in the Pareto front $\mathbf{Z}$. Formally,
$$\text{GD}(\mathbf{A},\mathbf{Z}) = \biggl( \ \frac{1}{|\mathbf{A}|} \sum\_{i=1}^{|\mathbf{A}|} d\_i^p \ \biggr)^{1/p},$$
where $d\_i$ is the Euclidean distance from $\mathbf{a}\_i$ to its nearest point in $\mathbf{Z}$; we set $p=2$ in our experiments.
**IGD.** The IGD measures the average distance from any point $\mathbf{z}\_i \in \mathbf{Z}$ to its closest point in $\mathbf{A}$.
Formally,
$$\text{IGD}(\mathbf{A},\mathbf{Z}) = \biggl( \ \frac{1}{|\mathbf{Z}|} \sum\_{i=1}^{|\mathbf{Z}|} \hat{d\_i}^p \ \biggr)^{1/p},$$
where $\hat{d\_i}$ is the Euclidean distance from $\mathbf{z}\_i$ to its nearest point in $\mathbf{A}$.
The GD evaluates the convergence of the approximated Pareto front to the true front, whereas the IGD measures the diversity of the solutions across the output space.
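The two formulas above translate directly into code. A minimal sketch (hypothetical helper names; points are coordinate tuples, with $p=2$ as in the experiments):

```python
import math

def _avg_min_dist(source, target, p=2):
    # Generalized mean of minimal Euclidean distances from `source` to `target`.
    dists = [min(math.dist(s, t) for t in target) for s in source]
    return (sum(d ** p for d in dists) / len(source)) ** (1.0 / p)

def gd(A, Z, p=2):
    """Generational distance: approximation A -> true front Z (convergence)."""
    return _avg_min_dist(A, Z, p)

def igd(A, Z, p=2):
    """Inverted GD: true front Z -> approximation A (diversity/coverage)."""
    return _avg_min_dist(Z, A, p)
```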
We also implemented your remaining suggestions and included the definition of the hypervolume in the Appendix. We hope our explanations fully address your concerns, and we look forward to your response!
## References
Zanga et al. (2022). A survey on causal discovery: Theory and practice. Int. J. of Approximate Reasoning, 151, 101–129.
Schütze et al. (2012). Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization. IEEE T. on Evolutionary Computation, 16(4), 504–522.
Ferro et al. (2015). Use of statins and serum levels of prostate specific antigen. Acta Urológica Portuguesa, 32.
Höllig et al. Semantic meaningfulness: Evaluating counterfactual approaches for real-world plausibility and feasibility, in: xAI, 2023, pp. 636–659. | Summary: - Decision-making outcomes depend on causal relationships and evaluating them is costly.
- Causal Bayesian optimization uses these relationships to find optimal interventions efficiently.
- Multi-objective causal Bayesian optimization (MO-CBO) extends causal Bayesian optimization to identify Pareto-optimal interventions.
- MO-CBO can decompose into several traditional multi-objective tasks and balance exploration across them.
- MO-CBO is validated on synthetic and real data, outperforming non-causal methods when causal information is available.
Claims And Evidence: - This research tackles an intriguing issue in causal Bayesian optimization.
- The theoretical results strongly back the claims of this study, but I have some concerns regarding the experiments.
Methods And Evaluation Criteria: - Can you provide mathematical definitions of GD and IGD?
- What are PSA and Stain in the Health problem?
- How can we choose $m$ for the number of independent Gaussian processes?
Theoretical Claims: - The authors can provide the proof sketches of the propositions and theorems.
Experimental Designs Or Analyses: - All evaluations at intervention 0 should be identical across MO-CBO and MOBO, if the same random seeds are given for both methods.
- I think the authors can add more baselines such as qNEHVI and ParEGO.
- Why are the results of MO-CBO and MOBO similar in Synthetic-1? Do both simply work well on this problem? If this problem does not convey a particular message, it could be removed.
- The results of the Health problem are also similar across two methods. Why do both methods yield similar results?
Supplementary Material: - I briefly go through the supplementary material.
- It is a minor thing, but Figure 9 should be a table. The font size of this table is too large.
Relation To Broader Scientific Literature: - This work aligns with causal Bayesian optimization and broader scientific literature on Bayesian optimization.
Essential References Not Discussed: - I don't think there are specific references not discussed.
Other Strengths And Weaknesses: Please see above.
Other Comments Or Suggestions: - Are there additional real-world problems to consider? Including more could enhance the presentation of this work.
- In Figure 2, there is Sec. ??. It should be fixed.
Questions For Authors: - As $m$ becomes large, the computational cost of building a surrogate model increases.
- Could you provide the run-time results for the proposed method?
- How can the causal Pareto front be identified? Is there a difference between determining the standard Pareto front and the causal Pareto front?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer 7sGg,
Thank you for your thoughtful feedback and for recognizing our contributions! We have incorporated all of your suggestions to enhance readability, include additional baselines, and provide a clearer formulation of the causal Pareto front for the community.
# Baselines
Thank you for your suggested improvements to our experimental analysis! We now include the MOBO algorithms DGEMO, TSEMO, ParEGO, MOEA/D-EGO, qParEGO, and qNEHVI as baselines. Additionally, we added another real-world problem, where the SCM describes macro-economic relations and is based on real-world data (Höllig et al., 2023). The average GDs are (qParEGO and qNEHVI are still being implemented):
| | DGEMO | TSEMO | ParEGO | MOEA/D-EGO | MO-CBO (ours) |
|----|:----:|:----:|:----:|:----:|:----:|
| Synthetic-1 | **0.14** | 0.16 | 0.30 | 0.38 |**0.14**|
| Synthetic-2 | 7.55 | 7.78 | 12.50 | 11.95 |**1.85**|
| Health | 0.07 | 0.07 | 0.09 | 0.20 |**0.06**|
| Economics | 4.66 | 4.93 | 5.56 | 6.41 |**1.22** |
Analyzing both GD and IGD, we find that our method performs better than all baselines across all tasks.
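For reference, the standard GD and IGD computations can be sketched as below (this is an assumption on our part that the appendix definitions match the common Euclidean-mean formulation; the paper may use a variant, e.g., a power mean):

```python
import numpy as np

def gd(approx, reference):
    # Generational Distance: mean Euclidean distance from each point of the
    # approximation set to its nearest point on the reference Pareto front.
    d = np.linalg.norm(approx[:, None, :] - reference[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def igd(approx, reference):
    # Inverted Generational Distance: mean distance from each reference
    # (true Pareto front) point to its nearest approximation point.
    return gd(reference, approx)
```

GD rewards approximation points that lie close to the true front, while IGD additionally penalizes poor coverage of the front; reporting both, as done above, guards against either failure mode.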
# Experiments
**IGD and GD values at zero interventions.** Thank you for pointing this out - this is an x-ticks label error. Our algorithm is initialized with prior data that is used to compute a first estimate of the causal Pareto front before starting the optimization loop. We now label the start of the plots with len(prior data) = 5 to resolve this ambiguity.
**Similarity between MOBO and MO-CBO.** It is theoretically grounded in Proposition 4.4 that both methods will converge to the same results when no $Y\_i$ is confounded with its ancestors via unobserved confounders, which is indeed the case for both Synthetic-1 and the Health example. In these scenarios, MOBO intervenes on all treatment variables $\mathbf{X}$, while MO-CBO intervenes only on the parents of $\mathbf{Y}$. Note that intervening on $\mathbf{X}$ yields the same results as restricting the intervention to the parents of $\mathbf{Y}$. As shown in Fig. 4, MO-CBO identifies a greater number of solutions by avoiding unnecessary interventions on elements outside the minimal intervention set defined by the parents of $\mathbf{Y}$. This approach reduces costs while achieving broader coverage of the causal Pareto front. However, when some $Y\_i$ is confounded with its ancestors via unobserved confounders, our method can achieve superior results compared to traditional MOBO approaches, as demonstrated in Fig. 6 for Synthetic-2.
Moreover, we added the definitions of our metrics as well as the runtime results to the Appendix. All methods, including ours, have an average runtime per iteration between 0.3 and 14 seconds. The fastest method is TSEMO with 0.2 - 4 seconds per iteration (depending on the problem) while ours ranges between 3 and 6 seconds.
# General Discussion
**Number of independent Gaussian Processes.** The number of independent Gaussian processes (GP) $m$ is equal to the number of target variables, we describe this in the preliminaries of MO-CBO in Section 2. We train the GPs independently from each other, resulting in a linear increase of computational cost with $m$.
**Proof Sketches.** Thank you for this suggestion - we think this greatly improves the overall understanding of our methodology! We have now included a description of the main idea of each proof within 1-2 sentences in the main body of the paper. For instance, for Proposition 3.4 we write: “The rigorous proof of Proposition 3.4 is given in Appendix A, and it exploits the fact that the space of all intervention set-value pairs is the union of the input spaces of each local problem. This allows to match the Pareto optimal intervention set-value pairs with the Pareto-optimal solutions from the local problems where the intervention set is fixed.”
**Causal Pareto front.** In MO-CBO, the input space comprises all possible intervention set-value pairs. Based on Proposition 3.4, the causal Pareto front is constructed by identifying the Pareto-optimal points across the solutions of local problems, each corresponding to a fixed intervention set. However, since *Pareto front* is a well-established concept in MOBO, we acknowledge that the phrase "causal Pareto front" can cause confusion. To clarify, the causal Pareto front simply represents the best attainable trade-offs *within a causal system*, considering intervention set-value pairs as inputs. Based on your comment, we propose to rename the term to *Pareto front in causal systems*.
We have addressed all of your remaining comments as well as added the clarification on PSA (prostate specific antigen) and Statin (a type of medication) to the experiments section. Thanks again for your comments to enhance the clarity of our work! We look forward to your response.
## Reference
Höllig et al. Semantic meaningfulness: Evaluating counterfactual approaches for real-world plausibility and feasibility, in: xAI, 2023, pp. 636–659. | Summary: This paper presents a unification of Multi-objective Bayesian Optimisation and Causal Bayesian Optimisation to causal settings in which dependence between variables is mediated by a causal graph, maintaining the same assumptions of a known causal graph but not the structural equations or exogenous distribution as CBO. This results in a procedure for performing Multi-objective Causal Bayesian Optimisation in such settings. The authors develop theory that allows them to reduce the search space down from the power set over manipulative variables to a smaller set of so-called "possibly Pareto-optimal minimal intervention sets", which motivates an algorithm that assesses only possibly Pareto-optimal minimal intervention sets to solve MOCBO problems. The performance of their method is validated in three experiments in which a non-causal Multi-objective Bayesian Optimisation procedure acts as a baseline.
Claims And Evidence: The theory and experiments support the claims made by the authors.
Methods And Evaluation Criteria: I think the experiments to evaluate their method are well-designed and demonstrate the value of the proposed method. They demonstrate this by considering settings in which there are and are not hidden confounders to highlight the overlap in and differences between MO-CBO and MOBO.
Theoretical Claims: I went through all proofs. The only thing that struck me as potentially problematic (but more likely a misunderstanding on my part, hence I would appreciate clarification from the authors please) was the proof for Proposition 4.2. My concern is that the proof appears limited in generality under the assumption that all nodes are associated with their own fair binary exogenous variable, and that the structural equations are just sums over the values of the parent nodes. Why is it ok to consider this case only to prove what looks like a very general claim in Proposition 4.2?
Also, is Proposition 4.2 a new result? I am not sure that I have an especially well-calibrated sense of what is significant in this field, but this result seems like it deserves more attention than the authors are currently giving it?
Experimental Designs Or Analyses: As discussed above, I think the experiments are selected well to highlight the value of this method over non-causal counterparts.
Supplementary Material: Yes, I went through all the supplement to look at further experimental details and proofs.
Relation To Broader Scientific Literature: The paper relates well to relevant prior literature in both causal decision-making and multi-objective Bayesian optimisation (MOBO). In particular, the method naturally extends upon the DGEMO algorithm for non-causal MOBO, and the benefits of this innovation are clearly demonstrated in the experiments.
Essential References Not Discussed: None that come to mind.
Other Strengths And Weaknesses: Further strengths are: notation is clear and defined well up front (I found it necessary to refer back to earlier parts of the work to look up notation, but could always find definitions clearly stated earlier); and I appreciated the two **Examples** the authors placed in Section 4 to help with intuition and clarity.
Other Comments Or Suggestions: Page 4, just before Proposition 4.4, there is a typo: you write "...which we **proof** in Appendix B.2..." but you want "...which we **prove** in Appendix B.2..."
Questions For Authors: Please see **Theoretical claims** above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer fTDx,
We greatly appreciate your detailed feedback and your recognition of our contributions! We are happy to address your remaining questions below:
# Proposition 4.2
Proposition 4.2 is based on Proposition 1 of Lee & Bareinboim (2018), which formalizes the concept of minimal intervention sets for graphs with a single target variable $Y$. We extend this result to causal graphs with multiple target variables $Y\_1,…,Y\_m$.
Your question refers to the “if” direction of the proof. Let us first state the definition of minimal intervention sets along with the proposition.
**Definition 4.1** (Minimal intervention set)**.** A set $\mathbf{X}\_s \in \mathbb{P}(\mathbf{X})$ is called a *minimal intervention set* if there exists no subset $\mathbf{X}\_{s}' \subset \mathbf{X}\_s$ such that for all $\mathbf{x}\_s \in \mathcal{D}(\mathbf{X}\_s)$ it holds that $\mu\_i(\mathbf{X}\_s,\mathbf{x}\_s) = \mu\_i(\mathbf{X}\_{s}',\mathbf{x}\_s[\mathbf{X}\_{s}'])$ for $1 \leq i \leq m$, for every SCM conforming to $\mathcal{G}$.
**Proposition 4.2** Given a causal graph $\mathcal{G}$, $\mathbf{X}\_s \in \mathbb{P}(\mathbf{X})$ is a minimal intervention set if and only if it holds $\mathbf{X}\_s \subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$.
# Proof Strategy
The "if" direction of the proposition states: $\mathbf{X}\_s \subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}} \implies \mathbf{X}\_s$ is a minimal intervention set. To prove this, we use contraposition, meaning we will show the equivalent statement: $\mathbf{X}\_s$ is not a minimal intervention set $\implies$ $\mathbf{X}\_s \not\subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$. We show this contrapositive via contradiction: Suppose $\mathbf{X}\_s$ is not a minimal intervention set, but assume for contradiction $\mathbf{X}\_s \subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$. By Definition 4.1, there exists a subset $\mathbf{X}\_{s}' \subset \mathbf{X}\_s$ such that **for all SCMs conforming to** $\mathcal{G}$, intervening on $\mathbf{X}\_{s}'$ yields the same outcome as intervening on $\mathbf{X}\_s$, i.e., $\mu(\mathbf{X}\_s,\mathbf{x}\_s) = \mu(\mathbf{X}\_{s}',\mathbf{x}\_s[\mathbf{X}\_{s}'])$, where $\mathbf{x}\_s [\mathbf{X}\_{s}']$ are the values of $\mathbf{x}\_s$ corresponding to $\mathbf{X}\_s \cap \mathbf{X}\_{s}'$. We construct a specific SCM that conforms to $\mathcal{G}$, meaning that we assign structural equations between the variables within $\mathcal{G}$. Then, we leverage these structural equations along with the assumption $\mathbf{X}\_s \subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$ to establish a contradiction, which is given by $\mu(\mathbf{X}\_s,\mathbf{x}\_s) > \mu(\mathbf{X}\_{s}',\mathbf{x}\_s[\mathbf{X}\_{s}'])$, thereby breaking the equality. This invalidates our initial assumption, yielding $\mathbf{X}\_s \not\subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$. We have thus proven the contrapositive statement.
**Generality.** Since $\mathbf{X}\_s \not\subseteq \textnormal{an}(\mathbf{Y})\_{\mathcal{G}\_{\overline{\mathbf{X}}\_s}}$ is a graph-topological property (i.e., it does not depend on any specific SCM), the above strategy completes the proof. Different SCMs cannot yield different graph-topological properties. The constructed SCM serves only as a tool to establish the desired result. We recognize that this aspect may not have been fully articulated in the original proof and have now revised it to improve clarity. While our proof required some technical modifications to the one by Lee & Bareinboim (2018), the core idea remains unchanged.
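For clarity, the logical skeleton of the argument can be written compactly (using the proposition's own notation):

```latex
\underbrace{\big(P \Rightarrow Q\big)}_{\text{``if'' direction}}
\;\equiv\;
\underbrace{\big(\neg Q \Rightarrow \neg P\big)}_{\text{contrapositive, shown by contradiction}},
\quad\text{where}\quad
P :\Leftrightarrow \mathbf{X}_s \subseteq \mathrm{an}(\mathbf{Y})_{\mathcal{G}_{\overline{\mathbf{X}}_s}},
\qquad
Q :\Leftrightarrow \mathbf{X}_s \text{ is a minimal intervention set.}
```

The contradiction step assumes $\neg Q \wedge P$ and constructs one SCM that violates the "for all SCMs conforming to $\mathcal{G}$" clause of Definition 4.1.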
Thank you for your time and extensive review of all our proofs. We believe your questions and remarks greatly improve the understanding of our derivations for future readers! We hope this clarification fully addresses your concerns, and we look forward to your response.
## References
Lee, S. and Bareinboim, E. Structural causal bandits: Where to intervene? In Advances in Neural Information Processing Systems, 2018.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and for clarifying – indeed I think I failed to retain the "for all SCMs conforming to $\mathcal{G}$" while reading through the proof. Nice work, and I'm happy to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Great, thank you! Happy to have sorted it out. | null | null | null | null | null | null |
Agent Reviewers: Domain-specific Multimodal Agents with Shared Memory for Paper Review | Accept (poster) | Summary: This paper proposes a multi-agent system to simulate the review process of research papers, called Agent Reviewers. It is equipped with multi-agent interaction, a shared memory pool, and multimodal agents. It empowers agent reviewers with observations not only of textual content but also of visual content. The experiments show an improvement over previous works. This work can benefit the research community, especially by giving all researchers more chances to polish their work.
Claims And Evidence: In the experiment section, the authors present abundant results to show the effectiveness of their work. I think these results verify the claims that the authors have proposed.
Methods And Evaluation Criteria: The demonstration of the proposed method is clear. It designs multiple roles to simulate different reviewers during various stages. The review process is reasonable as well. The Shared Memory Pool can further enhance the capability to provide responses of higher quality. The evaluation metrics are reasonable but should be further discussed in the main part of this paper.
Theoretical Claims: There seem to be no theoretical claims.
Experimental Designs Or Analyses: The experimental designs are reasonable, with major experimental results and ablation studies. These experiments verify the claims proposed in Section 1, such as the capabilities of multimodality and SMP.
Supplementary Material: I viewed the supplementary materials briefly. I think the explanation of the metrics could be moved into the main text of this paper.
Relation To Broader Scientific Literature: Yes, this paper can contribute to the research community.
Essential References Not Discussed: There seem to be no essential references that are not discussed.
Other Strengths And Weaknesses: Please see above.
Other Comments Or Suggestions: Please see above.
Questions For Authors: - Are the results different among various conferences? For example, ICLR and NeurIPS.
- Are the results different among various years? For example, ICLR 2024 and ICLR 2023.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed comments.
**Q1**: Suggestion of put the explanation of metrics in the main text of this paper.
**A1**: Thank you for your suggestion. We agree that a clear explanation of metrics is crucial for understanding the paper. However, due to the page limit of the main text in ICML, we opted for a trade-off by providing qualitative descriptions in the main text and detailed explanations in Appendix B.1. We plan to provide a more detailed explanation of the metrics in the main text in a future version, which will be updated on arXiv after acceptance.
**Q2**: Are the results different among various conferences? For example, ICLR and NeurIPS.
**A2**:
We have supplemented the results for ICLR 2024 and NeurIPS 2024. For each conference, 100 papers were randomly sampled for evaluation, and the default LLM GPT-4o-mini was used. The results are shown in the table below. Columns from Recall(S&W) to Jaccard(S&W) represent the analysis of strengths and weaknesses, while columns from F1(Dec.) to G-mean(Dec.) represent the analysis of decisions. "S&W" refers to strengths and weaknesses, and "Dec." stands for decisions. All metrics are better when larger.
|Conference|Method|Recall(S&W)|F1(S&W)|MaxSim(S&W)|Jaccard(S&W)|F1(Dec.)|MCC(Dec.)|Bal.A(Dec.)|G-mean(Dec.)|
|:-|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ICLR 2024|Single Agent|0.347|0.416|0.442|0.270|0.469|0.219|0.609|0.593|
|ICLR 2024|Agent Reviewers|0.409↑|0.453↑|0.476↑|0.307↑|0.605↑|0.385↑|0.705↑|0.705↑|
|NeurIPS 2024|Single Agent|0.357|0.440|0.444|0.299|0.349|0.039|0.525|0.424|
|NeurIPS 2024|Agent Reviewers|0.418↑|0.461↑|0.476↑|0.319↑|0.589↑|0.120↑|0.591↑|0.569↑|
ICLR 2024 and NeurIPS 2024 produce similar outcomes, where Agent Reviewers outperform the Single Agent baseline, achieving an average relative improvement of **23.5%** on ICLR and **44.7%** on NeurIPS, thereby demonstrating consistency.
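As a sketch of how the decision metrics in the table relate to a binary (accept/reject) confusion matrix — assuming the standard binary formulations; the exact computation is given in Appendix B.1:

```python
import math

def decision_metrics(tp, fp, fn, tn):
    # tp/fp/fn/tn: counts from a binary accept/reject confusion matrix.
    sens = tp / (tp + fn)             # recall on accepted papers
    spec = tn / (tn + fp)             # recall on rejected papers
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    bal_acc = (sens + spec) / 2       # balanced accuracy
    g_mean = math.sqrt(sens * spec)   # geometric mean of class recalls
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )                                 # Matthews correlation coefficient
    return {"F1": f1, "MCC": mcc, "Bal.A": bal_acc, "G-mean": g_mean}
```

MCC, Bal.A, and G-mean are all robust to class imbalance, which matters here because conference acceptance rates are far from 50%.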
**Q3**:Are the results different among various years? For example, ICLR 2024 and ICLR 2023.
**A3**:
We have supplemented the results for ICLR 2023, and ICLR 2024. For each conference, 100 papers were randomly sampled for evaluation, and the default LLM GPT-4o-mini was used. The cutoff year for the shared memory pool (SMP) was set to one year prior to the test data year, to avoid data leakage from SMP. The results are shown in the table below. Columns from Recall(S&W) to Jaccard(S&W) represent the analysis of strengths and weaknesses, while columns from F1(Dec.) to G-mean(Dec.) represent the analysis of decisions. "S&W" refers to strengths and weaknesses, and "Dec." stands for decisions. All metrics are better when larger.
|Conference|Method|Recall(S&W)|F1(S&W)|MaxSim(S&W)|Jaccard(S&W)|F1(Dec.)|MCC(Dec.)|Bal.A(Dec.)|G-mean(Dec.)|
|:-|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ICLR 2024|Single Agent|0.347|0.416|0.442|0.270|0.469|0.219|0.609|0.593|
|ICLR 2024|Agent Reviewers|0.409↑|0.453↑|0.476↑|0.307↑|0.605↑|0.385↑|0.705↑|0.705↑|
|ICLR 2023|Single Agent|0.334|0.408|0.432|0.271|0.474|0.169|0.580|0.554|
|ICLR 2023|Agent Reviewers|0.390↑|0.439↑|0.467↑|0.298↑|0.538↑|0.139↓|0.570↓|0.570↑|
Whether in 2023 or 2024, Agent Reviewers have shown better results than the single-agent baseline, demonstrating the consistency of the gains of our method. It should be noted that for ICLR 2023, due to the earlier cutoff year of the SMP, the number of papers in the SMP decreased from 28,372 to 21,156 (a 25% relative decrease), which may cause the performance decline in 2023. | Summary: This paper introduces a multi-agent system that enhances automated peer review. It mimics human review processes by employing domain-specific agents, a multimodal reviewer for visual analysis, and a shared memory pool (SMP) that retrieves past paper reviews for informed evaluation. The system also introduces *Reviews-STD*, a standardized dataset of paper reviews from ICLR and NeurIPS, formatted into strengths, weaknesses, and decisions. Tested on 300 ICLR 2024 papers, *Agent Reviewers* outperforms existing AI-based review systems.
Claims And Evidence: The proposed system shows better results compared to baselines but the setting of the experiments (see below) and some questions (see below) need to be clarified to better review this paper.
Methods And Evaluation Criteria: Please see questions below.
Theoretical Claims: There is no theoretical claims in this paper.
Experimental Designs Or Analyses: I have thoroughly reviewed the experimental designs and identified several critical questions that need to be addressed:
First, there are concerns about the open-source model's performance metrics and how they compare to proprietary models. The paper would benefit from a more detailed analysis of these performance comparisons.
Second, the authors should clarify their plans for code and data availability. Making these resources open-source would greatly enhance reproducibility and allow the broader research community to build upon this work.
Third, there are potential data leakage concerns regarding the use of Gemini and Deepseek v3 models. The authors should address how they prevented any training data overlap between these models and their evaluation dataset.
Supplementary Material: Yes, I have reviewed all the supplementary material.
Relation To Broader Scientific Literature: This paper is related to the agent system for automatic paper reviews. The general idea is to develop automatic agent systems to review papers.
Essential References Not Discussed: The related work has discussed essential references
Other Strengths And Weaknesses: The motivation and research question are important for this field, and the proposed system seems promising, but there are still quite a few questions that need to be addressed for further review.
Other Comments Or Suggestions: To strengthen the system's capabilities, I suggest implementing real-time online search functionality rather than relying exclusively on retrieval-augmented generation (RAG). This would allow the system to access the most current research and developments in the field.
Additionally, the system would benefit from expanding its memory pool to include papers from a broader range of conferences and journals beyond ICLR and NeurIPS. This expansion, combined with real-time search capabilities, would provide more comprehensive and up-to-date context for paper evaluations.
Questions For Authors: I have several important questions for the authors that would help clarify key aspects of the system and potentially influence my evaluation:
First, I would like to understand the scope of background knowledge incorporated into the system. Which specific academic fields and domains are currently covered, and how comprehensive is this coverage?
Second, regarding data quality considerations, could you elaborate on how the presence of low-quality reviews in the training dataset affects the system's overall performance and reliability?
Third, I have concerns about the current architecture using a single LLM for multiple agent roles. This approach may limit the diversity of perspectives and capabilities. Have you considered evaluating the system's performance with different specialized LLMs assigned to different reviewer roles?
Fourth, the disparity between Strength and Weakness scores is notable. Could you provide insights into why the system appears to be less effective at identifying weaknesses compared to strengths in papers?
Fifth, what kind of LLMs are used in the baseline methods in Table 2? Line 307 indicates that these baselines follow their default configurations but it is unclear what LLM they are using.
Finally, the current analysis seems to focus primarily on titles, abstracts, and introductions (line 281). Could you explain this limitation and discuss any plans to extend the analysis to include the full content of papers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your detailed and insightful comments.
**Q1**: What kind of LLMs are used in the baseline methods in Table 2?
**A1**: For the main experiment, all baselines and our Agent Reviewers use GPT-4o-mini for a fair comparison (see the Table 2 header). Other baseline settings, such as hyperparameters, follow their defaults (Line 307, detailed in Appendix B.2). Thanks for your question; we'll clarify this in the main text.
**Q2**: Concerns about potential data leakage of Gemini and Deepseek v3.
**A2**: We selected the latest ICLR 2024 data as the test set, taking measures to avoid data leakage. In the main experiments, we used GPT-4o-mini (knowledge cutoff: Oct. 2023, no leakage). Gemini-exp-1206 (cutoff: Dec. 2023, no leakage) and Deepseek-V3 (cutoff: Jun. 2024, leakage not guaranteed despite our efforts) were used only for cross-LLM generalization verification. The risk is explained in Appendix B.4.
We contend Deepseek-V3 has no serious evaluation-affecting leakage because: 1) its overall performance is inferior to GPT-4o-mini (which has no leakage); 2) when asked to detail papers by title, its hallucinations are severe; 3) Table 12 shows Agent Reviewers outperforms Single Agent with Deepseek-V3, indicating the value of our method.
**Q3**: Background knowledge domains and coverage.
**A3**: We currently focus on reviews in AI. Background knowledge comes from the LLMs' inherent knowledge and the shared memory pool (SMP), which contains 283,728 AI papers and reviews from ICLR 2017-2023 and NeurIPS 2016-2023 on OpenReview, spanning nearly all AI fields. Notably, our training-free method can be applied to other fields by expanding the SMP.
**Q4**: Open-source model performance.
**A4**: Thanks for your suggestion. We've added comparisons with more open-source models (Deepseek-V3-0324, Qwen2.5-VL-72B-Instruct, InternVL-2.5-78B) at this [link](https://imgdrop.io/image/2iYyt).
Overall, GPT-4o-mini performs best, with Deepseek-V3 close behind. The largest open-source models approach or surpass some proprietary ones. There is little difference between the two Deepseek-V3 versions.
**Q5**: Diversity concerns & proposal to use different LLMs for different reviewer roles.
**A5**: We agree on the importance of diversity in review, and tried your suggestion with GPT-4o-mini, Gemini-exp-1206, and Deepseek-V3-0324 as the LLMs for the 3 domain-specific reviewers. Results are at this [link](https://imgdrop.io/image/2iI9B).
We saw no better results with a multi-LLM system. Our analysis is that our system has already enhanced the diversity of review perspectives by endowing domain-specific reviewers with different domain knowledge, so multiple LLMs have limited room for improvement.
**Q6**: Concerns about the impact of low-quality reviews in the training data.
**A6**: Our method is training-free. We initialized the SMP with ICLR 2017-2023 and NeurIPS 2016-2023 papers, and retrieved paper information (summaries, processed reviews) as domain-specific background knowledge for the reviewers.
We agree low-quality reviews can affect performance and mitigate this as follows: 1) reviews in the SMP are processed, retaining AC decisions and LLM-aggregated pros & cons to reduce the impact of individual low-quality reviews; 2) we retrieve 5 domain-related papers for each reviewer as memory to ensure sufficient high-quality reviews.
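As an illustrative sketch only (not our actual SMP implementation, whose retrieval details are given in the paper), top-k retrieval of domain-related papers by cosine similarity over embedding vectors could look like:

```python
import numpy as np

def retrieve_top_k(query_vec, memory_vecs, k=5):
    # Generic RAG-style retrieval: return indices of the k memory entries
    # whose embeddings have the highest cosine similarity to the query.
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    sims = m @ q
    return np.argsort(-sims)[:k]
```

The retrieved entries (paper summaries plus processed reviews) would then be injected into the corresponding reviewer's prompt as background knowledge.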
**Q7**: Why focus on title + abstract + introduction & plans to extend to full paper content.
**A7**: We use only the title, abstract, and introduction in our current method for two reasons: 1) the full text is about 12 times longer, so using it in a multi-agent system with interactions incurs a large overhead; 2) these sections are crucial in peer review, even for humans. We agree the full text can provide richer information and potentially better performance, but challenges like the long context need to be overcome. To extend efficiently, we suggest adding an in-depth reading feature for reviewers: after reading the first three parts, they can raise questions and retrieve relevant full-text parts for further analysis before generating comments.
**Q8**: Insights into why the system identifies paper weaknesses less effectively.
**A8**: Thanks for noting this profound observation, which we made for all tested methods. We posit it is because reviewers' proposed strengths may be similar, e.g., similar to the claimed contributions, while weaknesses vary greatly, making it harder for LLM-generated weaknesses to match humans'.
We studied the similarity of the strengths (str.) and weaknesses (wk.) generated by Agent Reviewers with GPT-4o-mini, Gemini, and Deepseek-V3 for the same article. On average, GPT-4o-mini vs. Gemini: str. 0.664 / wk. 0.484; GPT-4o-mini vs. Deepseek-V3: str. 0.738 / wk. 0.663. Figure 6 shows that most matching strengths fall under "Novelty" and "Research Implications", while weaknesses are dispersed.
**Q9**: Suggestions about real-time online search and expanding the memory pool.
**A9**: Thanks for your comments! We agree and will try these two approaches in future work to offer reviewers more comprehensive knowledge.
**Q10**: Plans for code and data availability.
**A10**: We'll open-source all code and data (including the memory pool) upon acceptance for reproducibility and are preparing for it.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response which has addressed most of my questions and I tend to increase my score by 1.
---
Reply to Comment 1.1.1:
Comment: Thanks again for the detailed comments and insightful suggestions! | Summary: This paper introduces Agent Reviewers, a multi-agent system designed to enhance peer review processes using Large Language Models (LLMs). The system comprises domain-specific agents with a shared memory pool (SMP) that enables them to incorporate historical knowledge and multimodal reviewers that assess visual elements of a paper. The authors construct the largest standardized dataset of paper reviews, Reviews-STD, and evaluate their system on ICLR 2024 submissions. The results indicate superior performance compared to existing AI-based review methods, demonstrating improved quality in review comments and acceptance predictions.
Claims And Evidence: Claims
- Agent Reviewers generates more insightful and diverse reviews compared to existing AI-based review systems.
- The shared memory pool enhances domain-specific knowledge retrieval, leading to improved review accuracy.
- The multimodal reviewer contributes to better assessments by incorporating visual information.
Evidence
- Quantitative Evaluation: The system is benchmarked against AI Scientist, AgentReview, and LLM Review, demonstrating an 8.5% improvement in F1-score and a 10.5% increase in Jaccard index for strengths-weaknesses alignment.
- Decision Analysis: The system achieves a 35.7% improvement in decision F1-score and a 78.9% increase in MCC over existing methods.
- Ablation Studies: Removing the multi-agent discussion or shared memory pool leads to a decline in review accuracy, confirming their effectiveness.
- Case Studies: Examples show how the system provides richer, context-aware critiques by referencing prior research.
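For concreteness, the Jaccard index over matched review points — assuming a standard set-based formulation here, since the paper's exact matching procedure for strengths and weaknesses is described in its appendix — is:

```python
def jaccard(a, b):
    # Jaccard index |A ∩ B| / |A ∪ B| between two sets of matched
    # review points (e.g., generated vs. human strengths/weaknesses).
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)
```

Unlike recall, the Jaccard index also penalizes generated points that have no human counterpart, so it measures alignment in both directions.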
Methods And Evaluation Criteria: System Architecture:
- Multi-Agent Interaction (MI): Different agents collaborate in reviewing, discussion, and final decision-making.
- Shared Memory Pool (SMP): Enables domain-specific knowledge retrieval from prior papers.
- Multimodal Reviewer (MA): Evaluates visual aspects like figures and layout.
Theoretical Claims: This is an application paper.
Experimental Designs Or Analyses: - Benchmarking Against Existing Systems: Compared to AI Scientist, AgentReview, and LLM Review on ICLR 2024 papers.
- Ablation Studies: Tested the impact of removing multi-agent collaboration, shared memory, and multimodal components.
- Impact of Shared Memory Initialization: Evaluated how the cutoff year of included papers affects review quality.
Supplementary Material: - A detailed dataset description (Reviews-STD) covering ICLR and NeurIPS conferences.
- Appendices providing metric definitions, additional experiment details, and prompts used for review summarization.
- Case studies illustrating how the system leverages prior literature to generate well-informed critiques.
Relation To Broader Scientific Literature: This work builds on / is relevant to AI-assisted peer review, Multi-agent systems for research automation (AI Scientist, CycleReviewer). Multimodal AI models incorporating textual and visual data.
Essential References Not Discussed: The related work is comprehensive
Other Strengths And Weaknesses: See above
Other Comments Or Suggestions: How is the NeurIPS dataset obtained, given that only the reviews for accepted papers are released?
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed comments.
**Q1**: How is the NeurIPS dataset obtained, given that only the reviews for accepted papers are released?
**A1**: All our data (papers and reviews) stems from the public data on OpenReview, including the NeurIPS dataset. As you noted, 96% of the NeurIPS papers (2016 - 2024) released on OpenReview were accepted. We describe this statistic in Appendix A. Due to the severe imbalance in the NeurIPS data, we do not use it as test data. Instead, we use ICLR 2024 for evaluation.
We use NeurIPS 2016 - 2023 and ICLR 2017 - 2023 data to initialize the shared memory pool, providing domain-specific reviewers with sufficient background knowledge for review. Although nearly all NeurIPS dataset papers are accepted, the ICLR dataset has an acceptance rate of about 37%. Consequently, papers in the shared memory pool are relatively evenly balanced in acceptance and rejection. | null | null | null | null | null | null | null | null |
TimePro: Efficient Multivariate Long-term Time Series Forecasting with Variable- and Time-Aware Hyper-state | Accept (poster) | Summary: This paper introduces TimePro, a model designed for multivariate long-term time series forecasting, but it is marred by significant writing issues. The main problem lies in the overly complex and unclear explanations. The terminology used, such as "variable- and time-aware hyper-states," is vague and confusing, making the methodology difficult to follow, especially for readers not already familiar with the specific framework. The paper overcomplicates simple ideas with jargon that could have been explained more clearly, detracting from its overall accessibility and readability. Additionally, there is a lack of coherence between sections, with abrupt shifts in focus that make the paper feel disjointed. The presentation of the model and its components lacks clarity, and the descriptions of the model's workings are not intuitive, making it harder for readers to understand the core innovations. The overall structure feels cumbersome, and a more straightforward and concise approach would significantly improve the readability and impact of the paper.
## Update after rebuttal
I checked the rebuttal, and most of my concerns are addressed. Therefore, I raise my score from 2 to 3.
Claims And Evidence: The claim of this paper are not supported by clear evidence. See "Other Strengths and Weaknesses" for details.
Methods And Evaluation Criteria: The proposed method simply applies Mamba to time series forecasting, and the improvements in various modules lack motivation.
Theoretical Claims: The paper does not contain any theoretical claims.
Experimental Designs Or Analyses: The paper includes some ablation experiments; however, due to the lack of motivation behind the module design, these ablation experiments appear meaningless.
Supplementary Material: In Figure 6 of the supplementary materials, the visual performance improvement of the proposed method compared to the first two methods is very limited.
Relation To Broader Scientific Literature: This paper merely applies the Mamba tool to time series forecasting, making the paper seem incremental.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths:
The paper presents extensive experiments on several real-world datasets, validating the model’s robustness across different domains and prediction horizons.
## Weakness:
1. The "multi-delay issue" should be given a strict definition rather than a simple description, as it is the core problem addressed by the paper.
2. The authors claim in the abstract, "Traditional models typically process all variables or time points uniformly, which limits their ability to capture complex variable relationships and obtain non-trivial time representations." How does this statement logically connect to the previously mentioned "multi-delay issue"? It feels like there is a lack of logical linkage between these two sentences, making the paragraph appear confusing.
3. The description of the method in the introduction and abstract does not seem to address the "multi-delay issue" proposed by the authors, which makes the proposed method seem meaningless.
4. In the first and second paragraphs of the introduction, a large portion of the content focuses on the Mamba model, which is not directly related to the research problem (time series forecasting). This deviates from the main topic and wastes space that should focus on the challenges of time series forecasting and the contributions of this research. While Mamba provides relevant background value, the excessive detail introduced does not directly relate to the core issue of this paper and may confuse readers, affecting the overall coherence and academic rigor. The authors should consider streamlining this content and focusing more on the research background that is directly related to the paper's goals, thus better highlighting its innovation and practical significance.
5. Why introduce Transformer-based time series forecasting methods in the third paragraph of the introduction when this paper is based on a Mamba-based time series forecasting method?
6. When introducing "Preliminaries" in Section 3, please first clarify what the inputs and outputs are, as this will prevent confusion for readers who are not familiar with Mamba and time series forecasting.
7. The method section of the paper reads like code documentation, lacking explanations of the motivations behind the module design.
8. The experimental section spends nearly 80% of its space on ablation experiments, leaving very little room for other important information. This makes the experimental section seem sparse.
Other Comments Or Suggestions: See "Other Strengths and Weaknesses".
Questions For Authors: See "Other Strengths and Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts in reviewing our work and providing thoughtful feedback that can further strengthen our manuscript. We have added some experiments following your suggestions and they are available at https://anonymous.4open.science/r/Anonymous_figure-2319/figure_response.pdf. Please see our detailed responses to your comments below.
### 1 Multi-delay issue definition
We have defined the multi-delay issue in both the abstract and paragraph 3 of the introduction. To make this definition strict, we modify the definition as follows.
The multi-delay issue in multivariate time series forecasting is defined as the temporal discrepancy in the propagation of influence from different predictor variables to the target variable, characterized by the presence of distinct and non-uniform time lags between the changes in predictor variables and their corresponding effects on the target variable.
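To make the stated definition concrete, here is a small self-contained sketch (synthetic data; `x1`, `x2`, and `best_lag` are invented for illustration and are not part of the paper), in which two predictors influence a target with distinct, non-uniform lags that are then recovered by maximizing lagged correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, true_lags = 500, {"x1": 3, "x2": 7}

# Two predictors that affect the target y at different delays
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)
y = np.roll(x1, true_lags["x1"]) + np.roll(x2, true_lags["x2"])

def best_lag(x, y, max_lag=20):
    # Pick the shift of x that correlates most strongly with y
    corrs = [np.corrcoef(np.roll(x, k), y)[0, 1] for k in range(max_lag)]
    return int(np.argmax(corrs))

print(best_lag(x1, y), best_lag(x2, y))  # → 3 7
```

The distinct recovered lags (3 vs. 7) are exactly the "non-uniform time lags" the definition refers to; a model that processes all time points of all variables uniformly has no mechanism to single them out.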
### 2 Logical connection
To effectively address the multi-delay issue, the model must identify and capture the critical temporal points of each variable within the input time series data. However, traditional models uniformly process the different time points of the same variable, causing the key temporal information to be obscured by a vast number of irrelevant time series features. Therefore, traditional models cannot solve the multi-delay issue. We will incorporate the above content into the revised paper.
### 3 Addressing multi-delay issue
We have detailed how TimePro solves the multi-delay problem in paragraph 4 of the introduction. During the scan of Mamba, a specialized network is employed to learn the offsets of critical time points. By adaptively selecting these key time points, TimePro dynamically updates the hidden states to reflect the most salient temporal information. Through this adaptive selection of key temporal features, the model focuses on the delay time of each variable, thus addressing the multi-delay problem.
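For readers less familiar with state-space scans, here is a toy sketch of the basic linear recurrence such scans build on (purely illustrative; `ssm_scan` and the fixed coefficients `a`, `b` are invented here and are not TimePro's components — a selective model such as Mamba makes the coefficients input-dependent, and TimePro, per the rebuttal, additionally learns offsets to pick out critical time points):

```python
import numpy as np

def ssm_scan(x, a=0.9, b=0.1):
    """Toy linear state-space scan: h_t = a * h_{t-1} + b * x_t."""
    h = 0.0
    states = []
    for x_t in x:
        h = a * h + b * x_t   # hidden state carries a decaying summary of the past
        states.append(h)
    return np.array(states)

states = ssm_scan(np.ones(5))
# states accumulate toward the steady state: ≈ 0.1, 0.19, 0.271, ...
```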
### 4 Excessive Mamba detail
We are sorry for the confusion. Taking your suggestion into consideration, we will revise the first section to focus mainly on the advantages of Mamba in time-series forecasting, thereby better aligning with the theme. In the second paragraph, we introduce some Mamba-based works that are relevant to our approach, which helps the reader understand our contribution and its differences from previous work. We will simplify the descriptions of these methods and strengthen their summarization in relation to the theme. We will revise the last sentence of the second paragraph as follows: The mentioned related works often employ different ways to scan features from various directions. However, these methods overlook the fact that different variables have different impact durations on the target variable, which limits their performance.
### 5 Reason of Transformer-based methods introduction
Although TimePro is based on Mamba, transformer-based models dominate the time series community. Therefore, we need to describe these models in the introduction and motivate the reader to understand the shortcomings of these models. As we mention in lines 59-66 of our paper, transformer-based models also can’t capture critical time points. We believe that using some space to describe this work is necessary to help make our motivation clearer.
### 6 Clarify inputs and outputs
The inputs and outputs of the model refer to arbitrary sequences, as shown in lines 145-146 of our paper. Considering the complex mechanism of Mamba, we need to depict the working mechanism of Mamba before introducing our TimePro (i.e., Preliminaries), which facilitates the understanding of our approach. This writing structure is also adopted by other Mamba-based models, including VMamba (NeurIPS2024), ViM (ICML2024) and S-Mamba.
### 7 Lacking explanations of the motivations
We present our motivation before introducing our core modules (HyperMamba and Hyper-Scan), as shown in lines 227-241 and lines 220-229. In addition, we also introduce the motivation for Time and variable preserved embedding, as shown in lines 191-197 of our paper. For other parts such as ProBlock and Linear projection, we do not present the motivation behind them as they do not involve architectural innovations. We will be more specific in describing the motivation before each module description in the final version.
### 8 Supplementary experiments
We have provided some other experimental results in the figure response, including R1: Efficiency comparison, R2: Memory and inference time with different channels, etc. In addition, we will put Fig. R1 and R2 into the final version to increase the space of the comparison experiment. Other experiments will be placed in the appendix.
We will follow your suggestions for the writing to reorganize our final version. If the rebuttal helps address your concerns, we kindly ask that you increase your score to give TimePro a fighting chance!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Unfortunately, I still have the following concerns.
**Definition of the multi-delay issue**
The concern is not merely about how the issue is defined in words. If this is a widely acknowledged problem in the field, then proper citations to prior work are expected. On the other hand, if this issue is first proposed by your paper, then there should be clear evidence demonstrating its existence — either through compelling experimental results or theoretical justification.
**Logical connection**
Thank you for the clarification. This is not a major concern.
**Addressing the issue**
Again, thank you for the clarification. However, similar to the concern above, is there any justification (e.g., controlled experiments, ablation studies, or theoretical analysis) that demonstrates your method’s effectiveness in specifically addressing the multi-delay issue?
**Excessive detail on Mamba and input/output description**
These are structural issues. I believe the paper will improve after revision.
**Explanation of motivations**
This connects back to the first concern. The key point is to first establish that the issue exists, and then clearly explain how your method addresses it.
**Supplementary experiments**
This is not a major concern.
Overall, my main concern lies in the lack of justification for the existence of the multi-delay issue, and whether TimePro effectively addresses this issue in a demonstrable way.
# Reply to Reply Rebuttal Comment by Authors
Thanks for addressing my concerns. I'm raising my score to 3 (Weak Accept). Figure R7 is intuitive and should be provided in the revised version. Good luck, and I wish you well.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time devoted to reviewing our paper and your constructive comments. After previous discussions, we notice that you are confused about the existence of the multi-delay issue and how we can justify solving it. We have added some experiments following your suggestions and they are available at https://anonymous.4open.science/r/Anonymous_figure-2319/figure_response.pdf. Then, Please see our detailed responses to your comments below.
## Existence of the multi-delay issue
The multi-delay issue is a fundamental research topic in the multivariate time series forecasting. The multi-delay problem is mentioned in references [1-3]. We apologize for the confusion caused by not citing these papers. We will follow your comments and add some references [1-3] to the revised version.
[1] Detecting time lag between a pair of time series using visibility graph algorithm.
[2] Lag penalized weighted correlation for time series clustering
[3] Multivariate time delay analysis based local KPCA fault prognosis approach for nonlinear processes
## TimePro effectively addresses multi-delay issue in a demonstrable way
We demonstrate that TimePro effectively addresses the multi-delay issue in two experiments.
- Quantitative comparison in Tables 2 and 3. First, we perform an ablation analysis, as shown in Table 3. The results show that, compared to scanning only in the variable dimension, TimePro's MAE is reduced by 0.010. This result indicates that the adaptive time-tune strategy helps capture key time features and improves model performance. In particular, TimePro has fewer parameters and GFLOPs than S-Mamba, which rules out the possibility that the improvement stems from additional parameters or computational cost.
These quantified results suggest that the time-tune strategy can mitigate the multi-delay issue.
- Qualitative comparison in Figure R7.
We first add a visualization experiment to further validate the effectiveness and interpretability of TimePro for the multi-delay issue, as shown in Figure R7 of the figure response. We choose test sequences from the ETTm1 and ETTh1 datasets as examples. Specifically, we first calculate the correlation of the label sequences (i.e., ground truth) using the Pearson correlation coefficient:
$r_{xy}=\frac{\sum_{i=1}^L(x_i-\overline{x})(y_i-\overline{y})}{\sqrt{\sum_{i=1}^L(x_i-\overline{x})^2\cdot\sum_{i=1}^L(y_i-\overline{y})^2}}$, where $x_i, y_i \in \mathbb{R}$ run through all time points of the paired variates to be correlated.
We then visualize the correlation between the variable features before and after HyperMamba. Figure R7 shows that TimePro selects important time points through the time-tune strategy, which drives the learned multivariate correlations closer to the label sequences. This suggests that TimePro effectively mitigates the detrimental effects of the multi-delay issue on accurate variable relationship modeling.
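The Pearson coefficient quoted above is the standard one; a short sketch matching the formula term by term (illustrative only; the arrays `x`, `y` are placeholder data, not from the paper):

```python
import numpy as np

def pearson(x, y):
    # r_xy = sum((x_i - x̄)(y_i - ȳ)) / sqrt(sum((x_i - x̄)^2) * sum((y_i - ȳ)^2))
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(pearson(x, y))  # → 1.0 (perfectly linearly related series)
```

In the rebuttal's setting, this coefficient is computed over the L time points of each pair of variables, yielding the correlation matrices compared before and after HyperMamba in Figure R7.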
Besides, we also perform a visual ablation experiment in Fig. R6. It can be observed that TimePro's prediction curves are closer to the label curve than scanning only in the variable dimension or the time dimension.
Furthermore, TimePro's prediction curve is more similar to the ground truth in Fig. R4 and R5 . In contrast, S-Mamba and iTransformer map the variables as coarse embeddings, ignoring the delayed impact of different time points of each variable on the predicted sequence, leading to poorer results.
These qualitative results effectively validate TimePro's ability to mitigate the multi-delay problem.
We will add detailed definitions and references of the multi-delay issue, Figure R7 and corresponding analyses to the final version. If the rebuttal helps address your concerns, we kindly ask that you increase your score to give TimePro a fighting chance! | Summary: This paper introduces TimePro, a novel Mamba-based framework designed for multivariate long-term time series forecasting. The core contribution lies in its variable- and time-aware hyper-state mechanism, which dynamically refines hidden states by adaptively selecting critical temporal intervals to address the multi-delay problem. Extensive experiments across popular benchmarks demonstrate superior performance over existing methods.
## Update after rebuttal
I will keep my score.
Claims And Evidence: The claims are supported by evidence.
Methods And Evaluation Criteria: The proposed method is evaluated on five widely used real datasets, including ETT (4 subsets), Exchange, Electricity, Weather, and Solar-Energy.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Reasonable experimental designs and analyses.
Supplementary Material: Provide more implementation details and results.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
1. The proposed method is technically sound.
2. Extensive experiments across popular benchmarks demonstrate superior performance over existing methods.
3. Detailed ablation studies justify the design choices.
**Weaknesses**
1. I am not sure that the improvement over S-Mamba is significant enough, e.g., 0.251 vs. 0.250 for Weather and 0.398 vs. 0.392 for ETTm1.
2. In Fig. 3, "It consists of two parts, including plain state acquisition in the GPU SRAM and time tuning in the GPU HBM." However, there are no details supporting this statement, and it is not easy to understand.
Other Comments Or Suggestions: Figure 6 is insufficiently clear, making it difficult to discern critical details.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts in reviewing our work and providing thoughtful feedback that can further strengthen our manuscript. We have added some experiments following your suggestions and they are available at https://anonymous.4open.science/r/Anonymous_figure-2319/figure_response.pdf. Please see our detailed responses to your comments below.
### 1 Analysis of the improvement
First, TimePro outperforms S-Mamba on seven datasets, and the improvement is significant on some datasets. For example, TimePro has a 0.009 and 0.015 improvement in MSE and MAE for the ETTh1 dataset, respectively. In addition, TimePro has a 0.011 and 0.008 improvement in MSE and MAE, for the Exchange dataset, respectively. These improvements fully validate the effectiveness of TimePro.
Second, as mentioned in Fig. 5 of the manuscript, TimePro can significantly benefit from longer lookback window lengths because it preserves more local details in the time dimension. So, when the lookback window length is 96, TimePro's improvement on the Weather and ETTm1 datasets is 0.001 and 0.006, respectively. However, when the lookback window length is 336, the improvement is 0.008 and 0.01, respectively. This improvement is significant and further validates TimePro's potential. Finally, we provide a detailed efficiency analysis in Figure R1 in the figure response. It shows that TimePro has a better efficiency performance compared to S-Mamba. In summary, TimePro strikes a better balance between efficiency and performance than S-Mamba.
### 2 details for this statement
As shown in Figure R3 of the figure response, we add some details in Fig. R3 and modify the corresponding caption. These details include: 1) textual explanations of GPU SRAM and HBM; 2) additions to the formulas in the grey boxes so that the reader can understand the initial state generation process; 3) some symbolic additions, such as the state $h$ and hyper-state $h_o$, which correspond to Eq. 10-13 in the manuscript; 4) optimizations of some process components to enhance aesthetics and clarity; 5) textual additions to the captions, which further enhance the reader's understanding of hyper-scan. In addition, given that the final version has an extra page, we will add a more detailed description in Sec. 4.2. We hope these modifications resolve your confusion.
### 3 Fig. 6 Blurring
We are sorry that Figure 6 was not clear enough. We have uploaded the image in PDF format, as shown in Figure R4 of the figure response. In addition, we have also uploaded the visualization comparison experiment on the ECL dataset in PDF format, as shown in Figure R5. The clarity of these images is guaranteed. It can be observed that the prediction curve of TimePro is much closer to the ground truth. This further validates the effectiveness of TimePro.
We will add the above modifications and the corresponding analyses to the final version. If the rebuttal helps address your concerns, we kindly ask that you increase your score to give TimePro a fighting chance!
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the author's rebuttal, and decide to raise my score.
---
Reply to Comment 1.1.1:
Comment: # Thanks for your feedback
Thank you very much for raising the rating. Your thoughtful comments have helped to improve this paper a lot! | Summary: This paper proposes TimePro, a Mamba-based model for multivariate long-term time series forecasting. By introducing a hyper-state mechanism that adaptively selects critical temporal intervals, TimePro aims to address the multi-delay problem, where variables influence targets over heterogeneous time spans. Empirical results across various benchmarks demonstrate competitive performance compared to Mamba- and Transformer-based baselines, with claims of linear computational complexity.
Claims And Evidence: 1. TimePro achieves state-of-the-art performance in multivariate long-term forecasting by adaptively modeling variable-specific temporal dependencies, which is justified by Table 2 and Fig. 1, where TimePro obtains superior MSE/MAE on various datasets.
2. This paper claims that the proposed hyper-state mechanism effectively captures both variable interactions and intra-variable temporal dynamics. Ablation studies (Tables 3–5) demonstrate that combining bidirectional variable scanning and time-aware offset learning reduces MSE and improves the performance.
Methods And Evaluation Criteria: This paper aligns with standard protocols (MSE/MAE metrics) across various benchmarks (e.g., ETT, Weather).
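For reference, the standard MSE/MAE protocol mentioned here reduces to a few lines (a minimal sketch; `pred` and `true` are placeholder arrays, not results from the paper):

```python
import numpy as np

def mse(pred, true):
    # Mean squared error: penalizes large deviations quadratically
    return np.mean((pred - true) ** 2)

def mae(pred, true):
    # Mean absolute error: penalizes all deviations linearly
    return np.mean(np.abs(pred - true))

pred = np.array([0.5, 1.5, 2.0])
true = np.array([1.0, 1.0, 2.0])
print(mse(pred, true), mae(pred, true))  # MSE ≈ 0.167, MAE ≈ 0.333; lower is better
```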
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental framework is rigorous, incorporating comprehensive ablation studies (e.g., feature dimensions, encoder depth) and complexity analyses to isolate the contributions of key components.
Supplementary Material: The supplementary material contains more implementation details and results.
Relation To Broader Scientific Literature: TimePro advances Mamba-based time series forecasting by addressing the multi-delay problem through dynamic time-aware hyper-states.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: **Strengths**
1. This paper is well written and the proposed method is technically sound.
2. The integration of bidirectional variable scanning with time-aware offset learning effectively captures dynamic variable-temporal interactions, addressing a clear limitation in existing Mamba variants.
3. Extensive experiments on diverse datasets (e.g., ETT, Weather) demonstrate the effectiveness of the proposed method. The design choices are well justified by ablation studies.
**Weaknesses**
1. While theoretical complexity is analyzed (Table 1), real-world metrics (e.g., inference speed) are absent, which are important for justifying the advantages of the proposed method.
2. The hardware-aware implementation of hyper-scan (Fig. 3) lacks details and poses challenges for understanding this part.
3. As shown in Fig. 4, For the Exchange dataset, increasing the number of layers from 1 to 2 obtains slightly poorer performance. More discussions and insights are encouraged.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts in reviewing our work and providing thoughtful feedback that can further strengthen our manuscript. We have added some experiments following your suggestions and they are available at https://anonymous.4open.science/r/Anonymous_figure-2319/figure_response.pdf. Please see our detailed responses to your comments below.
### 1 real-world metrics are absent
Concerning your comments, we have added two experiments, including Figures R1 and R2 in the figure response.
In Figure R1, we provide a detailed comparison of efficiency metrics including parameters, FLOPs, training time and inference time. It can be observed that TimePro possesses better computational efficiency than previous methods. Specifically, TimePro has a training time and inference time similar to S-Mamba and significantly outperforms other convolutional or transformer-based methods. For example, TimePro obtains lower prediction errors with 2.7 and 14.4 times the inference speed of PatchTST and TimesNet, respectively. Moreover, TimePro has the fewest parameters and FLOPs and the lowest memory consumption. For example, TimePro requires only 67% of the parameters and 78% of the GFLOPs of S-Mamba. In addition to satisfactory efficiency, TimePro also outperforms recent advanced models, including iTransformer and S-Mamba, in forecasting performance. These results demonstrate TimePro's lightweight design and its suitability for deployment in a variety of real-world scenarios where resources are constrained.
In Figure R2, we further explore the efficiency of TimePro with different variable channels, which can be seen as a complement to Table 1 in our manuscript. It can be observed that as the number of variable channels increases, iTransformer shows a quadratic increase in both the memory and inference time, which severely compromises the efficiency of the model and limits practical applications. In addition, PatchTST has unsatisfactory efficiency under all variable channels. In contrast, the linear scaling ability compared to variable channels, small memory consumption and high inference speed validate TimePro's efficiency.
### 2 lacks details in Fig. 3
Thanks to your comments, we have added some details in Figure R3 in the figure response. These details include: 1) textual explanations of GPU SRAM and HBM; 2) additions to the formulas in the grey boxes so that the reader can understand the initial state generation process; 3) some mathematical symbol additions such as the state h and hyperstate ho, which correspond to Eq. 10-13 in the manuscript; 4) optimizations of some process components to enhance aesthetics and clarity; 5) textual additions to the captions, which further enhances the reader's understanding of hyper-scan.
### 3 More discussions of Fig. 4
In Figure 4, as the number of layers increases from 1 to 4, the prediction error of the model first decreases and then gradually increases. This is because when the number of layers is 1, the model is shallow and cannot capture complex time-varying and variable relationships. When the number of layers is large, for example 3 or 4, the model suffers from overfitting, which impairs its generalizability to the test set; therefore, a slight increase in prediction error occurs.
We will add the above experiments and the corresponding analyses to the final version. Thanks again for your constructive suggestions! | Summary: This paper proposes TimePro, a Mamba-based model for multivariate long-term time series forecasting. TimePro adaptively selects critical time points to refine variable states, preserving temporal granularity and capturing dynamic variable relationships. Experiments on various benchmarks show competitive performance with existing Mamba- and Transformer-based methods.
## Update after rebuttal
Thanks for the authors' responses. My concerns have been well addressed.
Claims And Evidence: This paper claims TimePro achieves state-of-the-art results with linear complexity. Table 2 reports superior MSE/MAE across datasets. Complexity analysis (Table 1) confirms linear scaling with sequence length.
Methods And Evaluation Criteria: This paper follows standard evaluation criteria with prior methods.
Theoretical Claims: NA.
Experimental Designs Or Analyses: The experimental designs and analyses are technically sound.
Supplementary Material: The supplementary material provides details of datasets and implementation, as well as full experimental results.
Relation To Broader Scientific Literature: The work builds effectively on Mamba-based models for multivariate long-term time series forecasting.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: **Strengths**
1. This paper is well organized and well-written.
2. The idea of incorporating variable- and time-aware hyper-state construction is reasonable and interesting.
3. Comprehensive ablation studies (e.g., feature dimensions, encoder layers) provide insights into design choices.
**Weaknesses**
1. Limited discussion on computational overhead (e.g., training time, inference time) compared to baselines.
2. Is it possible to provide some visualization results to better understand the proposed modules?
Other Comments Or Suggestions: See weakness.
Questions For Authors: NA.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts in reviewing our work and providing valuable feedback that can further strengthen our manuscript. We have added figures with more experiments following your suggestions and they are available at https://anonymous.4open.science/r/Anonymous_figure-2319/figure_response.pdf. Please see our detailed responses to your comments below.
### 1 computational overhead discussion
With reference to your comments, we have added two experiments, including Figures R1 and R2 in the figure response.
In Figure R1, we provide a detailed comparison of efficiency metrics including parameters, FLOPs, training time and inference time. It can be observed that TimePro possesses better computational efficiency than previous methods. Specifically, TimePro has a training time and inference time similar to S-Mamba and significantly outperforms other convolutional or transformer-based methods. For example, TimePro obtains lower prediction errors with 2.7 and 14.4 times the inference speed of PatchTST and TimesNet, respectively. Moreover, TimePro has the minimal parameters, FLOPs, and memory consumption. For example, TimePro requires only 67% parameters and 78% GFLOPs of S-Mamba. In addition to satisfactory efficiency, TimePro also outperforms recent advanced models including iTransformer and S-Mamba in forecasting performance. These results demonstrate TimePro's lightweight and suitability for deployment in a variety of real-world scenarios where resources are constrained.
In Figure R2, we further explore the efficiency of TimePro with different variable channels, which can be seen as a complement to Table 1 in our manuscript. It can be observed that as the number of variable channels increases, iTransformer shows a quadratic increase in both the memory and inference time, which severely compromises the efficiency of the model and limits practical applications. In addition, PatchTST has unsatisfactory efficiency under all variable channels. In contrast, the linear scaling ability compared to variable channels, small memory consumption and high inference speed validate TimePro's efficiency.
### 2 visualization effect of Hyper-Scan
We provide some visualization results to validate the effectiveness of Hyper-Scan, as shown in Figure R6 in the figure response. It can be observed that models that scan only in the time dimension or only in the variable dimension obtain suboptimal prediction curves. Specifically, when scanning only in the variable dimension (Fig. a), the model's prediction curves are smoother, with a poorer fit at the extremes. We attribute this to the model's inability to capture the details of local changes within variables. When scanning only in the time dimension (Fig. b), the model's ability to predict extremes improves, but the average accuracy remains poor. This is due to its failure to capture complex variable relationships. When we use non-adaptive hyper-scanning (Fig. c), the model's average accuracy and ability to predict extreme values both improve, but the performance is still unsatisfactory. Finally, when we apply the adaptive hyper-scan, i.e., TimePro (Fig. d), the model perceives both variable relationships and salient temporal information, resulting in more accurate predictions.
We will add these experiments (i.e., Figures R1, R2, R6) and the corresponding analyses to the final version. Thanks again for your valuable comments! | null | null | null | null | null | null |
Robust Conformal Outlier Detection under Contaminated Reference Data | Accept (poster) | Summary: The authors analyze the impact of contamination on the validity of conformal methods. They show that under realistic, non-adversarial
settings, calibration on contaminated data yields conservative type-I error control. This conservativeness, however, typically results
in a loss of power. To alleviate this limitation, they propose a novel, active data-cleaning framework that leverages a limited labeling budget and an outlier detection model to selectively annotate data points in the contaminated reference set that are suspected as outliers.
Claims And Evidence: The claims made by the authors are supported by clear evidence, both theoretical and experimental in nature.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for demonstrating the validity of the proposed approach.
Theoretical Claims: There are two novel theoretical claims in the paper, whose proofs seem both largely correct.
Experimental Designs Or Analyses: The experiments appear to be sound and valid, and support the theoretical claims.
Supplementary Material: I have only read the proofs of Lemma 2.2 and Theorem 3.1, which are both largely correct.
Relation To Broader Scientific Literature: The related literature is discussed in sufficient depth. A suggestion for further related works is given in the Questions section.
Essential References Not Discussed: None; see also Question **Q1**.
Other Strengths And Weaknesses: The approach seems novel and intuitively appealing. The results proved are very interesting, and the impact of the assumptions (and, more generally, of the limitations) is discussed at length.
Other Comments Or Suggestions: See the Questions section.
Questions For Authors: **Q1** How does the concept of contaminated reference set relate to the ambiguous ground truth case? Refer e.g. to
https://openreview.net/forum?id=CAd6V2qXxc
https://openreview.net/forum?id=L7sQ8CW2FY
https://proceedings.neurips.cc/paper_files/paper/2024/hash/d42a8bf2f40555d4a5120300f98c88f6-Abstract-Conference.html
I'd like the authors to comment on this (possible) relationship.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the careful review and encouraging feedback. We answer your question below.
> **R1:** How does the concept of contaminated reference set relate to the ambiguous ground truth case?
To the best of our understanding, the papers you referenced study ambiguity in the labeling process within a multi-class classification setting, where some examples may plausibly belong to multiple classes and this ambiguity is explicitly reflected in the labels. In contrast, our work focuses on outlier detection rather than classification, and our setting does not involve any explicit label ambiguity: all calibration points are labeled as inliers, but some may in fact be outliers. The contamination is entirely latent: we have no indication of which points are mislabeled, nor any signal of uncertainty in the labels.
That said, our results do offer some connection to the point of view of the papers you mentioned. As shown in Lemma 2.2 and supported by our experiments, when there is uncertainty about a point’s status, treating it as an inlier is a conservative strategy that preserves type-I error control. This highlights a potentially interesting connection between label ambiguity and reference set contamination, though the two settings at this point still appear quite fundamentally different in how they represent and handle possible labeling errors.
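To make the conservative behavior concrete: with a standard split-conformal p-value, calibration outliers with (stochastically) larger scores tend to inflate the p-value, so rejections become rarer and type-I error control is preserved. The toy sketch below illustrates this; the score distributions and all numbers are invented for illustration and are not from the paper.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    # Standard split-conformal p-value (higher score = more outlying).
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

rng = np.random.default_rng(0)
clean_cal = rng.normal(0.0, 1.0, 500)        # hypothetical inlier scores
outlier_scores = rng.normal(4.0, 1.0, 25)    # ~5% latent contamination
contaminated_cal = np.concatenate([clean_cal, outlier_scores])

test_score = 2.5
p_clean = conformal_pvalue(clean_cal, test_score)
p_cont = conformal_pvalue(contaminated_cal, test_score)
# The outliers' large scores inflate the p-value (p_cont > p_clean),
# making rejections rarer: conservative type-I error control.
```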
In any case, we agree this connection is worth exploring further in the future and will cite the papers you mentioned while incorporating this discussion into Section 5 of the revised manuscript. | Summary: This manuscript, titled "Robust Conformal Outlier Detection under Contaminated Reference Data," focuses on the problem of conformal outlier detection in the presence of contaminated reference data. It discovers that in non-adversarial scenarios, data contamination makes conformal prediction methods conservative and reduces their detection power. To address this, the manuscript proposes the Label-Trim method, which utilizes a limited labeling budget and an outlier detection model to remove outliers from the contaminated data. Theoretical analysis demonstrates that this method can approximately and effectively control the error rate under certain conditions. Experiments comparing multiple methods on several datasets show that the Label-Trim method can significantly enhance the detection power while controlling the type-I error rate, and it performs particularly well in scenarios with low error rates and low contamination rates.
Claims And Evidence: The claims in the manuscript are supported by clear and convincing evidence. Theoretically, the conservativeness of the conformal outlier detection method and the effectiveness of the Label-Trim method are proven through derivations. Experimentally, the comparison results on multiple real-world datasets support the claims of the article.
Methods And Evaluation Criteria: In this manuscript, the proposed Label-Trim method and evaluation criteria are well-suited to the problem of outlier detection with contaminated reference data. The Label-Trim method effectively deals with contaminated data by using a limited labeling budget and an outlier detection model. The use of multiple benchmark datasets and comparison with multiple baseline methods comprehensively assess the method's performance.
Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims in the paper. The proofs of Lemma 2.2 and Theorem 3.1 are particularly crucial. For Lemma 2.2, which analyzes the conservativeness of standard conformal p-values under contaminated data, the proof is clear and the logic rigorous. The proof of Theorem 3.1, which validates the Label-Trim method, is also well-structured: it constructs an imaginary "mirror" version of the method to analyze the relationship between different quantiles. These proofs provide solid theoretical support for the claims in the paper.
Experimental Designs Or Analyses: I've examined the experimental designs and analyses in the paper, and they are generally sound and valid. They use a diverse set of benchmark datasets, including three tabular datasets (shuttle, credit card, KDDCup99) and six visual datasets. This variety helps capture different data characteristics and real-world scenarios, enhancing the generalizability of the results.
Supplementary Material: I have reviewed the supplementary material of this paper, mainly focusing on two key parts. The first part I reviewed is the "Datasets" section in the supplementary material. This information is crucial as it allows readers to understand the data characteristics and the experimental setup better, ensuring the reproducibility of the experiments. The second part is the "Supplementary Experiments and Implementation Details" for both tabular and visual datasets. This comprehensive data in the supplementary material strengthens the experimental evidence presented in the paper.
Relation To Broader Scientific Literature: The paper's key contributions are closely related to the broader scientific literature in multiple ways. The Label-Trim method in this paper builds on the existing understanding of conformal prediction. By addressing the over-conservativeness problem in contaminated data scenarios, it fills a gap in the literature. It provides a new approach to enhance the power of conformal methods while maintaining type-I error control, which is an important addition to the body of knowledge on outlier detection and conformal prediction.
Essential References Not Discussed: After a thorough review, there don't seem to be any essential related works that are not cited or discussed in the paper. The paper comprehensively references prior research on conformal inference under distribution shifts, robustness to data contamination, and outlier detection.
Other Strengths And Weaknesses: S1: The paper's focus on outlier-robust conformal outlier detection with contaminated reference data fills a significant gap in the existing literature. While many studies assume clean reference data in conformal prediction, this work directly addresses the practical issue of contamination, offering new insights into the behavior of conformal methods under such conditions.
S2: The Label-Trim method is a creative solution. By combining a pre-trained outlier detection model with a limited labeling budget to selectively clean the contaminated reference set, it presents a unique approach to enhancing the power of conformal outlier detection while maintaining type-I error control. This method offers a practical alternative to existing data-cleaning strategies in the context of conformal prediction.
S3: The theoretical analysis of the conservativeness of conformal methods in the presence of contaminated data and the validation of the Label-Trim method contribute to the theoretical understanding of conformal prediction. The results can serve as a foundation for further research in robust conformal inference and outlier detection.
S4: The paper is well-structured, with a clear introduction that motivates the research problem, followed by detailed sections on setup, methods, experiments, and discussion. Each section is logically connected, making it easy for readers to follow the flow of the research.
W1: The effectiveness of the Label-Trim method depends on the accuracy of the pre-trained outlier detection model. If the outlier detection model performs poorly, the performance of the Label-Trim method may be severely affected. The paper does not thoroughly explore how to select or improve the outlier detection model for better performance of the overall system.
W2: Although the paper uses visualizations to present experimental results, there is a lack of visual aids to help readers understand the working mechanism of the Label-Trim method.
W3: While the synthetic data experiments are useful, the authors could explore a wider range of outlier injection strategies.
W4: In real-world datasets, the preprocessing steps might be too simplistic.
W5: There could be more diverse baselines considered.
W6: While type-I error rate and power are important metrics, additional metrics could be considered.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and encouraging feedback. To fully address your comments, we have conducted additional experiments, available at https://tinyurl.com/rcod-exps. In our responses below, we refer to these as Supp-Figure X and Supp-Table Y.
> **R1:** The effectiveness of the Label-Trim method depends on the accuracy of the pre-trained outlier detection model. The paper does not thoroughly explore how to select or improve the outlier detection model for better performance of the overall system.
You're absolutely right that the performance of Label-Trim depends on the quality of the outlier detection model used to rank points for annotation. To explore this further, and in response to your concern (as well as R4 from Reviewer Zuz4), we conducted new experiments using outlier detection models with lower performance than Isolation Forest, specifically Local Outlier Factor and one-class SVM on the tabular datasets. As shown in Supp-Figures 2–3, when the underlying model struggles to distinguish inliers from outliers, Label-Trim does not show meaningful power gains over the standard conformal method. This is expected, as the model’s scores no longer reliably identify outliers.
That said, as with any outlier detection task, we recommend using the best model available or fine-tuning a given model using a small set of labeled outliers, something that naturally improves performance. Importantly, Label-Trim is model-agnostic: it imposes no constraints on model complexity or architecture. This ensures that our approach remains compatible with future developments in outlier detection.
> **R2:** Although the paper uses visualizations to present experimental results, there is a lack of visual aids to help readers understand the working mechanism of the Label-Trim method.
Thank you for this excellent suggestion. We agree that a visual explanation would help communicate the logic and steps of the method more clearly. In the revised version, we will include a new schematic illustration that walks through the Label-Trim pipeline.
> **R3:** While the synthetic data experiments are useful, the authors could explore a wider range of outlier injection strategies.
We appreciate this point and want to clarify that all our experiments are conducted on real-world datasets (both inliers and outliers originate from real applications). That said, we agree that modeling more nuanced corruption is important. In our response to Reviewer 2tsw (R4), we include additional experiments with more strategic outlier injection strategies, which make the detection task more challenging.
> **R4:** In real-world datasets, the preprocessing steps might be too simplistic.
We’re not entirely sure what specific aspect of preprocessing you are referring to. In our work, we follow the preprocessing protocols established in prior studies (Bates et al., 2023; Zhang et al., 2024; Yang et al., 2022). Our main departure is the introduction of contamination into the training and calibration sets to reflect realistic scenarios more closely. If there are particular concerns about the preprocessing that we overlooked, we would be happy to address them.
> **R5:** There could be more diverse baselines considered.
We agree that comparisons with a broader set of baselines are useful. In addition to the four methods evaluated in the main manuscript, we have now added results for two additional outlier detection models on tabular data (Local Outlier Factor and one-class SVM), as well as two new models for visual data (ReAct with a VGG19 backbone and SCALE with a ResNet backbone). We’ve also included new experiments under higher contamination rates (up to 15%) and across a range of type-I error levels. As shown in the updated results, type-I error is consistently controlled, and power improves when the detection model separates inliers from outliers with reasonable accuracy.
> **R6:** While type-I error rate and power are important metrics, additional metrics could be considered.
We focused on type-I error and power because they are the standard evaluation metrics in the conformal prediction and outlier detection literature. These metrics directly capture the validity (false positive control) and utility (detection rate) of the method. To provide additional insight, we also report the number of trimmed outliers in the contaminated reference set as a measure of cleaning effectiveness. | Summary: This paper studies conformal outlier detection with contaminated reference sets. It theoretically shows that non-adversarial contamination induces conservative type-I error control, explaining empirical performance gaps. To address power loss, the authors propose Label-Trim: an active data-cleaning framework leveraging limited labeling budgets to annotate and remove suspected outliers from high-scoring regions. Theoretical analysis proves Label-Trim maintains approximate error control under practical conditions. Experiments on tabular and vision datasets validate that standard conformal methods become conservative under contamination, while Label-Trim recovers detection power without inflating errors, achieving near-oracle performance when contamination rates are low.
Claims And Evidence: The paper’s key claims—conservative error control under contamination and Label-Trim’s power recovery—are supported by theoretical proofs (Lemma 2.2, Theorem 3.1) and experiments on tabular/vision datasets. Empirical validation includes score distribution analysis (Fig.1), type-I error trajectories (Fig.2,4), and detection rate comparisons (Fig.2-3, Table1). Near-oracle performance at low contamination is numerically confirmed. Experiments exclude high contamination rates (>5%), limiting claims about robustness to realistic but challenging data shifts.
Methods And Evaluation Criteria: The proposed Label-Trim method aligns with practical constraints: it uses a pre-trained outlier detector to prioritize high-scoring calibration samples for limited manual labeling, then trims confirmed outliers. This leverages model confidence to focus labeling efforts, avoiding random or exhaustive cleaning. Evaluation employs standard conformal metrics (type-I error rate, detection power) on 3 tabular and 6 vision datasets, with controlled contamination rates (1%-5%). Baseline comparisons include Oracle (clean reference), Standard (no cleaning), Naive-Trim (remove top scores without labels), and Small-Clean (random labeling).
While the datasets are established in outlier detection, the contamination simulation (random outlier injection) oversimplifies real-world scenarios where outliers may strategically mimic inliers. The 5% contamination cap excludes high-pollution cases common in practice (e.g., 10%-20%). The focus on low-error regimes (α=0.01-0.03) matches safety-critical applications but ignores moderate α settings. Isolation Forest (tabular) and ReAct (vision) are reasonable model choices, but using only one detector per data type (without testing alternatives) limits confidence in generalizability. The dependence on detector quality is not systematically tested, weakening claims about robustness across detector architectures.
Theoretical Claims: The proofs for Lemma 2.2 and Theorem 3.1 are mathematically correct under their assumptions (e.g., i.i.d. inliers, fixed contamination).
Experimental Designs Or Analyses: Experiments are sound for core claims: controlled contamination (1%-5%) tests type-I error/power trade-offs. Tabular (Isolation Forest) and vision (ResNet) benchmarks are standard. However:
1. Contamination is simulated via random outlier injection, ignoring realistic contamination scenarios.
2. High contamination (>5%) and real-world drift (e.g., temporal shifts) are untested.
3. Only one detector per data type is used; architecture variations are unexplored.
Supplementary Material: I have reviewed the supplementary material (code repository) provided with the submission. The implementation details in the codebase align well with the methodology described in the main paper, and the provided scripts demonstrate reproducibility of the experiments.
The code repository predefines interfaces supporting multiple anomaly detection models. However, the experiments in this paper employ only one model per data type without comparative analysis of alternative models implemented in the code. This omission misses an opportunity to validate whether the proposed framework’s performance remains consistent across different algorithmic choices, potentially limiting insights into the robustness of the methodology.
Relation To Broader Scientific Literature: This paper is the first to bridge conformal prediction with outlier detection under reference set contamination, introducing a novel intersection of these fields. Prior conformal works focused on covariate/label shifts (Tibshirani et al., 2019) or label noise (Sesia et al., 2024), but none addressed **contaminated calibration sets** in outlier detection. Unlike semi-supervised anomaly detection (Jiang et al., 2022), which assumes partial labels, Label-Trim uses limited labels to clean reference data while preserving conformal guarantees—a unique hybrid approach. Theoretically, it extends conservative error bounds (Sesia et al., 2024) to **unknown outlier distributions**, avoiding explicit noise modeling. Compared to worst-case robustness analyses (Barber et al., 2023), this work identifies practical conservatism under non-adversarial contamination, aligning with empirical patterns. Experiments validate the framework on both tabular and vision data, broadening conformal methods beyond traditional single-domain applications.
- Tibshirani, R. J., Foygel Barber, R., Candès, E., and Ramdas, A. Conformal prediction under covariate shift. Advances in Neural Information Processing Systems, 32, 2019.
- Sesia, M., Wang, Y. R., and Tong, X. Adaptive conformal classification with noisy labels. J. R. Stat. Soc. Series B, pp. qkae114, 2024.
- Barber, R. F., Candès, E. J., Ramdas, A., and Tibshirani, R. J. Conformal prediction beyond exchangeability. Ann. Stat., 51(2):816–845, 2023.
- Jiang, X., Liu, J., Wang, J., Nie, Q., Wu, K., Liu, Y., Wang, C., and Zheng, F. SoftPatch: Unsupervised anomaly detection with noisy data. Advances in Neural Information Processing Systems, 35:15433–15445, 2022.
Essential References Not Discussed: None
Other Strengths And Weaknesses: **Strengths:**
1. **Originality:** This paper is the first to study conformal prediction for outlier detection under reference set contamination, bridging anomaly detection and robust statistics.
2. **Practicality:** Label-Trim’s simplicity suits real-world deployment.
3. **Clarity:** Figures (e.g., score distributions) intuitively explain conservative behavior.
**Weaknesses:**
1. **Moderate innovation:** The method’s simple pipeline (score-sort-label) lacks deeper algorithmic novelty compared to recent advances.
2. **Assumption-heavy theory:** Relies on independent and identically distributed inliers, limiting real-world applicability.
3. **Narrow scope:** Focus on low contamination (≤5%) excludes high-noise scenarios common in practice.
Other Comments Or Suggestions: 1. **Dynamic Budget Allocation:** Explore adaptive labeling budgets (e.g., increasing *m* when contamination is suspected) instead of fixed *m=50*.
2. **Detector-Agnostic Analysis:** Systematically test Label-Trim with alternative architectures (e.g., autoencoders, One-Class SVM) to assess generalizability.
Questions For Authors: 1. How does Label-Trim perform on naturally contaminated datasets? Synthetic noise may not reflect real-world anomalies.
2. How does Label-Trim perform on high contamination rate (>10%) data?
3. Would the method fail catastrophically when the number of outliers exceeds the labeling budget *m*?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. We appreciate the opportunity to provide some clarifications. To address your questions and strengthen the empirical foundation of our claims, we have also conducted a range of new experiments available at https://tinyurl.com/rcod-exps, which we reference below as Supp-Figure X and Supp-Table Y.
Our goal in this paper is to address a critical gap in conformal outlier detection: robustness to contamination in the reference set. To the best of our knowledge, no prior work offers a fully satisfactory solution to this problem. While our proposed method is deliberately simple, its theoretical guarantees are non-trivial, and our new experiments demonstrate its practical robustness.
> **R1:** The method’s simple pipeline lacks deeper algorithmic novelty
We understand this concern, though we believe the intuitive nature of our approach is a practical strength. Moreover, while the method is intentionally simple, establishing its validity is technically non-trivial. Our analysis required developing a novel proof technique, which we believe offers theoretical and methodological value beyond the specific application studied in this paper.
> **R2:** Assumes i.i.d. inliers
You're right that conformal prediction traditionally assumes i.i.d. inliers and a clean reference set. Our work relaxes the second of these assumptions: we allow the reference set to be contaminated with non-i.i.d. outliers. The assumption we retain is that the inliers themselves are i.i.d. We agree that relaxing the i.i.d. assumption on the inliers is an exciting direction for future work. However, doing so would likely require additional assumptions on the dependency structure, going outside the scope of this paper. We will clarify this point in the revision.
> **R3:** Focus on low contamination ≤5%
Thank you for pointing this out. As described in our response to Reviewer 2tsw, we’ve extended our experiments to include higher contamination levels, and continue to observe consistent trends.
> **R4:** Systematically test Label-Trim with alternative models
We appreciate the suggestion. In addition to Isolation Forest (tabular) and ReAct with a ResNet backbone (visual) used in the main manuscript, we’ve now added results for:
Tabular data: Local Outlier Factor (LOF) and One-Class SVM (OCSVM)
Visual data: ReAct with a VGG19 backbone and SCALE with a ResNet backbone
As shown in Supp-Figures 2–3, Label-Trim consistently controls the type-I error across all models. As expected, when the outlier detection model is less effective (e.g., LOF, OCSVM), the candidate set becomes noisier and power decreases. Still, our method improves over the baseline. For the visual data, Supp-Tables 1–2 show results that closely mirror the trends in the main manuscript. We’ll include these in the revision.
> **R5:** Random outlier injection oversimplifies real-world scenarios where outliers may strategically mimic inliers
We completely agree and have addressed this in our response to Reviewer 2tsw, where we design and test more challenging outlier scenarios.
> **R6:** Real-world drift are untested
We share your interest in this important issue. In our response to Reviewer 2tsw, we present new experiments simulating drift in the outlier distribution over time, and find that Label-Trim remains robust.
> **R7:** Explore adaptive labeling budgets instead of fixed m=50.
We refer you to our response to Reviewer 2tsw, where we discuss the role of the annotation budget m, and show that Label-Trim performs well across a range of values. We also highlight that its performance degrades gracefully as the budget decreases.
> **R8:** How does Label-Trim perform on naturally contaminated datasets?
Please see our response to Reviewer 2tsw, where we explain why controlled contamination is necessary for rigorous evaluation and provide additional experiments modeling more realistic contamination.
> **R9:** Would the method fail catastrophically when the number of outliers exceeds the labeling budget m?
No. Both our theoretical results and experiments show that type-I error control is preserved regardless of how small m is relative to the number of outliers. For instance, in Figure 3 of the main manuscript and Supp-Figure 1 (right panel), the number of outliers significantly exceeds the labeling budget, yet Label-Trim still controls the error and achieves meaningful power gains.
> **R10:** The focus on low-error regimes (α=0.01-0.03) matches safety-critical applications but ignores moderate α
We’ve added experiments evaluating performance for higher type-I error thresholds. As shown in Supp-Figure 9, Label-Trim continues to perform well across all tested α levels. We'll incorporate these results into the revised version. | Summary: This paper analyzes the impact of reference-set contamination on the validity of conformal methods. The paper proves that under realistic, non-adversarial
Claims And Evidence: This paper focuses on detecting outliers with conformal prediction in the context of contaminated reference data, which is both an interesting and important problem.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The authors provide a solid and new theoretical analysis of the proposed method in terms of the type-I error rate.
Experimental Designs Or Analyses: Extensive experiments on real data validate that standard conformal outlier detection methods are conservative under contamination and show that the proposed method improves power without sacrificing validity in practice.
Supplementary Material: No.
Relation To Broader Scientific Literature: Yes.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Weaknesses:
1. In most experiments, m is fixed to 50. Generally, how should $m$ be set to achieve good model performance?
2. This paper primarily considers injected outliers within the contaminated calibration set. How does the proposed method perform when applied to real data that is inherently contaminated?
Other Comments Or Suggestions: No.
Questions For Authors: See the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We’ve conducted additional experiments to answer your questions and further support and clarify our contributions. These results are available at https://tinyurl.com/rcod-exps, and we refer to them as *Supp-Figure X* and *Supp-Table Y* in our responses below.
> **R1:** In most experiments, m is fixed to 50. Generally, how should it be set to achieve good model performance?
We view the annotation budget m as primarily determined by resource constraints (how much it costs to label the data points) rather than as a tuning parameter that can be optimized by a general mathematical argument. This is because the power of our method depends not just on m, but also on the performance of the outlier detection model and the underlying contamination level, both of which are hard to characterize theoretically without much stronger assumptions than we make in this work.
That said, Figure 3 in the main manuscript provides interesting practical insight. In that experiment, the calibration set contains approximately 80 outliers. As shown in the figure, even a small annotation budget (e.g., m ≈ 20) is enough for Label-Trim to achieve power nearly matching that of the oracle conformal method that uses clean reference data. This suggests that even a modest budget (on the order of a quarter of the expected number of outliers) can already yield strong performance. Even more encouragingly, the power of Label-Trim appears to vary quite smoothly with the labeling budget in these experiments, indicating graceful performance degradation under tighter annotation constraints.
Thank you for giving us the opportunity to further highlight this important aspect of Figure 3. We will clarify this message and the associated practical guidance in the revised version.
> **R2:** How does the proposed method perform when applied to real data that is inherently contaminated?
To answer this question, which also relates to Reviewer Zuz4’s comments, we conducted additional experiments simulating various realistic contamination scenarios. We summarize these results at the end of this response.
We also wish to clarify that all our existing experiments are already based on real-world datasets, meaning both the inliers and outliers originate from actual applications. Since our method relies on selective annotation under a limited budget, we simulate contamination in the reference set by injecting outliers. This setup reflects a realistic scenario where labeling resources are constrained and full manual annotation is impractical due to the need for domain expertise.
Moreover, a controlled contamination process is essential for evaluating our method because computing the performance metrics requires knowing the ground-truth labels. Specifically, to estimate type-I error, we need to know which test points are inliers, and to measure detection power, we need to know which test points are outliers.
Summary of additional experiments:
- **Varying contamination rate.** We extend Figure 2 by increasing the contamination rate up to 15%. As shown in Supp-Figure 1, Label-Trim maintains type-I error control while continuing to outperform standard conformal and the “small clean” baseline in terms of power.
- **Strategic outlier injection.** Instead of injecting outliers at random, we selected outliers that resemble inliers—those falling below a given score percentile. Supp-Figure 5 (shuttle dataset) shows how these lower-percentile outliers increasingly resemble inliers, while Supp-Figure 4 demonstrates that Label-Trim still controls the error and improves power. Similar trends hold on the credit-card and KDDCup99 datasets (Supp-Figures 6–7).
- **Test-time distribution drift.** On the shuttle dataset, we simulate drift in the outlier distribution by contaminating the calibration set with high-percentile outliers and gradually shifting to harder, low-percentile outliers at test time. As shown in Supp-Figure 8, Label-Trim remains robust, maintaining error control and power throughout the distribution shift.
We will revise the paper to clarify how our experimental design reflects real-world constraints and to incorporate discussion of these new results demonstrating robustness to a range of contamination scenarios. | null | null | null | null | null | null |
Variational Phylogenetic Inference with Products over Bipartitions | Accept (poster) | Summary: This paper targets variational inference of ultrametric phylogenetic trees and proposes a method called VIPR.
Although much effort has been devoted to machine-learning-based variational phylogenetic inference, very few researchers have considered ultrametric trees.
VIPR samples an ultrametric phylogenetic tree by executing single-linkage clustering on a learnable distance matrix parametrized by log-normal distributions.
A main contribution of this paper is the density formula for this tree distribution (Proposition 1), which allows training VIPR with gradient-based methods.
The authors validate the effectiveness of VIPR on the DS1-11 and Cov-2 benchmarks and compare VIPR to the state-of-the-art method VBPI.
## update after rebuttal
First of all, I would like to thank the authors for their detailed response, which addresses my questions well. I have one further question on the convergence speed of VIPR. The authors show that it converges faster than other baselines (e.g., VBPI). However, this reflects a design choice in VBPI, which uses an annealing schedule to encourage exploration over the tree space. In other words, there is a trade-off between the speed of convergence to a good tree likelihood value and the coverage of the high-posterior region. I would expect the convergence speed of VIPR to slow down a bit with a similar annealing schedule, but to end up with better coverage of posterior trees (this would potentially improve the performance of VIPR in terms of marginal likelihood estimation). It would also be interesting to show how close the approximate tree topology distribution provided by VIPR is to the ground-truth posterior from MCMC.
Overall, I really like the idea of constructing a variational distribution over ultrametric trees from a distribution of pairwise distances through single-linkage clustering. It would be interesting to explore how to make it flexible enough for complicated tree posteriors, which seems a bit challenging, as it is nontrivial how correlations between pairwise distances would translate into correlations between tree topologies. That said, I have updated my review accordingly.
Claims And Evidence: Although this paper presents a sound methodology for VIPR, its potential advantages (e.g., inference accuracy or speed) are not revealed by the experiments.
For inference accuracy, the likelihoods of VIPR's inferred trees lag behind those from VBPI, as suggested by Figure 3(b, c) and Table 2.
For inference speed, no results directly report the computation time of the different methods.
Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria make sense.
Theoretical Claims: I've read the proof for Proposition 1 but not checked it carefully.
Experimental Designs Or Analyses: - As the variational family in VBPI applies to the general class of additive phylogenetic trees, the authors should clearly explain how they use VBPI to infer ultrametric trees.
- GeoPhy (NeurIPS 2023; https://arxiv.org/pdf/2307.03675) considered a NJ algorithm on distance matrix for constructing phylogenetic trees, similarly to VIPR. It should be considered as an important baseline in terms of inference speed and accuracy.
- Figure 3(c) (VBPI is better) contradicts Table 2 (VIPR-VIMCO is better).
Supplementary Material: I reviewed Appendix A and B.
Relation To Broader Scientific Literature: Many prior works have considered variational phylogenetic inference, e.g., VBPI and GeoPhy, but few of them have considered inferring ultrametric trees.
This paper makes a contribution in this sense.
Essential References Not Discussed: - The claim "The VBPI baseline requires MCMC runs to determine likely subsplits (i.e., evolutionary branching events)." in Sec 4.2 is not accurate. A notable progress of VBPI is ARTree (NeurIPS 2023; https://arxiv.org/abs/2310.09553), which does not rely on subsplits and should be discussed.
- Phyloformer (https://www.biorxiv.org/content/10.1101/2024.06.17.599404v1) constructs a phylogenetic tree with a neighbor joining algorithm on pairwise representations. This idea is similar to VIPR.
Other Strengths And Weaknesses: Weaknesses:
- The title page violates the ICML format (one-column titles and missing author information). I suggest the authors cut down the length of Results and Discussion to create some space for the title page.
- I think ultrametric trees should be defined by "the leaves of the trees are all equidistant from the root", and the authors' definition in Sec 2.1 seems somewhat misleading.
- The authors do not clearly explain the $N_e$ in the prior distribution in Sec 2.3.
Other Comments Or Suggestions: There is one typo, Line 32: PhylogGFN.
Questions For Authors: I have no other questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review: see our responses below.
*For inference accuracy, VIPR's trees' likelihoods lagged behind VBPI. For inference speed, no results report computation time.*
The aspect where VIPR shines is time (or number of parameter updates) to attain a given approximation error, as shown in Figure 5 of Appendix B. For example, on DS1 VIPR-LOOR achieves an estimated marginal log-likelihood (MLL) of -7175 after ~100 iterations, while VBPI10 takes ~1000. For DS5, VIPR-LOOR achieves an MLL of -8300 after ~200 iterations, while VBPI10 takes ~5000.
Figure 5 compares iterations, and we agree that we should also report computation time. Thus, we ran all VI methods on simulated datasets with varying numbers of taxa for 1,000 iterations and reported computation time (see our response to Reviewer 1 for the simulation procedure).
Seconds/1,000 iterations:
taxa | VBPI10 | VBPI20 | LOOR | REP | VIMCO
---|---|---|---|---|---
8 | 24 | 45 | 55 | 74 | 55
16 | 43 | 88 | 110 | 150 | 112
32 | 94 | 192 | 234 | 314 | 240
64 | 192 | 381 | 475 | 633 | 473
128 | 500 | 913 | 1,016 | 1,383 | 1,018
256 | 1,560 | 2,395 | 2,150 | 2,952 | 2,162
512 | 6,958 | 8,780 | 5,014 | 6,822 | 5,060
One iteration is one parameter update. Our method is ~twice as slow as VBPI per iteration for 8 taxa, but it scales better and outperforms VBPI for 512 taxa. We will add the results to Figure 4 and the Appendix.
We improved our code since submission (see our response to Reviewer 1). VIPR's primary computational bottleneck is now the phylogenetic likelihood, which takes $\mathcal{O}(NM)$ time.
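For readers unfamiliar with the bottleneck mentioned here: the phylogenetic likelihood is typically computed with Felsenstein's pruning algorithm, which costs $\mathcal{O}(NM)$ for $N$ taxa and $M$ sites with a fixed alphabet. A hedged sketch under the Jukes-Cantor model (illustrative only, not the authors' implementation):

```python
# Hedged sketch: Felsenstein pruning under the Jukes-Cantor model.
# All names and the tiny example tree below are illustrative.
import numpy as np

def jc_transition(t):
    """Jukes-Cantor transition-probability matrix for branch length t."""
    p_same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    p_diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), p_same, p_diff)

def log_likelihood(postorder, children, branch_len, tip_partials):
    """postorder: node ids, leaves before parents; children[v] = (left, right).
    tip_partials[v]: (M, 4) one-hot site partials for leaf v."""
    partials = dict(tip_partials)
    for v in postorder:
        if children.get(v) is None:      # leaf: partials already set
            continue
        l, r = children[v]
        pl = partials[l] @ jc_transition(branch_len[l]).T
        pr = partials[r] @ jc_transition(branch_len[r]).T
        partials[v] = pl * pr            # combine the two child messages
    root = postorder[-1]
    site_lik = partials[root] @ np.full(4, 0.25)   # uniform root frequencies
    return float(np.sum(np.log(site_lik)))

# Two-leaf example: leaves 0 and 1 joined at root 2; sites "AC" vs "AG".
tips = {0: np.eye(4)[[0, 1]], 1: np.eye(4)[[0, 2]]}
ll = log_likelihood([0, 1, 2], {2: (0, 1)}, {0: 0.1, 1: 0.1}, tips)
```

The loop visits each of the $N-1$ internal nodes once and does $\mathcal{O}(M)$ work per node (with the 4-state alphabet a constant factor), giving the $\mathcal{O}(NM)$ cost mentioned above.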
*How is VBPI used for ultrametric trees?*
Zhang and Matsen IV [ICLR 2019] applies to general additive phylogenetic trees, but the follow-up paper [Zhang and Matsen IV, JMLR2024] extends the approach to ultrametric trees in Sections 6 and 7. The github repository (https://github.com/zcrabbit/vbpi-torch) contains code for ultrametric trees (in the directory "rooted") that we use in our experiments. We will make this more clear in the manuscript by providing a Github repository link and referencing the sections within the JMLR paper.
*GeoPhy should be considered*
GeoPhy is similar to VIPR in that it uses a tree construction algorithm on a distance matrix, but it is used for unrooted trees while we focus on ultrametric trees.
*Figure 3(c) (VBPI is better) contradicts with Table 2 (VIPR-VIMCO is better).*
Figure 3(c) reports the marginal log-likelihood, but Table 2 reports the ELBO. For the COVID dataset, VBPI is better in marginal log-likelihood, and VIPR-VIMCO is better in ELBO.
*ARTree should be discussed.*
We will change Section 4.2 (line 264, column 2): "We compare VIPR to the VBPI algorithm as implemented by Zhang and Matsen IV (2024), which uses MCMC runs to determine likely subsplits in an SBN."
We will also add ARTree to the introduction (line 26, column 2): "For example, ViaPhy (Koptagel et al., 2022) uses a gradient-free variational inference approach and directly sample from the Jukes and Cantor (1969) model, GeoPhy (Mimori and Hamada, 2023) uses a distance-based metric in hyperbolic space to construct unrooted phylogenetic trees, and ARTree (Xie and Zhang, 2023) uses graph neural networks to construct a deep autoregressive model for variational inference over phylogenetic tree structures."
*Phyloformer is similar to VIPR*
We will mention Phyloformer in the introduction: see our response to Reviewer 1.
*The title page violates the ICML format*
Thank you for catching the title formatting error. This arose due to a copy/paste mistake. As Reviewer 2 noted, our Figures are not too space-hungry, and we could resolve this by cutting down the length of the Results and Discussion as you suggest, and improving the location of the Figures.
*Ultrametric trees should be (re)defined*
We will adopt the suggested definition (line 74, column 1):
"We focus on ultrametric trees, in which the leaves of the trees are all equidistant from the root. We denote our ultrametric trees with a rooted, binary tree topology tau and a set of coalescent times ..."
*The $N_e$ in the prior distribution is unclear*
We expanded section 2.3:
"We use the Kingman coalescent (Kingman 1982) as the prior distribution on the trees. This coalescent process proceeds backward in time with exponentially distributed inter-event intervals, and coalescent events occurring at rate $\lambda_k = \binom{k}{2}/N_e$, where $k$ is the number of taxa and $N_e$ is the effective population size, a parameter which governs the rate at which species coalesce. We fix $N_e = 5$ in our experiments. At each coalescent event, a pair of taxa are chosen to coalesce into a single taxon uniformly at random over all pairs of taxa. ... "
We fixed the "PhylogGFN" typo, thank you for catching this. | Summary: This paper proposes a variational Bayesian phylogenetic tree analysis method using a matrix representation of tree structures. Phylogenetic tree analysis is one of the important analytical techniques used to estimate the developmental process and diffusion pathways of a target, and is more and more in demand in formulating future preventive measures, for example, for recent infectious disease pandemics. Conventional Bayesian phylogenetic tree analysis faces several challenges. One is the use of Markov chain Monte Carlo methods in many models and algorithms, whose efficiency, both theoretically and empirically, is not yet clear. Another is that many of them do not explicitly include the fusion time in the phylogenetic tree in their models, and as a result, those methods cannot properly capture the ultrametric nature of the tree structure. As a way to solve these two problems, this paper proposes a model and its inference method that can properly reflect ultrametric by explicitly modeling coalescence time using a tree structure representation that has high affinity to variational methods. The effectiveness of the proposed method is demonstrated with 7 data frequently used in many recent Bayesian phylogenetic analyses and the more practical SARS-Cov-19 data.
Claims And Evidence: The main claim of this paper is that using a matrix representation of tree structures has two benefits: (1) ultrametric structure can be captured, and (2) a differentiable variational representation can avoid MCMC sampling, whose mixing behavior is not yet clearly understood either theoretically or empirically.
One minor concern is that a similar tree-structured matrix representation has been studied independently in another paper [Bouckaert2024] very recently. However, the authors, in fairness, identify differences and improvements over prior work in Section 2.5.
Methods And Evaluation Criteria: This paper uses a variational phylogenetic tree representation that utilizes a matrix representation of the tree structure and derives an inference algorithm using three choices of loss functions. Through experiments, the proposed method is compared to previous state-of-the-art phylogenetic tree analysis methods (including the most related and recent one [Zhang&Matsen, JMLR2024]). The evaluation criteria used are the marginalized likelihood and ELBO in terms of learning and prediction performance.
One minor concern to me is whether the lack of an ablation study makes it difficult to quantitatively assess the improvement of the proposed method over the other related study, [Bouckaert2024]. Would it be difficult to show the reduction in performance when restricted to the restricted tree space of [Bouckaert2024] within the framework of the proposed method?
Theoretical Claims: The key theoretical contribution of this paper is that through the matrix representation of the tree structure, the variational distribution obtains an easy-to-handle closed-form representation, as described in Proposition 1.
I may not yet properly understand the empirical benefit of this theoretical result; as shown in Appendix 1, I can see that this representation does indeed lead to an easy-to-handle closed-form expression. On the other hand, it is not easy to intuitively understand what improvement this has over the variational distribution of the standard mean-field approximation. Any help from the authors in this regard would be greatly appreciated.
Experimental Designs Or Analyses: Experiments are conducted on seven datasets that have been used extensively as benchmark data in recent Bayesian phylogenetic tree analyses, as well as on the SARS-CoV-2 data for more practical applications. The evaluation by the marginalized likelihood and ELBO also reflects recent trends, and the improvement in performance is clearly reported.
The comparison method seems convincing enough, as it is extremely up-to-date. On the other hand, as discussed in the "Methods" section, a quantitative comparison with the other related study [Bouckaert2024] might have strengthened its persuasiveness.
Supplementary Material: The supplementary material in this paper is based on (1) the derivation of the variational representation (equivalent to the proof of Proposition 1), (2) additional experiments and results, and (3) the derivation of algorithms for various loss functions.
I briefly checked (1) the derivation part of the variational representation because I did not intuitively understand how the new variational representation in this paper is an improvement over the conventional straightforward mean-field approximation.
Relation To Broader Scientific Literature: As highlighted by recent pandemics, phylogenetic tree analysis is one of the machine learning tasks that has received particular attention in recent years. This paper is not intended to bring new insights from a scientific point of view, but this technology is expected to contribute to the development of computational biology through the development of general-purpose machine learning.
Essential References Not Discussed: This paper provides a comprehensive discussion of Bayesian phylogenetic tree analysis, from its historical development to the latest developments in recent years. In particular, the relationship between the proposed methods and the challenges and room for improvement are carefully discussed in fairness to recent related research.
Other Strengths And Weaknesses: (Editing)
Other Comments Or Suggestions: I was just a little concerned as to whether the title conforms to the format specified for the conference. Considering the margins involved in some of the figures, this is not overly space-hungry and may not be a problem for the draft stage for peer review. However, it may be a good idea to have it corrected in the camera-ready version if accepted.
Questions For Authors: One minor concern to me is whether the lack of an ablation study makes it difficult to quantitatively assess the improvement of the proposed method over the other related study, [Bouckaert2024]. Would it be difficult to show the reduction in performance when restricted to the restricted tree space of [Bouckaert2024] within the framework of the proposed method?
The key theoretical contribution of this paper is that through the matrix representation of the tree structure, the variational distribution obtains an easy-to-handle closed-form representation.
I may not yet properly understand the empirical benefit of this theoretical result; as shown in Appendix 1, I can see that this representation does indeed lead to an easy-to-handle closed-form expression. On the other hand, it is not easy to intuitively understand what improvement this has over the variational distribution of the standard mean-field approximation. Any help from the authors in this regard would be greatly appreciated.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the thoughtful suggestions below, and hope that we have addressed your comments sufficiently.
*One minor concern is that a similar tree-structured matrix representation has been studied independently in another paper [Bouckaert2024] very recently.*
As we discuss in our literature review, we have two significant improvements compared to [Bouckaert2024]: 1) we use optimization-based VI to maximize the ELBO, and 2) we provide a closed-form density in Proposition 1. Using this density means we do not have to restrict the tree space. In addition, our framework allows us to use a relatively general class of variational distributions for the pairwise distances, whereas Bouckaert requires log-normal distributions in order to incorporate a covariance matrix $\Sigma$.
*Would it be difficult to see the reduction in performance when restricted to only restricted trees within the framework of the proposed method?*
We believe that this comment refers to selecting an ordering of taxa similarly to [Bouckaert2024], and then re-running VIPR while restricted to the "cube space" from [Bouckaert2024] consistent with the ordering. This is an excellent idea to investigate the effect of restricting tree space.
We have now constructed the maximum clade credibility (MCC) tree from BEAST using our gold standard MCMC run, selected an order from the MCC tree, and then calculated the percentage of tree topologies from the BEAST gold standard that fall outside the "cube space" implied by this ordering. This process estimates the percentage of the posterior that is impossible to reach using the restricted tree space from [Bouckaert2024]:
DS | % of MCMC trees outside cube space
---|---
1 | 29.2
2 | 15.2
3 | 76.8
4 | 79.7
5 | 98.0
6 | 94.7
7 | 69.9
8 | 42.7
9 | 99.9
10 | 84.6
11 | 99.9
COV| 99.9
These results are striking, but [Bouckaert2024] mentions that CubeVB may struggle on high-entropy posteriors in their discussion. We will add this Table and discussion to the Appendix of the camera-ready.
*It is not easy to intuitively understand what improvement VIPR has over the variational distribution of the standard mean-field approximation.*
One of the key challenges for variational inference over phylogenetic trees is that using a mean-field approximation is not straightforward. There are two main reasons for this.
First, we can only apply a mean-field approximation after decomposing the distribution of the tree as a product over cliques of random variables. There is no standard way of doing this for trees. In Zhang and Matsen IV (2024) this is done by forming a subsplit Bayesian network with one node per subtree appearing in the MCMC samples used to initialize the support. Our novel decomposition (Proposition 1) is another way of decomposing the distribution of the tree, as a product over coalescent times. This results in $\mathcal{O}(N^2)$ parameters, improving upon the worst-case performance of Zhang and Matsen IV (2024), in which the number of parameters could be super-exponential in the worst case. We then use our novel decomposition for the mean-field approximation (using state-of-the-art techniques for gradient evaluation and optimization: autograd, VIMCO, REINFORCE and the reparameterization trick).
Second, for ultrametric trees we cannot assume independence between coalescent times, lest the resulting tree violate the ultrametric constraint. To overcome this challenge, we form our variational family to approximate the matrix of pairwise coalescent times (the matrix $\mathbf{T}$). We then map $\mathbf{T}$ to ultrametric trees using single-linkage clustering.
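The two-step construction described here can be sketched as follows (an illustrative sketch with made-up parameter values, not the authors' code):

```python
# Illustrative sketch: sample a matrix of pairwise coalescent times from
# reparameterized log-normals, then map it to an ultrametric tree via
# single-linkage clustering. Parameter values here are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_taxa = 5
mu = np.zeros((n_taxa, n_taxa))       # variational means (illustrative)
sigma = 0.25                          # variational scale (illustrative)

# Reparameterization trick: T = exp(mu + sigma * eps), eps ~ Gaussian noise
eps = rng.standard_normal((n_taxa, n_taxa))
eps = (eps + eps.T) / 2               # keep the sampled matrix symmetric
T = np.exp(mu + sigma * eps)
np.fill_diagonal(T, 0.0)

# Single-linkage clustering yields the rooted binary topology (merge order)
# and the coalescent times (merge heights) of an ultrametric tree.
Z = linkage(squareform(T), method="single")
heights = Z[:, 2]
```

Because single linkage is monotone, the merge heights are non-decreasing, so they can serve directly as coalescent times of an ultrametric tree.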
*A quantitative comparison with another related study might have strengthened its persuasiveness.*
We chose VBPI for a baseline comparison because it was the only VI-based method for ultrametric trees that we are aware of in the literature. We did not include [Bouckaert2024] because it does not rely on optimization, so we could not include it in our trace plots of marginal log-likelihood vs iteration number. As you have suggested, restricting our method (and the BEAST gold-standard) to the same restricted tree space as [Bouckaert2024] is a valuable experiment to isolate the effects of optimization versus unrestricted tree space, and we aim to complete this experiment in a follow-up paper.
Thank you for catching the title formatting error—we have now corrected it. | Summary: This paper introduces a new method, VIPR, for phylogenetic inference. The new method greatly improves computational efficiency without sacrificing accuracy compared with traditional MCMC-based methods. It derives a closed-form density of the distribution over the entire tree space based on coalescent times and single-linkage clustering. This study proposes a new variational distribution based on coalescent times and single-linkage clustering, which makes the computation more efficient. Experiments on benchmark datasets and one empirical dataset show comparable accuracy and improved computational efficiency.
Claims And Evidence: This paper claims that the new method VIPR relaxes the dependency on MCMC subroutines and achieves better efficiency on phylogenetic inference. Despite lower computational complexity, this new method achieves comparable accuracy with the golden standard Bayesian phylogenetic inference methods.
The claims are supported by experiments on benchmark datasets and one empirical dataset, SARS-CoV-2, with comparisons across baselines: BEAST, the gold-standard MCMC-based method for phylogenetic inference used to approximate the true posterior distribution, and VBPI, a recent variational inference method for phylogenetic inference using subsplit Bayesian networks that still relies on MCMC for tree sampling.
The experiments were conducted on 11 standard benchmark datasets with a wide range of taxa numbers and sequence lengths, and one empirical dataset, SARS-CoV-2, to evaluate performance on a rapidly evolving real-world dataset.
The results show that VIPR performs comparably with the two baselines on MLLs and ELBO. The running time of the experiments shows that VIPR has a time complexity of roughly O(N^2).
One minor concern is that VBPI shows roughly the same computational complexity on the empirical dataset. Could the authors elaborate on how the number of parameters influences the time complexity of VBPI? Why should VIPR have a lower computational complexity?
Methods And Evaluation Criteria: VIPR builds on the previous studies of Bouckaert (2024) and Zhang and Matsen IV (2024), with improvements in scalability and computational efficiency.
VIPR does not rely on MCMC sampling like the other traditional methods; it directly models the distribution over the tree space. Compared with VBPI, this new method uses a variational distribution over a distance matrix. This yields a differentiable variational distribution over the tree space, making it possible to apply efficient gradient estimation for faster and more stable inference.
VIPR directly optimizes the coalescent times/branch lengths, relaxing the limitation of the matrix-representation approach of Bouckaert (2024) on tree-representation ability.
The methods are evaluated on benchmark datasets and an empirical dataset. The benchmark datasets cover a wide range of taxa numbers and sequence lengths, representing a range of complexity.
VIPR assumes a log-normal variational distribution. What could be the impact of this assumption? Is there any chance to relax this assumption to achieve better flexibility in the inference?
Another minor issue is that the sequence divergence is not reported for the datasets. It would be helpful to get a rough sense of how difficult those datasets are and what the impact is on the method's performance.
The authors could also consider including simulated datasets with better control over tree depth, sequence divergence, mutation rates, etc.
The paper only considers the Jukes-Cantor model, which may be over-simplified. Could the authors consider more complex evolutionary models such as GTR?
Theoretical Claims: This paper defines the tree space and shows that its variational distribution covers the entire tree space. The VIPR variational family enables gradient-based optimization. Proposition 1 gives a closed-form expression for the density function of trees. The probability density function over trees looks good. The derivations of the gradient estimators look correct.
The theoretical claims are mostly valid and proofs look correct.
Experimental Designs Or Analyses: The experiment design is reasonable.
The datasets cover a relatively wide range of difficulty levels, including 11 standard benchmark datasets and 1 empirical dataset.
The method's performance is compared against the MCMC-based gold-standard method, BEAST, and a recent variational Bayesian phylogenetic inference method, VBPI.
The metrics used for evaluation are valid.
Could the authors add more baseline methods, such as faster heuristic methods like RAxML?
The experiment does not cover the uncertainty estimation.
Supplementary Material: Supplementary materials provide proof for proposition 1, additional experiment results, and gradient estimator derivation. Overall, the supplementary materials are well-structured, provide sufficient details to support the claims of the main manuscript.
Relation To Broader Scientific Literature: VIPR improves on the previous methods introduced by Bouckaert (2024) and Zhang and Matsen IV (2024) with better scalability and computational efficiency. Current MCMC-based methods are limited to small datasets (<100 taxa) due to high computational cost. The new method could enable Bayesian inference on much larger datasets.
This new method also enables better integration with machine learning pipelines. Traditional phylogenetic inference methods are not differentiable. VIPR provides a differentiable method that is compatible with deep learning pipelines.
Essential References Not Discussed: To achieve a comprehensive landscape of phylogenetic inference studies, the authors should consider discussing other tree inference methods, such as maximum likelihood methods and distance-based heuristics.
Other Strengths And Weaknesses: This new method shows good novelty. VIPR addresses an important limitation of previous methods, removes dependency of MCMC sampling. VIPR makes the Bayesian tree inference more scalable for larger datasets.
The paper is well-structured, with a clear motivation, theoretical proof, and experimental results.
Overall, the paper shows a strong contribution to Bayesian phylogenetic inference method.
The paper is relatively simplified in its theoretical assumptions. Please consider expanding the method to more complex evolutionary models to better fit real-world applications. The paper could be stronger with added discussion of uncertainty estimation and comparisons with other mainstream phylogenetic inference methods, such as RAxML, neighbor-joining, and VaiPhy.
Other Comments Or Suggestions: I have listed my comments in previous sections.
Questions For Authors: I'm curious about the following questions:
1. How would VIPR handle non-ultrametric trees?
2. How robust is VIPR when handling noisy or high divergent dataset?
3. What are the limitations of VIPR on extremely large datasets, for example, datasets with 100+ taxa?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review: please find detailed responses below.
*How do parameter numbers influence the time complexity of VBPI?*
In [Zhang and Matsen, JMLR2024], as the number of taxa grows, the number of parameters grows with the number of trees in the SBN. There is no closed form for the number of trees; it depends on the MCMC algorithm and posterior concentration. We empirically calculated the number of parameters in VBPI vs the number of taxa on datasets simulated with "ms" [Hudson 2002] with 1,000 sites.
# of tree structure parameters:
taxa | VBPI | VIPR |
---|---|---
8 | 4 | 56
16 | 44 | 240
32 | 55 | 992
64 | 3,826 | 4,032
128 | 29,939 | 16,256
256 | 127,217 | 65,280
512 | 319,533 | 261,632
Computing the variational density of VBPI is linear in the number of taxa, but normalizing the SBN scales with the number of parameters. We will add this experiment to the Appendix of the camera-ready copy.
*Why should VIPR have a lower computational complexity?*
In our experiments, VIPR attains accurate marginal log-likelihood estimates in fewer parameter updates than VBPI (Appendix B, Figure 5). VIPR has O($N^2$) parameters, and the number of parameters in VBPI can be larger than that if the SBN support is large (see Table above).
We improved our code using *scipy.cluster.hierarchy.linkage* and streamlined our phylogenetic likelihood function. We performed new speed comparisons for all methods on simulated datasets with varying numbers of taxa. See our response to Reviewer 3 for results.
*What is the impact of log-normal branch-lengths? Any way to relax this?*
After running BEAST, we plotted histograms of pairwise log-coalescent times across sampled trees for some of the datasets. In most cases these histograms looked normal, motivating our log-normal branch lengths. We will include some of these histograms as a supplement. VIPR can incorporate any branch-length distribution with a continuously differentiable density. We will consider flexible branch-length distributions in future work.
*Sequence divergence is not included.*
We calculated pairwise Hamming distances between all pairs of taxa for each dataset (dropping sites with missingness). Values in parentheses are standard deviations:
DS | Hamming distance/\#sites
---|---
1 | .040(.017)
2 | .214(.057)
3 | .230(.051)
4 | .138(.055)
5 | .192(.041)
6 | .056(.029)
7 | .203(.069)
8 | .082(.031)
9 | .025(.014)
10 | .070(.026)
11 | .082(.053)
COV | .008(.003)
We will add this Table to Appendix B in the camera-ready.
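A hedged sketch of the per-site Hamming computation described above (the set of missing-data symbols here is our assumption):

```python
# Hedged sketch: normalized pairwise Hamming distance between aligned
# sequences, skipping sites with missing data ('-', 'N', '?' assumed).
import itertools
import numpy as np

def hamming_per_site(seq_a, seq_b, missing="-N?"):
    pairs = [(a, b) for a, b in zip(seq_a, seq_b)
             if a not in missing and b not in missing]
    if not pairs:
        return float("nan")
    return sum(a != b for a, b in pairs) / len(pairs)

seqs = ["ACGTACGT", "ACGAACGT", "AC-TACGA"]
dists = [hamming_per_site(x, y) for x, y in itertools.combinations(seqs, 2)]
mean, sd = float(np.mean(dists)), float(np.std(dists))  # as reported: mean(sd)
```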
*Include simulated datasets with better control on tree depth, sequence divergence, mutation rates, etc.*
This simulation is an excellent idea. We aim to do this in a follow-up paper.
*Jukes-Cantor may be over-simplified vs. complex evolutionary models*
We agree that Jukes-Cantor is a simplified assumption. We aim to include K2P [Kimura 1980] and GTR [Rodriguez 1990] in a follow-up paper. (Note that Zhang and Matsen JMLR2024 only consider JC.)
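As a concrete illustration of the Jukes-Cantor simplification discussed above (our worked example, not from the paper): JC69 assumes equal base frequencies and a single substitution rate, so the transition probabilities along a branch have a one-parameter closed form.

```python
# Worked example (ours): Jukes-Cantor transition probabilities along a
# branch of length t expected substitutions per site.
import math

def jc69(t):
    """Return (P(stay at the same base), P(move to one specific other base))."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e, 0.25 - 0.25 * e

same, diff = jc69(0.1)
assert abs(same + 3 * diff - 1.0) < 1e-12  # each row of the 4x4 matrix sums to one
```

K2P adds a transition/transversion ratio and GTR allows six exchangeabilities plus unequal base frequencies, which is why they require a follow-up treatment.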
*Add more baseline methods like RAxML, neighbour-joining, and Vaiphy? The experiment does not cover uncertainty estimation.*
These methods do not apply specifically to variational inference over ultrametric trees. RAxML and neighbour-joining do not provide estimates of the marginal likelihood, and VaiPhy is for multifurcating trees. We have already mentioned VaiPhy and we will add more about non-Bayesian methods in the introduction:
"Phylogenetic inference can also be performed using non-Bayesian methods, including RAxML, Neighbour-joining, and Phyloformer. Phyloformer uses deep learning to construct pairwise representations of evolutionary distances between taxa. Phyloformer then uses pairwise distances to construct a tree using a neighbor-joining algorithm similar to the method described here. Non-Bayesian methods do not provide estimates of marginal likelihood, which are useful for model selection."
Regarding uncertainty estimation, we aim to add posterior predictive checks of tree length and clade support on simulated data to the Appendix for the camera-ready.
*How would VIPR handle non-ultrametric trees?*
A natural approach for non-ultrametric trees with our framework is to extend our strict clock models to relaxed clock models. Preliminary calculations suggest it is possible to do so at the cost of roughly twice as many variational parameters compared to the VIPR variational family.
*How robust is VIPR when handling noisy or highly divergent datasets?*
VIPR struggled most with the COVID-19 dataset, where genomes varied relatively little (see Hamming distance Table above). We assume that highly divergent datasets would also be challenging. Future studies can quantify how VI methods such as VIPR are affected by noisy or divergent datasets.
*What is the limitation of VIPR on dataset with 100+ taxa?*
VIPR's empirical computation time per iteration is approximately linear in the number of taxa (see response to Reviewer 3). Future work can apply VIPR to larger datasets. | null | null | null | null | null | null | null | null |
Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM | Accept (poster) | Summary: The paper proposes Freeze-Omni, which enables speech input and output capabilities for any LLM backbone without tuning its parameters. In this way, the model can deliver an end-to-end chat experience without losing the intelligence of the LLM backbone. The framework consists of speech encoder and decoder modules aligned with the LLM, trained in different stages, e.g. speech input modules for understanding, speech output modules for generation, and chunk-level state prediction for interruption. The highlight of the performance is the high accuracy on several speech-synthesized language benchmarks compared to other speech LLM models, which benefits from freezing the LLM.
Claims And Evidence: The core idea of the framework is to maintain the intelligence of LLM by freezing its parameters. The results of high accuracy on language benchmark provided the evidence for such claims.
A potential issue is the claim of "omni" for the proposed framework. It seems the LLM is expanded with the speech modality only; how about the vision modality? The "omni" label is over-claimed considering the model's capabilities. I have also noticed that many references in this paper use "omni" while covering the audio modality only. This can be misleading to the entire community.
Methods And Evaluation Criteria: The proposed method is intuitive, and the evaluation criteria is pretty standard in speech/audio LLM research.
Theoretical Claims: This is not a theoretical paper, but more based on empirical study.
There is a very fundamental issue that the paper does not discuss: how would the framework compare to a cascaded ASR+LLM+TTS system? There is no such comparison in the experimental section.
Experimental Designs Or Analyses: The paper lacks some important comparisons, especially when this paper is more based on empirical study.
1. No comparison to cascaded systems. The proposed system is very similar to training an ASR model for LLM input and a TTS model for LLM output. The comparison would provide the community insight and better evidence the effectiveness of the method.
2. In terms of the ASR performance in Table 1, please compare with models using continuous features as input, e.g. Qwen2-audio, since Freeze-Omni uses a similar approach.
3. It seems that Table 3 presents the speech-to-text QA performance. How about speech-in, speech-out QA? One of the contributions of the paper compared to previous work is the speech output. The paper should discuss speech-to-speech QA performance.
4. What is the performance of speech-to-text QA after aligning the speech input modules? Is it the same as what is shown in Table 3 after aligning the speech output modules? The question is whether adding speech output modules affects the performance of speech-to-text QA.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: Freezing the LLM to enable speech functions has been explored previously in the literature. See "Essential References Not Discussed". This paper expands the core idea with speech output.
Essential References Not Discussed: There are several papers discussing how to enable speech functions for LLMs with their parameters frozen. These papers do not expand the model with speech output, but I think it is worth noting these works:
Wang, Chen, et al. "Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing." arXiv preprint arXiv:2309.00916 (2023).
Fathullah, Yassir, et al. "Audiochatllama: Towards general-purpose speech abilities for llms." arXiv preprint arXiv:2311.06753 (2023).
Lu, Ke-Han, et al. "Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data." arXiv preprint arXiv:2409.20007 (2024).
Kang, Wonjune, et al. "Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech." arXiv preprint arXiv:2410.01162 (2024).
Fan, Ruchao, et al. "AlignFormer: Modality Matching Can Achieve Better Zero-shot Instruction-Following Speech-LLM." arXiv preprint arXiv:2412.01145 (2024).
Other Strengths And Weaknesses: Strengths: This work is a great study on expanding LLMs with speech functions without destroying their language capabilities.
Weakness:
1. I don't like the omni as the model does not include the vision modality. Shouldn't it be Freeze-audio or something?
2. The paper lacks comparison and ablation studies.
3. Reference missing.
Other Comments Or Suggestions: Overall, the paper reads more like a technical report and does not provide enough ablation studies to offer insights to the community.
The proposed method comes without strong theoretical claims, so a more thorough experimental study is necessary to strengthen the paper.
The proposed method involves many stages of training. People might be very interested in how each part affects the performance.
Questions For Authors: I have read the paper very carefully and do not have further question.
The authors, please correct my comments if you feel I don't understand it in the right place.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Summary: The paper introduces Freeze-Omni, a novel speech-text multimodal large language model (LLM) designed for speech-to-speech interaction while keeping the backbone LLM’s parameters frozen throughout the training process. This architecture enables low-latency, end-to-end spoken response while preserving the intelligence of the original LLM, addressing key challenges such as catastrophic forgetting and high computational costs associated with fine-tuning.
Claims And Evidence: Yes. The claims are all supported.
Methods And Evaluation Criteria: Yes. The methods and evaluation make sense for the problem.
Theoretical Claims: None
Experimental Designs Or Analyses: More Comprehensive Evaluation on Speech Input and Output: The paper mainly evaluates the accuracy of speech recognition (ASR) for speech input and the character error rate (CER) for speech output. However, additional metrics such as Word Error Rate (WER) for output speech and intelligibility scores (e.g., MOS – Mean Opinion Score) could provide a more comprehensive assessment of speech quality.
Comparative Analysis with More Baselines: While the paper compares Freeze-Omni with a few existing models, more state-of-the-art speech-to-speech systems (e.g., recent versions of GPT-based multimodal models) could be included for a broader comparison.
Latency Optimization: The paper provides latency analysis but does not discuss potential optimizations. Exploring methods to reduce the response time further, such as improving the efficiency of the speech encoder or speech decoder, could be beneficial.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: -Strengths:
1. Innovative Frozen-LLM Architecture: The proposed Freeze-Omni model maintains the parameters of the backbone LLM completely frozen throughout training. This prevents catastrophic forgetting and ensures that the model retains the intelligence of the original LLM while integrating speech modalities.
2. Three-Stage Training Strategy: The proposed three-stage training process efficiently models both speech input (ASR) and speech output (TTS) while keeping the LLM frozen.
3. Low-Latency Speech Interaction: The model is optimized for real-time speech-to-speech dialogue, achieving an average response time of 1.2 seconds in real-world scenarios. This is significantly lower than traditional ASR + LLM + TTS pipelines.
4. High Accuracy in Spoken Question Answering: The model achieves competitive performance in spoken Q&A tasks, with an accuracy gap between Freeze-Omni and its text-only backbone LLM smaller than other speech-enabled LLMs like Moshi.
Freeze-Omni presents a well-designed, efficient, and practical approach to integrating speech-to-speech interaction into LLMs while preserving intelligence, reducing computational overhead, and maintaining low latency. The combination of freezing the LLM, modular training strategies, and real-time duplex dialogue capabilities makes it a notable advancement in the development of multimodal conversational AI systems.
-Weaknesses:
1. More Comprehensive Evaluation on Speech Input and Output: The paper mainly evaluates the accuracy of speech recognition (ASR) for speech input and the character error rate (CER) for speech output. However, additional metrics such as Word Error Rate (WER) for output speech and intelligibility scores (e.g., MOS – Mean Opinion Score) could provide a more comprehensive assessment of speech quality.
2. Comparative Analysis with More Baselines: While the paper compares Freeze-Omni with a few existing models, more state-of-the-art speech-to-speech systems (e.g., recent versions of GPT-based multimodal models) could be included for a broader comparison.
3. Latency Optimization: The paper provides latency analysis but does not discuss potential optimizations. Exploring methods to reduce the response time further, such as improving the efficiency of the speech encoder or speech decoder, could be beneficial.
4. Scalability and Adaptation to Different LLMs Not Fully Explored: While the paper claims Freeze-Omni can work with any LLM, it is only tested on Qwen2-7B-Instruct. There is no empirical evaluation of how the approach generalizes to larger models or smaller, more efficient models that might be deployed on edge devices.
5. Training Efficiency vs. Larger-Scale Training: The paper highlights the efficiency of training on 8 GPUs with only 60K Q&A data, but it does not explore whether performance would scale with larger datasets.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Summary: This paper proposes a framework to provide a frozen text LLM with spoken dialogue abilities, by integrating it with a speech encoder and a speech generation system. When orchestrated by an auxiliary turn prediction module, this allows for the model to interact along multi-turn conversations with a latency lower than one second. While easily turning text LLMs into spoken dialogue systems is of wide interest, this paper is inappropriate for publication at ICML in my opinion. First, combining a text LLM with a speech encoder and a speech decoder has been done several times (Spectron, GLM-4Voice, Llama-Omni, etc.) and while this does not make the topic outdated, this paper does not present contributions in a way that convinces me that it does something fundamentally better than others: the model section is too vague (e.g. see below the relation between the NAR and AR modules) and the relation to previous work is not discussed enough to highlight the novelty of the framework. Second, the experiments are limited to ASR (as a proxy for speech encoder quality) and single-turn QA which are not sufficient to properly evaluate what claims to be a multi-turn spoken dialogue system. Finally, comparing Freeze-Omni to end-to-end models such as Moshi is a bit unfair: as Freeze-Omni uses a frozen LLM, it can be adapted easily, but the counterpart is that its text backbone acts as an information bottleneck that loses all non-linguistic information. Avoiding this limitation is the exact motivation behind end-to-end speech models! Overall, while quick adaptation of text LLMs to speech is of crucial interest both in research and in industry, the paper in its current state does not provide a convincing and replicable method and I strongly encourage the authors to submit a more detailed paper, with a more precise focus on the model and more extensive experiments in particular regarding multi-turn performance.
Claims And Evidence: See "Summary".
Methods And Evaluation Criteria: The writing of the method section could be considerably improved. In particular, I found the mechanism of the NAR and AR models to remain mysterious after carefully reading this section several times. On the other hand, some elementary descriptions such as Figure a are not useful to improve the understanding of the method.
Theoretical Claims: None.
Experimental Designs Or Analyses: Overall, the experimental setup poses several problems:
1) The word error rates reported in Table 1 are much worse than sota in some cases. This may mislead the reader thinking that the proposed method is e.g. sota on Librispeech test-clean while sota on this dataset is below 1.5% WER.
2) Evaluating on question answering and emphasizing that Freeze-Omni gets a performance that is close to its LLM backbone is questionable: as the LLM is frozen, it is expected that answers will be identical to the textual topline as long as a) the speech encoder transcribes input audio properly b) the speech decoder produces speech that is intelligible enough to Paraformer for the transcribed answer to be verified against the ground-truth. This score here thus characterizes the performance of ASR and TTS (and probably to some extent the ability of the LLM backbone to correct ASR errors) which defeats the purpose of this metric, which is intended to measure the knowledge of end-to-end spoken LLMs.
3) The Q&A evaluation only evaluates single-turn interactions. As the authors perform training on multi-round sequences, they should provide metrics for multi-turn behaviour beyond the latency metrics reported in Table 4. Otherwise, it is not possible to favor their multi-turn setting rather than resetting the state of a single-turn dialogue model between every turns.
4) There are almost no ablation studies whatsoever. This does not allow identifying the key findings of authors through their experiments, and does not facilitate replication as it is unclear which components should be the most precisely reproduced.
Supplementary Material: N/A
Relation To Broader Scientific Literature: See "Summary"
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - The presentation of the paper is below ICML standards. As an example, the style of Figure 1 which is mostly empty space and with a large logo is unusual. Moreover, most acronyms are never defined. This is particularly problematic for not-so-standard acronyms such as NAR which afaik was introduced in Vall-E for "non-autoregressive". Moreover, the writing style can be improved, with many typos and inconsistent tense (e.g. Section 2.4).
Other Comments Or Suggestions: N/A
Questions For Authors: - Section 2.2.2. mentions that the second stage of training---which connects the pretrained speech encoder to the frozen LLM--- involves adding several trainable special tokens. What are those? How many of them, what separates them? How are they presented to the model and for which precise purpose?
- Section 2.3.1 explains that the speech generator combines a non-autoregressive (NAR) and an autoregressive (AR) model similarly to Vall-E. However, Vall-E first predicts the first codebook of the neural codec, while the NAR model generates the other levels. This is unlike Freeze-Omni which is claimed to first apply a NAR model, followed by an AR one. This would benefit from clarifications.
- Section 3.1.1 mentions that an LLM and a TTS systems are used to generate synthetic data. Which ones?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | null | null | null | null | null | null | null | null | |||
Learning Adversarial MDPs with Stochastic Hard Constraints | Accept (poster) | Summary: This paper considers adversarial MDP problems with stochastic constraints, seeking to bound both the regret and the (hard) constraint violations. The paper shows that $O(\sqrt{T})$ regret and $O(\sqrt{T})$ violation bounds hold when Slater's condition is satisfied, and that $O(\sqrt{T})$ regret with no violation is achievable when a strictly feasible policy is known. In fact, it also shows that it is possible to achieve constant violation and $O(\sqrt{T})$ regret even when one does not know the strict feasibility parameter.
Claims And Evidence: The paper is theoretical in nature. All the claims have been proved.
Methods And Evaluation Criteria: Not applicable as the paper's scopes are theoretical in nature.
Theoretical Claims: I briefly checked the correctness which seems to be correct.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I checked the proofs.
Relation To Broader Scientific Literature: This paper seeks to contribute in the space of adversarial MDP with stochastic constraints. The paper extends the adversarial MDP work to the CMDP setup with stochastic constraints. The paper achieves optimal regret and violation bound.
Essential References Not Discussed: The paper has not discussed some of the major papers in the CMDP domain. While I agree that those papers did not consider adversarial rewards, in order to understand the technical contributions one needs to understand the technical novelties required. In particular, why would combining all these approaches not be enough to solve this problem?
[A1]. Efroni, Yonathan, Shie Mannor, and Matteo Pirotta. "Exploration-exploitation in constrained mdps." arXiv preprint arXiv:2003.02189 (2020).
[A2]. Bura, A., HasanzadeZonuzy, A., Kalathil, D., Shakkottai, S. and Chamberland, J.F., 2022. DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning. Advances in neural information processing systems, 35, pp.1047-1059.
[A3]. Liu, T., Zhou, R., Kalathil, D., Kumar, P. and Tian, C., 2021. Learning policies with zero or bounded constraint violation for constrained mdps. Advances in Neural Information Processing Systems, 34, pp.17183-17193.
[A4]. Ghosh, Arnob, Xingyu Zhou, and Ness Shroff. "Towards achieving sub-linear regret and hard constraint violation in model-free rl." In International Conference on Artificial Intelligence and Statistics, pp. 1054-1062. PMLR, 2024.
Other Strengths And Weaknesses: Strengths:
1. The paper is well written.
2. The proof ideas seem to be correct.
Weaknesses:
1. The major weakness is that the technical novelties are not clear. CMDP setup is well studied. As I discussed, it is not clear what are the major technical innovations.
2. The paper only considers tabular case.
**Post Rebuttal:**
This work is solid and I am convinced about the contributions. I have raised my score.
Other Comments Or Suggestions: Please see the questions
Questions For Authors: 1. In the guaranteeing safety Algorithm (S-OPS Section 5), the paper utilizes a random exploration with a certain probability in the first episode, and also when the state-action occupancy found by the algorithm is not safe. However, is there any assumption which states that random exploration is safe? In general, such an exploration might not be safe.
2. The algorithm relies on building optimistic state-action-occupancy measures, how easy (computationally) it is?
3. What are the values of $\lambda_t$? How are they refined over time $t$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the Reviewer for the effort in evaluating our work.
> On the comparison with existing works and [W1]
[A1] focuses on CMDPs with stochastic rewards and constraints, showing how it is possible to achieve $\sqrt T$ regret and violation. The only technique proposed by [A1] which is of interest in our setting is the optimistic safe set estimation, which is employed by SV-OPS.
[A2] and [A3] study the stochastic version of our second scenario. Their randomization techniques cannot work in our second scenario, since in adversarial settings it is not possible to play the safe policy for a fixed number of rounds and then switch to an adversarial online learning algorithm on a pessimistic decision space. To see this, notice that pessimistic decision spaces increase over time and that adversarial online learning algorithms do not work on decision spaces that increase with $t$.
Moreover, notice that even though [A3] shows how to obtain constant violation, this is done employing a weaker notion of constraint violation (which allows cancellations between episodes); thus, their techniques are not applicable to our third scenario.
Similarly to [A1], [A4] focuses on CMDPs with stochastic rewards and constraints, showing how it is possible to achieve $\sqrt T$ positive regret and violation. Their techniques cannot be generalized to adversarial settings, since they assume that there exists an underlying reward distribution in their analysis. Moreover, notice that the algorithm proposed in [A4] has an exponential worst-case running time, while our algorithm is polynomial.
To conclude, our paper's main contributions are highlighted as follows. First, we study a novel CMDP setting, encompassing both adversarial losses and hard constraints. Specifically, we focus on three scenarios, which are all novel and cannot be tackled by existing algorithms. The first one is novel in terms of results, while we agree that we combine existing techniques to get our theoretical results. The second and third scenarios are instead novel both in terms of results and in terms of algorithmic techniques. Furthermore, the third scenario has never been studied even in simpler stochastic settings. Finally, we propose a novel lower bound which can be of independent interest, since it applies to the stochastic setting, too.
We will surely include this discussion in the final version of the paper.
> [W2]
We agree with the Reviewer that studying CMDPs with infinite state-action spaces is an interesting future direction. Nevertheless, since this is the first work to tackle both adversarial losses and hard constraints, providing a large set of different results, we believe it is still of interest for the community.
> [Q1]
As is standard in the adversarial MDPs literature, the performance metrics are computed taking the expectation over policies and transitions. Since S-OPS plays non-Markovian policies, the expected violation is computed taking into account the randomization selected by the algorithm (please refer to Theorem 5.1. for additional details).
> [Q2]
Optimistic occupancy measures can be computed in $O(|X|^3|A|^2)$ steps (please refer to Jin et al. (2020) for the associated algorithm pseudocode).
> [Q3]
$\lambda_t$ is proportional to the uncertainty the algorithm has on both constraints and transitions. Thus, $\lambda_t$ is larger in the first episodes and converges to $0$ as the learning dynamic evolves.
Since these are crucial aspects of our work, please let us know if further details are necessary. We would be happy to engage in further discussion.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. The contributions are now much clearer. I have gone over the paper again and have a few more comments which are required to understand the paper more clearly.
1. My first comment relates to the second setting that you consider, i.e., ensuring that safety is not violated during exploration. The basic idea is similar to the DOPE paper [A1], where one uses a mixture policy based on the pessimistic estimate and the safe policy. The key is that, because of the adversarial loss, you have to update the state-action occupancy measure in a different way; that contribution is clear to me from the first setting now.
Now, coming to the question: the DOPE paper has to use double optimism on the reward (an additional bonus term) because of the pessimism used in the state-action occupancy measure. However, here you are not using it to achieve a sub-linear regret. What is the reason behind this?
2. Also, regarding the third setting, it is very difficult to get any intuition. Can you please explain how you are estimating the safe policy and the value function corresponding to it in a constant number of episodes? Once you get a safe policy, you can apply the algorithm in the second setting, hence, understanding the question is very important.
[A1]. Bura, A., HasanzadeZonuzy, A., Kalathil, D., Shakkottai, S. and Chamberland, J.F., 2022. DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning. Advances in neural information processing systems, 35, pp.1047-1059.
---
Reply to Comment 1.1.1:
Comment: > The DOPE paper has to use double optimism in the reward (an additional bonus term) because of the pessimism used in the state-action occupancy measure. However, here, you are not using it to achieve a sub-linear. What is the reason behind it?
We thank the Reviewer for the opportunity to better clarify this aspect. While our approach and the one of [A1] may seem similar since both of them employ strategy mixtures to guarantee the safety property, there is more than one key difference.
Specifically, the main idea of DOPE is the following. The strictly safe policy is played for a certain amount of time, after which a good estimate of the constraint costs and transitions is available. This allows the pessimistic safe set to become large enough to be used; indeed, notice that the pessimistic safe set is empty in the first episodes, when no information about the environment is available. This approach requires optimism on rewards and transitions, and pessimism on the costs, as pointed out by the Reviewer, to properly tackle the exploration-exploitation trade-off.
To summarize, all the techniques employed in [A1] are required because they directly optimize over the pessimistic decision space (see (10) in their paper).
Optimizing over the pessimistic decision space cannot be done in our setting, since adversarial no-regret algorithms do not work in increasing decision spaces. Thus, our idea is pretty different. We do not play the strictly safe policy for a fixed amount of time. Instead, at each episode, we first allow the algorithm to select a policy in an optimistic manner, that is, employing the optimistic safe set as the decision space of the OMD update. Notice that, the policy selected optimistically is clearly no-regret given the results shown for SV-OPS. Then, we combine this policy with the strictly safe one, selecting the combination factor such that the final output policy is safe with high probability while being as close as possible to the one obtained with the OMD update. Finally, showing that the combination factor is lower-bounded by a constant factor allows us to get the final result of sublinear regret.
We hope the explanation above addressed the Reviewer concern.
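The mixing step described above can be sketched numerically (hypothetical occupancy measures, costs, and threshold of ours, purely for intuition; not the paper's actual algorithm):

```python
# Toy sketch: mix the OMD iterate with a strictly safe occupancy measure,
# keeping the largest weight on the OMD iterate for which the estimated
# constraint cost stays below the feasibility threshold.
import numpy as np

q_omd = np.array([0.6, 0.4])    # occupancy from the optimistic OMD update
q_safe = np.array([0.1, 0.9])   # strictly safe occupancy measure
cost = np.array([1.0, 0.0])     # constraint cost per state-action pair
threshold = 0.3                 # feasibility threshold

c_omd, c_safe = cost @ q_omd, cost @ q_safe
# the mixture cost is linear in w, so solve c_safe + w*(c_omd - c_safe) <= threshold
w = 1.0 if c_omd <= threshold else (threshold - c_safe) / (c_omd - c_safe)
q_mix = w * q_omd + (1 - w) * q_safe
assert cost @ q_mix <= threshold + 1e-12   # mixture is safe by construction
```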
> Also, regarding the third setting, it is very difficult to get any intuition. Can you please explain how you are estimating the safe policy and the value function corresponding to it in a constant number of episodes?
We thank the Reviewer for the interesting question. Indeed, the fundamental reason why our approach works is that our technique only needs to estimate the safe policy within a constant **multiplicative** factor. Our result may look surprising since, most of the time, we focus on minimizing additive regret. Here, instead, we aim at the far less challenging goal of finding a strategy that is a constant fraction approximation of the optimal one (in our case, the strictly safe one). Intuitively, after a constant number of rounds $\tau$, the regret with respect to the most feasible policy is of order $\sqrt{\tau}$, which is only a constant fraction of $\tau \rho$. Hence, the per round feasibility margin is $\frac{\tau \rho- \sqrt{\tau}}{\tau}= \Omega(\rho)$.
We will surely include this discussion in the final version of the paper and we hope to have properly addressed the Reviewer concern. | Summary: This paper studies online learning in constrained Markov Decision Processes with adversarial losses and stochastic hard constraints under bandit feedback. The authors introduce novel algorithms for three distinct scenarios of CMDPs, ensuring sublinear regret while managing constraint violations in different ways. The authors provide theoretical guarantees for the proposed algorithms.
Claims And Evidence: The core theoretical claims (sublinear regret, constraint violation guarantees, and the lower bound) are well supported by clear mathematical proofs.
Methods And Evaluation Criteria: The regret and constraint violation are standard evaluation criteria for online CMDPs.
Theoretical Claims: The theoretical claims are well-supported by the proofs, though I do not have time to check the details of the proofs.
Experimental Designs Or Analyses: The paper does not include any experimental results.
Supplementary Material: I checked the supplementary materials and found them helpful for understanding the proofs.
Relation To Broader Scientific Literature: The paper is well-positioned in the literature on online CMDPs.
Essential References Not Discussed: I did not find any essential references missing.
Other Strengths And Weaknesses: ### Strengths
1. This paper is the first to study CMDPs with both adversarial losses and stochastic hard constraints, extending previous work on constrained RL.
2. This work presents different algorithms tailored to distinct CMDP scenarios and provides rigorous theoretical guarantees for each.
3. The paper establishes a lower bound, highlighting the fundamental trade-off between constraint satisfaction and regret minimization.
### Weaknesses
1. The paper lacks experimental results, making it difficult to assess the practical effectiveness and computational feasibility of the proposed algorithms.
2. The paper could benefit from a more detailed discussion of the computational complexity of the proposed algorithms, for example, the KL-divergence-based projection in SV-OPS and S-OPS (Equation 2).
3. The paper does not compare its work against alternative CMDP approaches, such as Lagrangian-based methods, which relax constraints using dual variables and model-based safe RL techniques that explicitly incorporate safety models. Moreover, a discussion on how existing methods fail to handle the hard constraints would help justify the novelty of the proposed approach.
4. Distinguishing the new components from existing techniques in the algorithms would help clarify the novelty of the proposed methods. For example, the upper occupancy bound, optimistic loss estimator, and OMD in SV-OPS appear similar to Jin et al. (2020) [1], which should be explicitly highlighted. Since I am not deeply familiar with the CMDP literature, it is unclear whether the constraint-handling components are new or adaptations of prior techniques.
[1]. Jin et al., Learning adversarial Markov decision processes with bandit feedback and unknown transition. In ICML 2020.
Other Comments Or Suggestions: I suggest adding a discussion on the computational complexity of the proposed algorithms and comparing them against alternative CMDP approaches. Moreover, a detailed comparison with existing methods and a clear exposition of the novel components in the algorithms would enhance the paper's clarity and impact.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the Reviewer for the effort in evaluating our work.
> W1
We agree with the Reviewer that experiments are always beneficial; nevertheless, we underline that both in the online CMDPs literature and the adversarial MDPs one, many works do not have experimental results (see e.g., Rosenberg et al. (2019b), Jin et al. (2020), Efroni et al. (2020), Stradi et al. (2024b) and many others). Despite the lack of experimental evaluation, many of the aforementioned works (and many others) have been published in top AI conferences, such as ICML.
> W2
We thank the Reviewer for the opportunity to clarify this aspect. Please notice that Eq. (2) is a convex program with linear constraints, which can be approximated arbitrarily well in polynomial time. We underline that solving this kind of projection is standard in the adversarial MDPs literature (see e.g., Rosenberg et al. (2019b), Jin et al. (2020)). Similarly, it is standard for adversarial online learning algorithms to require projections to be solved at each episode, even in easier single-state full-feedback settings (e.g., OGD and OMD algorithms).
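To illustrate why such projections are computationally tractable, here is a toy Python sketch of a KL-divergence projection onto a set of probability vectors with an extra linear constraint, solved as a convex program with `scipy.optimize.minimize`. This is only an illustrative analogue of the projection in Eq. (2), which acts on the much larger occupancy-measure space; the function name `kl_projection` and the 3-point example are ours, purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def kl_projection(p, A, b):
    """Project a distribution p onto {q : q >= 0, sum(q) = 1, A q <= b}
    by minimizing KL(q || p) -- a toy analogue of an OMD projection step."""
    n = len(p)

    def kl(q):
        q = np.clip(q, 1e-12, None)  # avoid log(0) during line searches
        return float(np.sum(q * np.log(q / p)))

    cons = [
        {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},   # probability simplex
        {"type": "ineq", "fun": lambda q: b - A @ q},        # linear constraints
    ]
    res = minimize(kl, x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

p = np.array([0.7, 0.2, 0.1])
A = np.array([[1.0, 0.0, 0.0]])  # constraint: q[0] <= 0.5
b = np.array([0.5])
q = kl_projection(p, A, b)       # mass above 0.5 is redistributed proportionally
```

Generic solvers like this converge to any desired accuracy in polynomial time, which is all the regret analysis requires.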
> W3
We thank the Reviewer for the opportunity to clarify these aspects.
Regarding Lagrangian methods, state-of-the-art primal-dual algorithms are not suited to handle any of our scenarios. Broadly, these algorithms can be categorized into two main families: (i) primal-dual methods designed specifically for stochastic settings (e.g., [1], [2], [3]), and (ii) algorithms that address both stochastic and adversarial CMDPs (e.g., [4], [5]). All the algorithms in the first class assume that the losses/rewards are stochastic in nature, thus, their techniques and results are not applicable to our three scenarios. For the latter, [4], [5] achieve $\sqrt T$ regret and violation when the constraints are stochastic and the losses adversarial. Nevertheless, notice that their violation definition allows per-episode violations to cancel out, thus they employ a weaker violation definition w.r.t. ours. Moreover, the aforementioned papers assume Slater’s condition to hold, which is not the case for our first scenario. Our second and third scenarios have never been tackled employing primal-dual methods even in simpler stochastic rewards settings.
As concerns model-based techniques, our work is the first to tackle adversarial losses. Additionally, our work is the first one to provide constant violation bounds (employing our positive violation definition) without assuming the knowledge of a safe policy, even considering stochastic settings. Finally, existing model-based algorithms tailored for stochastic versions of our second scenario do not work in adversarial settings, and cannot be easily extended due to a different randomization approach ([6], [7]). For further details on these aspects please refer to the next answer.
We will surely include this discussion in the final version of the paper.
> W4
We thank the Reviewer for the opportunity to better discuss the algorithmic components of our techniques.
SV-OPS employs the optimization approach of Jin et al. (2020) on optimistic safe decision spaces. Optimistic safe decision spaces were originally introduced by [1] in the context of stochastic CMDPs.
The idea behind S-OPS and its randomization approach is novel in the CMDPs literature. We acknowledge that there exist other forms of randomization which are suitable for stochastic CMDPs (see, e.g., [6], [7]), but they fail to work in adversarial settings. To see this, notice that the algorithms presented in [6], [7] play the safe policy for a fixed amount of rounds and then resort to pessimistic decision spaces which are increasing in $t$. It is well-known that adversarial online learning algorithms do not work on increasing decision spaces. Thus, their approach cannot be easily generalized to tackle adversarial losses.
BV-OPS employs techniques which are completely novel in the literature.
We hope to have properly addressed the Reviewer concerns. Please let us know if further discussion is necessary.
[1] Efroni et al (2020), “Exploration-Exploitation in Constrained MDPs”
[2] Müller et al. (2024), “Truly No-Regret Learning in Constrained MDPs”
[3] Stradi et al. (2024), “Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization”
[4] Qiu et al. (2020), “Upper confidence primal-dual reinforcement learning for cmdp with adversarial loss.”
[5] Stradi et al. (2024), “Online Learning in CMDPs: Handling Stochastic and Adversarial Constraints”
[6] Liu et al. (2021), “Learning policies with zero or bounded constraint violation for constrained mdps”
[7] Bura et al. (2022), “DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning.”

Summary: This paper studies episodic Constrained Markov Decision Processes (CMDPs) with adversarial losses and stochastic hard constraints under bandit feedback. It is the first to address a setting that combines both adversarial losses and strict hard constraints, whereas prior work has either considered adversarial losses with soft constraints, allowing negative violations to cancel out with positive ones, or stochastic losses with hard constraints. The authors propose three algorithms tailored to different scenarios and under different assumptions: the first ensures sublinear regret while keeping cumulative constraint violations sublinear, the second guarantees that the constraints are satisfied at every episode assuming the learner knows a strictly feasible policy, and the third achieves constant violation regret assuming a strictly feasible policy exists but is not known to the learner.
Claims And Evidence: All theoretical claims are followed by proofs in the appendix.
Methods And Evaluation Criteria: As a theoretical paper, the algorithms make sense for the problem.
Theoretical Claims: Proofs for theoretical claims seem correct.
Experimental Designs Or Analyses: not applicable.
Supplementary Material: I briefly checked the proofs for section 4 as they are based on standard techniques from online MDPs. I checked in detail the proofs for sections 5.
Relation To Broader Scientific Literature: This paper is the first to tackle the case of constrained MDPs when facing both the challenges of adversarial losses and hard constraints. Existing results so far would either treat adversarial losses with soft constraints, or stochastic losses with hard constraints.
However, the SV-OPS algorithm from Section 4 (the first algorithm proposed) primarily builds on existing ideas from the online MDP literature, introducing only minor generalizations. The algorithm follows the same OMD iterative scheme as in [Jin 2020]. But instead of projecting onto the space of occupancy measures that satisfy the dynamic constraints for any probability transition within a confidence set around the true transition, SV-OPS also projects onto the set of estimated constraints to ensure sublinear positive constraints violation. Projecting on the estimated constraints has already been introduced by [Efroni et al. 2020] in the set of stochastic losses.
On the other hand, the algorithm proposed in Section 5, which ensures safety in every episode, and the one proposed in Section 6, which guarantees constant violations, introduce novel ideas that may be of interest to the broader online C-MDP community.
Essential References Not Discussed: Related work is clear.
Other Strengths And Weaknesses: - **Strenghts**: First, the paper is well-written. Second, the extension of SV-OPS to scenarios that ensure safety at evert episode and guarantee constant violation appears novel and may be of significant interest to the online C-MDP community.
- **Weaknesses**: For SV-OPS, the sublinear constraint violation is ensured due to the accurate estimation of $ G_t $ and the projection in Eq. (2). However, implementing this in practice would require access to a solver for a linear programming problem with at least $ \Omega(XLA) $ decision variables and constraints, making it computationally expensive for frameworks with large state spaces.
Other Comments Or Suggestions: No further comments
Questions For Authors: Regarding the concern raised above about the projection, could it be feasible to design a primal-dual algorithm (which performs well in practice) that employs a similar OMD iterative scheme to handle adversarial losses while achieving both sublinear regret and sublinear violations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank the Reviewer for the positive evaluation and for the interesting question. Indeed, this is an interesting future direction, nonetheless, we believe that it would be highly non-trivial to employ a primal-dual approach to solve any of the scenarios we study, due to the following reasons. For the first scenario, primal-dual methods generally require Slater’s condition to hold, which is not the case of our algorithm. Moreover, primal-dual methods generally fail to remove the $1/\rho$ dependence in the regret and violation bound, thus achieving worse regret and violation guarantee than SV-OPS. Finally, notice that primal-dual methods developed for adversarial loss settings do not guarantee sublinear positive violation, but employ a violation definition where the cancellations are allowed (see, e.g., Stradi et al. (2024b)). As concerns the second and third scenario, we believe that the randomization with the strictly feasible policy requires an optimistic safe-set estimation, which is generally not employed in primal-dual methods.
Finally we want to underline that solving this kind of projection is standard in the adversarial MDPs literature (see e.g., Rosenberg et al. (2019b), Jin et al. (2020)). Similarly, it is standard for adversarial online learning algorithms to require projections to be solved at each episode, even in easier single-state full-feedback settings (e.g., OGD and OMD algorithms).
Please let us know if further discussion is necessary.

Summary: This paper introduces algorithms for constrained Markov Decision Processes (MDPs) with stochastic hard constraints, considering different assumptions and objectives for constraint violations. Specifically, it examines three key cases: (1) when constraints are feasible, (2) when constraints are strictly feasible with a known feasible policy, and (3) when constraints are strictly feasible but no strictly feasible policy is known beforehand. The paper establishes sublinear regret and constraint violation bounds for all three scenarios. The writing is clear, and the proofs are straightforward to follow. The core approach involves optimistically satisfying cost constraints in the first case and randomizing between an optimistic and a strictly feasible policy in the second.
Claims And Evidence: The lower bound in Theorem 6.6 seems a bit puzzling to me. Please refer to the questions for the authors field below.
Methods And Evaluation Criteria: This is fine.
Theoretical Claims: This is mostly fine.
Experimental Designs Or Analyses: No experimental result has been provided.
Supplementary Material: Yes
Relation To Broader Scientific Literature: Please refer to the questions for the authors field below.
Essential References Not Discussed: [1] Sinha, Abhishek, and Rahul Vaze. "Optimal algorithms for online convex optimization with adversarial constraints." Advances in Neural Information Processing Systems 37 (2025): 41274-41302.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Statements on Constraint Violation and Regret appear first in Theorems 4.2 and 4.3, but I could not find their formal definitions earlier in the paper. Clearly stating these definitions upfront would improve readability and help the reader follow the theoretical results more easily.
Questions For Authors: 1. Prior work [1] demonstrated that $O(\sqrt{T})$ regret and violation bounds can be achieved in online convex optimization with adversarial hard constraints without assuming Slater’s condition (i.e., when $\rho=0$). Keeping this result in view, could the authors offer some intuition on why Slater’s condition assumption is necessary for online reinforcement learning with hard constraints? Specifically, why does the result in [1] not contradict the lower bound given in Theorem 6.6, given that the lower bound proof involves a single state MDP?
2. The paper does not include any numerical comparisons with prior algorithms in the MDP setting, making it difficult to assess the practical effectiveness of the proposed theoretical results.
3. In Equation (2), computing the projection may be computationally challenging. Can the authors provide an upper bound on its worst-case computational complexity?
References:
[1] Sinha, Abhishek, and Rahul Vaze. "Optimal algorithms for online convex optimization with adversarial constraints." Advances in Neural Information Processing Systems 37 (2025): 41274-41302.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the Reviewer for the positive evaluation of the paper.
> Question 1.
We thank the Reviewer for the opportunity to clarify this aspect. Indeed, there is no contradiction between [1] and our lower bound, since our lower bound holds for our second and third scenario only, namely, when we aim to guarantee a violation of order $o(\sqrt{T})$, such as $0$ or constant violation. In our first scenario, we **do not assume Slater's condition** and we guarantee $O(\sqrt{T})$ regret and violation, thus, there is no contradiction w.r.t. [1].
> Question 2.
We agree with the Reviewer that experiments are always beneficial; nevertheless, we underline that both in the online CMDPs literature and the adversarial MDPs one, many works do not have experimental results (see e.g., Rosenberg et al. (2019b), Jin et al. (2020), Efroni et al. (2020), Stradi et al. (2024b) and many others). Despite the lack of experimental evaluation, many of the aforementioned works (and many others) have been published in top AI conferences, such as ICML.
> Question 3.
We thank the Reviewer for the opportunity to clarify this aspect. Please notice that Eq. (2) is a convex program with linear constraints, which can be approximated arbitrarily well in polynomial time. We underline that solving this kind of projection is standard in the adversarial MDPs literature (see e.g., Rosenberg et al. (2019b), Jin et al. (2020)). Similarly, it is standard for adversarial online learning algorithms to require projections to be solved at each episode, even in easier single-state full-feedback settings (e.g., OGD and OMD algorithms).
We finally thank the Reviewer for the other comments and suggestions, we will surely update the final version of the paper taking them into account.
Efficient Diffusion Models for Symmetric Manifolds
Paper Decision: Accept (poster)

Summary: The paper introduces a new framework for designing efficient diffusion models on symmetric manifolds, including torus, sphere, special orthogonal group and unitary group. The paper incorporates a spatially varying covariance structure that allows efficient training without computing the manifold heat kernel. In addition, the forward process involves a projected Euclidean Brownian motion, and thus avoids costly numerical solvers.
Claims And Evidence: The proposed diffusion model based on manifold projection for symmetric manifolds is claimed to be efficient, and there is evidence both in terms of complexity analysis and runtime comparisons in experiments. The method also seems to improve sample quality on synthetic datasets compared to prior manifold-based diffusion models.
Methods And Evaluation Criteria: The proposed methods are based on properties of symmetric manifolds and appear to be correct. The evaluation is standard as in existing works.
Theoretical Claims: Did not closely check the proofs.
Experimental Designs Or Analyses: Experiments seem to verify the benefits of the methods in terms of efficiency and sample quality.
Supplementary Material: Did not check the supplementary material.
Relation To Broader Scientific Literature: Given the need for efficiency and scalability in generative models, the motivation and findings of this work is significant.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Because I am not an expert in the field of manifold diffusion models, my judgement on the paper may be less confident. In general, I found the paper to be well-written, and the method has merits even though the idea may seem natural (as projection and its inverse are used for efficient training and sampling).
Other Comments Or Suggestions: NA
Questions For Authors: 1. Could the authors comment on how to generalize the method to non-symmetric manifolds?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We are glad that you appreciate that the motivation and findings of this work is significant, and the benefits of our methods in terms of efficiency and sample quality. We answer your specific question below.
> Could the authors comment on how to generalize the method to non-symmetric manifolds?
Thank you for this important question. Our theoretical guarantees for runtime and sampling accuracy rely on the following three properties:
1. *Exponential map oracle:* An oracle for computing the exponential map on the manifold $\mathcal{M}$.
2. *Projection map oracle:* A projection map $\varphi: \mathbb{R}^d \rightarrow \mathcal{M}$, where $d = O(\mathrm{dim}(\mathcal{M}))$ , which can be computed efficiently, along with its Jacobian $J_\varphi(x)$ and the trace of its Hessian $\mathrm{tr}(\nabla^2 \varphi(x))$, as these appear in our training objective.
3. *Lipschitz SDE on $\mathcal{M}$:* The projection $Y_t = \varphi(H_t)$ of the time-reversal $H_t$ of Euclidean Brownian motion should satisfy a stochastic differential equation (SDE) on $\mathcal{M}$ with drift and covariance terms that are $L$-Lipschitz at every point on $\mathcal{M}$, with $L$ growing at most polynomially in $d$. This ensures accurate and efficient simulation of the reverse diffusion process.
Conditions (1) and (2) can, in principle, be satisfied even when $\mathcal{M}$ is not a symmetric manifold. For example, consider the setting where $\mathcal{M}$ is the boundary of a compact convex polytope $K \subseteq \mathbb{R}^d$ as a structured non-symmetric domain. $K$ is assumed to contain a ball of some small radius $r>0$ centered at some point $p$. While not a Riemannian manifold due to non-smooth points at vertices and edges, the polytope boundary $\mathcal{M}$ is composed of piecewise flat $(d-1)$-dimensional faces. Geodesics within faces are linear and can be computed efficiently, satisfying the spirit of (1). For (2), one can define the projection $\varphi: \mathbb{R}^d \rightarrow \mathcal{M}$ to be the map which maps a point $x \in \mathbb{R}^d$ to the intersection of the ray $\ell$ with the polytope boundary $\mathcal{M}$, where $\ell$ is the ray extending from the center $p$ and passing through $x$. This projection can be computed efficiently, for example via binary search.
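A minimal Python sketch of the radial projection just described, assuming for concreteness that $K$ is the hypercube $[-1,1]^3$ with center $p=0$ and that membership in $K$ is available as an oracle; the helper `project_to_boundary` is hypothetical and only illustrates the binary search along the ray from the center through $x$.

```python
import numpy as np

def project_to_boundary(x, inside, t_hi=1e6, iters=60):
    """Radial projection onto the boundary of a convex body containing 0:
    binary-search the scale t such that t*x lies on the boundary (x != 0)."""
    t_lo = 0.0
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if inside(t_mid * x):
            t_lo = t_mid   # still inside: move the lower bracket up
        else:
            t_hi = t_mid   # outside: move the upper bracket down
    return t_lo * x

# Membership oracle for the unit hypercube [-1, 1]^3 (center at 0).
inside_box = lambda y: np.max(np.abs(y)) <= 1.0
y = project_to_boundary(np.array([0.2, 0.1, 0.05]), inside_box)
```

Each query is a single membership test, so the projection costs $O(\log(1/\varepsilon))$ oracle calls for accuracy $\varepsilon$.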
However, condition (3) is harder to guarantee in such cases. The drift of the projected reverse SDE has discontinuities at the vertices (and lower-dimensional faces) of the polytope. In our paper, we rely on continuous symmetries of the manifold to "smooth out" such irregularities and prove the necessary Lipschitz properties (see the paragraph "Showing 'average-case' Lipschitzness" on page 6).
Even among smooth Riemannian manifolds, generalizing beyond symmetric spaces is nontrivial. Examples include surfaces of revolution with non-uniform curvature (e.g., a torus with a varying cross-section) or higher-genus compact manifolds (e.g., a double torus), which lack the high degree of symmetry leveraged by our current analysis.
Extending our framework to such settings is a compelling and challenging direction for future research. We will include this discussion in the final version of the paper.

Summary: To improve the efficiency and accuracy of diffusion models on manifolds, this work first defines a novel diffusion process on a so-called symmetric manifold by applying a projection map with a mild smoothness condition. For the reverse process, instead of considering the manifold's heat kernel, which has no closed form, they design a new training objective to obtain the drift and covariance terms. Furthermore, they obtain a complexity bound on the Wasserstein distance between the generated distribution and the target distribution by using the comparison theorem for Riemannian manifolds, which is bounded by $O(\text{poly}(d))$.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: - (Line 179-192) What is the motivation of the symmetry property of $\mathcal{M}$? Here, I don't understand $z = z(U,\Lambda)$ and $\Lambda \in \mathcal{A}$ with $\mathcal{A} \subset \mathbb{R}^{d - \text{dim}\mathcal{M}}$. For the example of $SO(n)$, $z = U\Lambda U^*$ but $\Lambda \in \mathbb{R}^d$, not in $\mathbb{R}^{d - \frac{d(d-1)}{2}}$. Or do you mean $\mathcal{A} \subset \mathbb{R}^d$ is a $(d - \text{dim}\mathcal{M})$-dimensional submanifold? Does this concept come from the concept of symmetric spaces?
Experimental Designs Or Analyses: Yes
Supplementary Material: I have checked Appendix A for the proof outline and Appendix B for examples of classical symmetric manifolds.
Relation To Broader Scientific Literature: This work provides a novel approach to consider the diffusion on manifolds. Unlike the previous works that requires the heat kernel on manifolds, which has no closed form so that it requires a lot of calculations, their forward process, reverse process, and training strategy are much more computationally efficient. So it has the potential to be applied to unknown data manifolds.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strengths:
- This work provides another approach to consider the diffusion on Riemannian manifold by training the drift and variance terms to avoid the problem of computing the manifold's heat kernel which has no closed form, so it improves the efficiency significantly.
- The assumptions are not too strict, since smoothness of the construction maps $\phi \colon \mathbb{R}^d \rightarrow \mathcal{M}$, $\psi \colon \mathcal{M} \rightarrow \mathbb{R}^d$ is not required. So the method may be applicable to learnable $\phi, \psi$ for an unknown manifold $\mathcal{M}$.
### Weaknesses:
- In general, the problem is how to check whether a data manifold satisfies the so-called symmetry property.
- The sampling algorithm needs prior knowledge of the exponential map, which may prevent the algorithm from being applied to unknown data manifolds. The open problems are whether we can train an exponential map and how the resulting error affects the final accuracy.
Other Comments Or Suggestions: - In Theorem 2.2 (Line 171), $\varphi \colon \mathbb{R}^d \rightarrow \mathcal{M}$, and (Line 173) $\phi(\mathcal{M})$?
- I think it would be better to provide brief preliminary knowledge of geometry and diffusion on manifolds in the appendix.
Questions For Authors: - About the Assumption 2.1, what is the norm $\Vert\cdot\Vert_{2 \rightarrow 2}$ meaning? Is it the operator norm of a matrix, because I thought $\nabla \phi(x) \in \mathbb{R}^{d \times d}$ is a matrix when extending $\phi \colon \mathbb{R}^d \rightarrow \mathcal{M}$ to $\phi \colon \mathbb{R}^d \rightarrow \mathbb{R}^d$? Furthermore, I am not sure what $\frac{d}{dU}\varphi(x)$ and $\frac{d}{dU}x$ exactly mean.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We are glad you appreciate the novel approach to diffusion on manifolds, and our significant improvement in computational efficiency. We answer your specific questions below.
>…motivation of the symmetry property of $\mathcal{M}$?
Thank you for this thoughtful question. The symmetry property, together with the average-case Lipschitz property for the map $\varphi$ (Assumption 2.1), allows us to show that the SDE for the projected reverse diffusion $Y_t=\varphi(H_t)$ on $\mathcal{M}$ satisfies a Lipschitz property at *every* point on $\mathcal{M}$ (We prove this in Lemma C.6; see also the paragraph *Showing "average-case" Lipschitzness* in Section 3.2). This everywhere-Lipschitz property in turn allows us to bound the numerical error of our algorithm's SDE solver.
Roughly, the average-case Lipschitz property (Assumption 2.1) says the projection map $\varphi$ satisfies a Lipschitz condition which holds on a subset of “average-case” points $\Omega_t\subset\mathbb{R}^d$ which contains the Euclidean-space forward diffusion $Z_t$ with high probability. Moreover, $\Omega_t$ satisfies a symmetry property: the indicator function $1_{\Omega_t}(x),$ which determines if a point $x\in\mathbb{R}^d$ is in $\Omega_t$, is independent of the projection $U:=\varphi(x)$ (e.g., for the sphere, $1_{\Omega_t}(x)$ depends only on the radial magnitude $\Lambda=||x||$ and not on the projection $U=\frac{x}{||x||}$ onto the sphere).
We show Assumption 2.1 holds for symmetric manifolds studied in our paper (see the paragraph following Assumption 2.1, and Lemma C.4).
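For concreteness, here is a small Python sketch of the sphere example above (the helper names are ours, purely for illustration): the decomposition $x=(U,\Lambda)$ with radial part $\Lambda=\|x\|$ and projection $U=x/\|x\|$, so that a set defined through $\Lambda$ alone is independent of the projection $U$.

```python
import numpy as np

def decompose(x):
    """Split x in R^d \ {0} into its projection U on the unit sphere and
    the radial part Lambda = ||x||, so that x = Lambda * U."""
    lam = np.linalg.norm(x)
    return x / lam, lam

# An "average-case" set defined through Lambda only: membership depends
# on the radial magnitude ||x|| and not on the direction U.
in_omega = lambda x, lo, hi: lo <= np.linalg.norm(x) <= hi

x = np.array([3.0, 0.0, 4.0])
U, lam = decompose(x)
```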
>I don't understand $z=z(U,Λ)$…
We appreciate the opportunity to clarify. The dimension $d$ of the Euclidean space $\mathbb{R}^d$ is within a small constant factor of the dimension of the manifold $\mathcal{M}$ ($d=O(\mathrm{dim}(\mathcal{M}))$). The dimension of the Euclidean space $\mathbb{R}^{d-\mathrm{dim}(\mathcal{M})}$ that contains the $\Lambda$'s, and the dimension of $\mathcal{M}$, add up to $d$. In our paper we sometimes abuse notation and refer to the manifold's dimension as $d$ rather than "$O(d)$", as this does not change the runtime bounds beyond a small constant factor. We will clarify this.
When $\mathcal{M}=\mathrm{SO}(n)$, each element of $\mathrm{SO}(n)$ is an $n\times n$ orthogonal matrix. The map $\varphi:\mathbb{R}^d→\mathcal{M}$ takes as input an $n\times n$ upper triangular matrix, and outputs an $n\times n$ orthogonal matrix in $\mathcal{M}=\mathrm{SO}(n)$. As each upper triangular matrix has $\frac{n(n+1)}{2}$ nonzero entries, the space of upper triangular matrices (in vector form) is $\mathbb{R}^d$ with $d=\frac{n(n+1)}{2}.$
The dimension of $\mathcal{M}=\mathrm{SO}(n)$ is $\mathrm{dim}(\mathcal{M})=\frac{n(n+1)}{2}-n$ (as $n\times n$ orthogonal matrices have $\frac{n(n+1)}{2}-n$ degrees of freedom). As $\Lambda$ are the diagonal entries of an $n\times n$ diagonal matrix, $\Lambda\in\mathbb{R}^n$. Thus, $\Lambda\in\mathbb{R}^{d-\mathrm{dim}(\mathcal{M})}$, as $d-\mathrm{dim}(\mathcal{M})=\frac{n(n+1)}{2}-(\frac{n(n+1)}{2}-n)=n$.
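The dimension counting above can be sanity-checked with a few lines of Python (the helper `dims` is ours, purely for illustration):

```python
def dims(n):
    """Parameter counts in the decomposition for SO(n): the ambient space of
    upper-triangular matrices, the manifold itself, and the Lambda part."""
    d = n * (n + 1) // 2           # nonzero entries of an upper-triangular matrix
    dim_M = n * (n + 1) // 2 - n   # degrees of freedom of SO(n)
    return d, dim_M, d - dim_M     # the last entry should equal n

d, dim_M, lam_dim = dims(4)  # e.g., n = 4: ambient 10, manifold 6, Lambda 4
```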
>how to check…symmetry property
In this paper, we assume the constraint manifold is known a priori and is one of several standard symmetric manifolds (e.g., sphere, torus, $\mathrm{SO}(n)$, $\mathrm{U}(n)$ or their direct products). This is typical in many applications—e.g., molecular data on tori or quantum evolution matrices in $\mathrm{U}(n)$. In such applications, numerical methods are not required to verify the symmetry property.
>…previous knowledge of the exponential map…
In the setting where the manifold is a symmetric-space, there are closed-form expressions which allow one to efficiently and accurately compute the exponential map. E.g., on $\mathrm{SO}(n)$ or $\mathrm{U}(n)$, this map is given by the matrix exponential.
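For instance, on $\mathrm{SO}(n)$ the exponential map at a base point $R$ in the direction of a tangent vector $RA$, with $A$ skew-symmetric, is $R\,\mathrm{expm}(A)$. A minimal sketch using `scipy.linalg.expm` (the function name `exp_so_n` is ours, purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

def exp_so_n(R, A):
    """Exponential map on SO(n) at base point R in the direction of the
    tangent vector R A, where A is skew-symmetric: exp_R(R A) = R expm(A)."""
    assert np.allclose(A, -A.T)  # A must be skew-symmetric
    return R @ expm(A)

theta = np.pi / 2
A = np.array([[0.0, -theta], [theta, 0.0]])  # skew-symmetric generator
Q = exp_so_n(np.eye(2), A)                   # rotation by theta in SO(2)
```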
>In Theorem 2.2 (Line 171), $\varphi:\mathbb{R}^d→\mathcal{M}$
We will fix this typo.
>…(Line 173) $\phi(\mathcal{M})$?
The map $\psi$, defined in Section 2, is the (restricted) inverse of $\varphi$, where $\varphi(\psi(x))=x$ for all $x\in\mathcal{M}$. $\psi(\mathcal{M}):=\{\psi(x):x\in\mathcal{M}\}\subset\mathbb{R}^d$ is the pushforward of $\mathcal{M}$ w.r.t. $\psi$.
>…preliminary knowledge…appendix.
We will include a brief primer on Riemannian geometry and manifold-based diffusion in the appendix.
>$‖\cdot‖_{2→2}$ … Is it the operator norm
Yes, this is the operator norm (induced 2-norm) of a matrix. We will clarify this in the final version.
>…what $\frac{d}{dU}\varphi(x)$ and $\frac{d}{dU}x$ exactly mean.
In the decomposition $x=x(U,\Lambda)$, we define the partial derivative $\frac{d}{dU}x(U,\Lambda)$ as the derivative of the parameterization with respect to $U\in\mathcal{M}$. For instance, when $\mathcal{M}$ is the special orthogonal group $\mathrm{SO}(n)$, we have $x(U,\Lambda)=U\Lambda U^\top$, and the derivative corresponds to projecting $U\Lambda+\Lambda U^\top$ onto the tangent space of $\mathrm{SO}(n)$.

Summary: This paper introduces a new method for producing scalable diffusion models on Riemannian manifolds with certain symmetries. The method is constructed by placing a diffusion process on a Euclidean space that can be projected entirely onto the manifold, with a partial inverse.
The training speed of this algorithm is significantly faster than that of the current best methods for applicable manifolds. It significantly improves the growth rate of runtime with manifold dimension, matching in special cases the rate of Euclidean diffusion models.
The authors contribute several pieces of new theory to prove results regarding the runtime and accuracy of the model and the training objective.
Experimental validation proves out the theory by showing excellent results in high dimension scaling.
Claims And Evidence: I believe all the claims are well supported.
Methods And Evaluation Criteria: They do.
Theoretical Claims: I checked some of the derivations, that of the loss, the training and sampling algorithm.
The results on run time and accuracy are beyond my expertise.
Experimental Designs Or Analyses: The evaluation section is well conceived and demonstrates the advantage of the model well.
One missing experiment would be on real data of some kind - for example the earth sciences dataset typically used for Riemannian diffusion models, or something like the experiments on molecule generation in papers such as Torsional Diffusion [1]. I suggest this as usually real data is significantly more mode-concentrated than synthetic data and it is useful to see if methods can handle this. I do not see this as a barrier to publication, but practitioners may find it more convincing to use the method if such experiments are included.
[1] https://arxiv.org/abs/2206.01729
Supplementary Material: I checked the sections relevant to the loss, the training and sampling algorithm, and the additional experimental detail.
Relation To Broader Scientific Literature: The methods here relate to the body of work modeling densities on Riemannian manifolds. Typically, these methods have struggled to scale to higher dimensions outside of the special case of the torus. This method significantly improves on prior work for the subset of manifolds it applies to.
Essential References Not Discussed: Not essential, but I would have liked to see a mention of [1], as they also develop a model for placing density on the torus with runtimes on the same order as Euclidean space. This method is clearly different and applicable to other settings, however.
[1] https://arxiv.org/abs/2206.01729
Other Strengths And Weaknesses: I found the presentation of Sections 2 & 3 a little confusing. It was strange to see the results presented ahead of the derivation of the algorithm, and I found myself constantly referring forwards. I would perhaps reorder these sections to present the method first, followed by the results and then the proof highlights. This is just a suggestion, however.
I was also unsure about the use of the word `oracle` throughout the paper to describe what to me are just known functions, such as the exponential map and the projection maps. Could the authors explain why it is needed to term these maps this way?
Other Comments Or Suggestions: None
Questions For Authors: Please see the relevant sections of the review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We are glad that you appreciate the significant runtime improvement, our contribution of several pieces of new theory, and excellent experimental results in high dimension scaling. We answer your specific questions below.
> One missing experiment would be on real data of some kind - for example, the earth sciences dataset typically used for Riemannian diffusion models, or something like the experiments on molecule generation in papers such as Torsional Diffusion [1]. I suggest this as usually real data is significantly more mode-concentrated than synthetic data and it is useful to see if methods can handle this. I do not see this as a barrier to publication, but practitioners may find it more convincing to use the method if such experiments are included.
Thank you for this helpful suggestion. We agree that evaluating our method on real-world datasets would enhance its practical relevance.
We considered the datasets you mentioned. Our theoretical results and experiments on synthetic datasets indicate that our method yields greater gains in sample quality and training runtime as the dimension of the manifold increases—for example, on tori with $d \geq 10$, and on $\mathrm{U}(n)$ and $\mathrm{SO}(n)$ for $n \geq 9$ (see Table 1 and Figure 1). As such, low-dimensional datasets (e.g., on $\mathbb{S}^2$) are less likely to reveal the advantages of our approach.
In contrast, the GEOM-DRUGS dataset from the paper *Torsional Diffusion for Molecular Conformer Generation* [1] offers a promising setting. It consists of molecules whose torsion angles can be represented as points on tori of varying dimensions (average $d \approx 8$, with some above $d = 30$), provided one first applies a preprocessing model [1] that infers torsion angles from 3D molecular structures. However, one cannot directly apply our framework to this dataset, as the torsion angles of different molecules lie on tori of different dimensions. Applying our model to this dataset would require extending our framework (and code) to handle a union of tori of varying dimensions, and adapting it to perform conditional generation based on molecular graphs.
We are actively exploring this direction and will include a discussion of such extensions and limitations in the final version.
> Not essential, but I would have liked to see a mention of [1], as they also develop a model for placing density on the torus with runtimes on the same order as Euclidean space.
Thank you for pointing us to this work. We will include a discussion of this work in Section 1 (Introduction) and Section 2 (Results) of the final version.
> I found the presentation of sections 2 & 3 a little confusing... I would perhaps reorder these sections to present the method first, followed by the results and then the proof highlights.
Thank you for this feedback. We will revise the paper so that the algorithmic details precede the presentation of results.
> I was also unsure about the use of the word oracle throughout the paper to describe what to me are just known functions, such as the exponential map and the projection maps...
You are right — the term "oracle" is not necessary here, since the exponential and projection maps are known and computable in closed form for the manifolds considered. We will remove the term in the final version.

Review 2:
Summary: This work proposes a new efficient algorithm to generate symmetric manifold data, which enjoys $O(1)$ gradient evaluation and nearly $d$ arithmetic operations (exactly $d$ for sphere and torus data) and significantly improves on previous results. The main intuition is to take advantage of Ito's Lemma and the projection operator to allow the use of a closed-form score on the Euclidean space. They also use synthetic experiments to support their theoretical results.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I partly checked the correctness of the proofs (proof sketches) for the theoretical claims.
Experimental Designs Or Analyses: I have checked the soundness of the analysis and experimental designs.
Supplementary Material: No.
Relation To Broader Scientific Literature: Previous works on the manifold data either have $O(d)$ gradient evaluation or exponential $O(d)$ arithmetic operations. In this work, they improve these two terms at the same time.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength 1: The intuition behind the algorithm design is helpful, and the technical novelty is clear (Sec. 3.2).
Weakness 1: It would be better to add a notation part at the beginning of the appendix.
Other Comments Or Suggestions: Comment 1: It seems that this work does not introduce the definition of $J_{\varphi}$
Questions For Authors: Please see the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We are glad that you appreciate that our method significantly improves previous results, the intuition behind the algorithm design, and the technical novelty. We answer your specific questions below.
> It would be better to add a notation part at the beginning of the appendix.
Thank you for the suggestion. We will add a notation section at the beginning of the appendix in the final version.
> It seems that this work does not introduce the definition of $J_\varphi$
We appreciate the opportunity to clarify. The Jacobian $J_\varphi$ is the matrix whose $(i,j)$-th entry is the partial derivative $\frac{\partial \varphi_i(x)}{\partial x_j}$. While this is briefly mentioned at the top of page 5 (end of the first paragraph in the right column), we agree that the definition can be made more explicit. We will revise the explanation on page 5 and also include a formal definition in the new notation section. | null | null | null | null | null | null |
Efficient Long Context Fine-tuning with Chunk Flow | Accept (poster)

Review 1:
Summary: This paper introduces ChunkFlow, an LLM training (fine-tuning) method that aims to improve the computational as well as memory efficiency of long-context training/fine-tuning. The authors start from three empirical observations in long-context fine-tuning, point out existing efficiency bottlenecks, and design ChunkFlow based on them. Given a batch of input sequences, ChunkFlow reorganizes it into a new set of chunks based on heuristics, so that the size of each chunk does not exceed a pre-defined chunk size. The method also incorporates state-aware chunk scheduling and state-aware 1F1B for better training efficiency. Evaluations show that, compared to Megatron-LM, ChunkFlow accelerates long-context fine-tuning by up to 4.53x.
Claims And Evidence: The claims made by the authors in this paper are supported by citations or experiment results.
Methods And Evaluation Criteria: No particular flaw in evaluation design. See issues/concerns in "Questions For Authors".
Theoretical Claims: N/A
Experimental Designs Or Analyses: No particular flaw in experiment design. See issues/concerns in "Questions For Authors".
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper lies within the broader area of LLM fine-tuning systems. In particular, it addresses the computational and memory efficiency of long-context fine-tuning (a special case of LLM fine-tuning).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Issues not mentioned in prior sections:
- Missing an impact statement.
Other Comments Or Suggestions: N/A
Questions For Authors: Thank you for submitting this paper to NeurIPS. This is a well-motivated paper, drawing inspirations from empirical observations and presenting a concrete solution that is well-evaluated.
Below are a few questions and concerns:
- The figures do a good job in illustrating different parts of the system (esp. reorganizing the chunks) but really need to be fixed. Fonts are too small to be read clearly (esp. figures 2, 3, 5, 6, 7).
- You mention that you use grid search to find best K and chunk size --- I am curious what the cost is for the grid search process, before fine-tuning begins. And, in particular, how that cost compares to the training cost itself, e.g. the cost in terms of latency in table 6.
- How does ChunkFlow generalize to pre-training? I understand that pre-training experiments might be too hard to implement, but I am curious how and why many of the design choices/intuitions in ChunkFlow could help improve the computational/memory efficiency of pre-training.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We are sincerely grateful to Reviewer **dcDa** for devoting time to review our work and providing invaluable feedback. Regarding the clarity of figures and the impact statement, we will, in strict compliance with Reviewer dcDa’s suggestions, **incorporate an impact statement and refine all figures—with particular focus on Figures 2, 3, 5, 6, and 7—in the final version of our release**. Moreover, we offer a comprehensive response below to other concerns raised by Reviewer dcDa.
***(Q1) The cost of grid search compares to training cost itself***
We sincerely appreciate reviewer dcDa's meticulous review of our paper, which points out a key implementation detail regarding grid search and allows us to further clarify the whole training process of ChunkFlow. Compared to the remarkable benefits of identifying the optimal `(ChunkSize, K)`, the overhead of grid search is almost negligible.
Below, we use the grid search process presented in Table 6 to illustrate this. In our research on the 72B model (relevant data can be found in Table 6), we evaluated several configurations, namely `[(32K, 1), (16K, 2), (8K, 4)]`. To accurately estimate training performance, each `(ChunkSize, K)` combination had to undergo 10 training steps. The entire grid search process took about 15 minutes in total. Significantly, training with a full 256K context using ChunkFlow takes multiple days or even weeks (~70 hours in our case). **Evidently, the overhead of grid search accounted for less than 0.3% of the total training time**.
From a cost-benefit perspective, the optimal configuration `(8K, 4)` achieved a **1.21x** speed-up compared to the worst-performing configuration (i.e., `(2K, 16)`). As a result, approximately **12 hours of training time are saved** over the entire training process in our case.
Moreover, these validated optimal parameters can be reused across repeated training runs, such as training new 72B model versions with new algorithms or training on updated datasets. This effectively amortizes the one-time cost of grid search.
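As a rough illustration of this selection procedure (our own sketch; the helper `run_steps` and the timing values are hypothetical stand-ins, not part of ChunkFlow's actual code), the grid search simply times a few training steps per candidate `(ChunkSize, K)` and keeps the fastest:

```python
def grid_search(candidates, run_steps, steps=10):
    """Return the (chunk_size, K) pair with the lowest measured step time.

    `run_steps(chunk_size, k, steps)` is assumed to launch `steps` training
    steps under that configuration and return elapsed wall-clock seconds.
    """
    best, best_time = None, float("inf")
    for chunk_size, k in candidates:
        elapsed = run_steps(chunk_size, k, steps)
        if elapsed < best_time:
            best, best_time = (chunk_size, k), elapsed
    return best

# Toy stand-in timings for the three candidates mentioned above.
fake_times = {(32768, 1): 120.0, (16384, 2): 105.0, (8192, 4): 99.0}
best = grid_search(list(fake_times), lambda c, k, s: fake_times[(c, k)])
assert best == (8192, 4)
```

Because only a handful of candidates are timed for a handful of steps each, the cost of this loop stays tiny relative to the full fine-tuning run.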
***(Q2): How and why many of the design choices/intuitions in ChunkFlow could help improve the computational/memory efficiency of pre-training***
Although ChunkFlow is mainly designed to boost training efficiency for LLM post-training phases like SFT and RLHF, which involve training on variable-length sequence datasets, its design can also be applied to other training stages like pretraining and long-context continual pretraining.
As mentioned in Meta's tech report[1], there are three main stages in developing a new LLM model: pretraining, long context continual pretraining (for context length extension), and post-training (for human preference alignment). In Section 1(Introduction), we claim that ChunkFlow can enhance long context continual pretraining efficiency because this phase also involves training on datasets with variable-length sequences[1] and exhibits efficiency challenges similar to those in the SFT stage (e.g., pipeline bubbles and resource underutilization). **Consequently, ChunkFlow’s design naturally improves computational and memory efficiency in long context continual pretraining scenario**.
Regarding pretraining, even though it typically operates on uniformly sized sequences, splitting sequences into chunks and utilizing ChunkFlow's state-aware scheduler to process them sequentially can still reduce pipeline bubbles (due to the increased number of micro-batches) and alleviate memory pressure. **This suggests that ChunkFlow's chunk-centric approach retains value even in scenarios with uniform sequence lengths**.
[1] Meta(2024). The Llama 3 Herd of Models. ArXiv, abs/2407.21783.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their detailed and helpful response! I have also read other reviews in detail. At this point, I don't have more questions related to design and evaluation, and I decide to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate reviewer dcDa's time and valuable feedback on our work; we will keep working to make ChunkFlow better.

Review 2:
Summary: This paper proposes ChunkFlow, a novel chunking and scheduling method for pipeline-parallel long-sequence training. It first discusses the long-tail phenomenon in long-context LLM training and the potential issues resulting from it, such as underutilization of GPU memory and pipeline bubbles. The authors then propose to chunk longer sequences to a maximum chunk size, with a scheduler handling the dependencies induced by the causal mask of each long sequence. The authors compare ChunkFlow against the state-of-the-art pre-training framework Megatron and find that ChunkFlow keeps peak memory stable across various sequence lengths during training, while achieving a 4x acceleration over Megatron.
Claims And Evidence: -
Methods And Evaluation Criteria: The strength of this paper:
1. Addressing a critical real-world problem. Training long-sequence models is very expensive, and many existing frameworks have very limited optimization for long-sequence pre-training. ChunkFlow ensures an upper bound on peak memory and reduces pipeline bubbles, which can help the scaling of sequence length and LLM size.
2. Good empirical results. The proposed method achieves good peak-memory optimization, keeping memory below 60GB with an 8K chunk size, which is quite impressive. The end-to-end speedup is 4x against Megatron, which can significantly reduce the cost of training long-sequence LLMs.
The limitation and questions for the authors:
1. When N > K, i.e., the number of chunks from the longest sequence exceeds K, duplicate computation is performed for the overlapping chunks, and dependent chunks are kept on GPU, which can lead to significant overhead.
2. It is unclear what components are actually retained for dependent sequences, especially within causal attention.
3. If the training data contains more long sequences than short ones, how much benefit does ChunkFlow provide in this scenario?
Theoretical Claims: -
Experimental Designs Or Analyses: -
Supplementary Material: -
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: This is overall a solid study that provides good explanations for the majority of the paper, with good experimental results. However, the authors should redo the figures, as they are not quite readable.
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank **xBtd** for their very positive review of our work, noting that our paper is "overall a solid study" that provides good explanations for the majority of the paper, and strongly emphasizing the importance of the research problem. In accordance with reviewer xBtd's suggestions, we will redo all figures to enhance their readability in the final release. Additionally, we provide detailed responses to reviewer xBtd's other concerns.
***Q1: Overhead of keeping dependent chunks when N > K***
We sincerely appreciate the reviewer’s attention to the issue of computational overhead when the number of dependent chunks (N) exceeds K. In ChunkFlow, when N>K, the forward passes of the first `N-K` chunks are executed twice. After the first-time forward passes of these `N-K` chunks, **we discard their activation values**. Instead, **we only retain the key/value tensors of their attention mechanisms as states for subsequent reuse**.
Since Multi-Query Attention (MQA) and Group-Query Attention (GQA) are widely adopted in mainstream large language models (LLMs) like LLama, Qwen, and Gemini, **storing these attention key/value tensors does not impose a substantial burden**. As can be seen from Table 5 in the paper, compared to processing 32K-length sequences, processing 256K-length sequences only consumes approximately 4GB more memory (still a much smaller memory footprint than the Megatron-LM baseline). This clearly demonstrates that retaining these states in memory does not pose a severe problem.
When training models with longer contexts, such as 2M token context models, retaining these states could ultimately lead to excessive memory consumption. As noted in Section 6.3.1, we plan to optimize this memory consumption through carefully designed memory offloading strategies.
***Q2: Components retained for dependent sequences in causal attention***
For dependent chunks (split from long sequences), ChunkFlow retains key/value (K/V) tensors and their gradients in causal attention layers. These components are critical for:
1. Forward Dependency : Subsequent chunks depend on K/V tensors from prior chunks to maintain causal masking (Section 4.2).
2. Backward Dependency : Gradients for K/V tensors in earlier chunks require activations from later chunks(Figure 5).
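To make the forward dependency concrete, here is a minimal NumPy sketch (our own illustration, not the actual ChunkFlow implementation) showing that caching only the key/value tensors of earlier chunks is enough for later chunks to reproduce exact full-sequence causal attention:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_attention(q, k, v):
    # q: (Tq, d); k, v: (Tk, d). Query i sits at global position Tk - Tq + i
    # and may only attend to keys at positions <= its own.
    scores = q @ k.T / np.sqrt(q.shape[1])
    offset = k.shape[0] - q.shape[0]
    mask = np.triu(np.ones((q.shape[0], k.shape[0]), dtype=bool), k=offset + 1)
    scores[mask] = -1e9
    return softmax(scores) @ v

rng = np.random.default_rng(0)
T, d, chunk = 12, 4, 4
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))

full = causal_attention(q, k, v)  # one pass over the whole sequence

# Chunked pass: only the K/V of earlier chunks is kept as reusable "state";
# activations of earlier chunks are not needed for the forward computation.
outs, k_cache, v_cache = [], [], []
for s in range(0, T, chunk):
    k_cache.append(k[s:s + chunk])
    v_cache.append(v[s:s + chunk])
    outs.append(causal_attention(q[s:s + chunk],
                                 np.concatenate(k_cache),
                                 np.concatenate(v_cache)))

assert np.allclose(full, np.concatenate(outs))
```

The final assertion confirms that per-chunk attention over the cached keys/values matches the single full-sequence pass exactly, which is why only K/V tensors (plus their gradients in the backward pass) need to be retained as state.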
***Q3: Efficacy in datasets dominated by long sequences***
Even when long sequences are more prevalent in datasets, ChunkFlow continues to demonstrate substantial advantages. However, the extent of performance improvement hinges on the distribution of data. For scenario analysis, we classify sequences as long if they exceed ChunkSize; otherwise, they are categorized as short. Whenever short sequences are present, we can consolidate them into a single chunk, significantly enhancing the utilization of GPU resources. As demonstrated in Figure 6, for long sequences, ChunkFlow splits them into uniformly sized chunks, effectively reducing pipeline bubbles. Subsequently, we carried out an experiment on the LongBench dataset [1]. The dataset exhibits a sequence-length distribution pattern consistent with what the reviewer envisioned.
The LongBench dataset is used for bilingual, multitask, and comprehensive assessment of the long-context understanding capabilities of LLMs, and the table below shows the sequence-length distribution in the dataset. When setting ChunkSize=8K, more than 50% of sequences are long sequences, and ChunkFlow achieves a **1.7X** speedup over Megatron-LM, showing its effectiveness in a broader scenario. If we further filter the LongBench dataset by removing sequences with fewer than 8K tokens, we obtain a new dataset that **solely consists of long sequences**. Thanks to the chunking mechanism of ChunkFlow, which reduces pipeline bubbles, ChunkFlow still achieves a **1.4X** speedup compared to Megatron-LM.
| Sequence Length | Proportion of Sequences |
|---------|---------|
| < 1K | 0.26% |
| < 4K | 22.65% |
| < 8K | 48.4% |
| < 16K | 82.59% |
| < 32K | 98.06% |
| Longest | 64K |
[1] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., Du, Z., Liu, X., Zeng, A., Hou, L., Dong, Y., Tang, J., & Li, J. (2023). LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding. ArXiv, abs/2308.14508.

Review 3:
Summary: This paper introduces ChunkFlow, a novel method for efficient long-context fine-tuning of large language models (LLMs). ChunkFlow addresses the challenges of variable sequence lengths in training datasets by reorganizing input sequences into uniformly sized chunks, merging short sequences and splitting long ones. It employs a state-aware chunk scheduling mechanism to manage computational dependencies and ensure consistent memory usage, primarily determined by the chunk size. The method also reduces pipeline bubbles, improving distributed training efficiency.
Claims And Evidence: The paper claims that ChunkFlow improves training efficiency for long-context fine-tuning by reorganizing sequences into uniform chunks and using state-aware scheduling. Evidence in the experiments in Section 6 includes the comparison to Megatron-LM, with Peak Memory and training performance.
Methods And Evaluation Criteria: ChunkFlow uses chunk construction and state-aware scheduling to manage variable-length sequences. Evaluation is based on training performance metrics like iteration time and memory usage.
Theoretical Claims: The paper argues that chunk-based training and state-aware scheduling can optimize GPU utilization and reduce pipeline bubbles in distributed training. This is the claim, but it is not particularly theoretical.
Experimental Designs Or Analyses: Experiments involve fine-tuning Qwen2.5-series LLMs with varying context lengths, comparing ChunkFlow to Megatron-LM on metrics like memory consumption and training speed.
Supplementary Material: No Supplementary Material provided.
Relation To Broader Scientific Literature: ChunkFlow builds on existing work in sequence packing and pipeline parallelism.
Essential References Not Discussed: The paper does not discuss other efficient training methods like LoRA, or other long-sequence papers, like RingAttention.
Other Strengths And Weaknesses: This paper did not compare with any other long-context models, on any long-context benchmark, like LongBench.
Other Comments Or Suggestions: N/A
Questions For Authors: How does ChunkFlow perform on datasets with even longer sequences (e.g., >1M tokens)?
Could the method be extended to non-causal models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer **HVAi**'s insightful feedback. Below we clarify the concerns regarding related work discussion and broader evaluations, with references to our methodology and results in the paper.
***(Q1): The paper does not discuss other efficient training methods like LoRA, or other long-sequence papers, like RingAttention.***
We would like to clarify that ChunkFlow emerged in response to the practical need to accelerate the fine-tuning of LLMs on datasets with sequences of varying lengths. Within this context, we discuss several relevant directions in Section 2 (Preliminaries) and Section 7 (Related Works). We also give a brief introduction to long-sequence training methods such as sequence parallelism and token-level pipeline parallelism in Section 2, and RingAttention is also referenced in this section.
**It is worth highlighting that ChunkFlow is orthogonal to LoRA and RingAttention**. We provide further explanation here.
● LoRA focuses on Parameter-Efficient Fine-Tuning by substantially reducing the number of training parameters. Conversely, ChunkFlow aims at enhancing the training efficiency of LLMs on variable-length datasets. Moreover, integrating ChunkFlow into the LoRA training process can effectively boost the training efficiency of LoRA.
● RingAttention is an effective approach for handling extremely long sequences. However, it does not address training issues such as low resource utilization and pipeline bubbles caused by the variability in sequence lengths. When performing long-context fine-tuning at scale, ChunkFlow and RingAttention can complement each other: RingAttention offers a method for distributed attention computation, while ChunkFlow, through its unified chunking and state-aware scheduling strategy, reduces pipeline bubbles and enhances computation efficiency (as depicted in Figure 6), thereby improving training performance. We will incorporate the above-mentioned suggestions into the final version.
***(Q2): This paper did not compare with any other long context models, on any long context benchmark, like LongBench.***
We sincerely appreciate the valuable suggestions put forward by reviewer HVAi and we conduct more experiments using the ***Llama3-8B*** model on LongBench dataset. The results and analysis will also be reflected in our final version.
It is worth clarifying that the LongBench dataset is primarily employed to evaluate the long-context understanding capabilities of LLMs, rather than for long-context fine-tuning. The table below presents the distribution of sequence lengths in the LongBench dataset, which differs significantly from our previously mentioned SFT dataset, as well as the Llama SFT dataset [1]. ***These distinctions help explain why LongBench wasn't initially included in our experiment datasets.***
| Sequence Length | Proportion of Sequences in LongBench | Proportion of Sequences in Meta |
|:-------|:--------:|-------:|
| < 1K | 0.26% | ~ 99% |
| < 4K | 22.65% | - |
| < 8K | 48.4%| - |
| < 16K | 82.59% | - |
| < 32K | 98.06% | - |
| Longest| 64K | 128K |
However, we agree that benchmarking on LongBench provides important validation of ChunkFlow's effectiveness, so we conducted experiments using Llama3-8B on LongBench to further highlight our contributions. For ChunkFlow, we configure `ChunkSize=16384, K=2`. Both experiments adopt the identical `<TP=2, SP=2, PP=4>` parallelization strategy. Under these circumstances, ChunkFlow demonstrates a **1.7x** speedup compared to Megatron-LM. Evidently, this showcases the superiority of our design.
***(Q3): How does ChunkFlow perform on datasets with even longer sequences (e.g., >1M tokens)? Could the method be extended to non-causal models?***
As shown in Figure 8, ChunkFlow shows greater performance improvement when fine-tuning models with longer context lengths. As the fine-tuning context length increases, the sequence-length distribution widens, and training strategies tuned for the longest sequences perform more poorly on most short ones. This implies that for models with even longer contexts (e.g., >1M tokens), ChunkFlow can achieve far better performance than the baselines.
Regarding applying ChunkFlow to non-causal models: ChunkFlow has two chunk-construction methods, short-sequence consolidation and long-sequence splitting. The former can be directly used in non-causal model training, while the latter depends on causal attention. So, when using ChunkFlow for non-causal models, setting ChunkSize to the dataset's maximum sequence length can effectively improve training efficiency.
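A minimal sketch of these two chunk-construction methods (our own illustration; the function name and exact packing policy are ours, not ChunkFlow's actual implementation):

```python
def build_chunks(seq_lens, chunk_size):
    """Consolidate short sequences and split long ones so that every chunk
    holds at most `chunk_size` tokens. Each chunk is a list of
    (sequence_id, start, end) pieces."""
    chunks, cur, cur_len = [], [], 0
    for sid, length in enumerate(seq_lens):
        if length <= chunk_size:                 # short: pack with others
            if cur_len + length > chunk_size:    # current chunk is full
                chunks.append(cur)
                cur, cur_len = [], 0
            cur.append((sid, 0, length))
            cur_len += length
        else:                                    # long: split into pieces
            if cur:
                chunks.append(cur)
                cur, cur_len = [], 0
            for s in range(0, length, chunk_size):
                chunks.append([(sid, s, min(s + chunk_size, length))])
    if cur:
        chunks.append(cur)
    return chunks

chunks = build_chunks([3000, 2000, 9000, 1000], chunk_size=4000)
# No chunk exceeds the limit, and no tokens are lost.
assert all(sum(e - s for _, s, e in c) <= 4000 for c in chunks)
assert sum(e - s for c in chunks for _, s, e in c) == 15000
```

For a non-causal model, one would rely only on the consolidation branch, e.g., by setting `chunk_size` to the dataset's maximum sequence length as suggested above, so that no sequence is ever split.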
Due to limitations in computational resources and the timeline for this submission, we regret that we were unable to include these specific (>1M, speedups on LoRA and non-causal) experiments. We will incorporate the above-mentioned experiments in the final version.
[1] Meta(2024). The Llama 3 Herd of Models. ArXiv, abs/2407.21783.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply. My concerns have been resolved. I increased the rate for this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and recognizing the importance of our work. | null | null | null | null | null | null | null | null |
GRAM: A Generative Foundation Reward Model for Reward Generalization | Accept (poster)

Review 1:
Summary: The paper proposes a method for training a generative reward model that generalizes better across domains with minimal fine-tuning. The approach involves a two-step training process: large-scale unsupervised learning followed by supervised fine-tuning on labeled data. The paper also demonstrates that applying label smoothing during training can be interpreted as optimizing a regularized pairwise ranking loss, highlighting a connection between training discriminative and generative reward models. The resulting 'foundation reward model' outperforms discriminative baselines on tasks including response ranking, best-of-n sampling, and task adaptation with fine-tuning.
Claims And Evidence: The main claims about the proposed GRAM are that it generalizes better than both discriminative and prior generative reward models, outperforming them on response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning.
Figure 2 and Table 1 present the main results supporting the generalization claim, showing that GRAM performs better on out-of-distribution test data, including RewardBench and HHH-Alignment. However, the results in Figure 2 and for Llama-3.1-8B-Instruct in Table 1, where GRAM underperforms compared to discriminative reward models on in-distribution test data, raise the question of whether this is due to discriminative models overfitting to in-distribution data rather than indicating a better generalization capability. The results for Llama-3.2-3B-Instruct in Table 1 are arguably more convincing, as GRAM outperforms on both in-distribution and out-of-distribution test data.
The best-of-n sampling results also show that GRAM is less susceptible to overoptimization, which appears closely related to its generalization performance. However, the paper does not include reinforcement learning experiments where an LLM is trained to optimize the reward model using RL or methods like DPO that optimize for the same objective. Without reinforcement learning results, the claim in the abstract that GRAM outperforms in reinforcement learning from human feedback seems less convincing.
Methods And Evaluation Criteria: The proposed 'pre-training' step, where a generative reward model is trained to generate two responses per input, makes some sense. However, the extent to which this training improves performance beyond fine-tuning is not thoroughly demonstrated empirically. Additionally, the paper does not discuss in detail how the two responses should be prepared -- specifically, how to prompt an LLM to generate multiple responses (e.g., simply different responses or responses of varying quality). Also, some discussion on how the proposed training scheme could be extended to multi-objective settings, where two responses are evaluated according to multiple criteria, would have been more insightful.
The evaluation on several common use cases of reward models, including response ranking, alignment, and reward model adaptation, seems appropriate for the problem. To strengthen the results, RL fine-tuning using the different reward models would have been beneficial.
Theoretical Claims: No major theoretical claims are made in the paper.
Experimental Designs Or Analyses: Overall, the use of two open instruct models for reward model training, along with common reward model benchmarks such as RewardBench and HHH-Alignment, for comparison with both discriminative reward models and strong LLMs like GPT-4, seems sound. More ablations, such as evaluating how much the proposed 'pre-training' helps, would have been insightful.
Supplementary Material: No supplementary materials have been reviewed.
Relation To Broader Scientific Literature: Reward models are essential for evaluating and aligning generative models, and, as the paper also suggests, much less work has been done on developing 'foundation' reward models compared to foundation generative models. This paper contributes to recent efforts in developing generative reward models, particularly those considered 'foundational,' where pre-training is used to train domain-agnostic models that can be easily adapted to different domains with minimal extra data. More effort will need to be devoted to developing such generic reward models in an unsupervised manner for continued improvement of generative models going forward.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: Typos:
- "an" $\rightarrow$ "a" near line 104.
- "Best-of-n" $\rightarrow$ "best-of-n" near line 146.
- "Opeen" $\rightarrow$ "Open" near line 282.
Questions For Authors: Q. Have you considered using reasoning models for generative reward models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Jdyw,
We appreciate your recognition of "the superior generalization of our GRAM compared to both discriminative and prior generative reward models" and your comment that our pre-training approach "makes sense".
We will provide explanations for the main points that you are concerned about.
---
>*W1: The results in Figure 2 and Table 1, where GRAM underperforms compared to discriminative models on ID data, raise the question of whether this reflects overfitting by discriminative models rather than better generalization.*
We would like to clarify this question in two aspects.
- The ID test set evaluates the ability of reward modeling to learn human preferences from labeled data. Our goal is to excel in reward modeling and generalization, i.e., obtaining strong performance on ID and OOD tasks simultaneously. Thus, the LLaMA-3.1-8B-Instruct results in Table 1 show our method's effectiveness, achieving the best OOD results and second-best ID results.
- Although GRAM underperforms compared to the ```Discriminative RM+Regularization``` when using LLaMA-3.1-8B-Instruct, it significantly outperforms both the ```Discriminative RM (Baseline)``` and ```Discriminative RM+Freeze```, supporting the validity of our method. We also see that ```Regularization``` may not be a universally effective method across all models, e.g., on the LLaMA-3.2-3B-Instruct model, ```Regularization``` performs notably worse than GRAM, suggesting that its effectiveness could be model-dependent.
The partial results of Table 1 are shown below:
|Method|UniFeed(ID)|RewardBench(OOD)|
|:-|:-:|:-:|
|**LLaMA-3.1-Instruct**|
|Discriminative RM (Baseline)|69.3|74.1|
|Discriminative RM+Freeze|66.6|74.9|
|Discriminative RM+Regularization|**72.7**|77.4|
|GRAM (Ours)|70.4|**85.1**|
|**LLaMA-3.2-Instruct**|
|Discriminative RM (Baseline)|68.3|72.8|
|Discriminative RM+Freeze|63.0|70.5|
|Discriminative RM+Regularization|65.6|71.2|
|GRAM (Ours)|**70.6**|**83.6**|
>*W2: The paper lacks reinforcement learning experiments where an LLM optimizes the reward model using RL.*
We apologize for any misunderstanding regarding the RL experiments. In fact, we have already included reinforcement learning experiments in Appendix C.1. Specifically, we comprehensively compare PPO performance with different reward models, as shown in Figure 8 and Table 2. The results align with the response ranking, showing that GRAM exhibits strong generalization and provides more accurate rewards. To address your concern, we promise to move partial results into the body of the revised version rather than leaving them only in the appendix.
>*Q1: The paper does not detail how to prepare the two responses, particularly how to prompt an LLM for multiple responses (e.g., simply different responses or responses of varying quality).*
Thanks for this insightful suggestion! We have explored this issue with two GRAM variants.
- GRAM-Sim-Diff: We explore the impact of response diversity on pre-training performance. Specifically, we select the 200k response pairs with the highest semantic differences from a dataset of 600k responses and compare them to 200k randomly selected pairs.
- GRAM-Qua-Diff: We test quality differences by scoring responses with GPT-4o on a 0-5 scale and compare the 200k pairs with the largest quality differences to 200k random pairs.
The results are as follows:
|Method|RewardBench|
|:-|:-:|
|GRAM-Sim-Diff|74.8|
|GRAM-Qua-Diff|75.4|
|GRAM|**76.2**|
We find that randomly selected pairs outperformed the carefully selected ones, possibly due to increased diversity from randomness. We also see that larger quality differences have little impact on reward model training, with random selection performing better. This is also shown in previous work (Filtered Direct Preference Optimization), where larger quality differences benefited DPO training but not reward model training. We promise to add more experiments and analysis in the revised version.
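A minimal Python sketch of the selection step in GRAM-Sim-Diff as we understand it from the description above; the embedding source and the use of cosine distance are assumptions for illustration, not details from the paper:

```python
import math

# Rank response pairs by semantic distance (here 1 - cosine similarity of
# precomputed embeddings) and keep the k most dissimilar pairs.
# The embeddings themselves would come from some external encoder (assumed).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_k_dissimilar(pairs, k):
    """pairs: list of (emb_a, emb_b) tuples; returns indices of the k pairs
    with the lowest cosine similarity (i.e., the most semantically different)."""
    ranked = sorted(range(len(pairs)), key=lambda i: cosine(*pairs[i]))
    return ranked[:k]
```

The random-selection baseline in the experiment above would simply replace `top_k_dissimilar` with a uniform sample of pair indices.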
>*Q2: A discussion on extending the training scheme to multi-objective settings, where responses are evaluated by multiple criteria, would be insightful. Have you considered using reasoning models for generative reward models?*
Thanks for your valuable suggestion! GRAM can easily be extended to multi-objective or reasoning settings, and this extension can be implemented in stage two. Specifically, we would describe the objectives, e.g., fluency and accuracy, in the prompt $c$. When generating preferences, we can use the format ```#Fluency Preferred: [labeled token] #Accuracy Preferred: [labeled token]``` instead of ```#Preferred: [labeled token]```. Similarly, we can also add the CoT in front of the preferences. We can also use a reasoning model to develop GRAM further. These improvements don’t require changes to our pre-training, highlighting its robustness and scalability for reward modeling. We promise to include more analysis in the revised version.
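A minimal sketch of the multi-objective target format described above; the helper name and the specific objective names are illustrative assumptions based on this rebuttal, not the paper's implementation:

```python
# Hypothetical helper rendering per-objective preference labels,
# e.g. {"Fluency": "A", "Accuracy": "B"}, into the generation target
# using the "#<Objective> Preferred: <token>" template described above.

def build_preference_target(labels):
    parts = [f"#{objective} Preferred: {token}" for objective, token in labels.items()]
    return " ".join(parts)

target = build_preference_target({"Fluency": "A", "Accuracy": "B"})
# -> "#Fluency Preferred: A #Accuracy Preferred: B"
```

A chain-of-thought prefix, as mentioned above, would simply be prepended to this target string before training.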
---
We truly appreciate your positive feedback on our paper!
Best,
Authors | Summary: This paper proposes an interesting reward model training method using both unlabeled and labeled data. Building on the generative models in LLMs, the authors develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. This method produces a foundation reward model, which can be applied to different tasks with little or no further fine-tuning effort, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving performance improvements over baseline models.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: No
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**:
1. The proposed training paradigm for the reward model is interesting and sound.
2. The proposed method is extensively evaluated across different tasks.
**Weaknesses**:
1. I am wondering whether using such an enhanced RM could improve the reasoning ability of existing LLMs. More downstream evaluations should be conducted.
2. The paper does not provide many details about "large-scale unsupervised learning"; the influence of the pre-training data on the target domain should also be discussed.
Other Comments Or Suggestions: The authors should conduct more evaluations and provide more details/discussion, as noted in the weaknesses above.
Questions For Authors: Would the domain difference between pre-training and fine-tuning have a big impact on RM performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer pUcN,
We sincerely thank the reviewer for your positive and insightful feedback.
We greatly appreciate your recognition of our paper's proposed training paradigm for the reward model as "interesting and sound", and are pleased that the method is considered to be "extensively evaluated across different tasks".
We will provide explanations for the main points that you are concerned about.
---
>*W1: I am wondering whether using such enhanced RM could improve the reasoning ability of existing LLMs.*
Thanks for your insightful suggestions! Indeed, using an enhanced RM can improve the reasoning ability of existing LLMs. We have already conducted an experiment to validate this with a math comparison pair dataset (huggingface tag: ```reciprocate/math_dpo_pairs```). Specifically, we used this dataset during the second stage of GRAM training, with all baseline RMs also trained on the same dataset. We performed reward accuracy experiments on the corresponding test set and best-of-n sampling on GSM8K, respectively. For best-of-n sampling, we sample 16 outputs from LLaMA-3.1-8B-Instruct with an 8-shot prompt for each input.
|Method|RM Accuracy|GSM8K (Best-of-n Sampling)|
|:-|:-:|:-:|
|LLaMA-3.1-8B-Instruct|-|83.9|
|Discriminative RM|63.2|84.6|
|Generative RM|61.3|85.5|
|GRAM|**66.8**|**87.2**|
The experimental results show that our method effectively improves RM accuracy on reasoning-related downstream tasks. Also, it further enhances the reasoning ability of LLMs with best-of-n sampling. We promise to include more experiments, including additional baselines and RL experiments, in the revised version to further verify the effectiveness of GRAM in improving reasoning abilities.
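The best-of-n selection step used above can be sketched as follows; `sample_from_policy` and `reward_model_score` are hypothetical stand-ins for the policy LLM and the trained reward model:

```python
# Best-of-n sampling: draw n candidate answers from the policy,
# score each with the reward model, and keep the highest-scoring one.

def best_of_n(prompt, sample_from_policy, reward_model_score, n=16):
    candidates = [sample_from_policy(prompt) for _ in range(n)]
    return max(candidates, key=lambda y: reward_model_score(prompt, y))
```

In the GSM8K experiment above, n = 16 and the reward model is the RM under evaluation; a more accurate RM picks better candidates, which is what drives the accuracy gains in the table.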
>*W2: The influence of pre-training data on target domain should also be discussed.*
In Figure 6 of the paper, we already show the impact of pre-training data on general domain adaptation. In response to your concern, we further conduct experiments in specific domains using LLaMA-3.2-3B-Instruct, along with 5k labeled summarization and harmlessness data.
Accuracy as a function of the amount of unlabeled data:
|Domain|0k|100k|200k|400k|600k|
|:-|:-:|:-:|:-:|:-:|:-:|
|Summarization|56.5|62.7|66.3|71.6|73.4|
|Harmlessness|58.4|63.2|65.9|69.8|72.1|
The results from these experiments align with those in Figure 6. We see that as the amount of unlabeled data increases, the accuracy of GRAM generally improves in both domains. This also highlights the crucial role of unlabeled data and the scaling effect on performance, suggesting that using larger unlabeled datasets can lead to better reward models. We promise to include these experiments in the revised version.
>*Q1: Would the domain difference between pre-training and fine-tuning has a big impact on RM performance?*
To respond to your concern, we conduct the following experiments on GRAM:
- GRAM-v1: Pre-training on 100k unlabeled summarization response pairs (derived from TL;DR comparison data; huggingface tag: ```openai/summarize_from_feedback```), followed by fine-tuning on 5k labeled summarization data.
- GRAM-v2: Pre-training on 100k general unlabeled data (including summarization responses), followed by fine-tuning on 5k labeled summarization data.
- GRAM-v3: Pre-training on 100k general unlabeled data (without summarization-related data; specifically, we use ChatGPT to filter out preference data related to summarization), followed by fine-tuning on 5k labeled summarization data.
In these experiments, summarization is the downstream domain. GRAM-v1 uses pre-training data closest to summarization, fully utilizing summarization responses. GRAM-v2 uses general pre-training data, which might include some samples aligned with the downstream summarization task. GRAM-v3 uses pre-training data that is the most different from the summarization domain. Specifically, we use ChatGPT to exclude all samples related to summarization tasks. The results are as follows:
|Method|Accuracy (Summarization)|
|:-|:-:|
|RM w/o Pre-training |56.5|
|GRAM-v1|**74.7**|
|GRAM-v2|71.6|
|GRAM-v3|67.4|
The experimental results demonstrate that pre-training data more closely aligned with the target domain results in better performance in that domain. In fact, this observation is consistent with common practice in LLMs, where incorporating as much domain-specific data as possible during pre-training typically leads to better performance on downstream tasks. Additionally, our results show that the pre-training approach is robust: despite significant domain differences, it still contributes positively to performance.
We appreciate your valuable suggestion and promise to include more discussion and experiments in the revised version.
---
We sincerely thank you for your positive feedback on our paper!
Thank you once again for your time.
Best,
Authors | Summary: Authors propose improvements on the training of generative reward models (GenRMs). First, they pre-train GenRMs on pairs of responses. Second, they apply label smoothing. This approach is called GRAM. Authors also make an observation that label smoothing shall be understood as the regularization of Bradley-Terry scores. Authors experimentally validate their approach with UnifiedFeedback and Llama-3.1-8B and 3.2-3B instruct models. When trained on the same 400k preference data, proposed GRAM approach outperforms generative and discriminative RM baselines. When applied to Best-of-n sampling, GRAM is also shown to generalize better on AlpacaEval. When fine-tuned on specific domains (summarization and helpfulness), GRAM is also more sample-efficient than baseline methods.
## update after author rebuttal
I will maintain my score. The authors answered my clarifying questions very clearly, but those answers were already anticipated in my original score. I still feel this is a solidly executed yet somewhat incremental paper, hence I would be happy to see it accepted, yet I remain open to changing my opinion.
Claims And Evidence: The main claim of this paper is that GRAM generalizes better than previous reward modeling approaches. Across three different use-cases of RMs - preference ranking, best-of-n sampling, task-specific fine-tuning - the sample efficiency of GRAM is consistently shown. Therefore, the claim is empirically well supported.
Methods And Evaluation Criteria: The proposed two extensions of GenRM are standard modeling techniques and hence they are appropriate. The first approach of pre-training on response pairs can be understood as continued pre-training on the same domain of unlabeled data; this is a standard, well-established technique originating from AdaptaBERT https://aclanthology.org/D19-1433/ . The second, application of label smoothing, is also standard in the training of classification models (from Szegedy et al https://arxiv.org/abs/1512.00567 ) and already explored in the reward modeling context in the AlpacaFarm paper https://arxiv.org/abs/2305.14387 . Also, the concept of reward model pretraining was considered in Bai et al (2022) https://arxiv.org/abs/2204.05862 although in a different form. Hence these techniques are all well-established, principled methods.
The evaluation criteria are also standard. RewardBench is well-established benchmark for evaluating generic-purpose reward models, although there are new ones such as RM-Bench, JudgeBench, FollowBenchEval. See Saha et al. https://arxiv.org/abs/2501.18099 for their experimental setting. Evaluation of Best-of-N on AlpacaEval is acceptable but the ideal setup would be to use the standard GPT-based evaluator from AlpacaEval itself rather than using a proxy reward model. Fine-tuning experiments also use standard domain-specific preference datasets, which is good.
Theoretical Claims: I checked the correctness of the regularization characterization of label smoothing for Bradley-Terry models. I checked steps in Appendix D.2, which are straightforward algebra.
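For context, a commonly used label-smoothed Bradley-Terry objective takes the following generic form (with smoothing weight $\epsilon$; the paper's exact constants and parameterization may differ):

```latex
\mathcal{L}_{\mathrm{LS}}(\theta)
  = -(1-\epsilon)\,\log \sigma\big(r_\theta(x, y_a) - r_\theta(x, y_b)\big)
    - \epsilon\,\log \sigma\big(r_\theta(x, y_b) - r_\theta(x, y_a)\big)
```

where $r_\theta$ is the reward model and $\sigma$ the logistic function; setting $\epsilon = 0$ recovers the standard Bradley-Terry loss.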
Experimental Designs Or Analyses: I checked experimental designs and analyses for all experiments in the main paper and Appendix Figure 9, 10. As I discussed in 'Methods And Evaluation Criteria', there is some room for improvements, but they are not critical.
Supplementary Material: I checked derivations in Appendix D.2. Read experiments in Section D.1 and Figure 10 and 11.
Relation To Broader Scientific Literature: Connections to previous work in literature I already made in 'Methods And Evaluation Criteria' section are notable. The idea of pre-training reward models was explored in Bai et al (2022) https://arxiv.org/abs/2204.05862, but they focused on sourcing preference data from web. Authors' approach has the benefit of leveraging unlabeled data. However, in the experiments, I believe they pre-train only on UnifiedFeedback data, which are labeled.
This approach can also be understood as a form of unsupervised domain adaptation https://aclanthology.org/D19-1433/ , and the connection may foster other ideas, techniques, and theory from domain adaptation to apply to reward modeling.
Essential References Not Discussed: AlpacaFarm paper https://arxiv.org/abs/2305.14387 is cited but their use of label smoothing was not discussed. AlpacaFarm paper considered it as synthetic label noise.
Label smoothing wasn't cited. Although it's well-established, it's worth citation: Szegedy et al https://arxiv.org/abs/1512.00567
Generative Reward Models (Mahan et al, 2024 https://arxiv.org/abs/2410.12832) paper has already made an observation that GenRMs generalize better on out-of-domain. I understand the paper was rejected from ICLR but it is worth citation and discussion. Just to be sure, I didn't discount the contribution of authors' paper because of Mahan et al because it's not yet accepted.
Other Strengths And Weaknesses: Experimental analyses are very comprehensive, meticulously studying each of the design choices, for example Table 3/Figure 9 and also studying scaling aspects (Figure 6). The strength of the proposed method is also consistent across experiments.
On the other hand, I find the technical contribution of the paper a bit simple - pre-training on pairs of responses and label smoothing. These are already very well-established concepts in machine learning, hence the contribution of this paper is mostly empirical, and perhaps better suited to NLP conferences than ML conferences.
Other Comments Or Suggestions: - Line 047 column 2: the systems cannot directly generate from text their own supervision signals -> There actually are papers on this type of approach, hence the statement is a bit unfair to them. For ex Self-Rewarding Language Models https://arxiv.org/abs/2401.10020 .
- Line 028 column 2: Lee et al reference doesn't have year information
Questions For Authors: On which dataset do authors run their stage 1 (task 1 pre-training)? I suspect they just used responses from UnifiedFeedback but I wasn't able to find an explicit discussion of the data used for stage 1 training.
In Figure 5, do RM fine-tuned with G/D-baselines mean RMs trained on UnifiedFeedback?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer WL65,
We appreciate the reviewer’s constructive and thoughtful feedback.
We appreciate your recognition that the main claim of our paper is "empirically well supported", and that our experiments are "comprehensive, meticulously studying each of the design choices".
We provide explanations for the main points that you are concerned about.
---
>*W1: The technical contribution seems simple—pre-training and label smoothing—both well-established in ML. The contribution is empirical and may be more suited for NLP conferences than ML.*
Thanks for your insightful comment! As you said "pre-training and label smoothing are well-established concepts in ML", we would like to clarify that our contribution is not about innovating them, but rather introducing these concepts to achieve superior reward modeling. The rationale behind this is as follows:
- While applying reward models to align LLMs is a compelling direction, training these models still heavily relies on labeled data. We aim to enhance reward modeling by pre-training on unlabeled data to reduce this dependence. However, such a pre-training approach has not been explored before and presents significant challenges: unlike in self-supervised approaches, systems cannot directly generate their own supervision signals from text for training reward models. In this work, we thoroughly explore the use of unlabeled data in reward modeling and propose a specific pre-training procedure for reward models. We also discuss its effectiveness from both theoretical and practical perspectives.
- Although label smoothing has been shown to be effective in many tasks, it has not been well-established in reward modeling. In this work, we theoretically demonstrate that the training objective of generative reward models can be reformulated into a more elegant form: we are essentially optimizing the Bradley-Terry model with modified label smoothing. This result is significant as it establishes a connection between discriminative and generative reward modeling methods—both of which fundamentally train LLMs to perform pairwise ranking, thereby pushing the boundaries of reward modeling and directly contributing to improved generalization.
>*Q1: On which dataset do authors run their stage 1 (task 1 pre-training)? The similar idea of pre-training reward models is mentioned in Bai et al. (2022).*
The pre-training responses also come from Unified Feedback. Please note that while this data includes preference labels, we do not use these labels in our pre-training process. Instead, we only use the response pairs to simulate unlabeled data and validate the effectiveness of our method.
Since our pre-training approach only uses responses for unsupervised training (which can be directly sampled from LLMs), it allows for easier scalability. We also present two key insights from our experiments for pre-training. First, as shown in Figure 5, during downstream adaptation, our pre-training effectively achieves generalization and reduces the need for specific labeled data. For example, in the summarization task, we achieve performance comparable to training from scratch with approximately 100k labeled data using only 5k data points. Second, as demonstrated in Section 5.1, we find that the more unlabeled data used in training, the greater the benefit to downstream performance, regardless of whether the downstream labeled data is small or large. Building on these insights, in real-world reward model training, we could first collect many responses using cost-effective methods, such as sampling from LLMs, and then fine-tune with a small amount of labeled preference data.
In contrast, Bai et al. (2022) propose a pre-training approach that uses large, labeled preference data sets for supervised training, followed by domain-specific fine-tuning. This method is difficult to scale as it still relies heavily on labeled preference data, which is often scarce in real-world scenarios.
Thank you for your helpful suggestion! In the revised version, we promise to provide a more detailed description and experiment-based analysis for your concern.
>*Q2: In Figure 5, do RM fine-tuned with G/D-baselines mean RMs trained on UnifiedFeedback?*
Yes, the data used to fine-tune the RMs for forming the G/D-baselines comes from UnifiedFeedback. Note that this data is the same as the one used by GRAM in the fine-tuning stage, which includes preference annotations.
>*S1: Insufficient description of related work and errors in reference presentation.*
Thank you for your helpful feedback! We promise to correct the description and presentation of the related work in the revised version.
>*S2: Essential references are not discussed.*
Thanks for your valuable suggestion! We promise to include the discussion of the essential references in the revised version.
---
We sincerely appreciate your positive feedback and thank you again for your time.
Best,
Authors
---
Rebuttal Comment 1.1:
Comment: I recognize that I undervalued the paper's methodological contributions, hence I increased my score accordingly.
>The pre-training responses also come from Unified Feedback. Please note that while this data includes preference labels, we do not use these labels in our pre-training process. Instead, we only use the response pairs to simulate unlabeled data and validate the effectiveness of our method.
I understand this, and I understand the convenience of this setup, but this point could've been much stronger if authors leveraged organically unlabeled, large-scale data to show this point. This setup is a bit artificial since the original data, including prompts & responses were already curated to be useful for the supervised setting, just that labels are hidden for pre-training.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and active engagement during the rebuttal process. We appreciate your insightful suggestion regarding the reconstruction of our pre-training data setup. We promise to include a discussion of this aspect in the revised version. Thank you once again for your endorsement! | Summary: This work introduces GRAM, a generative foundation reward model for aligning LLMs with human preferences. Unlike conventional reward models that rely only on labeled human preference data, GRAM incorporates both labeled and unlabeled data through a two-stage training process: unsupervised “pre-training” on input-response pairs, followed by fine-tuning with human preference data for task-specific alignment. The authors demonstrate that the incorporation of label smoothing unifies generative and discriminative reward models under a shared training objective. The authors demonstrate the effectiveness of their proposed approach by first pre-training on 400,000 examples Extensive evaluations across response ranking, reinforcement learning from human feedback (RLHF), and task adaptation show that GRAM generalizes effectively across tasks, achieving improvements over baselines.
Claims And Evidence: The claims about GRAM's performance are supported by their experiments. In particular, the method leads to a reward model that can generalize to out-of-distribution data and can be used in multiple settings (as both a pairwise and a list-wise reward model). Some of the experimentation details are not clear; if improved, the evidence will be even more convincing.
Methods And Evaluation Criteria: One component of GRAM that does not make sense for this problem is the pre-training stage, where the model learns to generate 2 responses to a prompt. Theoretically, it’s not clear to me why the unsupervised learning portion should be effective (although the experiments show it to be, specifically in Section 5.1). In particular, given that the reward model is never asked to generate responses, and the responses at test time are likely to come from a different distribution than the training data, the reason this is beneficial is unclear to me. This is one area that could use some explanation or exploration.
The evaluations make sense for the problem at hand though.
Theoretical Claims: I did not verify the derivation of equation 8.
Experimental Designs Or Analyses: - The experimental design stands to be improved slightly. For example, the authors describe a version of label smoothing used for their method, while the ablation on baselines uses a different version of label smoothing. This difference of methods should either be explained, or rectified to use the same version of label smoothing.
- Additionally, some of the details in the experiment on list-wise response ranking are unclear. What is the purpose of comparing with a proxy model that is trained on less data? Additionally, which model is used as the policy to generate the responses being ranked?
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: I think the use of generative reward models has become a very popular idea in the past 6-9 months and the community may appreciate this work. In particular, the idea of performing unsupervised training specifically for reward modelling is novel, however, I think the exact setup of unsupervised learning is not well motivated and unclear.
Essential References Not Discussed: The following works both propose a variant of a generative reward model. The main difference that I can see with what is done here is that both these works allow the model to reason over the 2 preference options prior to providing its judgement. They are similar enough to the current work that readers should know that they exist, as they can provide a direction for future works.
- Ankner et al., 2024. Critique-out-Loud Reward Models
- Mahan et al., 2024. Generative Reward Models
Other Strengths And Weaknesses: Strengths:
- The results are quite good compared to baselines.
- This version of pre-training for reward modeling is quite novel, to my knowledge.
Weaknesses:
- The experimental setup is unclear. For example, in Table 1, what are the methods denoted (baseline)?
- The main set of experiments seem to use different versions of label smoothing for the baseline vs. the full method. Because we only see the label smoothing + pretraining together, this makes it hard to determine the effects of each individual component
Other Comments Or Suggestions: Some of the experimental details need to be clarified. See the comments in “Experimental Designs Or Analyses” and “Questions for Authors”.
Questions For Authors: - Some aspects of the experimental setup are unclear to me. Are there 2 splits of 400,000 examples? One split used for pre-training and the other for fine-tuning?
- Where do the pretraining responses come from (y1 and y2)? Given that the training dataset contains pairwise examples, do you use those for the pre-training stage, or do you actually generate the responses with the models you will train (Llama-3.1-8B and Llama-3.2-3B)? In general, are the responses that are used for pre-training generated by the same model that you will then train with that data?
- Starting at line 126, “When applying this model to score a new input-response”, you suggest that you generate a reference response with the current LLM. I am unfamiliar with this method of scoring a single response. Has this method previously been used in other works? In particular, one concern I have with this method is that it may lead to vastly different scores depending on which LLM generates the reference response.
- Why is the “label smoothing” applied to the models in your experiments different from the method that you proposed in Section 3.3?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer 3tRc,
We would like to thank the reviewer for the positive feedback regarding the novelty of pre-training for reward models and the strong results.
Below, we explain the main points you are concerned about.
---
>*One component that does not make sense for this problem is the pre-training stage.*
We would like to clarify the motivation behind our pre-training design. As discussed in Appendix A, we can understand our method from a feature learning perspective. In generative reward modeling, the loss function can be given by
$$
L\_{\mathrm{g}}(\theta) = L\_{\mathrm{gp}}(\theta\_{\mathrm{gp}})+L\_{\mathrm{gf}}(\theta)
$$
The feature optimization term $L\_{\mathrm{gf}}(\theta)$ is implicitly defined and optimized by adjusting preference generation. Traditional generative reward modeling optimizes these objectives using costly preference labels. This work explores a pre-training approach to optimize $L\_{\mathrm{gf}}(\theta)$ first.
To this end, we propose using an auxiliary task to optimize these features without relying on labeled preferences. Specifically, we utilize conditional probabilities to characterize the interrelationships between responses, i.e., $-\log \pi\_{\theta}(y\_{b}|x,y_a)$. In addition, to account for the distributional error between $\pi\_{\theta}$ and the responses $y_a$ and $y_b$ for a given input $x$, we introduce an SFT loss as a regularization term, i.e., $-\log \pi\_{\theta}(y|x)$. After careful derivation, we obtain
$$
L\_{\mathrm{gf}}(\theta) = -\mathbb{E}\_{(x, y_a, y_b) \sim D_u} [ \log \pi_\theta([y_a, y_b] | x) ]
$$
This objective predicts both responses jointly. By focusing on meaningful features rather than surface-level ones (e.g., length), the model generalizes better and improves its reward capability. Reviewer Jdyw also highlighted that this pre-training strategy makes sense. Thanks for the feedback!
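For concreteness, the joint pre-training objective above can be sketched in plain Python; `token_logprob` is a hypothetical stand-in for the language model's per-token conditional log-probability (this is a toy sketch, not the actual GRAM implementation):

```python
import math

def token_logprob(context: str, token: str) -> float:
    # Hypothetical stand-in for the model's conditional log-probability;
    # a real implementation would query the language model here.
    return math.log(0.25)

def pretraining_loss(x: str, y_a: str, y_b: str) -> float:
    """-log pi_theta([y_a, y_b] | x): negative log-likelihood of the
    *concatenated* response pair, accumulated token by token."""
    context = x
    loss = 0.0
    for token in (y_a + " " + y_b).split():
        loss -= token_logprob(context, token)
        context += " " + token
    return loss

# With 4 tokens at log-prob log(0.25) each, the loss is 4 * log(4).
loss = pretraining_loss("prompt", "good answer", "bad answer")
```

The key point is that the two responses are scored as one concatenated sequence, so no preference label is needed for this stage.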
>*What is the purpose of comparing with a proxy model trained on less data? Which model is used as the policy for generating ranked responses?*
Training a proxy model tests the method's modeling capability: a good proxy score indicates strong in-distribution (ID) performance, reflecting good modeling. Our goal is to design a method with both strong modeling and generalization, so testing with the proxy score is crucial. This setup is also supported by Yang et al. (2024).
As for the sampling policy, we use the LLaMA-3.1-8B model, as described in Section 4.4 (line 319).
>*W1: In Table 1, what are the methods denoted (baseline)?*
Our main baselines include several methods aimed at enhancing generalization (such as Freeze and Regularization), as well as discriminative models, to demonstrate the promising generalization of our GRAM.
>*W2&Q4: Lack of detailed description on the baseline version of label smoothing (LS).*
The LS we use differs from standard LS: our method applies LS to all candidate label tokens, as described in Section 3.3, whereas the traditional method applies it to all tokens in the vocabulary. We theoretically demonstrate that our method is more effective, potentially optimizing a constrained Bradley-Terry model. Apart from the explanation in Appendix D.2, we have conducted further experiments on LLaMA-3.2-3B-Instruct to explore this in more detail below.
|Method|UniFeed(ID)|RewardBench(OOD)|
|:-|:-:|:-:|
|GRAM w/o LS|68.4|82.1|
|GRAM w/ Our LS|**70.6**|**83.6**|
|GRAM w/ Standard LS|69.1|82.5|
|G-Baseline w/ Our LS|66.7|80.2|
|G-Baseline w/ Standard LS|66.2|79.0|
As the experimental results show, our LS leads to better reward modeling compared to the standard LS. These results also show the role of LS in training GRAM. We promise to add more discussions on this in the revised version.
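A minimal sketch of the distinction between the two variants (smoothing over the candidate label tokens only, versus over the whole vocabulary); the function names and `eps` are illustrative shorthand, not the paper's notation:

```python
def candidate_ls_target(correct, candidates, eps=0.1):
    """Label smoothing restricted to candidate label tokens (e.g. 'A'/'B'):
    the smoothing mass eps is shared among the other candidates only."""
    share = eps / (len(candidates) - 1)
    return {t: (1 - eps) if t == correct else share for t in candidates}

def standard_ls_target(correct, vocab, eps=0.1):
    """Standard label smoothing: mass eps is spread over the full vocabulary."""
    return {t: (1 - eps) * (1.0 if t == correct else 0.0) + eps / len(vocab)
            for t in vocab}
```

With `eps=0.1`, the candidate-only variant puts 0.9 on the correct label and 0.1 on the other candidate, while the standard variant also leaks probability mass onto every non-candidate token in the vocabulary.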
>*Q1&Q2: Are there two 400,000-example splits, one for pre-training and one for fine-tuning? Can we collect pre-training data by sampling from LLaMA models?*
Yes, both 400,000-example splits are randomly selected from the Unified Feedback dataset. The pre-training responses also come from Unified Feedback, but we use only the responses, without their preference labels, to simulate unlabeled responses. Sampling responses from LLaMA to obtain unlabeled data is a good idea, and we commit to exploring this method further. This work focuses on the time-consuming nature of sampling large-scale labeled response pairs; moreover, we expect such data to be readily available for direct training in real-world scenarios. Thus, we randomly select response pairs from Unified Feedback to simulate the available labeled response pairs.
>*Q3: This scoring by Eq. 3 may yield vastly different scores depending on the reference response.*
This setup is reasonable. In BoN sampling, we keep the same reference for all candidate outputs, ensuring the scores are comparable. In RL, the recent work *Remax* uses the difference from a reference as the reward and interprets its reasonableness from a reward baseline perspective.
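A toy sketch of why a shared reference keeps scores comparable in BoN sampling (the `reward` function here is a hypothetical stand-in, not the paper's Eq. 3 implementation): subtracting the reference reward is a constant shift, so the ranking of candidates is unchanged.

```python
def score(reward, x, y, y_ref):
    """Score a response as its reward gap to a fixed reference response."""
    return reward(x, y) - reward(x, y_ref)

def best_of_n(reward, x, candidates, y_ref):
    # y_ref is shared across candidates, so the subtraction is a constant
    # shift and cannot change which candidate is ranked highest.
    return max(candidates, key=lambda y: score(reward, x, y, y_ref))
```

For any fixed `y_ref`, `best_of_n` selects the same candidate that ranking by the raw reward would.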
---
We sincerely thank you for your positive feedback on our paper!
Best,
Authors | null | null | null | null | null | null |
Generalized Interpolating Discrete Diffusion | Accept (poster) | Summary: The method overcomes existing limitations in autoregressive models and discrete diffusion approaches by introducing a generalized interpolating discrete diffusion. This innovation offers enhanced flexibility in the noising process design by combining masking and uniform noise, enabling the revision of previously generated tokens.
### After rebuttal comment
Overall, the paper is of sufficient quality to warrant acceptance, given how flexibly the method enables self-correction, which is an important aspect not just of diffusion methods but also of AR models. That said, the concern regarding the overfitting problem is not fully addressed, and I believe the authors should be upfront about this issue: looking at Table 4, the model with fewer training iterations obtains better metrics than the longer-trained one, raising a concern about the scalability of the model to large-scale training (bigger model sizes, large-scale datasets). In addition, the evaluation of self-correction using LLM-based metrics should be included in the final manuscript and discussed thoroughly. Given these factors, I decide to keep my current rating.
Claims And Evidence: The problem of keeping already generated tokens unchanged is clearly presented, highlighting the essence of the proposed method. However, to revise generated tokens, the method still relies on a uniform diffusion process that is neither deterministic nor controllable. Regarding the self-correction step, the resampling strategy for low-confidence tokens is similar to the MaskGIT paper.
Methods And Evaluation Criteria: See above.
Theoretical Claims: NA
Experimental Designs Or Analyses: - Table 4 should not contain empty cells.
Supplementary Material: 1. Appendix C (the self-correction step) should be moved to the main manuscript, since it is the main contribution of the paper.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strength: Paper is well-written and easy to follow.
Weakness: See questions below.
Other Comments Or Suggestions: NA
Questions For Authors: 1. The use of uniform noise for mask diffusion is also introduced as a mask-and-replace transition in "Vector Quantized Diffusion Model for Text-to-Image Synthesis" paper. A discussion should be included.
2. The proof of Proposition 3.3 should be moved to the supplementary material, as it is not a significant part of the paper; it is also quite similar to the derivations in existing works such as "Simplified and generalized masked diffusion for discrete data" and "Structured denoising diffusion models in discrete state-spaces". Apart from the ELBO (Section 3.3), Section 3 appears to present background information rather than the paper's technical contribution.
3. Self-correction step is similar to the sampling strategy in MaskGIT paper (MaskGIT: Masked Generative Image Transformer).
4. What is the method's performance when the weighting term is ignored, i.e., when the weight is set to a constant 1?
5. Table 4 raises questions about GIDD's performance when trained for longer periods, as the results suggest potential overfitting. It's unclear why the authors include other baselines with 1.1B parameters while omitting results for their method at the same parameter size, making direct comparisons difficult.
6. How does large uniform noise (e.g., 0.5) influence model performance?
7. Table 3 demonstrates that the use of uniform noise does not improve perplexity (PPL). An alternative approach would be to report entropy instead.
8. Since the method has an additional self-correction step, I wonder whether it introduces latency into the sampling process. A report of sampling speed (e.g., tokens/sec) is needed.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and insightful questions. We especially feel that 2) deserves careful discussion since we consider the theoretical contributions a key part of our work. We would like to encourage the reviewer to share some additional detail on their concerns.
1. We agree with the reviewer that a discussion of [4] needs to be added. While [4], and also [2] (App. A.2.6), experiment with a BERT-like combination of masking + uniform noise, these approaches are only discrete-time and do not investigate what improvements/challenges arise from it.
2. We agree that there are some similarities between our theoretical contributions and the suggested references [1, 2] and would like to highlight the important differences. The theory behind GIDD is a strict generalization of masked diffusion (MD) in [1] (see Cor. 3.8). Unlike MD, GIDD allows for a time-variable mixing distribution $\pi_t$, which makes MD a special case where $\pi_t=\mathbf{m}$ is constant w.r.t. time. Another example would be setting $\pi_t=\mathbf{1}/|V|$ where $|V|$ is the vocabulary size, which simplifies to uniform diffusion from [2]. As for [2], this paper covers the most general case of discrete diffusion and forms the foundation of much subsequent work, including [1] and ours. In this framework (and in discrete diffusion in general), complete knowledge of the transition matrix is paramount and necessary both for computing the ELBO as well as for sampling. In this regard, our theoretical contribution is to solve the inverse problem of finding the (closed-form) Markovian transitions given only the marginal forward transitions for a special family of linearly interpolating diffusion processes (see Prop. 3.3). In addition, while [2] only covers discrete-time, the GIDD ELBO and Markov transitions are stated in continuous-time by applying the tools from [3]. We hope that this sufficiently highlights our theoretical contributions and that we were able to address the concerns of the reviewer.
3. We agree with the reviewer that the confidence-weighting in the self-correction step is similar to the MaskGIT sampling algorithm. However, ours is a fixed-point iteration that resamples one token at a time and is only applied after denoising, whereas MaskGIT sampling is an adaptive masked diffusion sampler prioritizing high-confidence tokens. Indeed, applying MaskGIT sampling to GIDD ($p_u=0.0$; base) finds that this is prone to collapse, with generated samples being low-diversity and consisting mostly of repeated tokens. This manifests in a low generative entropy (via Gemma-2-9B) of 1.31 compared to the baseline of 3.13. MaskGIT sampling is unfortunately not applicable to $p_u>0$ models since those require token replacements in addition to unmasking. We thank the reviewer for pointing out this similarity and will discuss it in an updated version.
4. Preliminary experiments showed that setting the ELBO weights to 1 works for $p_u=0.0$ but is challenging for $p_u>0$, and in light of the clipping strategy working much better, we did not investigate this any further. We hypothesize that the weight ratio between uniform and noise-free tokens is important for learning the correct posterior: What is the distribution of the correct token, given that it is incorrect with only a small probability?
5. Regarding overfitting, we would like to point out that for $p_u=0.0$, MDM and GIDD are equivalent, so it is likely that if GIDD is overfitting, then the baselines are also overfitting. One potential cause could be the way long sequences are handled in our data loader (random cropping instead of splitting), which leads to more frequent repetition of short sequences, potentially leading to overfitting. The 1.1B MDM baseline is included purely for context as training 1.1B GIDD models was unfortunately not feasible given the available resources.
6. Discouraged by the higher loss and PPL of $p_u>0$ models, we did not experiment much with noise levels above $p_u=0.2$. Generally speaking, higher values are more challenging as the SNR drop gets more concentrated on early steps.
7. Unfortunately, we are not entirely sure which entropy the reviewer is referring to. Since PPL is just the exponent of cross-entropy, the comparison would not qualitatively change by reporting cross-entropy.
8. Since self-correction is run for at most 128 iterations (subject to early stopping) and the samples are generated in 128 denoising steps, the worst-case computational overhead is double. However, even if self-correction incurs some overhead, it is generally possible to improve sample quality while keeping the inference compute budget constant by reducing the number of denoising steps in favor of additional post-hoc correction steps.
- [1] Shi et al., 2024. https://arxiv.org/abs/2406.04329
- [2] Austin et al., 2021. https://arxiv.org/abs/2107.03006
- [3] Campbell et al., 2022. https://arxiv.org/abs/2205.14987
- [4] Gu et al., 2021. https://arxiv.org/abs/2111.14822
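To make the role of the time-variable mixing distribution in point 2 concrete, here is a minimal sketch of a linearly interpolating marginal. The notation (`alpha_t` as keep-probability, `mixing_pi` as $\pi_t$) is our illustrative shorthand, not necessarily the paper's exact parameterization:

```python
def marginal(x0_index, alpha_t, mixing_pi):
    """Marginal forward distribution of a linearly interpolating process:
    keep the clean token with probability alpha_t, otherwise draw from pi_t.
    A one-hot pi_t on [MASK] recovers masked diffusion; a uniform pi_t
    (1/|V| everywhere) recovers uniform diffusion."""
    return [alpha_t * (1.0 if i == x0_index else 0.0)
            + (1 - alpha_t) * p
            for i, p in enumerate(mixing_pi)]
```

Choosing `mixing_pi` per time step is exactly the extra degree of freedom GIDD adds over masked diffusion, where $\pi_t=\mathbf{m}$ is constant.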
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response! Here are my follow-up questions:
1. Regarding Q7, I mean the entropy of the model's generated samples. Although the paper proposes a mixed noising process (masking plus uniform noise) that supports token revision, it lacks strong evidence for this claim, so additional results (both quantitative and qualitative) are encouraged. This stems from the fact that PPL does not reflect the "true" quality of generated samples.
2. Could you provide the actual inference speed (i.e., tokens/sec) in comparison with the MDM baseline? Including this aspect is essential for the completeness of the paper.
3. Regarding the absence of the 1.1B model: I believe that whatever is included in a paper counts and has its purpose. If the 1.1B model was not available at submission time, the authors should at least mention it somewhere, or else discard it. Still, it would be nice to have.
4. The explanation of overfitting is not quite convincing to me. Is it just solely about data processing or model itself? If it is due to the model, the authors should admit it in the limitation section.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their rebuttal comment and will address their follow-up questions in the following.
1. To prevent any potential confusion, we would like to clarify that there are two different PPLs used throughout the paper: Validation PPL (model's PPL on the validation set) is the upstream metric, whereas generative PPL (PPL of Gemma-2-9B on generated samples) is the downstream metric. It is common to ablate training settings on the upstream metric (i.e. val. PPL), which is what is reported in Tables 2 and 3. The entropy of generated samples is also a downstream metric, so using it for ablations would be unusual and somewhat impractical. However, it is well-suited for evaluating the final GIDD+ checkpoints and for comparing different levels of $p_u$, which we have done in our reply to Reviewer wojY (Q2, Table 1). Regarding the request for additional results, we would like to point towards Q3 and Table 2 in our response to Reviewer wojY, where we have conducted an LLM-based evaluation of the self-correction abilities of the different noise schedules.
2. When generating 128 samples with a batch size of 1 and 128 denoising steps, we get a sampling speed of 1.27 sec/seq for GIDD and 1.18 sec/seq for MDM on a RTX 4090. This translates to a speed of 404 tok/sec and 433 tok/sec respectively. The slight overhead (~7%) of GIDD stems from handling general-case mixing distributions and is constant w.r.t. the model, so it will shrink as the model size increases. Since the architecture is shared between GIDD and MDM, model inference takes the exact same amount of time. It is also important to mention that even in the absence of any self-correction steps, the gen. PPL of $p_u>0$ models is significantly better than $p_u=0.0$ and MDM (see Fig. 6, App. D), making it worth the additional cost. Given the slight overhead imposed by GIDD, we agree with the reviewer that this is important to mention and will include it in future revisions of the paper.
3. While we are working on scaling up the proposed models, we cannot promise any results on that front in the near future, given that the primary constraint was and is the availability of computational resources. In any case, we will make sure to mention this in future revisions.
4. It is important to highlight that overfitting results from our data processing and is not model specific--if it is happening at all. As Reviewer wojY correctly points out, the differences may be too small to even make such a claim. | Summary: As a class of models currently attracting significant attention, masked diffusion models suffer from a fundamental limitation: once a token is generated, it cannot be modified. To address this issue, this paper introduces General Interpolating Discrete Diffusion (GIDD), which allows for a more flexible noise formulation. The authors derive the forward process, backward process, and loss function for GIDD. Additionally, they propose several techniques to stabilize training and improve performance. Experimental results show that while GIDD increases the modeling difficulty—leading to a performance drop in language modeling perplexity and downstream tasks—it effectively demonstrates self-correction capabilities.
Claims And Evidence: The claims made in this paper are not well supported by the experimental results.
Specifically, language modeling perplexity and downstream task performance are the primary metrics of interest, yet the proposed method leads to worse performance on both.
Regarding generative PPL, this metric is not entirely reliable. Even low-quality sentences (e.g., repetitive words) can receive low PPL from large language models, a limitation that the authors themselves acknowledge. Given this, the advantage demonstrated by GIDD is only observed on this unreliable metric, raising concerns about the validity of the claimed improvements.
Methods And Evaluation Criteria: No. As discussed in Claims and Evidence, the primary evaluation metrics (language modeling PPL and downstream task performance) show a decline, while the generative PPL metric used to support the method is unreliable. Thus, the evaluation does not convincingly justify the claims.
Theoretical Claims: No. I did not verify every step of the derivations, but the theoretical arguments appear logically sound and well-structured.
Experimental Designs Or Analyses: Yes. Please see Claims And Evidence.
Supplementary Material: Yes. I reviewed Appendix B of the supplementary material.
Relation To Broader Scientific Literature: Discrete diffusion models have recently made significant progress in text generation, particularly masked diffusion models. However, a key limitation of masked diffusion models is that once a token is generated, it cannot be modified—an inherent constraint of their noise injection process. Addressing this issue has been a major focus in the field.
This paper introduces a generalized forward process that combines masked noise and uniform noise, representing a valuable exploration of this problem.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper has a clear motivation, well-structured theoretical derivations, and is written in a clear and concise manner.
However, the main weakness is that the experimental results do not sufficiently support the effectiveness of the proposed method (as detailed in Claims and Evidence). Although the authors introduce "self-accuracy" as an evaluation metric, they do not provide sufficient justification for its validity.
Other Comments Or Suggestions: 1. In Section 5.3, the authors claim that GIDD trained on only 131B tokens surpasses models trained for twice as long, attributing this to overfitting on spurious patterns in the training data. However, this explanation seems questionable, as such results are more likely due to random variation than to overfitting. Moreover, on datasets such as ARC-c, BoolQ, OBQA, and WinoG, both models perform close to random guessing, making it difficult to determine which is superior.
2. When using generative perplexity as an evaluation metric, I suggest including entropy as a complementary measure to assess the diversity of generated sentences, providing a more comprehensive perspective.
3. To better evaluate the quality of generated text, I recommend conducting a user study or leveraging LLM-based scoring, as these approaches would offer more reliable and interpretable assessments.
4. Would it be possible to include results on reasoning benchmarks, such as GSM8K?
If the author provides the aforementioned more detailed evaluation metrics (or at least some of them), I will increase my score.
Questions For Authors: 1. In Figure 3, what does "Tokens changed" represent? How does it relate to the number of sampling steps?
2. For GIDD with uniform noise, is it possible to set the number of sampling steps to an arbitrarily large value? I am particularly curious whether tokens continue to change in the later stages of sampling when the number of steps is very large.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful review and constructive feedback. In the following, we would like to respond to the reviewer’s comments and suggestions and provide additional experimental results to further bolster the claim of self-correction in the presented models.
1. We mostly agree with the reviewer on the comment regarding overfitting. The difference is indeed small, and performance is almost random on some datasets. Given that the difference is consistent, albeit small, perhaps the appropriate characterization would be to say that it is likely run-to-run variance with some likelihood of overfitting. As mentioned in our response to Reviewer LeK6, we see a potential cause of overfitting in the way long sequences are handled in our data loader, which leads to more frequent repetition of short sequences, potentially causing overfitting.
2. We agree with the reviewer that this would be a good addition and would help quantify the diversity loss during self-correction. Given the mode-seeking nature of the self-correction step, some decrease in diversity is to be expected, but this should be restricted to a reasonable amount. Indeed, we find a decrease in entropy for $p_u>0$ models, with the decrease correlating linearly with the number of changed tokens (see Table 1 below). However, the decrease is moderate and does not indicate any collapse. This is also supported by the LLM-based evaluation (see 3.). For further context, we also provide qualitative self-correction examples in Appendix K, which give some intuition on the nature and extent of the reduced diversity.
3. Unfortunately, conducting a user study comes with its own host of challenges and is beyond the scope of this project. However, we have conducted an LLM-based evaluation of the quality of generated samples via GPT-4o using single-answer grading [1] (prompt omitted due to char. limit). The samples are graded on a scale from 1 to 10 in terms of clarity, grammaticality, factuality, writing style, and creativity. We would like to emphasize that these absolute numbers are highly dependent on the judge model’s calibration and should therefore be taken with a grain of salt. Nevertheless, we find consistent improvements at high significance levels, with $p_u=0.2$ exhibiting both the largest improvement and the highest scores overall (see Table 2 below). We report the self-correction setting with the largest effect for each noise level.
4. Given that our models are comparatively small and not trained on instruction following or conditional generation, evaluation on GSM8k is unfortunately not possible. Even if we try to do pseudo-conditional generation by forcing the logits of prompt tokens, the model does not generate meaningful continuations.
Regarding the reviewer’s questions:
1. The “number of tokens changed” refers to the number of tokens that differs from the initial sequence after convergence/termination of the self-correction step. While this roughly correlates with the number of inference steps, it is not a one-to-one correspondence since self-correction may, at times, oscillate between two or more similar states (e.g. multiple options that are equally correct/incorrect). We therefore deem the number of changed tokens after convergence to be a more meaningful metric compared to simply the number of self-correction steps.
2. Indeed, due to the continuous-time nature of the GIDD diffusion process, the number of denoising steps can be set arbitrarily high. Due to our chosen parameterization, where the model predicts the fully noise-free data which is then re-noised up to the appropriate level, uniform noise is actually continually injected at low levels, so tokens continue to change throughout. Future work can explore different approaches that avoid this, which may lead to improvements.
[1] Zheng et al., 2023. https://arxiv.org/pdf/2306.05685
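The self-correction step discussed above (a fixed-point iteration that resamples one low-confidence token at a time until convergence) can be sketched as follows; `propose` is a hypothetical stand-in for the denoiser's per-position suggestion and confidence, not the actual model interface:

```python
def self_correct(tokens, propose, max_iters=128):
    """Toy fixed-point loop: repeatedly replace the single token the model
    trusts least, stopping early once the suggestion matches the current
    token (a fixed point)."""
    tokens = list(tokens)
    for _ in range(max_iters):
        proposals = [propose(tokens, i) for i in range(len(tokens))]
        # Position whose *current* token has the lowest confidence.
        i = min(range(len(tokens)), key=lambda j: proposals[j][1])
        suggestion = proposals[i][0]
        if suggestion == tokens[i]:  # converged: early stop
            return tokens
        tokens[i] = suggestion
    return tokens
```

The number of positions that differ from the initial sequence once this loop terminates corresponds to the "tokens changed" metric discussed above; oscillation between equally plausible states is why the iteration count alone is less informative.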
---
**Table 1**
||$p_u=0.0$||$p_u=0.1$||$p_u=0.2$||
|-|-|-|-|-|-|-|
|Temperature $\tau$|#tokens changed|Entropy|#tokens changed|Entropy|#tokens changed|Entropy|
|(no self-correction)|0.0|3.13|0.0|3.05|0.0|3.03|
|0.01|14.5|3.12|5.78|3.04|7.65|3.01|
|0.05|46.2|3.06|23.7|2.98|28.2|2.94|
|0.1|62.6|3.12|44.8|2.94|58.4|2.87|
|0.5|52.4|3.13|37.8|2.95|60.2|2.87|
|1.0|41.4|3.12|35.3|2.96|51.1|2.89|
**Table 2**
|Model|Clarity|Grammaticality|Factuality|Writing style|Creativity|
|-|-|-|-|-|-|
|GIDD ($p_u = 0.0$)|2.51|2.96|3.61|2.84|4.48|
|+ self-correction ($\tau = 0.1$)|1.99 (-20.9%**)|2.39 (-19.3%**)|3.02 (-16.2%**)|2.24 (-21.1%**)|3.60 (-19.5%**)|
|GIDD ($p_u = 0.1$)|2.51|2.85|3.66|2.78|4.26|
|+ self-correction ($\tau = 0.1$)|2.69 (+7.2%**)|3.05 (+6.9%**)|3.88 (+6.0%**)|2.98 (+7.1%**)|4.35 (+2.1%*)|
|GIDD ($p_u = 0.2$)|2.49|2.82|3.70|2.79|4.25|
|+ self-correction ($\tau = 0.5$)|2.90 (+16.5%**)|3.29 (+16.6%**)|4.01 (+8.5%**)|3.16 (+13.4%**)|4.48 (+5.5%**)|
Significance levels: * $>2\sigma$ difference, ** $>5\sigma$ difference.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply!
After checking Table 2, I’m keeping my promise: *“If the author provides the aforementioned more detailed evaluation metrics (or at least some of them), I will increase my score.”* Since the other two reviewers gave a score of 3, I’ve decided to raise mine from 2 to 4 because I hope the paper gets accepted.
However, I still have some concerns. The entropy values reported by the authors appear to be unusually low. According to Section 6.1 in [1], the normal entropy range is approximately 5.6–5.7. Moreover, if the authors could pre-train or fine-tune a larger model to show the method’s performance on math or code tasks, I would definitely be inclined to highlight this paper.
[1] Zheng et al. Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling. ICLR 2025.
-----------------------
**After Reply Rebuttal Comment by Authors**
I would like to clarify that **entropy is computed directly on the generated tokens and is unrelated to the choice of the reference model.** The authors claim that the poor entropy results are due to their choice of reference model, which is confusing. I believe the authors must have made a mistake here, so I have adjusted my score to a 3.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their rebuttal comment and for the positive response to the additional experiments.
Regarding the lower entropy compared to [1], this stems from the fact that we use Gemma-2-9B as our reference model, implying that absolute numbers are not comparable between ours and [1]. The reason we choose Gemma-2-9B over GPT2-large is that we find the latter to be too small of a model to constitute a reasonable approximation of the true distribution of natural language. For the same reason, absolute gen. PPL numbers are also not comparable between ours and prior work using GPT2-large as a reference model. This is in addition to the numerical instabilities of Gumbel sampling, highlighted in [1] and also Appendix H, making our numbers not comparable to much of the existing MDM literature.
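For clarity on the metric itself: generative PPL is the exponent of the reference model's mean cross-entropy on the generated tokens, which is exactly why absolute numbers depend on the chosen reference model. A minimal sketch, with hypothetical per-token log-probs standing in for the reference model's (e.g. Gemma-2-9B's) outputs:

```python
import math

def generative_ppl(token_logprobs):
    """exp of the mean negative log-likelihood the reference model assigns
    to the generated tokens."""
    ce = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(ce)

# A uniform log-prob of 1/8 per token gives a PPL of ~8; a stronger
# (or differently calibrated) reference model shifts these numbers.
```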
---
**EDIT (April 8):** Thank you to the reviewer for pointing out our oversight regarding how **entropy is computed directly on the generated tokens**. While we believe that the generative entropy reported in our initial rebuttal (Table 1) is still a useful metric, it is indeed not how entropy is commonly computed in the literature. Following [1] (App. H.1), we have redone the entropy calculation via sequence-level unigram modeling, the result of which is reported in Table 1 below. We still observe an expected but moderate drop in entropy after self-correction. Hybrid models ($p_u>0$) actually have higher entropy scores already before self-correction, indicating greater sample diversity.
We hope that the reviewer will still see this update and reconsider their final score accordingly.
[1] Zheng et al., 2024. https://arxiv.org/abs/2409.02908
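A sketch of the sequence-level unigram entropy calculation described above; whether natural log and per-sequence averaging match [1] exactly is our assumption:

```python
import math
from collections import Counter

def unigram_entropy(sequences):
    """Fit a unigram model per generated sequence and average the entropies."""
    entropies = []
    for seq in sequences:
        counts = Counter(seq)
        n = len(seq)
        entropies.append(-sum((c / n) * math.log(c / n)
                              for c in counts.values()))
    return sum(entropies) / len(entropies)

# A fully repetitive sequence scores 0; two tokens used equally often
# score log(2), so low values flag collapsed, low-diversity samples.
```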
**Table 1**
||$p_u=0.0$||$p_u=0.1$||$p_u=0.2$||
|-|-|-|-|-|-|-|
|Temperature $\tau$|#tokens changed|Entropy|#tokens changed|Entropy|#tokens changed|Entropy|
|(no self-correction)|0.0|4.95|0.0|5.09|0.0|5.08|
|0.01|14.5|4.67|5.78|5.04|7.65|5.03|
|0.05|46.2|4.54|23.7|4.97|28.2|4.94|
|0.1|62.6|4.66|44.8|4.99|58.4|4.94|
|0.5|52.4|4.83|37.8|5.01|60.2|4.98|
|1.0|41.4|5.06|35.3|5.10|51.1|5.09| | Summary: This paper generalizes discrete diffusion models with masked or uniform transition kernels to a larger design space. Specifically, the authors introduce a Generalized Interpolating Discrete Diffusion process (GIDD), which transfers data not to the [mask] state but to an arbitrary predefined distribution. They further prove the existence of such transition kernels and provide ELBO for GIDD. This paper especially focuses on mixing the masked and the uniform noising schemes and demonstrates the empirical advantage of GIDD against existing non-autoregressive methods of language modeling.
Claims And Evidence: Theoretical and empirical contributions are claimed in extending the discrete diffusion framework, which is well supported by evidence.
Methods And Evaluation Criteria: The proposed method, GIDD, extends discrete diffusion models to generate categorical data. It is evaluated on language generation (by perplexity) and accuracy over downstream tasks, which is an effective evaluation criterion. The effectiveness of the proposed hybrid noising process is evaluated by performing self-correction sampling and computing the generative perplexity under Gemma 2 9B.
Theoretical Claims: The paper characterizes the Markov chain for the GIDD process and shows the ELBO of GIDD. The reviewer checked the proofs of Proposition 3.3, Lemma 3.6, and Theorem 3.7 and believes that the proofs are correct.
Experimental Designs Or Analyses: The reviewer checked the experimental design and analyses. The implementation of GIDD with $p_u=0.0$ resembles MDM, which is consistent with the theoretical claims. The additional uniform noise slightly worsens the PPL of GIDD, as shown in Table 3 and Table 5, which seems to weaken the claim that combining masking and uniform noise improves sample quality. The authors also analyze the proposed self-correction sampling method, showing better self-correction ability of GIDD with additional uniform noise.
Supplementary Material: The reviewer checked most parts of the supplementary, including some additional results and some of the proof details.
Relation To Broader Scientific Literature: This paper may be of interest to a broader audience in other scientific domains, such as protein generation with discrete diffusion models.
Essential References Not Discussed: Related works are well discussed in this paper.
Other Strengths And Weaknesses: #### Strengths
- This paper provides a richer design space for discrete diffusion models that would be intriguing to explore for future works. This generalized framework is already a contribution by itself.
- The proposed GIDD process is rigorously formulated and well articulated.
#### Weaknesses
- It seems that the additional uniform noise on top of the masked diffusion process does not improve the overall performance of the discrete diffusion model, which undermines the meaningfulness of the hybrid noising process.
- The advantage of GIDD with $p_u>0$ is discussed in Section 5.4, but the importance of self-correction is not well explained. For example, does this self-correcting scheme improve downstream benchmark accuracy for GIDD with $p_u>0$?
Other Comments Or Suggestions: See above.
Questions For Authors: Some of my points listed in weaknesses may be inaccurate, and I would be grateful for any clarification. Specifically, what is the importance of self-correction and why is it not demonstrated in Tables 2–4?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and detailed review and for acknowledging the theoretical contributions of GIDD.
In the following, we would like to shed some additional light on the empirical evaluation of the proposed model and, hopefully, address the reviewer’s concerns.
Broadly speaking, our evaluation (and evaluation of language models in general) consists of two parts. The first part is likelihood-based evaluation, which tests the model’s ability to _recognize_ well-formed, high-likelihood sentences, and includes PPL on the test set as well as downstream multiple-choice benchmark performance. The second part is to test the model’s ability to _generate_ well-formed, high-likelihood sentences and for unconditional generation, this is most commonly measured with generative-PPL. This ability is equally, if not more, important because it is how language models are usually used in practice. And while the two abilities may correlate in general, sampling from the model can sometimes introduce challenges not present when simply evaluating likelihood. For example, teacher-forcing is used both when training autoregressive models and when evaluating their likelihood, but not when generating samples, which makes them prone to self-induced mistakes [1]. The same holds true for mask-only diffusion models. The point of our experiments is to show that while, indeed, hybrid diffusion models find it more challenging to evaluate the likelihood of given samples, they have self-correction capabilities and are more robust/less prone to self-induced errors at sampling time, thus generating higher-quality samples overall and especially so for low inference-compute budgets (see Sec. 5.4, L435 ff.; Fig 6, App. D). To further bolster this point, we provide an additional LLM-based evaluation in our response to Reviewer wojY, where we quantify the improvements in sample quality during the self-correction step in terms of clarity, grammaticality, factuality, writing style, and creativity. Perhaps the distinction between _recognition_ and _generation_ should be highlighted and discussed more prominently in the paper, which we will be happy to do in an updated version.
Regarding the second weakness: The reason we cannot naively apply self-correction to the multiple-choice benchmarks is because these benchmarks rely on likelihood-based answer selection. Specifically, for a given question (or prompt) with a set of possible answers (or continuations), the one with the highest likelihood under the model is selected. Since this process does not involve sampling from the model, it is a priori not possible to utilize the self-correction capabilities. As a result, benchmark scores of $p_u > 0$ models are hampered in correlation with their worse likelihood (see Tab. 5, App. B). Future work may aim to close the gap by using different criteria for selecting the correct answers (e.g. self-accuracy instead of likelihood), but unfortunately, this was beyond the scope of this project. Alternatively, future work may extend the proposed models to conditional generation, which will allow evaluation on generative benchmarks like GSM8k or HumanEval, where the generative strength and self-correction capabilities of hybrid models may bring substantial improvements over mask-only diffusion models.
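The likelihood-based answer selection described above can be sketched as follows; `seq_logprob` is a stand-in for the model's sequence log-likelihood of a continuation given the prompt:

```python
def pick_answer(prompt, choices, seq_logprob):
    """Likelihood-based answer selection: score each candidate
    continuation under the model and return the most likely one.
    No sampling is involved, so self-correction cannot kick in."""
    return max(choices, key=lambda c: seq_logprob(prompt, c))

# Toy scorer standing in for a model's sequence log-likelihood.
toy = {("Q", "A"): -4.2, ("Q", "B"): -1.3, ("Q", "C"): -7.0}
assert pick_answer("Q", ["A", "B", "C"], lambda p, c: toy[(p, c)]) == "B"
```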
[1] Bachmann & Nagarajan, 2024. https://arxiv.org/abs/2403.06963 | null | null | null | null | null | null | null | null |
Reward-Augmented Data Enhances Direct Preference Alignment of LLMs | Accept (poster) | Summary: This paper addresses a common limitation in preference-based alignment methods, where only relative preferences are considered, while qualitative aspects of responses are overlooked. It introduces reward-conditioned LLM policies that are trained to generate responses conditioned on rewards. Leveraging a simple relabeling strategy, it constructs preference pairs based on the quality scores to train the reward-conditioned LLMs. Experimental results show that the approach consistently improves the performance of DPO across diverse models and benchmarks.
Claims And Evidence: Regarding the limitations of vanilla direct alignment, the paper highlights two key issues: (a) high-quality rejected responses may be unlearned, and (b) low-quality chosen responses may be reinforced. While these are indeed potential issues, the claims are made more at a conceptual level, based on possible language policies that can be learned. However, the extent to which these issues arise depends on the optimization and the composition of the preference dataset. For instance, assuming overfitting is avoided, the likelihood of a high-quality rejected response can remain unchanged while the likelihood of the chosen response increases to drive down the loss. Similarly, both low-quality chosen and rejected responses can have their likelihoods reduced, with the rejected response being penalized more to achieve a lower loss. While precisely describing all possible optimization outcomes is challenging, this part of the paper could be made more rigorous to strengthen the claims.
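The reviewer's point can be made concrete with the per-pair DPO loss, which depends only on the *relative* implicit-reward margin between chosen and rejected responses. A minimal sketch with illustrative `beta` and log-probability values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * implicit-reward margin).
    Only the relative margin matters, not absolute likelihoods."""
    margin = (logp_w - ref_w) - (logp_l - ref_l)
    return -math.log(sigmoid(beta * margin))

base = dpo_loss(-10.0, -10.0, -10.0, -10.0)
# (a) Raising only the chosen likelihood lowers the loss while the
#     rejected response's likelihood is untouched.
assert dpo_loss(-8.0, -10.0, -10.0, -10.0) < base
# (b) Both likelihoods can drop, with the rejected one dropping more,
#     and the loss still decreases.
assert dpo_loss(-11.0, -14.0, -10.0, -10.0) < base
```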
The claim that reward-conditioned policies learn from the full spectrum of responses is sound, as they are explicitly conditioned on the target score of the response to be generated.
Methods And Evaluation Criteria: Given quality scores of responses, the paper proposes a simple labeling strategy to implement reward-conditioned alignment. Specifically, given $(x, y_w, r_w, y_l, r_l)$, it proposes to create two new preference pairs, with each of the two responses serving as the target response to generate for its corresponding score. This is a simple, intuitive approach to utilizing quality scores of individual responses that allows direct application of existing alignment methods such as DPO.
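The relabeling strategy described above can be sketched in a few lines; the `[score: r]` prompt template here is hypothetical and stands in for whatever reward-conditioning format the paper actually uses:

```python
def condition(x, r):
    # Hypothetical prompt template; the paper's exact format may differ.
    return f"[score: {r}] {x}"

def reward_augment(x, y_w, r_w, y_l, r_l, condition):
    """Relabel one scored pair (x, y_w, r_w, y_l, r_l) into two
    reward-conditioned preference pairs (prompt, chosen, rejected):
    conditioned on r_w, y_w is chosen; conditioned on r_l, y_l is chosen."""
    return [
        (condition(x, r_w), y_w, y_l),
        (condition(x, r_l), y_l, y_w),
    ]

pairs = reward_augment("Explain DPO.", "good answer", 9, "weak answer", 4, condition)
assert pairs[0] == ("[score: 9] Explain DPO.", "good answer", "weak answer")
assert pairs[1] == ("[score: 4] Explain DPO.", "weak answer", "good answer")
```

Note that the dataset doubles and every response, including the originally rejected one, appears once as a chosen target under its own score.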
For evaluation, the paper considers one preference dataset (UltraFeedback) and five different language models to assess the proposed method against vanilla DPO. While the method does not always outperform DPO, in cases where it does, the performance margin is sometimes large depending on the base model. The evaluation could have been strengthened by applying the reward-conditioned alignment to alternative preference-based alignment methods such as IPO, etc.
Theoretical Claims: Theorem 4.1 shows that under mild conditions, the proposed reward-augmented DPO converges to the optimal policy.
Experimental Designs Or Analyses: The paper adopts UltraFeedback for preference-based alignment and assesses five open language models across six academic benchmarks. Both the number of language models and benchmarks seem sufficient for a thorough evaluation. However, the experiments could have been strengthened if preference-based alignment methods other than DPO, which have been introduced to address different limitations of DPO, are also evaluated.
Supplementary Material: No supplementary material has been reviewed.
Relation To Broader Scientific Literature: The paper proposes a simple data relabeling method that, in cases where individual quality scores are available, allows for better use of those scores to train models in a way that mitigates the limitations of vanilla DPO, such as unlearning high-quality responses and reinforcing low-quality responses. This is related to several prior works that attempt to address similar problems, such as IPO (reduces overfitting), conservative DPO (uses label smoothing), MMPO (considers relative quality differences), etc. While the paper demonstrates performance gains over vanilla DPO, additional comparisons with other closely related alignment methods could have further strengthened the study.
Essential References Not Discussed: Several prior works, such as [1] and [2], also study closely related problems, including incorporating qualitative aspects of responses into alignment and mitigating overfitting in DPO. These methods should ideally be evaluated alongside vanilla DPO or, at the very least, discussed.
---
[1] Kim et al., Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback.\
[2] Park et al., Disentangling Length from Quality in Direct Preference Optimization.
Other Strengths And Weaknesses: While the proposed method is a simple approach to utilizing quality scores of individual responses, it is unclear whether it is the most effective. For example, given individual quality scores, responses from different original preference pairs could be combined to create a significantly larger set of preference pairs. I am curious if the authors have considered or evaluated alternative approaches to using quality scores.
Other Comments Or Suggestions: None.
Questions For Authors: Q. After the proposed reward-conditioned training, how do the models perform on the flip task, i.e., evaluating the score of a given response, similar to a generative verifier?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Your valuable comments have greatly helped us improve our manuscript. Below are our specific responses to the raised questions:
**Weakness 1: Analysis of possible optimization outcomes.**
- In Section 3, we analyzed categorical LLM policies, i.e., tabular stochastic policies without function approximations. In this setting, for any prompt $x$ with the chosen and rejected responses $y_w$ and $y_l$, the optimal policy after RLHF must satisfy $\pi^*(y_w|x)=1$ and $\pi^*(y_l|x)=0$ in order to maximize the expected reward, i.e., $\max_\pi E_{y\sim\pi(\cdot|x)}[r(x, y)]$, since $r(x, y_w) > r(x, y_l)$. When only slightly more annotators prefer $y_w$ over $y_l$, i.e., $r(x, y_w)$ is slightly larger than $r(x, y_l)$, $\pi^*(y_l|x)=0$ will cause the LLMs unnecessarily unlearn the high-quality $y_l$. Similar limitations are discussed in more detail in Section 3.1.
- For function-approximated LLM policies, the analysis becomes significantly more complex. To demonstrate that these issues persist in practice, we provided empirical evidence in Figure 3 and will incorporate the newly conducted ablation on the O.O.D. dataset HelpSteer 2:
| | 8| 9 |10|
|-|-|-|-|
| Qwen2-7B-It |-416.7 | -356.5 | -334.8 |
| +DPO (UF)|-484.5 | -419.4 | -401.7 |
| +DPO (RA)| -438.6 | -366.4 | -341.1 |
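As an aside, the tabular-policy argument in the first bullet can be illustrated with a minimal sketch (reward values illustrative):

```python
def optimal_tabular_policy(rewards):
    """Reward maximization over a categorical (tabular) policy puts all
    probability mass on the argmax response, however small the gap."""
    best = max(rewards, key=rewards.get)
    return {y: 1.0 if y == best else 0.0 for y in rewards}

# r(x, y_w) only slightly exceeds r(x, y_l), yet y_l is fully unlearned.
pi = optimal_tabular_policy({"y_w": 0.51, "y_l": 0.49})
assert pi == {"y_w": 1.0, "y_l": 0.0}
```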
**Weakness 2: Missing related works including [1] and [2].**
- The comparison with R-DPO [2] was provided in Table 12. We thank the reviewer for pointing out the missing related work [1], which we empirically compared in the following table. We report the MT-Bench scores of performing our method on the Llama3-SFT checkpoint:
| | Ours | MMPO | DPO |
|------|------|------|------|
| MT-Bench | 7.66 | 7.58 | 7.41 |
- We will add the above results to experiments and incorporate the following paragraph to related work:\
"Similar to our work, [1] also investigates how preference optimization can overlook qualitative aspects of responses. However, their focus is on overfitting to preference data, and they propose incorporating quality margins into the optimization objective. In contrast, our approach does not involve algorithmic modifications, but rather directly targets the limitations identified in Section 3. Our work also differs from [2], which introduces a constraint-based regularization term specifically aimed at mitigating verbosity bias."
**Weakness 3: Lack of empirical comparisons with DPO variants such as IPO and alternative approaches that use quality scores.**
In addition to DPO, we compared our method against **15** SOTA baselines. These include approaches that enhance DPO from various perspectives, such as IPO, as well as methods that incorporate quality scores during fine-tuning, including SteerLM, DPA, and MMPO. The results are presented in Figure 4 and Tables 10 and 12. For your convenience, we summarize the results below and will move Tables 10 and 12 to the main body of the manuscript.
| | Zephyr-SFT | DPO | DPA | SteerLM | NCA-P | NCA-R | INCA-P | INCA-R | Ours |
|-|-|-|-|-|-|-|-|-|-|
| LC Win Rate | 6.21 | 11.60 | 11.13 | - | 11.50 | 12.87 | 13.68 | 14.83 | **16.66** |
| Win Rate | 3.94 | 8.58 | 10.58 | 8.21 | 8.43 | 9.56 | 11.00 | 11.34 | **13.37** |
| |Llama-3-8B-It| SLiC-HF | ORPO | CPO | RRHF | KTO | IPO | RPO | R-DPO | SimPO | **Ours** |
|-|-|-|-|-|-|-|-|-|-|-|-|
| LC WR |22.92 | 26.9 | 28.5 | 28.9 | 31.3 | 33.1 | 35.6 | 40.8 | 41.1 | 44.7 | **48.2** |
| WR |23.15 | 27.5 | 27.4 | 32.2 | 28.4 | 31.8 | 35.6 | 41.7 | 37.8 | 40.5 | **53.2** |
If the reviewer has other baseline methods in mind, please let us know and we will be happy to include them as comparisons.
**Question 1: How do the models perform on the flip task, i.e., evaluating the score of a given response similar to a type of generative verifier?**
Our method, which is designed for direct preference optimization, is not directly applicable to training generative verifiers, which typically involve predicting reward tokens during training. That said, our approach is not incompatible with generative verifiers. For instance, at inference time—when the model is prompted to generate high-reward responses—the probability assigned to a given response can serve as an approximate measure of its quality. More generally, for LLM-as-a-Judge settings, reward-conditioned training can be applied to preference data that reflects the quality of judgments. By conditioning on the highest quality scores, the model exhibits its best judgment capabilities, rather than simply assigning high scores indiscriminately.
---
We hope the reviewer could consider raising the score if we resolved the reviewer's concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments.
[1] Kim et al., ''Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback.''\
[2] Park et al., ''Disentangling Length from Quality in Direct Preference Optimization.'' | Summary: The paper presents a data augmentation approach for learning on pairwise preference data that doubles the amount of data by modifying the prompt to include a description of the quality (a reward score) of the preferred response and treating each response as the chosen response. DPO is then used to update the parameters of the LLM using the newly constructed preference pairs. The authors cast the data augmentation in the framework of goal-conditioning. To motivate the data augmentation method the authors examine several limitations with DPO. The paper evaluates training with their augmented data on several LLMs, but different LLMs are used for different experiments. The method is compared to a couple of ablations or modifications of the proposed augmentation approach. The only baselines are training without the augmented data and using SPPO. On average the proposed augmentation improves performance according to AlpacaEval2.0 win rates. However, for experiments with results for multiple LLMs the magnitude of the gains is not consistent. Additionally, the gains are small for MTBench. There is a slight, potentially not meaningful, average performance gain on NLP benchmark tasks.
Claims And Evidence: The authors make broad claims about the benefits of the data augmentation method. While the authors include a variety of LLMs in their experiments, very few experiments are conducted across LLMs. Therefore, it is difficult to understand how generally the claims apply across LLMs and their various training and data regimes. For example, the authors claim that their Half RA configuration has comparable performance to RA on the full dataset. However, the performance gaps vary by LLM (and by task), with differences of 6 and 11 points for AlpacaEval LC WR and WR, which would not be considered comparable. Therefore, it is important to point the reader to results that are reported for all LLMs. A version of Figure 2, but across all LLMs, is provided in Table 9 (Appendix B.2). However, this is not referenced anywhere in the main body of the paper. Such references must be made, and the analysis should reflect that the LLM used influences the performance gains. The analysis should also reflect that there are no meaningful performance gains on MTBench, along with an explanation or hypothesis for why this is the case when AlpacaEval2.0 shows much higher performance gains.
The authors claim that DPO is limited in its ability to model preference data because of overfitting and unlearning of high-quality responses, because they are the rejected response in a given pair. For example, in the second paragraph of the introduction. Many claims are made about how DPO behaves, but no evidence or citation is provided. Additionally, it is not mentioned that some of these issues are tested for in the paper.
Methods And Evaluation Criteria: The proposed method makes sense for the problem at hand and the evaluation benchmarks are standard for alignment focused tasks.
Theoretical Claims: I did not assess as they are all in the supplementary material.
Experimental Designs Or Analyses: 1. The authors motivate the structure of DPO and its offline nature as its key limiting factor. However, this is partially addressed by training with PPO+RM. The authors should include this as a baseline to compare against.
2. The proposed approach is to modify the prompt to be more specific about the quality of the response. Therefore, SFT on the augmented prompt + response is an important baseline.
3. The hyper-parameters used are listed in Appendix B.1. However, there is a lack of detail about how any parameters beyond the smooth parameter was set. The strategy for selecting the hyper-parameters MUST be detailed.
4. DPO can be sensitive to the exact hyper-parameters and the best hyper-parameters can vary across LLMs. If the hyper-parameters are not optimized per LLM, the true differences in performance may not be accurately reflected.
5. The experiments section should discuss the impact of LLM and compare against more baselines (these are mentioned above and in other sections). While additional experiments are included in the Appendix, they are not referenced in the main body and their conclusions are not discussed in the main body.
6. The paper states that the reward augmentation method improves performance of any direct alignment method. This is done in the appendix, but is not referenced at all in the main body of the paper.
7. The standard error bars for AlpacaEval2.0 win rate should be reported in Figure 2, Table 9, etc.
8. A main claim of the paper is that the dataset augmentation helps with generalization. However, all non-benchmark experiments use the same dataset used to train the model. This means that the prompts and responses are reasonably within distribution of the training data, especially as the data comes from GPT. To fully support the generalization claims, the type of analysis reported in Figure 3 should additionally be reported on on at least one dataset from a different distribution, e.g. HH-RLHF or OpenAssistant.
Supplementary Material: I did not read Appendix A.
As far as I can tell, Appendix B is not referenced in the main body of the text despite containing crucial information such as the prompts used during training and inferences, the hyper-parameters, the full results across all LLMs, comparisons to multiple baselines, as well as two additional experiments.
In the absence of the material in the appendix, this paper is incomplete, which is driving my current accept/reject recommendation. The authors MUST reference that this information is in the Appendix and include analysis of the complete results in the main body of the paper.
Relation To Broader Scientific Literature: The paper has interesting learnings and take aways for different strategies to steer LLM behavior.
There are contemporary papers that have explored similar methods, e.g. "Towards Aligning Language Models with Textual Feedback" (EMNLP 2024), and that attempt to solve similar issues with DPO, such as "Iterative Reasoning Preference Optimization" which uses a NLL-loss to help with unlearning. Any final version of the paper should discuss such contemporary literature and help readers to understand how they are distinct.
Essential References Not Discussed: The authors position the paper relative to the literature by pointing to a difference in goals. However, a difference in goals is not a strong distinction, as a method designed to address a different set of issues may additionally address the goals outlined in the paper. The authors do not go into detail about how or why their method is a better solution than the related work that is mentioned.
The method is similar to Decision Transformer ("Decision Transformer: Reinforcement Learning via Sequence Modeling") where generated behaviors are controlled by conditioning on the desired reward the generate behaviors should receive. The similarity and the relationship to Decision Transformer should be discussed.
If the paper is accepted, the camera ready version should discuss "Towards Aligning Language Models with Textual Feedback" (EMNLP 2024) as contemporary work.
Other Strengths And Weaknesses: 1. The authors make many claims about the weaknesses of DPO and use these claims to motivate their data augmentation approach, however when making those claims, the authors do not present evidence to support them (neither a citation nor experimental result). It isn't until the experiments/results section that the authors begin to provide evidence for their motivating claims. The experiments should be pointed to earlier in the paper when the claims are first raised. Something as simple as "(see Section X)" would be sufficient.
Other Comments Or Suggestions: All comments and suggests are included in previous sections.
Questions For Authors: I do not have questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **The only baselines are DPO and SPPO.**
In addition to DPO and SPPO, we compared with 15 baselines in Figure 4 and Table 12.
**Gains across LLMs are inconsistent. Marginal improvements on NLP benchmarks.**
- The effectiveness of our method is demonstrated on 5 LLMs. It consistently offers improvements, with most gains substantial. Since our method's hyperparameters were not extensively tuned for each model, variability in performance gains is expected. Notably, even SOTA alignment methods [1, 2] reported inconsistent improvements across different models.
- Alignment tax [1, 3] can reduce common-sense QA performance. So we primarily evaluated on instruction-following benchmarks, where our method yields strong improvements while avoiding the alignment tax.
**Claim of comparable performance between Half RA and RA.**
- While we initially considered RA and Half RA comparable, each outperforming the other on at least one benchmark, we agree that this claim is not essential. We will remove it as the comparison with DPO already supports this ablation: fine-tuning on reward-augmented data yields better performance with half of the prompts and the same compute.
- We conducted main experiments across five models and selected one or two for ablation studies. Due to resource constraints, we were unable to run all 10 ablations on all 5 models.
- We extended this ablation to Llama, in addition to Qwen and Gemma. The results are consistent with our original findings.
| | LC|WR|MTB | Arena |
|-|-|-|-|-|
| Llama-3.1-8B-It | 24.79 | 27.38 | 8.44 | 26.9 |
| +DPO (UF)| 28.67 | 30.21 | 8.47 | 33.0 |
| +DPO (RA) | 31.20 | **35.93** | 8.47 | **34.4** |
| +DPO (Half RA) | **31.66** | 34.37 | **8.50** | 33.6 |
**A version of Figure 2 across all LLMs is provided in Table 9 but not referenced.**
Figure 2 and Table 9 present the same results, both across all LLMs.
**The gains are small for MTBench.**
- Since we did not perform extensive hyperparameter tuning for each benchmark, it is expected that performance gains are modest on some benchmarks.
- On MTBench, the average gains obtained by our method are **~1.55** more than DPO. In comparison, all four models fine-tuned with SimPO [1] fail to outperform DPO on MTBench.
**The analysis in Figure 3 should also be reported on O.O.D data.**
We conducted additional experiments on HelpSteer2 and had similar observations as in Fig. 3:
| | 8| 9 |10|
|-|-|-|-|
| Qwen2-7B-It |-416.7 | -356.5 | -334.8 |
| +DPO (UF)|-484.5 | -419.4 | -401.7 |
| +DPO (RA)| -438.6 | -366.4 | -341.1 |
**No evidence or citation about how DPO behaves is provided.**
- In introduction, we cited [4] as the first to identify the unlearning issue of DPO, and compared with it in related work and experiments.
- We offered empirical evidence in Fig. 3. We will also include the new ablation on HelpSteer2 at L384 and add pointers to these results in Sec. 1 and 3.
**Limitations are partially addressed by PPO.**
Our primary motivation is to address the limitations of direct alignment, rather than those of PPO, which exhibits different limitations not covered here (see Sec. 5). We will include a comparison with Llama-3-PPO, which scores $21.27$ on AlpacaEval.
**SFT on the augmented prompt + response is an important baseline.**
Please refer to Fig. 4, where we compared with SOTA conditional SFT baselines including DPA and SteerLM.
**The strategy for selecting the hyperparameters must be detailed.**
We will add the following paragraph:\
"We tune $\beta$ within $[0.001, 0.01, 0.1]$ and batch size within $[64, 128, 256]$. We find $\beta=0.01$ and batch size $128$ yield the overall best performance for DPO across models. Our method uses the same hyperparameters as DPO."
**Additional experiments in Appendix are not referenced.**
We will add references to these ablation headers in the experiments and move most of them to the main body as space permits.
**The error bars for AlpacaEval should be reported.**
Reporting error bars requires training at least three times more models, which was not feasible given our resource constraints. For similar reasons, most prior alignment works [1, 2] also report results from single runs.
**Missing related works.**
We will add the following paragraph to L262:\
"Pang et al. (2024) addressed DPO’s tendency to reduce the probability of the chosen response by incorporating an NLL loss. In contrast, our work focuses on a different limitation of DPO—its tendency to overlook qualitative aspects of responses—and proposes a data relabeling approach that requires no algorithm changes. It also differs from conditional sequence modeling based on SFT (Chen et al. 2021, Lloret et al. 2024). Due to the lack of textual feedback in UF, we empirically compare with the reward feedback variants of Lloret et al. (2024), including SteerLM and DPA."
[1] Meng et al. ''SimPO.''\
[2] Wu et al. ''SPPO.''\
[3] Askell et al. ''Language Assistant as Alignment Laboratory.''\
[4] Adler et al. ''Nemotron-4 340B.''
---
Rebuttal Comment 1.1:
Comment: Sorry for the delay in this message. I posted it in the wrong spot.
Thank you for your responses. They have answered a number of my questions. I have an additional question.
Can you please elaborate on what is meant here, "Since we did not perform extensive hyperparameter tuning for each benchmark, it is expected that performance gains are modest on some benchmarks."? Which benchmarks were used to select the hyper-parameters?
Per this response, "Reporting error bars requires training at least three times more models, which was not feasible given our resource constraints. For similar reasons, most prior alignment works [1, 2] also report results from single runs." The AlpacaEval report includes a standard error score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the time and feedback. We are glad that our responses addressed your previous concerns, and we now address your remaining questions as follows.
**1. Can you please elaborate on what is meant here, "Since we did not perform extensive hyperparameter tuning for each benchmark, it is expected that performance gains are modest on some benchmarks."? Which benchmarks were used to select the hyper-parameters?**
We tune the KL regularization coefficient within $[0.001, 0.01, 0.1]$ and batch size within $[64, 128, 256]$. Among these, $\beta=0.01$ and a batch size of $128$ achieve the overall best performance *for DPO* (hyperparameters are not specifically tuned for our method) averaged across the LC win rate, MT-Bench average score, and Arena-Hard-Auto score. Specifically, $\beta=0.01$ consistently yields the best results across all models. While the optimal batch size varies slightly between models, $128$ performs best for most models, and for models with other optimal batch sizes, the gaps are minor. Our method adopts the same hyperparameter settings as DPO. We will add the above paragraph to our experimental setups.
**2. The AlpacaEval report includes a standard error score.**
- The standard error scores reported in the AlpacaEval GitHub repository and paper reflect variability across all instructions in the dataset and across different verbosity prompts, respectively. They are designed to assess the robustness of the leaderboard setup (specifically, the instruction designs and LC score calculation), rather than the robustness of the algorithms or the resulting models, which is what concerns the reviewer.
- To further address the reviewer’s concern, we report the average scores and their corresponding variances across three independent training runs in the table below:
| | Qwen-DPO | Qwen-Ours | Gemma-DPO | Gemma-Ours |
|-------|------------|------------|------------|------------|
| LC WR | 21.39±0.39 | 31.10±0.33 | 50.49±0.35 | 59.06±0.16 |
| WR | 19.68±0.28 | 28.22±0.31 | 35.40±0.19 | 54.48±0.12 |
| MTB | 8.35±0.001 | 8.46±0.001 | 8.54±0.001 | 8.58±0.001 |
These results are consistent with the scores presented in the paper, which supports the statistical significance of our results and demonstrates the robustness of the proposed method and DPO. Due to time constraints, we were unable to retrain the other models across multiple runs. However, we will include a complete table with error bars for all models in the next version of the manuscript.
---
We hope these responses have fully addressed your concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments. | Summary: This paper studies the preference alignment problem in Large Language Models (LLMs) and proposes a Reward-Augmented Data Relabeling method to improve Direct Preference Optimization (DPO). Traditional preference learning focuses only on relative preferences while ignoring the absolute quality scores of responses. This leads to unnecessary unlearning of high-quality rejected responses, indiscriminate reinforcement of low-quality chosen responses, and poor generalization to optimal responses, which are sparse in the preference dataset.
To address these issues, the authors introduce reward-conditioned policies, leveraging reward scores from AI feedback to relabel preference data and construct a reward-augmented dataset. Experiments on various instruction-following and academic benchmarks demonstrate that training LLMs with DPO on this enhanced dataset consistently improves performance. Additionally, results confirm that this method effectively mitigates the unlearning problem of high-quality rejected responses, making preference optimization more robust and generalizable.
Claims And Evidence: The paper provides a detailed and convincing explanation of the issues in preference learning. In the experimental section, it conducts extensive testing across multiple models and explores various aspects, particularly dataset size impact (Half RA). The authors perform in-depth experiments on the UltraFeedback dataset, carefully controlling variables such as DPO hyperparameters and benchmark settings to ensure the reliability of results. Additionally, the paper validates the effectiveness of its method for learning from high-quality rejected responses from two perspectives: log probability of high-quality rejected responses and a test where low-quality rejected responses are filtered out before training (reported in the appendix). Overall, the paper presents a meaningful research problem, develops a model to address it, and conducts extensive experiments to verify its effectiveness.
However, the paper also has some limitations: In Section 3 (Method), the newly proposed training objective is not fully reflected in later experiments. Instead, in Section 4 (Experiments), the authors directly apply DPO on the re-labeled dataset, rather than using the training objective introduced in Section 3. As a result, the experiments only verify that re-labeling preference data with reward scores improves generalization performance (as the paper claims, allowing the model to “learn from the full spectrum of response quality”). However, the current setup does not directly validate the theoretical claims made in the paper.
If possible, please supplement the paper with additional experiments or provide more theoretical justification to strengthen the connection between the proposed training objective in Section 3 (Method) and the experimental setup in Section 4 (Experiments).
Methods And Evaluation Criteria: - (1) The paper chooses well-established benchmarks (AlpacaEval 2.0, MT-Bench, TruthfulQA, GSM8K etc.), which are widely used to evaluate preference alignment in LLMs.
- (2) The evaluation metrics (win rate, accuracy) align well with existing literature.
- (3) Ablation studies effectively analyze different aspects of the method.
Theoretical Claims: In this paper, a convergence guarantee theorem (Theorem 4.1) is provided, and it is proved that the method converges to the optimal policy under certain assumptions, with an error bound of $O(N^{-1/2})$. The logic and derivation of the proof are correct, clear in form, and consistent with previous DPO-related work.
Experimental Designs Or Analyses: The paper evaluates multiple LLM architectures (Mistral-7B, Qwen2-7B, Llama-3-8B, Gemma-2-9B, SPPO).
It controls variables properly, comparing DPO with and without reward augmentation.
Ablation studies effectively isolate the impact of reward augmentation.
Supplementary Material: I have read all sections of the appendix. Appendix A establishes the preference learning model under the goal reward conditions, presents the training algorithm, and proves its convergence. Appendix B provides detailed experimental settings and additional results from ablation experiments.
Relation To Broader Scientific Literature: The experiments in this paper are closely related to prior work on Direct Preference Optimization (DPO), particularly the method proposed by Rafailov et al., 2024. The approach in this paper is similar to Reinforcement Learning from AI Feedback (RLAIF), such as LLM-as-Judge (Zheng et al., 2024; Cui et al., 2023) and Reward-Model-as-Judge (Adler et al., 2024; Dong et al., 2024).
Compared to traditional DPO, where the model learns directly from preference data, the method in this paper reconstructs the learning objectives by conditioning on goal rewards rather than directly learning from preference pairs. This reward-conditioned approach aims to optimize the model’s ability to learn from the entire spectrum of response quality.
The key distinction from RLAIF is that while RLAIF primarily leverages AI feedback preferences, the method in this paper goes further by incorporating AI feedback scores. This allows the model to learn from the full spectrum of response quality, rather than only relying on binary preference information. In other words, the paper’s approach uses the rating scores from the AI judge to enable more nuanced learning across a wider range of response qualities.
Essential References Not Discussed: All essential references have been carefully addressed.
Other Strengths And Weaknesses: **Strengths**
- (1) The background of the problem proposed in the paper is very worthy of investigation. The study of the full spectrum of response quality is often overlooked in preference learning, and the reward-conditioned preference learning method proposed in the paper is an effective approach to address this issue. It also holds promise for generalizing LLMs to high-quality output distributions.
- (2) The paper includes detailed ablation experiments that demonstrate the role of high-quality outputs in model training, which is an interesting finding.
**Weaknesses**
- (1) The experimental setup section could be more detailed. (For example, the time spent on experiments, more complete parameter configurations, IRA prompt settings, etc.).
Other Comments Or Suggestions: None
Questions For Authors: - (1) Could you provide a more complete explanation of the relationship between the theoretical model you proposed in Section 3 and the experiments in Section 4? From my perspective, the experiments in Section 4 modify the data within the prompt to construct a dataset with reward value information, aligning the training objective with the preset reward goals. However, what is the specific connection to the goal involving R(x, y, g) introduced in Section 3? Alternatively, could you explain how the results from the experiments in Section 4 support the validity of the theory in Section 3?
- (2) For the IRA experiments, since the implicit reward values provided by the model are unnormalized, how did you handle them and use them in your experiments? It might be helpful if you could provide the prompt settings for the IRA experiments.
- (3) If possible, could you provide the code for your experiments? This would help me better understand your work.
- (4) The experimental results shown in Figure 3 are quite striking, but they only demonstrate that the forgetting of high-quality rejected responses is alleviated. It does not show whether the log probability for low-quality rejected responses also increases. Could you provide more comprehensive experimental results? This would help present more convincing findings in your work.
- (5) Regarding your experiments, I believe they can be viewed as augmenting the dataset by swapping the accepted and rejected data pairs and incorporating robust learning with confidence-based parameters. Is my understanding correct? Could you elaborate on the connection and advantages of your work compared to traditional data-driven robust training methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for identifying our work's soundness and technical contributions. Your valuable comments have greatly helped us improve our manuscript. Below are our specific responses to the raised questions:
**Weakness 1 and Question 1: The authors directly apply DPO on the re-labeled dataset, rather than using the training objective introduced in the method.**
We will incorporate the following clarifications—extending lines 150–219—in the revised manuscript. The training objective introduced in our method is $$\max_{\pi\in\Pi}\mathbb{E}_ {x,g\sim\mathcal{D},y\sim \pi(\cdot\mid x, g)}[R(x,y, g)-\beta_0\text{KL}(\pi(\cdot\mid x,g)\| \pi_{\text{ref}}(\cdot\mid x,g))],$$ where $\Pi$ denotes the class of all goal-conditioned policies $\pi(y\mid x,g)$ and the relabelling distribution for $g$ is
$$\mathbb{P}( g= r(x,y_w) \mid x,y_w,y_l) = \mathbb{P}(g= r(x,y_l) \mid x,y_w,y_l)=1/2.$$ It has the closed-form solution as follows: $$\pi_ R(y\mid x, g)\propto \pi_{\text{ref}}(y\mid x,g)\exp(\beta_0^{-1} R(x,y,g)).$$
Using the reward reparameterization trick from DPO, the reward can be written as $$R(x,y,g) = \beta_ 0 \log\pi_R(y\mid x,g) -\beta_0 \log\pi_{\text{ref}}(y\mid x,g) -Z_ R(x,g).$$ Based on the relabeling distribution, we define the augmented preference dataset $\overline{\mathcal{D}}$ as $$\{(x^i,\tilde{y} _ w^i=y_ w^i,\tilde{y}_ l^i = y_ l^i,g^i = r(x^i,y^i_ w))\}_ {i\in[M]}\cup\{(x^i,\tilde{y} _ w^i=y_ l^i,\tilde{y}_ l^i = y_ w^i,g^i = r(x^i,y^i_ l))\}_ {i\in[M]},$$ which doubles the size of the original dataset $\mathcal{D}$. The resulting goal-conditioned DPO objective becomes:
$$
\max_{R} \mathbb{E}_ {x,\tilde{y}_ w, \tilde{y}_ l, g\sim \overline{\mathcal{D}}}[\log\sigma(R(x,\tilde{y}_ w,g) - R(x,\tilde{y}_ l,g))]
=\max_{\pi}\mathbb{E}_ {x,\tilde{y}_ w,\tilde{y}_ l,g\sim \overline{\mathcal{D}}}\Bigl[\log\sigma\Bigl(\beta_0 \log\frac{\pi(\tilde{y}_ w\mid x,g)}{\pi_{\text{ref}}(\tilde{y}_ w\mid x,g)}- \beta_0 \log\frac{\pi(\tilde{y}_ l\mid x,g)}{\pi_ {\text{ref}}(\tilde{y}_ l\mid x,g)}\Bigr)\Bigr].
$$
This objective directly corresponds to our implementation, which applies DPO on the relabeled dataset.
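For concreteness, the construction of the augmented dataset $\overline{\mathcal{D}}$ described above can be sketched in a few lines of Python. This is a minimal illustration, not the released implementation; the field names `prompt`, `chosen`, `rejected`, `score_chosen`, and `score_rejected` are hypothetical.

```python
def reward_augment(dataset):
    """Relabel each preference pair (x, y_w, y_l) into two goal-conditioned
    examples, doubling the dataset as in the augmented set described above."""
    augmented = []
    for ex in dataset:
        # Condition on the chosen response's score: keep the original ordering.
        augmented.append({
            "prompt": ex["prompt"], "goal": ex["score_chosen"],
            "chosen": ex["chosen"], "rejected": ex["rejected"],
        })
        # Condition on the rejected response's score: swap the pair, so the
        # policy also learns what a response at that quality level looks like.
        augmented.append({
            "prompt": ex["prompt"], "goal": ex["score_rejected"],
            "chosen": ex["rejected"], "rejected": ex["chosen"],
        })
    return augmented
```

DPO is then run unchanged on the doubled dataset, with each example's `goal` score embedded in the conditioning prompt.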
**Weakness 2: The experimental setups could be more detailed.**
We have included our hyperparameters and prompts in Appendix B.1, which we list as follows. On 8 A100 GPUs, the training takes about 5-7 hours.
| KL regularization | batch size | learning rate | warmup ratio | max prompt length | max completion length | optimizer | lr_scheduler|
|--|-|-|-|-|-|-|-|
| 0.01|128 | 5e-7 | 0.1 | 512 | 512 | AdamW | cosine|
**Question 2: Could you provide the prompt settings for the IRA experiments?**
The prompts used in the IRA experiments are identical to those in the main experiments (as detailed in Appendix B.1). The only difference lies in the rescaling of reward values to the range $[1, 10]$ using the following linear transformation: $\max(\min(10*(\text{reward} - \text{low}) / (\text{high} - \text{low}), 10), 1)$, where high and low denote the maximum and minimum implicit rewards computed from a small subset of the training data.
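As an illustration of the rescaling formula above (a sketch of the stated transformation, not the authors' exact code):

```python
def rescale_implicit_reward(reward, low, high):
    # Linear map of [low, high] onto [0, 10], then clamp into [1, 10];
    # the clamp guards against implicit rewards falling outside the range
    # estimated from the small calibration subset of the training data.
    return max(min(10 * (reward - low) / (high - low), 10), 1)
```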
**Question 3: If possible, could you provide the code for your experiments?**
We provided the code in the anonymous link: https://anonymous.4open.science/r/anonymous-9208-id.
**Question 4: Figure 3 is quite striking but does not show whether the log probability for low-quality rejected responses also increases.**
We report the log probabilities for low-quality rejected responses (with scores less than 5) in the following table:
| Rejected score | 1.0 | 2.0 | 3.0 | 4.0 |
|-|-|-|-|-|
| Qwen2-7B-Instruct | -471.5 | -283.6 | -302.7 | -334.6 |
|+DPO (UF)| -673.8 | -488.5 | -513.9 | -517.6 |
|+DPO (RA)| -674.2 | -485.9 | -495.6 | -420.4 |
It can be observed that the log probability of low-quality rejected responses for our method DPO (RA) has a similar scale as the vanilla DPO (UF) and is smaller than that of Qwen2-7B-Instruct.
**Question 5: Relationship to traditional data-driven robust training.**
We will incorporate the following discussions to our manuscript:\
"Both our method and robust training techniques are motivated by enhancing the model generalization by leveraging augmented datasets that capture alternative outcomes. However, robust training methods typically focus on generic uncertainty or perturbations, which are not involved in our method. Instead, it concerns the limitations in direct alignment algorithms, such as the unlearning issue, by explicitly conditioning on quality scores to steer the model toward learning patterns associated with varying levels of response quality. Moreover, robust methods often emphasize worst-case scenarios or boundary conditions, which can lead to conservative generalization. In contrast, our method promotes generalization toward sparse, high-quality responses."
---
We hope the reviewer could consider raising the score if we resolved the reviewer's concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments.
---
Rebuttal Comment 1.1:
Comment: Having reviewed the authors' clarifications, I now understand the relationship between their theoretical model and experimental implementation, which is solid enough. Their code release enables me to take a closer look at the method's details and confirm the reproducibility of the method.
However, the method relies heavily on the stability of pre-trained models used for scoring, particularly the implicit assumption that the AI judge’s reward scores are both accurate and consistent. This dependency—while common in RLAIF-inspired methods—introduces unquantified risks (e.g., potential error propagation due to bias or noise in the judge models) that may accumulate during training and affect the model’s robustness and its performance in real-world scenarios.
I will raise my score to 3 in recognition of the authors' detailed and rigorous technical clarifications and empirical validations. However, the dependency on pre-trained judge stability limits its broader impact and application.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for recognizing the rigor and reproducibility of our work and for raising the score to 3. We will address your remaining concern as follows.
**The method relies heavily on the stability of pre-trained models used for scoring, particularly the implicit assumption that the AI judge’s reward scores are both accurate and consistent. This dependency—while common in RLAIF-inspired methods—introduces unquantified risks (e.g., potential error propagation due to bias or noise in the judge models).**
- Compared to direct preference optimization methods such as DPO, our approach is **more robust** to the bias or noise of scalar rewards. For instance, when only a sample-based estimate of the true preference is available or when using function approximators, it is common for high-quality rejected responses to be favored by more annotators. DPO, which strives to maximize the reparameterized reward gap, may degrade the model in such cases. In contrast, our method is aware of the quality levels and learns from the full spectrum of response quality.
- The AI judge's reward scores have been shown to be accurate and consistent, conditioning on which leads to superior performance compared to DPO. In our experiments, we found that most conditional SFT methods, including DPA [1], SteerLM [2], NCA-P, NCA-R, INCA-P, and INCA-R [3], outperform DPO, which fails to account for the qualitative aspects of responses.
| | Zephyr | DPO | DPA | SteerLM | NCA-P | NCA-R | INCA-P | INCA-R | Ours |
|---------------|------------|-------|-------|---------|-------|-------|--------|--------|-------|
| LC Win Rate | 6.21 | 11.60 | 11.13 | - | 11.50 | 12.87 | 13.68 | 14.83 | **16.66** |
| Win Rate | 3.94 | 8.58 | 10.58 | 8.21 | 8.43 | 9.56 | 11.00 | 11.34 | **13.37** |
- We further performed an ablation study to assess our method's robustness to variations in the reward scales provided by the AI judges. Specifically, we utilized the UltraFeedback dataset, originally containing reward scores ranging from 1–10, and rescaled these scores to alternative ranges of 1–5 and 1–100 through linear transformations. The results are summarized in the following table:
| | Qwen2-7B-It | +DPO (UF) | +DPO (RA, 5) | +DPO (RA, 10) | **+DPO (RA, 100)** |
|--------------|-------------|-----------|--------------|---------------|-------------------|
| LC Win Rate | 20.93 | 21.46 | 29.85 | 31.17 | 31.81 |
| Win Rate | 18.22 | 19.35 | 26.12 | 27.58 | 27.96 |
It can be observed that the performance of our method remains stable and robust across these varying reward scales, indicating resilience against potential biases or noise introduced by different reward scales.
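A minimal sketch of the linear rescaling used in this ablation, assuming judge scores originally in the range 1–10 (illustrative helper, not the authors' code):

```python
def rescale_scores(score, new_max, old_min=1.0, old_max=10.0, new_min=1.0):
    # Map a judge score from [old_min, old_max] linearly onto
    # [new_min, new_max], e.g. 1-10 onto 1-5 or 1-100 as in the ablation.
    return new_min + (score - old_min) * (new_max - new_min) / (old_max - old_min)
```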
---
We hope these responses have fully addressed your concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments.
[1] Wang et al. ''Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards.''\
[2] Dong et al. ''SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF.''\
[3] Chen et al. ''Noise Contrastive Alignment of Language Models with Explicit Rewards.'' | Summary: Direct preferential optimization (DPO) has shown great potential for finetuning language models with user preferences. However, DPO highly depends on positive vs negative samples, and therefore, if some of the relatively good samples are rejected by evaluator model, that can significantly worsen the performance of DPO. In this paper, the authors rather proposed to generate a reward value (e.g., between 1 to 10) for each sample and finetune a reward-conditioned LM for alignment. Experimental results on several academic benchmark dataset (e.g., Alpaca and MT-bench) demonstrate that such reward conditioning can improve the performance of finetuning over traditional DPO ultrafeedback method.
Claims And Evidence: The performance of DPO is very sensitive to the evaluator model's preferences, since preference labels are binary, whereas adding scalar reward metrics can alleviate this sensitivity issue. The authors claim that a simple tweak of the loss function, together with asking the evaluator model to generate scalar goal-conditioned reward values, can significantly improve finetuning performance. Experimental results validate this claim and demonstrate superiority over the UltraFeedback method.
Methods And Evaluation Criteria: The evaluation metrics and datasets chosen for the experiments make sense to me. While UltraFeedback DPO is a good benchmark to compare with, there are also recent SOTA works cited in the related work that have not been considered as benchmark methods.
Theoretical Claims: Theorem 1 shows that the reward-condition formulation is guaranteed to reach optimality, and the detailed proofs are provided in supplement.
Experimental Designs Or Analyses: Experiments are carefully designed. Performance is validated on well-known benchmark dataset. Several ablation studies demonstrate that reward-conditioning can help improve performance of DPO on standard Q/A tasks, mitigate the issues with unlearning, and also the proposed framework is highly dependent on goal reward setting.
Supplementary Material: I have read the supplementary material at a high level and might have missed some of the mathematical proofs.
Relation To Broader Scientific Literature: Preference optimization and alignment is an important problem and will have broader interest in the community. This work provides an alternative to DPO with reward conditioning for SFT, and therefore, will be interesting broader academic community.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: While DPO has been widely adopted, it fails in many problems due to sensitivity w.r.t. good samples being rejected. Therefore, reward-conditioned alignment propose a good alternative towards DPO. Experimental results also validated that reward-conditioning alignment performs better in many scenarios over traditional DPO. Therefore, this solution would be of interest for the broader community.
Having said that, I have a few concerns:
1. LLM evaluators are typically not that stable when generating scalar rewards (as opposed to generating preferences), and thus many RLAIF works suffer. So, some analysis of how to address the errors or biases from evaluator models would have been great.
2. The proposed method is only compared with UltraFeedback DPO, but there exist other recent works that improve on DPO, e.g., RPO (Adler et al., 2024). Also, some comparison with the RLAIF line of work is expected.
Other Comments Or Suggestions: Majority of the space is allocated towards experimental results only. I would recommend expanding the technical method section with more proofs and guarantees on how the proposed direction can change the preference selection research direction.
Questions For Authors: 1. How do you address the errors in reward scores generated by the evaluator model? Typically, LLMs struggle to produce the right scalar rewards.
2. If we can generate the reward values, then we can directly use RL for the alignment. However, I do not see any comparison with the wide variety of RLAIF works. Please explain why.
3. How to identify the optimal goal reward value?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for identifying our work's soundness and technical contributions. The valuable comments have greatly helped us improve our manuscript. Below are our specific responses to the raised questions:
**Weakness 1 and Question 2: SOTA works cited in the related work but not compared, such as recent works that improve on the DPO, e.g., RPO (Adler et al., 2024), and RLAIF line of works.**
We have compared with **15** additional baselines in Figure 4 and Table 10, 12 in the Appendix, including RPO and other SOTA RLAIF methods. We listed the results below for your reference and will incorporate Tables 10 and 12 into the main body of the manuscript:
| | Zephyr-SFT | DPO | DPA | SteerLM | NCA-P | NCA-R | INCA-P | INCA-R | Ours |
|---------------|------------|-------|-------|---------|-------|-------|--------|--------|-------|
| LC Win Rate | 6.21 | 11.60 | 11.13 | - | 11.50 | 12.87 | 13.68 | 14.83 | **16.66** |
| Win Rate | 3.94 | 8.58 | 10.58 | 8.21 | 8.43 | 9.56 | 11.00 | 11.34 | **13.37** |
| |Llama-3-8B-It| SLiC-HF | ORPO | CPO | RRHF | KTO | IPO | RPO | R-DPO | SimPO | **Ours** |
|---------|---------|---------|------|------|------|------|------|------|-------|-------|----------|
| LC WR |22.92 | 26.9 | 28.5 | 28.9 | 31.3 | 33.1 | 35.6 | 40.8 | 41.1 | 44.7 | **48.2** |
| WR |23.15 | 27.5 | 27.4 | 32.2 | 28.4 | 31.8 | 35.6 | 41.7 | 37.8 | 40.5 | **53.2** |
If there are other baselines you would like us to consider, please let us know and we would be happy to include them in our comparisons.
**Weakness 2 and Question 1: LLM evaluators are typically not that stable when generating scalar rewards.**
- The LLM or reward model (RM) judges are typically trained on human preference data, and the resulting reward scores reflect the expected preferences across annotators—akin to Elo scores in Bradley-Terry models.
- Compared to direct preference optimization methods such as DPO, our approach is more robust to the instability of scalar rewards. For instance, when only a sample-based estimate of the true preference is available or when using function approximators, it is common for high-quality rejected responses to be favored by more annotators. DPO, which attempts to maximize the reparameterized reward gap, may degrade the model in such cases. In contrast, our method is sensitive to varying response quality and effectively learns from the full spectrum of rewards.
- We further performed an ablation study to assess the robustness of our method to different scalar reward scales generated by LLM evaluators. Using the UltraFeedback dataset, which provides reward scores in the range of 1–10, we linearly rescaled the rewards to 1–5 and 1–100. We then applied our method to these modified datasets. We observed that our method is robust to the scale of the reward scores. The results are summarized in the following table:
| | Qwen2-7B-It | +DPO (UF) | +DPO (RA, 5) | +DPO (RA, 10) | **+DPO (RA, 100)** |
|--------------|-------------|-----------|--------------|---------------|-------------------|
| LC Win Rate | 20.93 | 21.46 | 29.85 | 31.17 | 31.81 |
| Win Rate | 18.22 | 19.35 | 26.12 | 27.58 | 27.96 |
**Suggestion 1: Expand the method section with more proofs and guarantees on how the proposed direction can change the preference selection research direction.**
We thank the reviewer for the helpful suggestion. In the revised version, we will include a proof sketch in Section 4, incorporating the key analysis and lemmas from Appendices A.5 and A.6. Additionally, we will highlight the observed limitations of DPO—such as those illustrated in Figure 3 and similar trends reported on the O.O.D. data HelpSteer2 below—to provide a more comprehensive discussion.
|Rejected score| 8| 9 |10|
|-|-|-|-|
| Qwen2-7B-It |-416.7 | -356.5 | -334.8 |
| +DPO (UF)|-484.5 | -419.4 | -401.7 |
| +DPO (RA)| -438.6 | -366.4 | -341.1 |
**Question 3: How to identify the optimal goal reward value?**
The optimal goal reward depends on the value range of the judge model and its scale as presented in the training prompt. For instance, the optimal goal is $1$ when using sigmoid-based reward models, or $5$ when using LLM judges that follow evaluation criteria with a maximum score of $5$. Both types of reward values can be rescaled, e.g., via linear transformations in the training prompt, as demonstrated in Appendix B.3.
---
We hope the reviewer could consider raising the score if we resolved the reviewer's concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments. | null | null | null | null | null | null |
Enhancing Visual Localization with Cross-Domain Image Generation | Accept (poster) | Summary:
This paper proposes a novel cross-domain data generation framework to enhance visual localization in scenarios with significant domain variations. The main results look solid and impressive. The contributions include 1) A modified 3D Gaussian Splatting framework that models photometric variations via learnable embeddings, suppresses dynamic objects using confidence maps, and employs a two-stage training strategy to mitigate hallucination noise. 2) A text-guided image editing model (fine-tuned with scene priors) to augment sparse secondary domains (e.g., nighttime data). 3) A method to synthesize pose-consistent training data and positional attention to resolve cross-camera feature misalignment.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No Theoretical Claims.
Experimental Designs Or Analyses: The experimental designs are sound, including comparisons on dataset 360Loc (Atrium Day, Concourse Day, Hall Day, Piatrium Day, Atrium Night, Concourse Night, Hall Night, Piatrium Night).
The ablation is also conducted on Effectiveness of Cross-Domain 3DGS, Effectiveness of Fine-Tuning Strategy, and Effectiveness of Positional Attention.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: 1) 3DGS: Extends Scaffold-GS with photometric modeling by addressing limitations in dynamic scenes.
2) NeRF-W: Adopts uncertainty-aware rendering for dynamic object suppression.
3) InstructPix2Pix: Leverages diffusion models for domain transfer and adds scene-specific fine-tuning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: 1) The practical focus on real-world challenges including cross-camera and long-tail data.
Weakness: 1) Limited discussion of failure cases.
2) No analysis of computational costs.
3) Is there any possibility to include more comparison methods?
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1 Limited discussion of failure cases.**
Thank you for your valuable suggestions.
We observe two main failure cases in our current method. First, in scenes like Piatrium_Night that contain extremely dark regions, the fine-tuned image editing model sometimes fails to produce realistic textures. This highlights the limitation of fine-tuned InstructPix2Pix in handling low-light appearances. Second, when significant static geometry changes are present in the query images (such as structural modifications), our cross-domain localization performance may degrade. In the future, this issue could be addressed by incorporating static geometry editing into the cross-domain image generation process, thereby enhancing data diversity and improving robustness in such scenarios. We have updated the manuscript to include the discussion of failure cases.
**2 No analysis of computational costs.**
We apologize for the missing details. All experiments were conducted on a single NVIDIA RTX 4090 GPU. Fine-tuning the image editing model for each scene takes approximately 10 hours on average. Cross-domain 3DGS training and image generation require at most 4 hours per scene. Visual localization training takes roughly 16 hours per scene. During inference, the localization method MS-T with the proposed positional attention mechanism runs at 32.51 ms per frame. We have updated the manuscript to include the computational costs of the experiments.
**3 Is there any possibility to include more comparison methods?**
1. For the cross-domain visual localization task, our work primarily focuses on Absolute Pose Regression (APR) methods, and the baselines we compare against are the state-of-the-art APR methods provided by the 360Loc benchmark. To the best of our knowledge, there are currently no other advanced methods targeting this task.
2. For the image generation task, we have included comparative experiments with the advanced GS-W [1], which is designed for unconstrained scenes. Since GS-W does not address the long-tail data distribution issue, we conduct quantitative experiments on the Atrium_Day scene, which contains sufficient data. Additionally, we extend its training iterations to **140,000** for the large-scale evaluation scene.
| | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|-------------------------|--------|---------|---------|
| GS-W | 18.85 | 0.635 | 0.594 |
| **Cross-domain 3DGS(Ours)** | **24.64** | **0.758** | **0.231** |
As shown in the table, our method significantly outperforms GS-W while using fewer iterations (**60,000**). This demonstrates that our method has stronger scene modeling capabilities for large-scale scenes.
[1] Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections [ECCV 2024] | Summary: The paper focuses on improving visual localization accuracy with cross-domain image generation by three contributions. First, a crossdomain is developed 3DGS to to generate realdomain consistent images. Second, a text-guided image editing model is presented to enhance data diversity for addressing the long-tail distribution problem. Third, an anchor-based method is developed to generate highquality datasets for visual localization. Extensive experiments demonstrate that the method improve visual localization performance on Loc360 dataset.
Claims And Evidence: Insufficiently. The authors claim that the proposed text-guided image editing model enhances data diversity to address the long-tail distribution problem. In my opinion, the validation is not sufficient. To validate the effectiveness of the fine-tuning strategy, the paper provides some visualization results, such as Fig. 7 and Fig. 2 in the Supplementary Material. However, quantitative results are missing. How the strategy improves actual localization accuracy is not discussed.
Methods And Evaluation Criteria: Yes, the proposed methods can be used for enhancing visual localization performance.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiment is somewhat insufficient at the following aspects.
1. Limited results beyond 360 images. In Tables 1 and 2, the localization improvements on 360Loc are presented. It is suggested to add results on common localization datasets.
2. Limited results on APR methods. The experimental results demonstrate the improvement of APR methods. Additionally, SCR methods generally show more accurate performance than APR methods. Can the proposed method also enhance the localization accuracy of SCR methods?
3. More quantitative results in the ablation studies are preferred. To validate the effectiveness of the fine-tuning strategy, the paper presents visualization results. However, quantitative results are missing.
Supplementary Material: Yes. I have reviewed all the supplementary material.
Relation To Broader Scientific Literature: The main contribution lie in the usage of cross-Domain image generation with 3D GS and image editing model for enhancing visual localization.
Essential References Not Discussed: Essential References are discussed.
Other Strengths And Weaknesses: Strength.
1. A cross-domain 3DGS to generate real-domain consistent images.
2. A text-guided image editing model to enhance data diversity.
3. An anchor-based method to generate high-quality datasets for visual localization.
Weakness.
1. The contributions seem incremental.
(1) The proposed Cross-Domain 3D Gaussian Splatting is based on Scaffold-GS with dynamic object suppression and a training strategy, which is largely an engineering effort.
(2) The proposed text-guided image editing model seems to be an application of InstructPix2Pix. The original contribution of the paper is not clear.
2. The writing needs further improvement.
(1) The Introduction is somewhat confusing and hard to read. It is suggested to rewrite it to express the challenges and contributions more clearly.
(2) In the Method section, it is suggested to present an overview of the whole method, especially where the paper's contributions lie. Meanwhile, Fig. 2 is also suggested to be modified.
3. The motivation for APR methods. The localization enhancement pipeline is designed for APR methods. However, as current works show, APR methods are generally less accurate than SCR methods. Why not use the cross-domain image generation for SCR methods?
4. The experiments are somewhat insufficient. Please see the section “Experimental Designs Or Analyses” for details.
Other Comments Or Suggestions: The comments and suggestions are listed in the Strengths And Weaknesses part.
Questions For Authors: In Tables 1 and 2, the paper presents the localization accuracy with cross-domain image generation. There are two questions: which method does the paper use, and how many generated images are used? Please present more details.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Limitation results on 360Loc.**
Thank you for your valuable suggestion. Our work targets the challenging task of cross-domain visual localization, which requires datasets containing query images captured by **various types of cameras**. To the best of our knowledge, 360Loc is the only existing benchmark with cross-camera query images. Therefore, we have exclusively selected 360Loc as our evaluation benchmark.
**2. Why not use cross-domain image generation for SCR methods? Can the proposed method enhance the localization accuracy of SCR methods?**
Thank you for your valuable question. We would like to clarify the following points: 1) The baseline methods we evaluated follow the 360Loc benchmark, which does not include SCR methods for comparison. 2) In theory, our method could enhance the accuracy of SCR by using cross-domain 3DGS to render paired RGB and depth images. 3) Adapting SCR to cross-domain tasks is more complex, as SCR relies on **camera intrinsics and distortion parameters**, such as scene coordinate calculations, reprojection, and pose solving via PnP. However, existing SCR methods are designed for **pinhole cameras**. Adapting SCR to 360° and fisheye cameras requires deriving tailored methods. These enhancements to SCR are beyond the scope of our work, but it is a promising direction for future research.
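For context on why the SCR adaptation is nontrivial: SCR methods supervise 2D-3D correspondences through the pinhole projection model sketched below. This is an illustrative sketch of the standard textbook model, not code from the paper; 360° and fisheye images do not follow this projection, which is why reprojection and PnP-based pose solving must be re-derived for them.

```python
import numpy as np

# Minimal pinhole-camera projection, the model SCR pipelines assume.
# (Illustrative sketch only; 360-degree and fisheye cameras violate it.)
def project(K, R, t, X):
    """Project 3D world points X (N, 3) to pixels with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t               # world -> camera coordinates
    uv = Xc[:, :2] / Xc[:, 2:3]    # perspective divide (pinhole assumption)
    return uv @ K[:2, :2].T + K[:2, 2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],
                [0.1, -0.1, 4.0]])
pix = project(K, R, t, pts)

assert np.allclose(pix[0], [320.0, 240.0])   # a point on the optical axis maps to the principal point
assert np.allclose(pix[1], [332.5, 227.5])
```

Distortion parameters and per-model ray geometry would have to replace the perspective divide for non-pinhole cameras, which is the derivation work the authors describe as out of scope.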
**3. More quantitative results to validate the effectiveness of the fine-tuning strategy.**
Thank you for your valuable suggestion. To validate the effectiveness of our fine-tuning strategy, we conducted quantitative experiments on the atrium scene. We employed the image similarity metrics SSIM and LPIPS to evaluate the similarity between the generated images and the ground truth.
| | SSIM ↑ | LPIPS ↓ |
|------------|--------|---------|
| Pretrained | 0.338 | 0.428 |
| Finetuned | **0.709** | **0.337** |
As shown in the table, the image similarity significantly improved after fine-tuning, which demonstrates the effectiveness of the proposed fine-tuning strategy.
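For readers unfamiliar with the metric, here is a minimal single-window SSIM sketch (my own illustration; practical evaluations like the one above presumably use a windowed implementation such as scikit-image's):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two grayscale images in [0, data_range].

    Simplified variant of the windowed SSIM used in practice; shown here
    only to illustrate how the metric rewards luminance, contrast, and
    structure agreement.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = np.clip(img + 0.2 * rng.standard_normal((32, 32)), 0.0, 1.0)

assert abs(global_ssim(img, img) - 1.0) < 1e-9   # identical images score 1
assert 0.0 < global_ssim(img, noisy) < 1.0       # noise strictly lowers the score
```

Higher SSIM (and lower LPIPS) thus indicates the fine-tuned generator produces images closer to the ground truth, which is what the table reports.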
**4. The proposed Cross-Domain 3DGS is engineered.**
Our method builds upon Scaffold-GS but introduces key innovations to address challenges like appearance variations, dynamic object interference, and long-tail data distributions. We propose a photometric modeling scheme combining histogram-based embeddings and compensation to handle appearance changes, incorporate photometric priors for dynamic object uncertainty, and fine-tune a text-guided image editing model to augment sparse datasets in long-tail distributions. These improvements form the core innovation of our Cross-Domain 3DGS.
**5. The proposed text-guided image editing model seems an application of InstructPix2Pix. The original contribution of the paper is not clear.**
To address the long-tail distribution issue in the dataset, we use InstructPix2Pix to transform daytime images into sparse nighttime images for data augmentation. However, directly using the pre-trained model leads to domain inconsistencies and hallucinations, as shown in Figure 7. To overcome this, we propose a fine-tuning strategy. We use 3DGS to render daytime images corresponding to sparse nighttime images, creating a domain shift dataset that enables InstructPix2Pix to learn domain-consistent transformations. Additionally, to mitigate hallucinations due to the sparse domain shift dataset, we generate a scene prior dataset by rendering daytime images that cover the entire scene, enriching the model with more scene priors.
**6. It is suggested to rewrite the Introduction to clearly express the challenge and contributions, provide an overview of the entire method, and modify Fig. 2.**
1. Thank you for your constructive suggestions. We have revised the Introduction to better highlight the core challenges of cross-domain visual localization and our contributions. We begin by emphasizing the challenges in cross-domain localization, followed by the motivation for introducing 3DGS to address these issues. We then discuss the technical challenges in designing cross-domain 3DGS and the solutions we developed. Finally, we discuss the issues encountered when training visual localization methods with data generated based on 3DGS and how our method is designed to overcome these training issues.
2. We have rewritten a more structured overview of our method and modified Figure 2 to more clearly express the design of cross-domain 3DGS and the training of the visual localization method. We have also updated the paper to include the Impact Statement.
**7. In Tables 1,2, which method does the paper use, and how many generated images are used?**
1. Sorry for the unclear details. The method used in the paper is MS-T.
2. We provide the detailed counts used for each scene (including self-rotated images and ground truth).
| Atrium | Concourse | Hall | Piatrium |
|--------|-----------|-------|----------|
| 17,430 | 14,730 | 16,200 | 18,960 |
We have updated the paper to include these details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. After reading the rebuttal and other reviews, most of my concerns are addressed. I change my Overall Recommendation to weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your engagement and constructive feedback, which has helped us improve our paper. We sincerely appreciate your raised score. | Summary: This paper addresses cross-domain visual localization challenges by proposing a novel data generation framework based on 3D Gaussian Splatting (3DGS). The key contributions include: (1) a cross-domain 3DGS that models photometric variations and mitigates dynamic object interference, (2) a text-guided image editing model for addressing long-tail distribution problems, (3) an anchor-based method for high-quality dataset generation, and (4) a positional attention mechanism for cross-camera data ambiguities. Experiments show improvement across domains compared to baselines.
Claims And Evidence: This paper clearly identifies limitations of single-domain methods and provides a convincing rationale for the method. Experimental results on the 360Loc dataset with different domains demonstrate improvements.
Methods And Evaluation Criteria: Using 3DGS for high-fidelity image synthesis and introducing learnable photometric embeddings are effective in handling variations. The evaluation on the 360Loc dataset with different camera types and lighting conditions provides a realistic and diverse experiment.
Theoretical Claims: The theoretical aspects are sound.
Experimental Designs Or Analyses: The use of a dataset that includes multiple camera domains and varying lighting conditions, together with the construction of a long-tailed distribution dataset, effectively simulates real-world scenarios. The reported improvements demonstrate the effectiveness.
Supplementary Material: All
Relation To Broader Scientific Literature: Building upon advances in neural rerendering for unconstrained photo collections, this paper makes a contribution by leveraging these techniques to enhance visual localization through cross-domain image generation.
Essential References Not Discussed: The paper would benefit from discussing several relevant works in the areas of neural rerendering for unconstrained photo collections and cross-domain visual localization:
1. Neural Rerendering for Unconstrained Photo Collections:
- Neural Rerendering in the Wild [CVPR'19]
- NeRF in the Wild [CVPR'21]
- Ha-NeRF [CVPR'22]
- Neural Scene Chronology [CVPR'23]
- NeRF On-the-go [CVPR'24]
- SpotLessSplats [TOG'25]
2. Cross-Domain Visual Localization:
- Adversarial training for adverse conditions: Robust metric localisation using appearance transfer [ICRA'18]
- Night-to-Day Image Translation for Retrieval-based Localization [ICRA'19]
- Retrieval-based localization based on domain-invariant feature learning under changing environments [IROS'19]
- Adversarial feature disentanglement for place recognition across changing appearance [ICRA'20]
- Place Recognition under Occlusion and Changing Appearance via Disentangled Representations [ICRA'23]
Including these references would strengthen the paper by providing a more comprehensive context for the proposed methods and highlighting the novelty of the approach in relation to existing work.
Other Strengths And Weaknesses: **Strengths:**
- Addresses a relevant real-world problem
- Innovative cross-domain data generation framework
- Creative use of text-guided image editing for data augmentation
- Effective positional attention mechanism for cross-camera data
**Weaknesses:**
- Limited discussion of related work
Other Comments Or Suggestions: Figure 5 layout should be improved with a horizontal arrangement: GT, 360, fisheye3, fisheye2, fisheye1, pinhole.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1. Limited discussion of related work**
Thank you for your valuable suggestion. We have expanded the related work section in the revised manuscript by incorporating the recommended references to better highlight the novelty of our proposed method. The revised portion of the related work is shown below:
Recent advances in neural rendering have enabled 3D reconstruction from unconstrained photo collections. Neural Rerendering in the Wild [1] combines traditional 3D reconstruction with neural networks to handle unconstrained scenes. Extensions of NeRF [2] further address challenges in uncontrolled scenes by embedding appearance information and transient uncertainty. In addition, methods such as Ha-NeRF [3] and Neural Scene Chronology [4] focus on capturing temporal variations by employing modules for appearance hallucination and temporal step function encoding. However, their slow training and rendering make large-scale scene modeling and data generation time-consuming. Additionally, the limited parameters of NeRF hinder its ability to effectively represent large outdoor scenes. Recently, 3DGS-based methods have garnered attention due to their faster optimization and rendering efficiency compared to NeRF. Among them, SpotLessSplats [6], GS-W, WE-GS, and WildGaussians have shown potential in modeling appearance variations and dynamic objects in wild scenes. However, they cannot explicitly control photometric properties and are constrained by the performance bottlenecks of pre-trained detectors. Existing methods are also not suitable for scenes with long-tail distribution problems. In this paper, we propose a method that models appearance variations by explicitly encoding photometric histograms, mitigates the impact of dynamic objects without relying on pre-trained detectors, and employs a fine-tuned image editing model to effectively address the long-tail distribution problem.
Several studies have explored cross-domain visual localization under pinhole camera settings. To enhance localization performance, [7] uses invertible generators to produce synthetic images, while [8] converts nighttime images to a more discriminative daytime representation. Other methods focus on learning domain-invariant features to bridge the gap between varying environmental conditions [9]. Additionally, several works [10,11] advocate for disentangling image representations into separate codes that isolate place-specific cues from appearance and occlusion factors, ensuring reliable place recognition. However, these methods are limited to single-camera localization and rely on image retrieval-based localization. Compared to APR methods, such approaches suffer from significantly higher computational costs and storage requirements due to the need to construct and maintain a retrieval database. This paper enhances APR-based cross-domain localization, including cross-camera scenarios, by proposing a novel cross-domain image generation method. Unlike the prior method with limited image augmentation, our method facilitates diverse cross-domain image generation.
[1]Neural Rerendering in the Wild [CVPR'19]
[2]NeRF in the Wild [CVPR'21]
[3]Ha-NeRF [CVPR'22]
[4]Neural Scene Chronology [CVPR'23]
[5]NeRF On-the-go [CVPR'24]
[6]SpotLessSplats [TOG'25]
[7]Adversarial training for adverse conditions: Robust metric localisation using appearance transfer [ICRA'18]
[8]Night-to-Day Image Translation for Retrieval-based Localization [ICRA'19]
[9]Retrieval-based localization based on domain-invariant feature learning under changing environments [IROS'19]
[10]Adversarial feature disentanglement for place recognition across changing appearance [ICRA'20]
[11]Place Recognition under Occlusion and Changing Appearance via Disentangled Representations [ICRA'23]
**2. Figure 5 layout should be improved with a horizontal arrangement: GT, 360, fisheye3, fisheye2, fisheye1, pinhole.**
Thank you for your valuable suggestion. We have revised the layout of Figure 5 to present the images in a horizontal arrangement as suggested. The manuscript has been updated accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. After reading the other reviews and the rebuttal, I recommend accepting this paper. I highly encourage the authors to revise the paper to incorporate the rebuttal, either in the main text or in the supplementary materials.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback and kind recommendation. We will carefully revise the paper to incorporate the clarifications and improvements discussed in the rebuttal. | Summary: This paper proposes a novel cross-domain data generation framework to enhance visual localization in scenarios with significant appearance variations (e.g., lighting conditions, camera types). The key contributions include a cross-domain 3D Gaussian Splatting framework, a text-guided image editing model, an anchor-based dataset generation method, and a positional attention mechanism. Experiments on the 360Loc benchmark demonstrate state-of-the-art performance.
## update after rebuttal
After carefully reading the reviews from other reviewers and the authors' rebuttal, I have decided to maintain my original rating (Weak Accept).
Claims And Evidence: The claims are generally well-supported.
Methods And Evaluation Criteria: 1. The integration of 3DGS with photometric embeddings and dynamic suppression is novel and appropriate for modeling cross-domain variations.
2. The 360Loc dataset is suitable for evaluation.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: 1. What is the APR method used in the proposed approach—PN, MS-T, or other methods?
2. The ablations are mainly evaluated based on rendering quality. It would be beneficial to compare localization performance to demonstrate the relationship between rendering quality and localization performance.
Supplementary Material: I have read the supplementary material, including the implementation details and the quantitative results.
Relation To Broader Scientific Literature: This work builds on 3D Gaussian Splatting, image editing and visual localization.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. Novel framework for cross-domain data generation.
2. SOTA performance on the 360Loc benchmark.
Other Comments Or Suggestions: None.
Questions For Authors: Please refer to "Experimental Designs Or Analyses".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. What is the APR method used in the proposed approach—PN, MS-T, or other methods?**
Sorry for the unclear details. Our proposed approach employs the MS-T method for absolute pose regression (APR).
**2. The ablations are mainly evaluated based on rendering quality. It would be beneficial to compare localization performance to demonstrate the relationship between rendering quality and localization performance.**
We sincerely appreciate your insightful suggestion. To evaluate the impact of rendering quality, we conduct a quantitative experiment on the Atrium scene. Specifically, we compare the original Scaffold-GS (which produces lower-quality renderings compared to our cross-domain 3DGS) combined with our proposed data generation and positional attention mechanism.
| Method | Atrium_Day | Atrium_Night | Average |
|-------------------------|------------|--------------|-------------|
| Scaffold-GS | 3.5/18.2 | 8.4/37.5 | 5.95/27.85 |
| **Cross-domain 3DGS(Ours)** | **2.6/14.5** | **2.2/12.8** | **2.4/13.65** |
As shown in the table, localization performance drops when using the lower-quality images generated by Scaffold-GS. This result demonstrates a positive correlation between rendering quality and localization accuracy. | null | null | null | null | null | null |
Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks | Accept (poster) | Summary: This paper studies the implicit bias of gradient descent for separable classification tasks and non-homogeneous models. The class of non-homogeneous models studied in this paper is quite general and covers many common deep learning models. The results of this paper show that, if the model's deviation from a homogeneous function is bounded, then the iterates of gradient flow/descent converge in direction to a maximum-margin solution. Most importantly, this paper extends the analysis of implicit bias from homogeneous models to a much more general class of non-homogeneous models that are relevant to deep learning.
Claims And Evidence: The theorem statements are clear and easy to understand. And the paper devotes significant length into illustrating the relevance and generality of its results. I do not have any major issue regarding the claims of the paper.
The main results are not too surprising because they are intuitively the direct extension of the results on homogeneous models. And the high-level proof strategy seems to follow the existing literature. But of course the generalization to this "nearly-homogeneous" function class would be very technically involved. In light of this, the theorems all make sense and are a solid contribution to the literature.
One big issue, but I think can be fixed easily:
In the definition of $(M, N)$-nearly-homogeneous functions, $M, N$ are assumed to be positive, but Example 4.1.C has $M = 0$?
A few small issues:
1. Some of the "theorems," such as 5.1, 5.2, and 5.3, should be downgraded to lemmas or propositions. Otherwise the results look too crowded.
2. The definitions and assumptions are a bit scattered, so an index of notations/definitions would be a good addition to the appendix.
3. I would like to see $M$-nearly-homogeneous and $(M, N)$-nearly-homogeneous be unified into one definition.
4. I do not quite get how the discussion on o-minimal structures connects with the rest of the paper; the authors should elaborate on this.
Methods And Evaluation Criteria: n/a, this is a theory paper
Theoretical Claims: see above
Experimental Designs Or Analyses: n/a, this is a theory paper
Supplementary Material: I checked up to Appendix B. So, I inspected the proof sketch but not the full proofs. Given that the overall strategy does not deviate too much from the existing works and the results are not surprising, I assume that it is unlikely this paper contains unsalvageable errors.
Relation To Broader Scientific Literature: This work shows the implicit bias of gradient descent for a large class of deep learning models. Many of the existing works on this topic only apply to limited settings such as linear models, deep linear networks, or 2-layer ReLU networks, so this paper is definitely a very valuable addition.
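As a toy illustration of the phenomenon under study (my own sketch, not code or data from the paper): even for a plain linear predictor on separable data, gradient descent on the exponential loss converges in direction to the max-margin separator. The data below is constructed so that the max-margin direction is the first coordinate axis.

```python
import numpy as np

# GD on the exponential loss sum_i exp(-y_i <w, x_i>) with a linear
# predictor.  The three points share label +1 and are placed so the
# minimum-norm w with margins >= 1 is (1, 0); GD finds that direction.
X = np.array([[1.0, 0.5],
              [1.0, -0.5],
              [2.0, 1.0]])
y = np.array([1.0, 1.0, 1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(20000):
    sample_weights = y * np.exp(-y * (X @ w))        # per-sample exp-loss weights
    w += lr * (X * sample_weights[:, None]).sum(axis=0)

direction = w / np.linalg.norm(w)
assert direction[0] > 0.99        # aligned with the max-margin direction (1, 0)
assert abs(direction[1]) < 0.05   # the orthogonal component vanishes
```

The norm of `w` diverges (only logarithmically) while its direction stabilizes, which is exactly the "directional convergence" that this paper establishes for the much harder nearly-homogeneous deep setting.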
As I previously discussed, the technical work of this paper is very profound, but the high-level strategy is already present in the literature. So, I don't think this work offers new insights into the understanding of implicit bias. But given that the technical work is clearly a step above the current literature, I think a weak accept would be most appropriate for this paper.
Essential References Not Discussed: I believe that author should also mention this work:
Du, Simon, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. "Gradient descent finds global minima of deep neural networks." In International conference on machine learning, pp. 1675-1685. PMLR, 2019.
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback. For the technical novelty, please refer to the section **Technical novelty** in our response to the reviewer R5kD. For insights of our results, please refer to the section **Insights of our results** in our response to the reviewer NQfK. Below, we address your other questions.
---
**Q1.** “One big issue, but I think it can be fixed easily: In the definition of (M,N)-nearly-homogeneous functions, M,N are assumed to be positive, but Example 4.1.C has M=0?”
**A1.** Thank you for pointing this out. This issue can be solved by relaxing our near-(M,N)-homogeneity definition to allow $M=0$. Specifically, we call a block s(θ; x) near-(0, N)-homogeneous, if it is independent of θ and near-N-homogeneous in $x$. We will clarify this in the revision.
---
**Q2.** “Some of the ‘theorems,’ such as 5.1, 5.2, and 5.3, should be downgraded to a lemma or proposition. … The definitions and assumptions are a bit scattered, so an index of notations/definitions would be a good addition to the appendix.”
**A2.** We will revise the paper according to your suggestions. Thank you.
---
**Q3.** “I would like to see M-nearly-homogeneous and (M, N)-nearly-homogeneous be unified into one definition.”
**A3.** While these two definitions can be unified, we prefer to keep them separate for the sake of clarity. Since our main results in Sections 3, 5, and 6 only make use of near-M-homogeneity, unifying these two definitions would make these results harder to unpack for the readers.
---
**Q4.** “I do not quite get how the discussion on o-minimal structure connects with the rest of the paper. The author should elaborate on this.”
**A4.** We will add a paragraph to clarify the role of o-minimal structures. Specifically, the o-minimal structure enables the chain rule of Clarke’s subdifferentials (see Lemma A.6), the existence of a desingularizing function (see Lemma C.14), and Kurdyka–Łojasiewicz inequalities (see Lemmas C.19 and C.20). These are crucial tools in our analysis.
---
**Q5.** “As I previously discussed, the technical work of this paper is very profound, but the high-level strategy is already present in the literature. So, I don't think this work offers new insights on the understanding of implicit bias.”
**A5.** We respectfully disagree with these comments. For the technical novelty, please refer to the section **Technical novelty** in our response to the reviewer R5kD. For insights of our results, please refer to the section **Insights of our results** in our response to the reviewer NQfK.
---
**Q6.** “I believe that author should also mention this work … ‘Gradient descent finds global minima of deep neural networks.’....”
**A6.** We will consider discussing this work in the revision. Based on our understanding, the paper you mentioned mainly uses neural tangent kernel (NTK) style techniques to study global convergence. This seems to be quite different from our focus: characterizing the asymptotic implicit bias of GD/GF in near-homogeneous models assuming a strong separability condition. We would appreciate it if the reviewer could further elaborate on the connections between this work and ours.
---
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response and I appreciate their efforts to address the concerns I have raised.
Regarding A6, I would like see this discussion be included in the paper because the differences may not be immediately obvious the readers.
Overall, I think the contribution of this paper is solid even though it can be hard to digest, and I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing our contribution! We will add a discussion on the differences between our work and the NTK papers in the revision. With the discussions on the "Insights of our results" and "Technical novelty" from our responses to Reviewers NQfK and R5kD, we believe our contribution has been made very clear. If you find any specific place hard to digest, please let us know, and we will be happy to provide further clarification! | Summary: The main contribution of this work is a generalization of previous theoretical results on the implicit bias of gradient descent for homogeneous networks to the case including non-homogeneous networks that satisfy a mild near-homogeneous condition, such as linear layers with an additional bias term (i.e., $Ax + b$) which are not homogeneous but empirically shown to converge to max-margin solution in previous works.
In general, I think the theoretical results in this paper are significant, in the sense that they make the study of implicit bias of gradient descent for classification problem (this topic has been studied for a long time) more comprehensive.
Claims And Evidence: The claims are supported by the mathematical proof.
Methods And Evaluation Criteria: Not applicable, as this paper does not have experimental results.
Theoretical Claims: I checked the outline of the proofs, which appears to be correct to me.
Experimental Designs Or Analyses: This paper does not contain experimental results.
Supplementary Material: Not applicable, as this paper does not have supplementary material.
Relation To Broader Scientific Literature: The contributions are mainly related to the implicit bias of gradient descent, where the most prominent results in my view are the convergence to a KKT point of the margin-maximization problem (Lyu & Li, 2019) and the alignment and directional convergence (Ji & Telgarsky, 2020) for homogeneous deep neural networks.
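For reference, the margin-maximization program and its KKT conditions alluded to here, in the standard form used in the homogeneous-network literature (reproduced from memory, so exact normalizations may differ from the cited papers):

```latex
% Margin maximization (after normalizing the margin to 1):
\min_{\theta}\ \frac{1}{2}\|\theta\|_2^2
\quad\text{s.t.}\quad y_i\, f(\theta; x_i) \ge 1,\quad i = 1,\dots,n.

% KKT conditions: there exist multipliers \lambda_i \ge 0 such that
\theta = \sum_{i=1}^{n} \lambda_i\, y_i\, \nabla_\theta f(\theta; x_i),
\qquad
\lambda_i \bigl( y_i\, f(\theta; x_i) - 1 \bigr) = 0 \quad \forall i.
```

The cited works show that, for homogeneous networks, the direction of the gradient-flow/descent iterates converges to a point satisfying these conditions; the present paper extends this to nearly-homogeneous models.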
Essential References Not Discussed: The discussion of essential references is sufficient.
Other Strengths And Weaknesses: A minor weakness is the lack of experimental evidence in this paper.
In addition, as many techniques have already been discussed in previous works (since the implicit bias of gradient descent is an important topic), I think the core contribution of this work is how to handle the difficulties brought by the non-homogeneity, hence it would be better for the authors to have a section to discuss the technical novelty.
Other Comments Or Suggestions: 1. The main results are using exponential loss only, while through the whole paper it is replaced implicitly by the notation $\ell$. Then my question is whether the formulation of exponential loss is necessary. Can the results still be valid for multi-classification with cross-entropy loss? This is the case for homogeneous networks. Will the non-homogeneity bring any additional difficulties?
2. I think it would be best to have some experimental results to make this paper more complete, but I understand that the current version is also fine.
Questions For Authors: What are the main difficulties when solving the non-homogeneity compared to previous works and how do the authors introduce new techniques to solve them?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for supporting our paper! We address your questions first and then discuss our technical novelty at the end of this response.
---
**Q1.** “...my question is whether the formulation of exponential loss is necessary. Can the results still be valid for multi-classification with cross-entropy loss?...”
**A1.** Our results can be extended to other loss functions with an exponential tail, such as the logistic loss. This is because our analysis focuses on the late training regime, assuming that the predictor already classifies all data very well. In this regime, only the tail property of the loss function matters. Additionally, using arguments in [Lyu & Li 2020, Appendix G], there is no mathematical difficulty in extending our results to multi-class settings with the cross-entropy loss. We will explain these in detail in the revision.
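A quick numerical check (my own sketch, not from the rebuttal) of why only the tail matters: for a large margin $z$, the logistic loss $\log(1+e^{-z})$ agrees with the exponential loss $e^{-z}$ up to an $O(e^{-z})$ relative error.

```python
import math

def exp_loss(z):
    return math.exp(-z)

def logistic_loss(z):
    return math.log1p(math.exp(-z))

# log(1 + e^{-z}) = e^{-z} - e^{-2z}/2 + ..., so in the late-training
# regime (large margins) the two losses are asymptotically identical.
for z in [5.0, 10.0, 20.0]:
    rel_gap = abs(logistic_loss(z) / exp_loss(z) - 1.0)
    assert rel_gap < math.exp(-z)   # relative gap shrinks like e^{-z}
```

This is the sense in which an analysis carried out for the exponential loss transfers to any loss with an exponential tail.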
---
---
## **Technical novelty**
We make significant technical innovations in our analysis. Below, we discuss them in our GF and GD analysis, respectively.
### A. Technical novelty in GF analysis
In our gradient flow analysis, we make the following innovations, rather than simply combining our near-homogeneity conditions with techniques from [Lyu and Li 2020, Ji and Telgarsky 2020].
**1. Margin improvement.** [Lyu and Li 2020] directly analyzed the smoothed margin (see their equation (3)). This does not work for non-homogeneous predictors. Instead, we have to analyze a modified margin (see (6) in lines 213-215). Identifying the correct modification of the margin involves a sharp balance of many error terms. This is quite technical, as we aim to cover as many non-homogeneous functions as possible; behind the scenes, we solve multiple ODEs to identify the right formula. Coming up with this modified margin is a significant technical contribution.
**2. Directional convergence.** The analysis in [Ji and Telgarsky 2020] heavily relies on a specific form of GF defined using the minimal-norm subgradient; they then analyzed the corresponding spherical and radial components (see their Lemmas C.2 and C.3). This again fails to work for non-homogeneous predictors. To address this issue, we have to consider a different form of GF defined with a special subgradient, which enables a special spherical and radial decomposition, as shown in equations (21) and (22) and Lemma C.17. Moreover, we have to use an advanced property of the o-minimal structure to show that the choice of subgradient does not affect the global path property of GF (see Lemma A.6).
**3. KKT convergence.** Our KKT convergence proof for non-homogeneous predictors is new. Although KKT conditions are well established for a homogeneous predictor that maximizes the margin, it is a priori unclear why an optimal non-homogeneous predictor admits KKT conditions. To address this issue, we first need to prove the existence of a sufficiently good homogenization of a near-homogeneous predictor (see Theorem 5.1). Then we come up with a set of KKT conditions of a margin maximization program; however, this program is defined using the homogenization rather than the original predictor. Because of this discrepancy, we have to design and analyze new dual variables that involve both the non-homogeneous predictor and its homogenization (see equation (44)).
### B. Technical novelty in GD analysis
We solve many additional technical challenges when extending our GF analysis to GD. Due to the space limit, we highlight two of them:
**4. Stepsize condition.** Although [Lyu and Li 2020] handled GD in the homogeneous case, their stepsize assumption depends on a “constant” $C_{\eta}$ (see their Assumption (S5) in Appendix E.1), which turns out to be a function of the initial margin (see their Page 32), preventing the stepsize from being large. In comparison, our assumptions on the stepsize are weak, determined by an explicit and interpretable separability condition (see Assumption 5). To achieve this, we have to conduct a technical, yet much tighter, analysis of the GD path (see our Theorem F.9 and Lemma F.10). This goes beyond the techniques of [Lyu and Li 2020].
**5.Directional convergence.** Our directional convergence analysis for GD is new. Note that [Ji and Telgarsky 2020] only analyzed GF but not GD, where tools from differential equations are unavailable. To address this issue, we construct an arc by connecting all GD iterates using line segments. Even so, analyzing the directional limit of this arc is extremely technical, involving careful estimations of the spherical and the radial parts of this arc (see Lemmas F.16 and F.17, which are not needed for GF). These additional technical challenges do not exist in the GF analysis but must be addressed in the GD analysis.
We will highlight these in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I do not have further questions, and I support "Accept" as before. | Summary: The paper characterizes the implicit bias of non-homogeneous deep models trained with gradient-flow or gradient-descent to minimize an exponential loss, under some seperability and near-homogeneity conditions. The results are extensions of previous works that found similar implicit bias in strictly homogeneous models.
## Update After Rebuttal
I have increased my score, assuming that the authors will incorporate the changes discussed during the rebuttal into the final version of the paper.
Claims And Evidence: The paper is theoretical in nature, and supports its claims with clear and convincing proofs.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the proofs in Sections C.1-C.4, only briefly read the proofs in the other appendices, and did not find any fundamental issues.
Experimental Designs Or Analyses: There are no experiments in the paper.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper contributes to the study of the implicit bias of gradient-based training of deep neural networks. Specifically, the paper extends previous results from the strictly-homogeneous setting to the nearly-homogeneous setting, thus extending the types of models to which the results apply.
Essential References Not Discussed: I do not think that there are any essential references not discussed.
Other Strengths And Weaknesses: The paper is well organized, and clearly written.
Other Comments Or Suggestions: 1. Possible typo in line 136-l.
2. Typo in line 557.
3. Typo in line 598.
4. Typo in line 344-r.
5. Possible typo in line 885.
6. Is there an intuitive meaning to $p_a$? If so, it should be discussed briefly in the main paper.
7. Possible typo in line 1043.
8. Typo in line 4008.
9. Line 1187 — $\zeta_t$ is not the curve swept by the normalized parameters, but its length.
10. Line 1212 — which proof does “We skip the proof here” refer to? Seems to be regarding Lemma C.14, which is proved later on, on page 27.
11. Line 1624 — what does “for all $i \in [n]$” refer to? Is the supremum also being taken over $i$?
12. Proof of Lemma C.22 — there is a mismatch between the value of $\delta$ in line 1630, and the RHS in line 1681.
13. Starting from page 35, there seems to be a change in the naming of the assumptions, e.g. (A1)-(A3), (B1)-(B3), which does not match the main text and is not clearly addressed.
14. Line 1925 — “Proposition” should be “Lemma”.
Questions For Authors: As shown in Theorem 5.1, Assumption 1 implies that the model is asymptotically $M$-homogeneous, and gives conditions for the increase of the norm of the parameters in Lemma C.4. In light of this, Assumption 2 seems to be “assuming what needs to be proven”. Specifically, it assumes that the parameters are already in a regime where (1) the dominant term is of order $>M-1$ (since $deg (p_a) = M-1$, and this exponent bounds the loss), and (2) due to Lemma C.4, that the norm of the parameters shall increase such that this behavior persists. Together, this implies that the main result applies only in the regime where the model is already “practically” homogeneous.
This is explicitly said in line 198-r.
In light of this, the contributions of the paper given previous results seem marginal, as, in addition, and as far as I noticed, there is no significant technical novelty beyond that of [1, 2].
Can you please clarify whether this observation is correct? Is there some delicate technical issue that I am missing?
[1] Lyu, Kaifeng, and Jian Li. "Gradient Descent Maximizes the Margin of Homogeneous Neural Networks." International Conference on Learning Representations.
[2] Ji, Ziwei, and Matus Telgarsky. "Directional convergence and alignment in deep learning." Advances in Neural Information Processing Systems 33 (2020): 17176-17186.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and will fix all grammar issues and typos in the revision. For the technical novelty, please refer to the section **Technical novelty** in our response to the reviewer R5kD. For insights of our results, please refer to the end of this response. Below, we address your other questions.
---
**Q1.** "Is there an intuitive meaning to $p_a$? If so, it should be discussed briefly in the main paper."
**A1.** $p_a$ is used in our Assumption 2. This choice of $p_a$ guarantees that the class of near-homogeneous functions under Assumptions 1 and 2 all behave close enough to a homogeneous function along the GF path for $t\ge s$. Concretely, as shown in Lines 945-950, $p_a$ is chosen such that $g:= f - p_a$ satisfies a one-sided inequality of the homogeneous condition (see equation (3) in Line 152). This is a crucial design choice that enables our proof.
---
**Q2.** "Possible typo in line 136-l, line 885 and line 1043."
**A2.** Thanks for pointing these out. These are all typos. (i) The period in line 136-l should be a comma; (ii) $\log \phi$ in line 885 should be replaced by $\phi$; (iii) In line 1043, $G$ in the denominator should be $G_t$. We will make sure to correct them and other typos in the revision.
---
**Q3.** Starting from page 35, there seems to be a change in the naming of the assumptions, e.g. (A1)-(A3), (B1)-(B3), which does not match the main text and is not clearly addressed.
**A3.** We will fix this in the revision.
---
**Q4.** “Line 1212 — which proof does “We skip the proof here” refer to? Seems to be regarding Lemma C.14 which is proved later on, in page 27.”
**A4.** You are correct. The proof of Lemma C.14 is on page 27. We will polish this part (and all the appendix) carefully in the revision.
---
**Q5.** “Line 1624 — what does “for all $i \in [n]$” refer to? Is the supremum also being taken over $i$?”
**A5.** This is a typo. In this place, we define $B\coloneqq [\gamma^{GF}(\theta_s)]^{-1/M}$. Then we can verify that
$$
\sup_{t \ge s, i \in [n]} \bar{f}_{M,i}^{-1/M} (\theta_t) \cdot \|\theta_t\|_2 \le B.
$$
This suffices for the remaining proof. We will fix this in the revision.
---
**Q6.** "Proof of Lemma C.22 — there is a mismatch between the value of $\delta$ in line 1630, and the RHS in line 1681."
**A6.** This is a typo. The correct formula for $\delta$ is
$$
\delta := n B^2 \frac{1+2 p_a(\rho_t)}{M \bar f_{M,\min }(\theta_t)}.
$$
---
**Q7.** "In light of this, the contributions of the paper given previous results seem marginal, as, in addition, and as far as I noticed, there is no significant technical novelty beyond that of [1, 2]. "
**A7.** We respectfully disagree with your assertions that our contributions are marginal and our technical novelty is not significant. For the technical novelty, please refer to the section **Technical novelty** in our response to the reviewer R5kD. For insights of our results, please refer to the end of this response.
---
---
## **Insights of our results**
We highlight insights from our results in the following three aspects.
**1. A good definition = good insights.** As a first step towards understanding implicit bias for non-homogeneous models, identifying the proper function classes for which meaningful theoretical insights can be extracted is already a challenging task. There are a few attempts in prior literature, but they seem to be far less fundamental than ours (see paragraph “Non-homogeneous Predictors” in Section 1.1). Our Definition 1 provides a natural quantification of the homogeneity error, covers a large class of networks, and has rich implications on the implicit bias. Such a powerful definition is rare, which already carries significant insights in our opinion.
**2. Insights for understanding implicit bias.** Our results suggest that a near-homogeneous predictor has the same implicit bias as its companion homogenization (see Theorem 3.4 and 5.1). This provides an important insight: understanding the implicit bias of a non-homogenous predictor can be reduced to understanding that of its homogenization. Compared to a generic non-homogenous model, its homogenization is much simpler to study (despite still being challenging).
**3. Broader implications beyond DL theory.** To the best of our knowledge, our near-homogeneity definition is novel and cannot be found even in pure math literature. As homogeneity is a fundamental math concept, widely used in many areas (such as PDE, harmonic analysis, and semi-algebraic geometry), our notion of near-homogeneity could motivate broader mathematical research beyond deep learning theory. In this regard, we believe technical tools established in our paper, integrating the near-homogeneity with o-minimal structures and non-smooth analysis (see also section **Technical novelty** in our response to reviewer R5kD), might be of broader interest.
We will include these discussions in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response.
I agree with the authors' claims regarding the insights and technical novelty, in particular about the novelty of the definitions, and will keep my score.
In light of this discussion, I think that future revisions of the paper can benefit from a more thorough discussion of the necessity of Assumption 2. In particular, the assumption can be separated into two components — 1) a separability condition that is also used for homogeneous predictors, which has the intuitive interpretation of perfectly classifying the training set, and 2) an assumption on the margin/magnitude of the predictor. Is the additional assumption only an artifact of the analysis, or is it a fundamental requirement for near-homogeneous models?
---
Reply to Comment 1.1.1:
Comment: We are glad that you agree with the insights of our results and our technical novelties. If you have any further concerns, please let us know, and we will be happy to discuss them more.
Your interpretation of our Assumption 2 is correct. Regarding your follow-up question, we argue that our Assumption 2 is necessary (in the worst-case sense) for generic near-homogeneous models to exhibit implicit bias.
To see this, consider the following simple example: $\theta\in \mathbb{R}$, $(x,y)=(1,1)$, $p_a(|\theta|) = M |\theta|^{M-1}$ for an odd integer $M\ge 3$, and $f(\theta) = \theta^M+p_a(|\theta|)$.
It is clear that such a predictor satisfies Assumption 1. Moreover, our Assumption 2 is equivalent to
$$L(\theta_s) = \exp( - f(\theta_s) ) < \exp( - p_a(|\theta_s|) )
\quad \Longleftrightarrow\quad \theta_s^M > 0
\quad \Longleftrightarrow \quad \theta_s > 0.
$$
If the above condition does not hold, then $\theta_s \le 0$. Notice that $0$ is a stationary point for $L(\theta)$. So GF initialized from $\theta_s$ cannot produce positive parameters in the future, that is, $\theta_t \le 0$ for every $t\ge s$. Hence, GF cannot minimize the loss or exhibit any implicit bias suggested by our theorems.
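As a sanity check, the dynamics of this one-dimensional example can be simulated with plain gradient descent (a simple discretization of GF). Below is a minimal sketch assuming $M=3$, so $f(\theta) = \theta^3 + 3\theta^2$; the step size and horizon are arbitrary illustration choices, not values from the paper.

```python
import math

# One-dimensional example from above: f(theta) = theta^M + p_a(|theta|)
# with M = 3 and p_a(r) = M * r^(M-1), so f(theta) = theta^3 + 3*theta^2.
M = 3

def f(theta):
    return theta**M + M * abs(theta)**(M - 1)

def grad_f(theta):
    # For M = 3, p_a(|theta|) = 3*theta**2 is smooth, so f'(theta) = 3*theta^2 + 6*theta.
    return 3 * theta**2 + 6 * theta

def run_gd(theta, lr=0.01, steps=5000):
    # Gradient descent on L(theta) = exp(-f(theta)); grad L = -f'(theta) * exp(-f(theta)).
    for _ in range(steps):
        theta += lr * grad_f(theta) * math.exp(-f(theta))
    return theta, math.exp(-f(theta))

bad_theta, bad_loss = run_gd(-0.5)   # Assumption 2 violated: theta_s <= 0
good_theta, good_loss = run_gd(0.5)  # Assumption 2 holds: theta_s > 0
print(bad_theta, bad_loss, good_theta, good_loss)
```

Starting from $\theta_s \le 0$ the iterate stays nonpositive and the loss stalls at a strictly positive value, whereas starting from $\theta_s > 0$ (where Assumption 2 holds) the loss is driven toward zero.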
This explanation should clarify your concerns. We will add this discussion in the revision. | Summary: This paper establishes the asymptotic implicit bias of gradient descent for generic non-homogeneous deep networks under exponential loss. Specifically, the authors show that (1) the normalized margin increases nearly monotonically, (2) the direction of the parameters converges, and (3) the directional limit satisfies the KKT conditions of the margin maximization problem.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, but the results look reasonable to me.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: No.
Relation To Broader Scientific Literature: The major contribution of this paper is to generalize previous results on simpler models to similar conclusions for a more general model. Additionally, I find the near-homogeneity definition (Definition 1) somewhat interesting, though I am unsure whether it was designed to make the proof more tractable or if it provides deeper insight.
Essential References Not Discussed: In the subarea of implicit bias, the key contribution of this paper is its consideration of a more general non-homogeneous model instead of a homogeneous one. Regarding the related work on homogeneous predictors, there are also special cases, such as shallow neural networks, with similar results. I think it would be helpful if the authors also referenced those papers.
Other Strengths And Weaknesses: Strengths:
1. Generalization to Non-Homogeneous Models: The paper extends prior results from homogeneous models to a more general non-homogeneous setting, contributing to a broader understanding of implicit bias.
2. Interesting Definition of Near-Homogeneity: The introduction of the near-homogeneity measure (Definition 1) is novel and could provide insights into implicit bias, depending on its broader applicability.
Weaknesses:
1. Limited Insight into Key Design Choices: While the paper presents generalizations of existing results, the motivation behind certain technical definitions—such as the choice of a polynomial upper bound—could be better justified. Explaining how this choice impacts proof tractability or brings new analytical tools would strengthen the contribution.
Other Comments Or Suggestions: A small suggestion about the writing style: Since most people in this subarea are already familiar with the key results and proof techniques from previous works, I find the explanation of the implications of the theorems in this paper somewhat plain and less informative. In many places, the paper simply states that the results generalize previous work, but that is something the audience likely already knows. Personally, I would appreciate more discussion on the key insights and motivation behind introducing the near-homogeneity measure. For example, how does defining a polynomial upper bound make the proof more tractable compared to previous works? This kind of explanation would help readers better assess the contribution of this paper—specifically, whether the introduction of this new concept brings any novel analytical tools to the field.
Questions For Authors: 1. I am wondering whether the upper bound on the degree of the polynomial in Definition 1 can be relaxed—either to some function of M, or even made independent of M. If not, why does the upper bound have to be M, and is there any intuition behind this choice?
2. Why is it helpful to define the upper bound as a polynomial of the weight norm? Did you use any specific tools for handling polynomials? If not, why not simply set the upper bound as $O(||\theta||_2^M)$ instead of introducing the polynomial concept $p'(||\theta||_2)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and suggestions on the writing style. We will include more discussions on intuitions and technical innovations for our theorems in the revision. For the technical novelty, please refer to the section **Technical novelty** in our response to the reviewer R5kD. For insights of our results, please refer to the section **Insights of our results** in our response to the reviewer NQfK. Below, we address your other questions.
---
**Q1.** “I am wondering whether the upper bound on the degree of the polynomial in Definition 1 can be relaxed…why does the upper bound have to be $M$, and is there any intuition behind this choice?”
**A1.** In Definition 1, the upper bound on the degrees of the polynomials has to be $M$. We justify this from the following two aspects.
First, we discuss its importance. If we relax the degree upper bound from $M$ to $M+1$, then every sufficiently smooth function $f(\theta; x)$ that is uniformly bounded by $O(\|\theta\|_2^M)$ for large $\|\theta\|_2$ satisfies Definition 1. This includes predictors that do not admit a homogenization (see Theorem 5.1). Note that homogenization plays a crucial role in our analysis. For instance, without homogenization, it seems impossible to even define a KKT problem, let alone prove the implicit bias of KKT directional convergence of GF/GD.
Second, we explain the intuition for choosing $M$ as the degree upper bound. In Definition 1, the polynomial $p’$ quantifies the deviation of $f$ from an $M$-homogeneous function (see $f_M$ in Theorem 5.1). Given that we require $f$ to be “near-homogeneous”, it is natural to assume that the discrepancy between $f$ and $f_M$ (quantified by $p’$) is of an order lower than $f_M$. The natural assumption, $\deg p \leq M$, suffices for this purpose.
We will clarify these in the revision.
---
**Q2.** “Why is it helpful to define the upper bound as a polynomial of the weight norm? Did you use any specific tools for handling polynomials? If not, why not simply set the upper bound as $O(\|\theta\|_2^M)$ instead of introducing the polynomial concept $p'(\|\theta\|_2)$?”
**A2.** Good question. But before we address it, we would like to point out that the error upper bound is $p'(\|\theta\|_2) = O(\|\theta\|_2^{M-1})$ in our Definition 1. So in your question, the error upper bound should be replaced by $o(\|\theta\|_2^{M})$ instead of $O(\|\theta\|_2^{M})$. We suspect this is a typo. But please let us know if we misunderstood your question.
We choose to define the upper bound as $p'$ in Definition 1, primarily for the simplicity of the exposition. This choice allows us to explicitly define the function $p_a$ (see Eq. (4)), which streamlines the theorem statements and their proof by making the inequalities more explicit and tractable.
Note that our results are not limited to this specific polynomial form. If the upper bound in Definition 1 is replaced by a function uniformly bounded by $O(\|\theta\|_2^{M-1})$ for large $\|\theta\|_2$, our analysis still goes through when Assumption 2 is adjusted accordingly. However, we do not feel this provides new information compared to our current version.
In general, if the error upper bound is only controlled by $o(\|\theta\|_2^{M})$, we expect additional regularity conditions are needed to carry out the analysis. As our main focus is neural networks, we feel our current definition is both clean and sufficiently broad. We leave it as future work to further extend our results.
We will include these discussions in the revision.
--- | null | null | null | null | null | null |
Vision Graph Prompting via Semantic Low-Rank Decomposition | Accept (poster) | Summary: The paper introduces Vision Graph Prompting (VGP), a novel parameter-efficient fine-tuning method tailored for Vision Graph Neural Networks (ViG). The authors propose that semantic information in vision graphs resides primarily in low-rank components of the latent feature space, a key insight derived from PCA-based analysis of graph structures. Building on this, VGP incorporates three prompt types—SeLo-Graph Prompt, SeLo-Edge Prompt, and SeLo-Node Prompt—each leveraging semantic low-rank decomposition to capture global and local semantic dependencies within ViG topologies. The method freezes the pre-trained ViG backbone and fine-tunes only the prompts and a downstream head, achieving performance comparable to full fine-tuning with significantly fewer trainable parameters. Extensive experiments on ten vision datasets (e.g., CUB, Flowers, GTSRB) and nine graph datasets (e.g., BBBP, Tox21, PPI) demonstrate that VGP outperforms existing visual and graph prompting methods, achieving an average accuracy of 89.6% on vision tasks and 76.39% on graph tasks, surpassing full fine-tuning in several cases. The main contributions include the VGP framework, the low-rank prompting insight, and its superior transfer performance across diverse downstream tasks.
Claims And Evidence: The claims in the paper are generally well-supported by clear and convincing evidence. The primary claim—that VGP achieves performance comparable to full fine-tuning while being parameter-efficient—is substantiated by quantitative results in Tables 1 and 2, showing VGP’s accuracy surpassing or matching baselines across diverse datasets. The assertion of semantic information residing in low-rank components is convincingly supported by PCA visualizations (Figures 2 and 5) and theoretical discussion in Appendix A.3, linking shared PCA components to low-rank properties. Ablation studies (Table 4, Figure 4) further validate the effectiveness of individual components (SeLo-Graph, SeLo-Edge, SeLo-Node) and hyperparameter choices (e.g., rank $r$, blending factors $\alpha$ and $\beta$). However, the claim of generalizability to traditional graph tasks (Section 5.3) is slightly weaker due to the lack of detailed analysis on why low-rank properties extend to chemistry/biology domains beyond a hypothesis. While plausible, this claim could benefit from additional evidence, such as a similar PCA analysis on graph datasets, to strengthen its foundation.
Methods And Evaluation Criteria: The proposed VGP method and its evaluation criteria are well-suited to the problem of adapting ViG models for downstream vision tasks. The method’s design—introducing low-rank prompts at graph, edge, and node levels—aligns logically with the topological nature of ViG, addressing the limitations of Transformer-centric prompting methods. Using ten vision datasets (e.g., CUB, GTSRB, SVHN) with diverse categories and distributions is a robust choice for evaluating transfer performance, as is the extension to nine chemistry/biology graph datasets to test generalizability. The evaluation metric (classification accuracy) is standard and appropriate for these tasks. The experimental setup, including freezing the backbone and training for 100 epochs with AdamW optimization, is reasonable and consistent with prior work (e.g., DAM-VP, InsVP). However, the choice of a single ViG-M backbone (pre-trained on ImageNet-21k) could be expanded to other ViG variants (e.g., MobileViG) to validate robustness further, though this is a minor concern given the focus on prompting efficiency.
Theoretical Claims: The paper includes a theoretical claim in Appendix A.3 linking the low-rank property of semantic information to PCA and eigenvalue decomposition (Equations 13-15). I reviewed the correctness of this analysis, which builds on standard PCA principles to argue that semantically connected nodes share dominant components, implying a low-rank structure. The formulation appears mathematically sound: the covariance matrix decomposition and rank estimation based on eigenvalue thresholds are consistent with PCA theory. The error term estimation ($O(\lambda_{r+1})$) is a reasonable approximation, though it assumes a clear eigenvalue drop-off, which is visually supported by Figure 5’s long-tail distribution. No significant issues were found, but the analysis could be strengthened by quantifying the variance captured by the chosen rank $r$ (e.g., 50 for CUB) to directly tie it to the experimental choice of $r=32$.
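To make the suggestion concrete, here is a sketch of the rank-estimation rule under discussion: pick the smallest $r$ whose leading eigenvalues capture a target fraction of the variance, treating the truncation error as being on the order of $\lambda_{r+1}$. The eigenvalue spectrum below is synthetic (a $1/k^2$ decay standing in for the long-tail distribution in Figure 5), and the 95% variance threshold is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

def effective_rank(eigvals, var_threshold=0.95):
    """Smallest r whose top-r eigenvalues capture >= var_threshold of total variance."""
    eigvals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    r = int(np.searchsorted(cum, var_threshold)) + 1
    tail = eigvals[r] if r < len(eigvals) else 0.0  # O(lambda_{r+1}) truncation error
    return r, float(cum[r - 1]), float(tail)

# Synthetic long-tailed spectrum (1/k^2 decay) over 768 feature dimensions.
lam = 100.0 / np.arange(1, 769) ** 2
r, captured, err = effective_rank(lam)
print(r, captured, err)
```

Reporting `captured` at the experimentally chosen rank would directly quantify how much variance $r=32$ retains.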
Experimental Designs Or Analyses: I examined the experimental designs and analyses in Sections 5 and 5.4, including the quantitative results (Tables 1, 2, 5) and ablation studies (Table 4, Figure 4). The design is sound: comparing VGP against full fine-tuning and state-of-the-art prompting methods (e.g., InsVP, GraphPrompt) on diverse datasets ensures a fair and comprehensive evaluation. The ablation studies systematically test core components, rank $r$, and blending factors $\alpha$ and $\beta$, with results consistently showing performance improvements (e.g., 5.7% gain from SeLo-Graph on CUB). The statistical validity is supported by the use of standard splits (e.g., scaffold split for chemistry datasets) and consistent augmentation strategies. One minor issue is the lack of statistical significance testing (e.g., confidence intervals) for accuracy differences, which could bolster claims of superiority (e.g., VGP’s 89.6% vs. InsVP’s 84.6% on vision tasks). Additionally, the computational efficiency claim (3.1% FLOPs overhead, Table 5) is plausible but could be clarified by detailing how FLOPs were calculated for prompt operations.
Supplementary Material: I reviewed the supplementary material in the Appendix (Section A). Specifically, I examined A.1 (Efficiency Analysis), A.2 (Implementation Details), A.3 (Semantic Low-Rank Property), and A.4 (Vision Dataset Details). These sections provide valuable context: A.1 quantifies parameter reduction (94.6%) and FLOPs overhead (3.1%), A.2 details training settings (e.g., AdamW, 100 epochs), A.3 supports the low-rank claim with PCA theory and visualizations, and A.4 lists dataset statistics (e.g., CUB: 200 classes, 5,794 test samples). The material is well-organized and enhances the main paper’s credibility. I did not review additional figures (e.g., Figure 5) beyond their mention, as they were adequately described.
Relation To Broader Scientific Literature: The paper’s key contributions align well with trends in parameter-efficient fine-tuning (PEFT) and graph neural networks (GNNs). The use of prompting for vision tasks builds on prior work like VPT (Jia et al., 2022) and InsVP (Liu et al., 2024), extending it to ViG, a graph-based vision backbone introduced by Han et al. (2022). The low-rank decomposition idea echoes techniques in efficient Transformer adaptation (e.g., LoRA, Hu et al., 2021, not cited) but is novel in its application to graph structures. The extension to traditional graph tasks (e.g., MoleculeNet) ties into GNN prompting literature (e.g., GraphPrompt, Liu et al., 2023; GPF-Plus, Fang et al., 2023), offering a bridge between vision and graph domains. The insight into low-rank semantic properties also resonates with dimensionality reduction studies in GNNs (e.g., Kipf & Welling, 2016b), though applied uniquely to vision graphs.
Essential References Not Discussed: While the paper cites relevant prior work, two areas could benefit from additional references:
Low-Rank Adaptation: The low-rank decomposition approach shares conceptual similarities with LoRA (Hu et al., 2021, "LoRA: Low-Rank Adaptation of Large Language Models," ICLR 2022), a PEFT method for Transformers. Discussing LoRA could contextualize VGP’s novelty in adapting low-rank ideas to graph structures.
Graph Compression: The low-rank insight might relate to graph compression techniques like "GraphSAGE" (Hamilton et al., 2017, NIPS), which aggregates neighborhood features efficiently. Citing this could clarify how VGP differs from prior graph feature reduction methods.
These omissions do not undermine the work but could enhance its positioning within the broader PEFT and GNN literature.
Other Strengths And Weaknesses: **Strengths**:
*Originality*: VGP is a pioneering effort in prompting ViG models, creatively adapting low-rank concepts to graph structures.
*Significance*: The method’s parameter efficiency (94.6% reduction) and strong performance (e.g., 89.6% average accuracy) make it highly practical for resource-constrained settings.
*Clarity*: The paper is well-written, with clear explanations of the method (Section 4) and insightful visualizations (Figures 2, 3).
**Weaknesses**:
*Clarity of Generalizability*: The extension to graph tasks (Table 2) is compelling but lacks depth in explaining why low-rank properties hold beyond vision, limiting interpretability.
*Limited Backbone Variety*: Testing only on ViG-M restricts insights into broader applicability across ViG variants.
*Minor Presentation Issue*: The abstract could better highlight the low-rank insight as a core novelty, as it currently focuses more on the framework.
Other Comments Or Suggestions: I have no other comments or suggestions.
Questions For Authors: **Generalizability to Graph Tasks**: Your hypothesis suggests low-rank patterns exist in chemistry/biology graph data (Section 5.3). Could you provide PCA or similar analysis on these datasets (like Figure 2 for vision) to confirm this? A positive response with evidence would strengthen my confidence in VGP’s broader applicability, potentially raising my rating from "accept" to "strong accept."
**Choice of Rank $r=32$**: Table 4 shows peak performance at $r=32$, but Appendix A.3 estimates $r=50$ (CUB) and $r=60$ (Flowers). Why was $r=32$ chosen over these values? Clarification could resolve this apparent discrepancy, impacting my view on the method’s optimization rigor.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
### Q1,W1. Generalizability to Graph Tasks
From the efficacy of our method on chemistry/biology graph datasets, we hypothesize that **similar latent semantic low-rank patterns** also exist in these graph data. In particular, **chemical bonds** and **protein interaction** structures exhibit structured low-rank properties similar to **semantic regions in images**.
For example, in the *PPI* (Protein-Protein Interaction) dataset, each node in the graph represents a type of protein, while edges denote interaction relationships. These interactions are primarily driven by **specific functional groups** such as hydroxyl and carboxyl groups, which are crucial to biochemical reactions, analogous to **low-rank semantic features in vision images**. Conversely, other chemical groups that do not significantly contribute to interactions correspond to **high-frequency local details** in images, which tend to be redundant.
So when extracting features from these protein graph data for tasks such as protein function prediction, it is essential to **identify the key functional groups** that drive interactions. Since our **VGP model is designed to capture low-rank semantic structures**, it effectively generalizes to chemistry and biology graph datasets, surpassing prior graph prompting methods.
Due to the inherently abstract nature of protein interaction graphs, it is challenging to visualize this semantic pattern the way 2D images are visualized in Figure 2. To illustrate the underlying patterns, we visualize the graph structures and color the nodes according to the first three PCA components. We make comparisons between node features obtained from trained and untrained GNN models on the *PPI* dataset. Interestingly, we find that node features from trained GNN models exhibit significantly better low-rank feature consistency. The visualization is provided in an anonymous link (https://anonymous.4open.science/r/ICML25-anonymous-DF1B/PPI-PCA.pdf).
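For readers who want to reproduce this kind of node coloring, one generic recipe (the exact visualization pipeline is not specified in the text, so this is an assumed implementation) is to project each node's feature vector onto the top three principal directions and map the three scores to RGB channels. A numpy-only sketch with random stand-in features:

```python
import numpy as np

def pca_node_colors(features):
    """Map node features (n_nodes, d) to RGB colors via the top-3 PCA components."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered matrix are the PCA directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:3].T              # (n_nodes, 3) component scores
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    return (scores - lo) / (hi - lo + 1e-12)  # normalize each channel to [0, 1]

rng = np.random.default_rng(0)                # stand-in for real GNN node features
colors = pca_node_colors(rng.normal(size=(50, 16)))
print(colors.shape)
```

The resulting array can be passed directly as per-node colors to a graph-drawing tool; nodes with similar dominant PCA components then share similar hues.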
### Q2. Choice of Rank $r=32$
Our choice of $r=32$ is motivated by two key considerations:
1) **Trade-off between performance and parameter efficiency**. As shown in Table 4, *CUB* achieves near-optimal results of **87.4%** at $r=32$ and **87.2%** at $r=64$, and its estimated rank is about 50, falling between these two values. Besides, *CIFAR* achieves peak results at $r=64$ and second-best performance at $r=32$. In terms of performance, either $r=32$ or $r=64$ seems plausible. However, in terms of parameter efficiency, increasing $r$ from 32 to 64 **nearly doubles the trainable parameters**. So we choose the smaller value, $r=32$, for an optimal balance between performance and parameter efficiency.
2) **Consistent hyperparameters across datasets**. To maintain a unified hyperparameter setting and avoid dataset-specific tuning, we adopt a consistent $r=32$ across all the datasets. Even though the estimated ranks of *CUB* and *Flowers* are 50 and 60 respectively, other datasets like *SVHN* and *CIFAR10* exhibit lower ranks of 18 and 20, as shown in the table below. To reach a reasonable compromise between these datasets, we set $r=32$ to satisfy the majority of datasets.
| | *DTD* | *CUB* | *NABirds* | *Dogs* | *Flowers* | *Food* | *CIFAR* | *CIFAR10* | *GTSRB* | *SVHN* |
| :-----------: | :---: | :---: | :-------: | :----: | :-------: | :----: | :-----: | :-------: | :-----: | :----: |
| estimated $r$ | 36 | 50 | 90 | 30 | 60 | 46 | 55 | 20 | 26 | 18 |
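As a rough illustration of how such an estimated rank can be obtained, the sketch below computes the smallest number of principal components reaching an explained-variance threshold. The 95% threshold and the synthetic feature shapes are our assumptions for illustration, not necessarily the paper's exact criterion.

```python
import numpy as np

def estimated_rank(features, energy=0.95):
    """Smallest number of principal components whose cumulative
    explained variance reaches `energy` (assumed 95% threshold)."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered matrix give per-component
    # variance up to a constant factor.
    s = np.linalg.svd(centered, compute_uv=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(ratio, energy) + 1)

# Synthetic low-rank features: 196 patch tokens, 192-dim, true rank 10.
rng = np.random.default_rng(0)
feats = rng.normal(size=(196, 10)) @ rng.normal(size=(10, 192))
print(estimated_rank(feats))  # at most 10 for this rank-10 data
```

Applying such an estimator per dataset would yield numbers comparable to the table above, motivating a single compromise value of $r$.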
### W2. Backbone Variety
We supplement additional experiments with other representative graph-based vision models, including MobileViG and GreedyViG. The experiments are conducted on six vision datasets, with backbones pre-trained on ImageNet-1k. As the table below shows, our VGP consistently outperforms other SOTA vision prompting and graph prompting methods, demonstrating robustness across backbones thanks to its adaptability to diverse graph structures.
| Method | *DTD* | *CUB* | *Flowers* | *Food* | *CIFAR10* | *SVHN* | Average |
| :------: | :------: | :------: | :-------: | :------: | :-------: | :------: | :------: |
| | | | MobileViG | | | | |
| GPF-Plus | 68.5 | 81.4 | 94.3 | 82.0 | 94.1 | 82.2 | 83.7 |
| InsVP | 68.1 | 84.0 | 95.2 | 83.9 | 95.2 | 88.9 | 85.9 |
| **VGP** | **71.6** | **84.9** | **97.6** | **88.0** | **96.8** | **94.7** | **88.9** |
| | | | GreedyViG | | | | |
| GPF-Plus | 69.3 | 81.7 | 94.9 | 82.2 | 94.5 | 82.7 | 84.2 |
| InsVP | 68.8 | 84.1 | 95.5 | 84.0 | 95.5 | 89.1 | 86.2 |
| **VGP** | **72.1** | **85.4** | **98.0** | **87.3** | **97.2** | **94.5** | **89.1** |

---
Summary: This paper introduces a novel parameter-efficient method called **V**ision **G**raph **P**rompting (**VGP**) with semantic low-rank decomposition for Vision GNNs. Empirical results demonstrate that the proposed approach achieves impressive performance on both image classification and traditional graph classification tasks. In addition, the paper provides supporting visualization evidence via PCA, which underscores the motivation behind the low-rank decomposition design in the prompts.
Claims And Evidence: The authors assert that semantically connected components in the graph exhibit low-rank properties, as evidenced by visualizations produced using PCA and t-SNE. However, I believe this visualization approach may have a critical limitation due to the capacity of PCA in effectively extracting the target object in complex images. Specifically, PCA may struggle to isolate the target object when selecting the top components, particularly in scenarios where images contain multiple objects or intricate backgrounds. Therefore, I suggest providing additional visualization results of PCA components, especially for images with multiple objects and complex backgrounds, to better evaluate the method’s effectiveness under such conditions.
Methods And Evaluation Criteria: Overall, the proposed method, VGP, provides an effective solution for leveraging semantic graph information within a low-rank space. Specifically, the semantic low-rank decomposition framework of VGP, including the SeLo-Graph Prompt, SeLo-Edge Prompt, and SeLo-Node Prompt, facilitates both structural adaptation and feature enhancement.
Theoretical Claims: I have carefully reviewed the theoretical aspects of this paper and did not identify any obvious errors. However, I noticed that the authors did not provide an equation summarizing the proposed three modules for parameter updating. Including such an equation is recommended to enhance the clarity and understanding of the overall method.
Experimental Designs Or Analyses: The experimental results of the proposed method, as presented in Tables 1 and 2, are promising. However, the authors do not appear to provide sufficient analysis regarding the differences between ViT-based methods and ViG-based ones.
Firstly, the parameter sizes of the selected backbones remain unclear and should be explicitly stated for better comparison. Secondly, it is recommended that the authors analyze why the basic visual prompting method for ViTs (i.e., VPT) outperforms ViG-based prompting methods on certain datasets, such as CUB, NABirds, Dogs, and Flowers. Specifically, the authors should offer a more detailed discussion on the advantages and disadvantages of ViT-based and ViG-based prompting methods. This would help readers better understand the critical discrepancies between these two approaches.
Supplementary Material: I have reviewed the appendix part, including implementation details and the datasets statistics.
Relation To Broader Scientific Literature: The authors validate the effectiveness of the proposed method solely on image classification and graph classification tasks in this paper. However, it remains unclear whether the method can be extended to other vision tasks, such as object detection and segmentation.
Essential References Not Discussed: I am not an expert in this field, so I am unsure if there are any related references that have not been cited in this paper.
Other Strengths And Weaknesses: Strengths:
1. The originality of the proposed method stems from the innovative combination of existing ideas, including low-rank adaptation and Vision GNN, which demonstrates a creative and thoughtful approach.
2. The writing in this paper is clear and well-structured, making it easy for readers to understand both the motivation behind the work and the effectiveness of the proposed method.
Weaknesses:
My major concerns can be found in previous parts of Claims And Evidence, Theoretical Claims, and Experimental Designs Or Analyses.
Other Comments Or Suggestions: I have no further comments.
Questions For Authors: I am curious about the cluster located in the bottom-right corner of Figure 1. It appears to differ significantly from the other clusters. While the patches in the top figure seem to represent the background of the overall image, it is unclear what this particular cluster corresponds to. Could you clarify its meaning or significance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
### Q1. Cluster Located in Bottom-Right Corner of Figure 1
We further checked the correspondence between the t-SNE clusters and image patches, finding that the bottom-right cluster corresponds to **the bird's reflection on the water**. This observation aligns with the PCA visualizations in Figures 1 and 2, where the bird's reflection is also highlighted.
Interestingly, the ViG model appears to learn semantic information about reflections as a byproduct of supervision in the bird classification task.
However, since the model lacks explicit supervision on background elements, the background features exhibit a sparse distribution in the upper region of the t-SNE figure.
### W1. PCA with Multiple Objects and Complex Backgrounds
In Figure 2(b) of our paper, the samples from the *Flowers* dataset already contain **complex backgrounds with cluttered grass and leaves**, as well as instances with multiple objects (e.g., **the top-right sample with two flowers**).
The results demonstrate that PCA effectively extracts target objects from the trained ViG model's features, attributed to the **semantic low-rank property** of the ViG's latent feature space. In the final version, we will provide additional visualizations specifically focusing on multiple objects and complex backgrounds (https://anonymous.4open.science/r/ICML25-anonymous-DF1B/multi-objects-w-complex-backgrounds.pdf).
### W2. Summarizing Proposed Three Modules for Training
Combining Equations 11 and 12 in the paper, we summarize the prompted ViG model in the equation below. The modules updated during training are underlined: only the three **low-rank prompt matrices**, one **semantic feature extraction MLP**, and the **low-rank virtual nodes** are trained, while all other modules in the ViG backbone remain frozen:
$\hat{f}(\mathbf{x}\_i)= (1-\beta)\cdot\mathbf{x}\_i+\hat{g}(\mathbf{x}\_i, \underline{\mathbf{P\_n}}) \cdot \mathbf{W}\_{update} +\sum\_{\mathbf{x}\_j \in \hat{\mathcal{N}}(\mathbf{x}\_i)} \beta\cdot \underline{\mathrm{MLP\_s}}(\mathbf{x}\_j)\cdot\underline{\mathbf{P\_e}}, ~~~\hat{\mathcal{N}}(\mathbf{x}\_i) \subseteq [\mathbf{X},~[\underline{\mathbf{n}\_1, \dots, \mathbf{n}\_M}]\cdot \underline{\mathbf{P\_g}}]$
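This update rule can be illustrated with a minimal numpy sketch. Everything below (the dimensions, the identity/random prompt initializations, a ReLU stand-in for $\mathrm{MLP_s}$, mean aggregation for $\hat{g}$, and cosine-similarity kNN) is a simplified placeholder for illustration, not the paper's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d, beta, k = 196, 14, 64, 0.2, 9

X = rng.normal(size=(N, d))       # patch node features
virtual = rng.normal(size=(M, d)) # virtual nodes n_1..n_M
P_g = np.eye(d)                   # graph prompt (identity placeholder)
P_e = np.eye(d)                   # edge prompt (identity placeholder)
P_n = np.zeros(d)                 # node prompt (zero placeholder)
W_update = 0.01 * rng.normal(size=(d, d))

def mlp_s(x):
    # stand-in for the semantic feature extraction MLP
    return np.maximum(x, 0.0)

# Augmented node set [X, [n_1..n_M] P_g] from the SeLo-Graph Prompt
nodes = np.vstack([X, virtual @ P_g])

def knn(i):
    # neighbourhood over the augmented node set, by cosine similarity
    unit = nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
    sims = unit @ unit[i]
    sims[i] = -np.inf
    return np.argsort(sims)[-k:]

def f_hat(i):
    nbrs = knn(i)
    agg = (nodes[nbrs] + P_n).mean(axis=0)  # simplified g(x_i, P_n)
    edge_term = sum(beta * mlp_s(nodes[j]) @ P_e for j in nbrs)
    return (1 - beta) * nodes[i] + agg @ W_update + edge_term

print(f_hat(0).shape)  # (64,)
```

Note that the neighbourhood spans both patch and prompted virtual nodes, mirroring $\hat{\mathcal{N}}(\mathbf{x}_i)$ over the augmented node set.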
### W3. Differences between ViT-based and ViG-based Prompts
The ViT-based prompting methods either prompt on image pixels (e.g., VP), explicitly adjusting the RGB channel space, or prompt on image tokens (e.g., VPT), operating through a feature-similarity-based attention mechanism.
However, these methods lack awareness of **graph structures**, such as edge connections between patches. In contrast, ViG-based graph prompting methods like GraphPrompt and GPF-Plus explicitly alter graph structures (modifying node features, inserting new nodes, and constructing new edges), thereby better leveraging the graph representation.
As for why ViT-based prompting methods outperform ViG-based ones on certain datasets: vision datasets contain both visual features and latent graph structures. ViT-based vision prompting methods excel at processing **raw vision data**, whereas ViG-based methods are more effective at **graph-based reasoning**. Each approach therefore has its own advantages, leading to instances where ViT-based prompting achieves superior results.
### W4. Parameter Sizes of the Backbone
We have provided details of both the parameter sizes and computation costs of the ViG backbone and our VGP in **Appendix A.1 and Table 5**. The ViG models have **48.68M** parameters on average, while our VGP has only **2.61M** trainable parameters, a **94.6%** reduction compared to full fine-tuning.
### W5. Extending to Other Vision Tasks
We supplement additional semantic segmentation experiments on the *ADE20K* dataset. As the table below shows, our method consistently outperforms other vision and graph prompting methods on semantic segmentation, demonstrating its effectiveness across different vision tasks.
|Method|ViG-M|Adapter|VPT|InsVP|GraphPrompt|VGP|
|:------:|:---:|:-----:|:--:|:---:|:---------:|:------:|
|mIoU(%)|47.9|44.2|41.6|42.3|44.4|**47.6**|

---
Summary: This paper proposes a novel approach called Vision Graph Prompting (VGP), which enables parameter-efficient fine-tuning of the Vision GNN (ViG) model. Additionally, the paper observes that essential semantic information in Vision Graph structures is concentrated in low-rank components and leverages this insight to introduce a Semantic Low-Rank Decomposition-based prompting method. To capture both global and local semantic features within the graph structure, three key components—SeLo-Graph, SeLo-Edge, and SeLo-Node Prompt—are introduced. Experimental results demonstrate that this approach significantly enhances the transfer learning performance of the ViG model while requiring far fewer parameters compared to full fine-tuning.
Claims And Evidence: - This paper visually demonstrates, through Figure 2 and Figure 5, that the primary semantic information of the Vision Graph is concentrated in the lower-dimensional components via PCA analysis. Additionally, the Ablation Study in Table 3 experimentally proves that SeLo-Graph, SeLo-Edge, and SeLo-Node Prompt each contribute to performance improvement.
- The experimental results in Table 1 and Table 2 further indicate that the proposed method outperforms existing approaches across various benchmark experiments.
- However, there is a lack of experiments comparing the cases with and without the application of low-dimensional decomposition. Therefore, it remains unclear how significant the performance improvement is compared to the basic ViG.
- It is necessary to provide a more detailed explanation of why the graph prompting technique is structurally optimized for the ViG model.
- In this paper, the impact of blending factors α and β on performance is discussed, indicating that the optimal values are found within a specific range (0.1 to 0.3). Although this claim is supported by quantitative analysis, more detailed insights should be provided regarding why deviations from this range lead to performance degradation.
Methods And Evaluation Criteria: - This paper conducts experiments using various benchmarks (CIFAR, CUB, GTSRB, and chemical/biological graph data) and appropriately analyzes the contribution of each technique through an Ablation Study. Additionally, the evaluation considering the balance between parameter efficiency and performance appears to be well-justified.
- While the paper claims that the proposed method achieves results comparable to full fine-tuning, it does not specify the exact performance evaluation metrics used for comparison. Including metrics such as accuracy, F1 score, or AUC would provide clearer insights into the effectiveness of the proposed method.
- Additional explanations on the graph datasets should be provided. While it appears that datasets from GPF-PLUS and MoleculeNet were used, there is a lack of detailed descriptions regarding their characteristics (even the appendix does not provide an explanation).
- It is necessary to verify whether the proposed method demonstrates the same effectiveness in other graph-based vision models (e.g., MobileViG, GreedyViG).
- Further analysis should be conducted to determine whether the performance of the proposed low-dimensional decomposition method varies depending on the dataset.
- A clearer analysis of how changes in graph structure affect performance during the prompting process would be beneficial.
Theoretical Claims: - This paper demonstrates through PCA-based analysis that the semantic information of the Vision Graph is primarily contained in low-dimensional components. By utilizing Eigenvalue Decomposition (EVD) of the Covariance Matrix, it shows that semantic information is concentrated in a few principal components. Furthermore, based on this mathematical foundation, it logically validates the effectiveness of Semantic Low-Rank Decomposition.
- This paper claims that the proposed method effectively captures critical semantic information in Vision GNNs, thereby enhancing feature extraction. While this claim is supported by experimental results, the theoretical justification for how this improvement is achieved through low-rank decomposition and graph adaptation needs to be clearly articulated.
- The paper explains that extensive experiments demonstrate significant improvements in transfer performance across various downstream tasks. However, the logical connection between the theoretical claims and the experimental results should be further strengthened. In particular, providing a more detailed explanation of how the proposed theoretical framework translates into practical performance gains would enhance the coherence of the paper and establish a clearer pathway from theory to application.
Experimental Designs Or Analyses: - This paper has appropriately set up comparison groups for the experiments and conducted a comprehensive comparative analysis, including existing prompting techniques (VPT, InsVP) and the graph prompting (GPF-Plus) technique. The Ablation Study verifies the contribution of each proposed technique to performance improvement, ensuring logical validity.
- However, providing more explicit information on the number of repetitions (epoch) would enhance the transparency of the experimental design.
- This paper claims significant improvements in transfer performance, but it does not include detailed statistical analyses or significance testing results. Incorporating such analyses would help assess the robustness of the findings and is essential for demonstrating that the observed improvements are not merely due to random chance.
Supplementary Material: - The paper enhances the reliability of the research by including additional experimental results, implementation details, and mathematical proofs in the Appendix.
Relation To Broader Scientific Literature: - The paper effectively summarizes the relationship with existing Vision GNN and Vision Prompting research, clearly explaining the differences from Transformer-based prompting techniques
- It is necessary to verify whether the proposed method demonstrates the same effectiveness in other graph-based vision models (e.g., MobileViG, GreedyViG).
Essential References Not Discussed: - The paper effectively summarizes existing research, particularly related to Vision GNN and Vision Prompting, and does not appear to have omitted any essential studies that should have been mentioned.
Other Strengths And Weaknesses: - The paper proposes the first Vision Graph Prompting technique for the ViG model, demonstrating high originality. It achieves superior performance compared to existing methods while maintaining parameter efficiency. The inclusion of an Ablation Study and various benchmark experiments enhances the reliability of the research.
- It would be beneficial to include additional statistical significance for the experimental results. There are changes in model performance based on hyperparameters (r, α, β), and it may be necessary to include a process for finding the optimal values.
Other Comments Or Suggestions: - Grammar errors and inaccurate expressions:
"textbf3.1%" → "3.1%" (Typo correction needed)
"demonstrated in Table reftab-datasets" → "demonstrated in Table 6" (Reference error correction)
- It is necessary to more clearly articulate the limitations of the study in the conclusion.
For example: analysis of the causes of poor performance on certain datasets, suggesting additional research directions, etc.
- Providing a clearer explanation of the PCA visualization in Figure 5 would be beneficial.
The current explanation is somewhat brief, which may make it difficult for readers to easily understand the meaning of the graph.
Questions For Authors: - Will the proposed method show the same effect in other graph-based vision models (e.g., MobileViG, GreedyViG)?
- How will performance change if the structure of the graph is altered?
- Has the impact of prompting on the model's explainability been analyzed?
- Is it possible to achieve the same performance improvements in other domains, such as autonomous driving, medical imaging, and remote sensing?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---
Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
### Q1. Experiments on Other Graph-based Vision Models
We supplement additional experiments on other graph-based vision models, including **MobileViG and GreedyViG**, across six vision datasets with ImageNet-1k pre-trained backbones. As the table below shows, our VGP consistently outperforms other SOTA vision and graph prompting methods, demonstrating robustness across backbones thanks to its adaptability to diverse graph structures.
|Method|*DTD*|*CUB*|*Flowers*|*Food*|*CIFAR10*|*SVHN*|Average|
|:------:|:---:|:---:|:-------:|:----:|:-------:|:----:|:------:|
||||MobileViG|||||
|GPF-Plus|68.5|81.4|94.3|82.0|94.1|82.2|83.7|
|InsVP|68.1|84.0|95.2|83.9|95.2|88.9|85.9|
|**VGP**|71.6|84.9|97.6|88.0|96.8|94.7|**88.9**|
||||GreedyViG|||||
|GPF-Plus|69.3|81.7|94.9|82.2|94.5|82.7|84.2|
|InsVP|68.8|84.1|95.5|84.0|95.5|89.1|86.2|
|**VGP**|72.1|85.4|98.0|87.3|97.2|94.5|**89.1**|
### Q2. Ablation on Altered Graph Structures
We conduct additional ablation studies to analyze the impact of structural modifications in the SeLo-Graph Prompt. Our method **inserts virtual nodes** and **dynamically constructs edges** based on feature similarity, thereby altering the graph structure.
As the table below shows, both virtual node insertion and edge construction enhance feature extraction. While static edge allocation already improves performance, **dynamic edge construction based on feature similarity achieves the best results**, as it better captures complex semantic relationships.
|**Ablation**|**CUB**|**GTSRB**|
|------------------------------------------------|--------|---------|
|w/o SeLo-Graph Prompt|85.8|93.4|
|Only insert virtual nodes|86.2|94.1|
|Insert virtual nodes+static edge allocation|86.6|96.9|
|Insert virtual nodes+dynamic edge construction|**87.4**|**98.1**|
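The contrast between the two edge schemes can be sketched as follows; the feature dimensions, $k$, and the cosine-similarity criterion are illustrative assumptions rather than the exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
patches = rng.normal(size=(196, 64))   # patch node features
virtual = rng.normal(size=(14, 64))    # inserted virtual nodes

def static_edges(n_virtual, n_patches, k=8):
    """Fixed allocation: each virtual node gets the same k patches,
    regardless of the current features."""
    return [(i, j) for i in range(n_virtual) for j in range(k)]

def dynamic_edges(virtual, patches, k=8):
    """Feature-driven: connect each virtual node to its k most
    cosine-similar patches, recomputed from the current features."""
    v = virtual / np.linalg.norm(virtual, axis=1, keepdims=True)
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sims = v @ p.T                             # (14, 196)
    nbrs = np.argsort(sims, axis=1)[:, -k:]    # top-k per virtual node
    return [(i, int(j)) for i in range(len(virtual)) for j in nbrs[i]]

edges = dynamic_edges(virtual, patches)
print(len(edges))  # 14 * 8 = 112 edges
```

Unlike the static scheme, the dynamic edges change as the features change, which is what lets them track semantic relationships during training.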
### Q3. Analysis of Explainability of Prompting Impact
Figures 2 and 5 in the paper show that fully fine-tuned vision graph models recognize semantically related patches and connect them via edges, exhibiting a semantic low-rank property.
Our VGP reinforces this effect through three prompting modules, ensuring **low-rank feature consistency** across connected patches. This mimics the behavior of fully fine-tuned models, effectively linking semantically related regions.
As shown in Table 1, our method effectively extracts discriminative semantic information, leading to significant performance gains.
### Q4. Experiments in Other Domains
We supplement additional experiments on **remote sensing with the *EuroSAT* dataset**, which consists of Sentinel-2 satellite images. As the table below shows, our method still outperforms other SOTA prompting methods, even achieving results comparable to full fine-tuning with only **4%** of the trainable parameters.
|Method|ViG-M|Adapter|VPT|InsVP|GraphPrompt|GPF-Plus|VGP|
|:----------------:|:---:|:-----:|:---:|:---:|:---------:|:------:|:-------:|
|*EuroSAT* Acc.(%)|92.37|85.24|83.55|87.14|85.50|86.97|**91.98**|
### W1. Statistical Significance and Hyperparameter Selection
We run all experiments **three times** with different seeds and report the highest results. The average standard deviation is **0.3%**, significantly lower than our **4%** performance gain, confirming the robustness of our results. Detailed statistical significance for each dataset will be provided in the final version.
For hyperparameter selection ($r$, α, β), we evaluate multiple candidates and select the optimal ones, using a fixed set of hyperparameters across all datasets.
### S1,S2,S3. Comments and Suggestions
We appreciate your feedback and will address the following in final version:
1) We have corrected typos in the supplementary materials.
2) We will discuss limitations and failure cases to guide future research.
3) Additional PCA visualization details are included. Specifically, we compute PCA components over all patch tokens encoded by the trained model. The latent feature space is decomposed into PCA components, where those with large coefficients capture the major variance. We map the top three PCA components to the RGB channels for visualization, so that patches with similar colors share similar PCA component distributions, indicating low-rank properties. This approach is similar to the visualization used in DINOv2.
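A minimal sketch of this PCA-to-RGB mapping follows; the token count and dimension are illustrative, not the actual model's:

```python
import numpy as np

def pca_rgb(tokens):
    """Project patch tokens onto their top-3 principal components and
    rescale each component to [0, 1] for use as RGB channels."""
    centered = tokens - tokens.mean(axis=0, keepdims=True)
    # Right singular vectors are the principal directions of the tokens.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:3].T                 # (n_tokens, 3)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return (proj - lo) / (hi - lo + 1e-8)      # per-channel scale to [0, 1]

tokens = np.random.default_rng(0).normal(size=(196, 192))  # 14x14 patch grid
rgb = pca_rgb(tokens)
print(rgb.shape)  # (196, 3): one RGB color per patch
```

Reshaping `rgb` to the 14x14 patch grid and displaying it yields the kind of coloring used in the figures: patches with similar colors lie close in the top PCA subspace.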
### Graph Prompt Structurally Optimized for ViG
Standard vision prompting methods (e.g., VP, VPT) operate on **pixel-level** or **token-level** representations without explicit graph structures.
In contrast, graph prompting methods (e.g., GraphPrompt, GPF) **directly modify node features, insert virtual nodes, or establish new edges**, enabling structured graph-based prompting while overlooking semantic features in vision data.
Our VGP builds upon this principle, explicitly optimizing prompts for graph-structured vision models, incorporating semantic low-rank decomposition strategy.
### Number of Repetitions
Following DAM-VP (Appendix A.2), we train on each dataset for **100 epochs**.
---
Rebuttal Comment 1.1:
Comment: The authors have provided sincere and well-reasoned responses to all of the reviewer’s questions. In particular, they effectively demonstrated the generalizability and extensibility of the proposed method through additional experiments on alternative backbones (MobileViG, GreedyViG) and the remote sensing domain (EuroSAT). They also convincingly explained the effects of graph structure modifications and the improvement in explainability brought by the prompting technique, supported by both quantitative and qualitative evidence. Plans to supplement statistical significance analysis and hyperparameter selection were clearly stated as well.
However, there are a few shortcomings. First, regarding statistical validation, conducting only three runs and reporting only the best performance may be somewhat insufficient in terms of consistency and reliability. In addition, among the various domain experiments, real-world applications such as autonomous driving or medical imaging—which are more complex—were not tested. Furthermore, the paper focuses more on overall system performance improvement rather than providing in-depth analysis on the causes of individual performance gains, which weakens the connection between theory and experiments. Addressing these issues in the future would further enhance the completeness of the paper.
Taking all these points into account, I will adjust my previous score slightly upward. I wish the authors the best with their final results.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thorough and constructive feedback. We sincerely appreciate your recognition of our efforts to address your concerns, especially regarding the generalizability and explainability of our method.
We acknowledge the limitations you raised. Regarding statistical validation, we agree that more comprehensive experimentation (e.g., more runs with mean and standard deviation) would enhance the reliability of our results, and we plan to incorporate this in future work. We also appreciate your suggestion on exploring more complex real-world domains such as autonomous driving and medical imaging—this is a valuable direction that we are actively considering for follow-up research.
Lastly, we agree that a deeper analysis into the individual contributions of each module would strengthen the theoretical-experimental connection, and we will work toward expanding this aspect in a future extended version of the paper.
Thank you again for your thoughtful comments and for adjusting your score. | Summary: In this work, the authors present Vision Graph Prompting (VGP), a parameter-efficient fine-tuning method for Vision Graph Neural Networks. The core insight is that semantic information in vision graphs primarily resides in the low-rank components of the latent feature space. The authors propose three semantic low-rank prompting methods: SeLo-Graph, SeLo-Edge, and SeLo-Node prompts, which capture global structural patterns and fine-grained semantic dependencies.
Claims And Evidence: Overall, the claims are well-supported and clear.
Methods And Evaluation Criteria: The evaluation criteria, including accuracy and parameter efficiency across diverse datasets, are appropriate and comprehensive.
Theoretical Claims: I roughly checked the theoretical claims in the article. They're mostly based on existing theories and seem reasonable.
Experimental Designs Or Analyses: The ablation study in the paper does not include experiments where only SeLo-Edge or only SeLo-Node is used, nor does it show the results of combining SeLo-Graph with SeLo-Node. This limits the thoroughness of the analysis of each component's individual contribution.
Supplementary Material: I read the supplementary material in its entirety.
Relation To Broader Scientific Literature: This work bridges the gap between Transformer-focused prompting techniques and graph-based vision models, contributing to the development of parameter-efficient fine-tuning methods for ViG and potentially other graph neural network applications.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths:
1. The authors present a parameter-efficient fine-tuning method specifically designed for Vision Graph Neural Networks (ViG), addressing a previously under-explored area.
2. The core insight regarding the low-rank properties of semantic information in vision graphs is well-supported and motivated.
3. Extensive experiments across diverse datasets demonstrate the effectiveness of the proposed method.
Weaknesses:
1. Missing detailed information on the implementation of virtual nodes, such as their initialization method and how the number of virtual nodes is determined.
2. The author does not discuss potential issues that may arise when dealing with very large or complex graph structures. Additionally, the author does not clarify the number of prompts M in the SeLo-Graph Prompt, which may also be crucial to the the effectiveness of the method.
3. The proposed method appears to be instance-level, which may result in a significantly larger number of parameters compared to more generic prompt methods. Though a comparison with the full fine-tuning method is given in the appendix, the paper does not provide a detailed comparison of parameter quantities, which is important for assessing the method's efficiency.
4. It is unclear whether SeLo-Node acts on all nodes in the graph or if it includes the virtual nodes from SeLo-Graph. This ambiguity affects the understanding of the method's application scope and its impact on different parts of the graph.
Other Comments Or Suggestions: N/A.
Questions For Authors: I'm curious, given the authors' finding that the original graph structure's features have a low-rank decomposition property, could they consider adding LoRA to ViG for fine-tuning as a prompt alternative?
Ethical Review Concerns: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
### Q1. Adding LoRA as a Prompt Alternative
We supplement additional experiments comparing with LoRA across ten vision datasets. As the table below shows, LoRA surpasses the traditional visual prompting method (VPT) thanks to its **low-rank adaptation property** and matches GPF-Plus's performance.
However, LoRA performs low-rank adaptation only within the model's parameter space; it **does not leverage graph topology** and cannot refine structural relationships, limiting further gains. Our VGP achieves SOTA performance by jointly optimizing visual semantics and graph structures via semantic low-rank prompting.
|Method|*DTD*|*CUB*|*NABirds*|*Dogs*|*Flowers*|*Food*|*CIFAR*|*CIFAR10*|*GTSRB*|*SVHN*|Average|
|:------:|:------:|:------:|:-------:|:------:|:-------:|:------:|:------:|:-------:|:------:|:------:|:------:|
|VPT|71.4|77.3|76.4|73.1|95.3|81.9|76.3|93.2|79.7|82.4|80.7|
|GPF-Plus|71.0|82.0|77.2|78.2|95.7|82.6|80.9|94.5|90.5|83.1|83.6|
|LoRA|69.7|79.2|77.3|74.0|94.6|83.5|81.4|94.9|90.2|90.8|83.6|
|**VGP**|**74.8**|**87.4**|**80.9**|**81.7**|**98.2**|**89.5**|**89.7**|**98.3**|**98.1**|**96.9**|**89.6**|
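For reference, the LoRA baseline's update rule can be sketched in a few lines of numpy (standard LoRA parameterization with a zero-initialized up-projection; the dimensions and scaling here are illustrative, not the configuration used in the experiments above):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 192, 192, 8, 16

W0 = rng.normal(size=(d_in, d_out))    # frozen pretrained weight
A = 0.01 * rng.normal(size=(d_in, r))  # trainable down-projection
B = np.zeros((r, d_out))               # trainable up-projection (zero-init)

def lora_forward(x):
    # Frozen path plus a rank-r update; since B = 0 at initialization,
    # the adapted layer starts out identical to the pretrained one.
    return x @ W0 + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(4, d_in))
print(np.allclose(lora_forward(x), x @ W0))  # True at initialization
```

The rank-$r$ update acts purely in parameter space, which is why, as noted above, it cannot by itself rewire the graph topology that VGP's prompts operate on.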
### W1. Implementation Details of Virtual Nodes
Thanks for your reminder. We provide additional implementation details of the virtual nodes:
1) The virtual nodes in SeLo-Graph Prompt are initialized using **Kaiming Normal distribution**.
2) The number of virtual nodes $M$ is set to **14**. An ablation over $M$ is shown in the table below. Too few nodes lead to suboptimal prompting due to insufficient guidance, while too many yield no further improvement but incur additional parameter cost.
|Virtual Node Number $M$|0|3|7|14|28|42|
|:---------------------:|:--:|:--:|:--:|:------:|:--:|:--:|
|*CUB*|85.8|86.3|86.9|**87.4**|87.2|86.9|
|*GTSRB*|93.4|95.5|97.3|**98.1**|98.0|97.6|
|*SVHN*|95.1|96.0|96.2|**96.9**|96.7|96.8|
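A minimal sketch of this initialization (the feature dimension, 192, is an assumed value for illustration):

```python
import numpy as np

def kaiming_normal(shape, fan_in, rng):
    """He/Kaiming normal init: zero-mean Gaussian, std = sqrt(2 / fan_in)."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

M, d = 14, 192                 # 14 virtual nodes, assumed feature dim
rng = np.random.default_rng(0)
virtual_nodes = kaiming_normal((M, d), fan_in=d, rng=rng)
print(virtual_nodes.shape)  # (14, 192)
```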
### W2. Dealing with Large or Complex Graph
Our VGP is capable of handling large and complex graph data. In our experiments on the chemistry/biology graph datasets in Table 2, the number of nodes can reach **5,000** with a **non-uniform edge distribution**, far more complex than the **196-node** image graphs. The table below presents the average graph sizes of the chemistry/biology datasets. Even so, our VGP achieves **seven** SOTA results across nine benchmarks with only **0.15M** parameters, verifying its robustness and generalizability.
As for the number of prompts $M$ in the SeLo-Graph Prompt, we follow the same setting as for the vision datasets ($M=14$) without tuning it specifically for the chemistry/biology datasets. Even on these much larger and more complex graphs, our method still outperforms other graph prompting methods under this general hyperparameter setting, verifying its robustness.
|Datasets|*BBBP*|*Tox21*|*ToxCast*|*SIDER*|*ClinTox*|*MUV*|*HIV*|*BACE*|*PPI*|
|:--------:|:----:|:-----:|:-------:|:-----:|:-------:|:---:|:---:|:----:|:------:|
|Graph Size|776|516|583|741|989|828|893|1074|**5139**|
### W3. Parameter Quantities Comparison
We provide a parameter comparison with SOTA methods on the *CUB* dataset, as shown in the table below. Our method achieves high efficiency, requiring only **2.63M** trainable parameters (**5%** of ViG-M’s full fine-tuning at **48.71M**). This is due to our **lightweight low-rank design**, which avoids large parameter matrices while maintaining strong performance.
|Method|ViG-M|VPT|Ins-VP|GPF-Plus|Adapter|DAM-VP|VGP|
|:--------:|:---:|:--:|:----:|:------:|:-----:|:----:|:--:|
|Param.(M)|48.71|1.77|1.83|2.29|3.48|6.24|2.63|
### W4. Whether SeLo-Node Prompt Acts on Virtual Nodes
Yes, the SeLo-Node Prompt acts on all nodes within the graph, including the virtual nodes inserted by the SeLo-Graph Prompt. We will provide a more explicit description of the prompting process in the final version for better clarity, as outlined below.
1) SeLo-Graph Prompt inserts virtual nodes and builds virtual edges, updating the graph structure
2) SeLo-Edge Prompt refines the edge-level semantic interactions via the edges within the updated graph
3) SeLo-Node Prompt intensifies node-level semantic information on each node in the updated graph
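The three-stage order above can be sketched schematically in NumPy. Note this is a heavily simplified illustration: the tensor shapes, the fully-connected virtual edges, and the additive form of the edge/node prompts are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 196, 14, 16          # real nodes, virtual nodes, feature dim (illustrative)
x = rng.normal(size=(n, d))    # node features
adj = (rng.random((n, n)) < 0.05).astype(float)   # random sparse graph

# 1) SeLo-Graph Prompt: insert m virtual nodes and virtual edges,
#    updating the graph structure to (n + m) nodes.
x_v = rng.normal(0.0, np.sqrt(2.0 / d), size=(m, d))   # Kaiming-style init
x = np.vstack([x, x_v])
adj_new = np.zeros((n + m, n + m))
adj_new[:n, :n] = adj
adj_new[:n, n:] = 1.0          # assumed: virtual nodes connect to all real nodes
adj_new[n:, :n] = 1.0

# 2) SeLo-Edge Prompt: refine edge-level interactions on the updated graph
#    (here modeled as a small learnable additive prompt on existing edges).
edge_prompt = rng.normal(0.0, 0.01, size=adj_new.shape)
adj_new = adj_new + edge_prompt * (adj_new > 0)

# 3) SeLo-Node Prompt: intensify node-level semantics on every node,
#    including the virtual ones.
node_prompt = rng.normal(0.0, 0.01, size=(1, d))
x = x + node_prompt

print(x.shape, adj_new.shape)  # (210, 16) (210, 210)
```

The key point the sketch captures is ordering: the node prompt in step 3 is applied after step 1, so it acts on the virtual nodes as well.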
### Each Component's Individual Contribution
We supplement additional ablation experiments on different component combinations, as shown in the table below. While the SeLo-Graph Prompt refines graph structures, the SeLo-Edge and SeLo-Node Prompts enhance low-rank semantics between and within nodes. Each component contributes to performance gains.
|SeLo-Graph|SeLo-Edge|SeLo-Node|*CUB*|*GTSRB*|
|:--------:|:-------:|:-------:|:------:|:------:|
|-|-|-|76.2|77.4|
|√|-|-|81.9|86.9|
|-|√|-|82.3|87.5|
|-|-|√|81.0|86.5|
|√|√|-|85.3|93.0|
|√|-|√|85.5|93.3|
|-|√|√|85.8|93.4|
|√|√|√|**87.4**|**98.1**|
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Since most of my concerns have been addressed, I am inclined to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive feedback and for considering increasing your score. We truly appreciate your thoughtful review and are glad that our rebuttal addressed your concerns. We are committed to improving our work and are grateful for your constructive comments, which helped us strengthen the paper. | null | null | null | null | null | null |
Constant Stepsize Local GD for Logistic Regression: Acceleration by Instability | Accept (poster) | Summary: The paper establishes improved convergence rates for local gradient descent in the context of distributed logistic regression with separable data. This improvement is attained by employing significantly larger step sizes than those typically used for general smooth loss functions.
# Update after rebuttal
In the rebuttal, the authors discussed my concerns regarding the tightness of the bound and the technical challenges in the proofs. Although it is not clear whether the bound is tight with respect to $\gamma$, I am leaning towards acceptance.
Claims And Evidence: Yes. All of the theoretical results are proved in the paper.
Methods And Evaluation Criteria: Yes. The paper is mostly theoretical and contains only basic numerical evaluations.
Theoretical Claims: I checked the correctness of the proofs in the main text.
Experimental Designs Or Analyses: The experiments focus primarily on synthetic and MNIST datasets.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper improves the convergence rates established in (Woodworth et al., 2020b), (Koloskova et al., 2020), and (Crawshaw et al., 2025) for the problem of distributed logistic regression with separable data.
From a technical perspective, much of the methodology builds upon the approach of (Wu et al., 2024a) for gradient descent in the non-distributed setting.
Essential References Not Discussed: To the best of my knowledge, the authors have discussed all relevant related works.
Other Strengths And Weaknesses: ### Strengths
1. The paper is well-written and clearly structured. The authors provide a detailed analysis and offer valuable intuition to aid the reader's understanding.
2. The authors achieve the best-known convergence rates for distributed logistic regression with separable data.
3. The paper demonstrates that the technique introduced by Wu et al. (2024) for obtaining improved bounds with extremely large step sizes is also applicable in the distributed setting.
### Weaknesses
1. From a technical standpoint, the analysis largely builds on the work of Wu et al. (2024) for the non-distributed setting.
2. It remains unclear whether the bound provided by the authors is tight for the specific problem they consider.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In the authors' opinion, are the additional factors $M$ and $1/\gamma$ in the bound, compared to the non-distributed setting, tight?
2. What are the main challenges in the analysis beyond those already addressed in Wu et al. (2024) for non-distributed gradient descent?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our submission. Below we have responded to the comments in your review.
1. **Tightness in terms of $M$ and $\gamma$.** This is an interesting question. For $M$, the current dependence may be tight, but of course we cannot know for sure without a lower bound. Here we offer some speculation about the tightness. Our $M$ dependence arises from the requirement that $F(w_r) \leq O(\gamma/(\eta KM))$ of Lemmas 4.6-4.8, which guarantees that Local GD has entered the stable phase. The factor of $1/M$ is needed in that requirement to ensure that $F_m(w_r)$ is small for every single client $m$, since $F(w) \leq C$ implies that $F_m(w) \leq MC$. Our proof of Theorem A.7 then guarantees stable descent when each client's loss $F_m(w_r)$ is small. For the question of tightness, it comes down to whether the inequality $F_m(w) \leq MC$ is tight. While this inequality may appear pessimistic, it is actually tight in the case that all client losses are close to zero except one, and we have experimentally observed cases like this even in simple settings like $M=2$. In those instances, there is a strong oscillatory behavior, where at each iteration one client loss is close to zero while the other is large. It is possible that such behavior also occurs with larger $M$. If so, then the previous inequality is sometimes tight, and in that case the $M$ dependence may be unavoidable. Based on this preliminary evidence, we guess that the complexity may have some unavoidable dependence on $M$, but again, there is no way to be totally rigorous without providing a lower bound.
For $\gamma$, we do believe that the dependence can be improved, although we guess that it requires some additional analysis which is outside the scope of the current paper. As we pointed out in our submission, the dependence in terms of $\gamma$ is slightly worse than the single-machine case, which suggests that some tightening is possible. However, we believe that it requires more fine-grained knowledge of the trajectory of Local GD. In particular, we may be able to tighten the dependence on $\gamma$ if we know the implicit bias of Local GD. We leave this kind of fine-grained trajectory analysis for future work, and for now we just focus on the convergence rate in terms of the number of communication rounds $R$. If you want to know more technical details about the origin and possible solution to this issue, please see our response to reviewer f231.
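To make the algorithm under discussion concrete, here is a minimal NumPy sketch of constant-stepsize Local GD for logistic regression on a toy linearly separable dataset split across $M = 2$ clients; the dataset and hyperparameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def logistic_loss(w, X, y):
    # F(w) = mean over i of log(1 + exp(-y_i <x_i, w>))
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def logistic_grad(w, X, y):
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))      # sigma(-y <x, w>)
    return -(X.T @ (y * s)) / len(y)

# Two clients whose combined dataset is separable by w* proportional to [1, 1]
clients = [
    (np.array([[2.0, 1.0], [-1.0, -2.0]]), np.array([1.0, -1.0])),
    (np.array([[1.0, 2.0], [-2.0, -1.0]]), np.array([1.0, -1.0])),
]
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])

eta, K, R = 0.5, 5, 200                         # stepsize, local steps, rounds
w = np.zeros(2)
for r in range(R):
    # each client runs K local GD steps from the shared iterate, then average
    local = []
    for X, y in clients:
        w_m = w.copy()
        for _ in range(K):
            w_m -= eta * logistic_grad(w_m, X, y)
        local.append(w_m)
    w = np.mean(local, axis=0)

print(logistic_loss(w, X_all, y_all) < 0.1)  # True
```

With separable data the global loss keeps decaying as the iterate norm grows, consistent with the $O(1/(\eta \gamma^2 K R))$-type rates discussed above; this sketch uses a conservative stepsize rather than the large-stepsize/unstable regime that is the paper's focus.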
2. **Main challenges in the analysis.** The key challenge of the analysis is to bound the time to transition to the stable phase *even under local updates*. If $K=1$, then this can already be performed with the potential function argument of (Wu et al, 2024a). With local updates ($K > 1$), it is not immediately clear whether the same potential function can be used, and if it can be used, whether it decreases at the same rate as in the single-machine case. The key insight to bridge this gap is to decompose the round update $w_{r+1} - w_r$ into the contributions of each individual data point, and to upper and lower bound the contribution of each data point. This allows us to relate gradient potential of Local GD to that of GD, and the argument is executed in Lemma 4.9 (Lemma A.6 in the Appendix). This same decomposition is also the key step to prove Theorem 4.1. Our Section 4.2 also gives an overview of this decomposition and how it is used in the proof for both the stable and unstable phase. | Summary: This paper studies local gradient descent (GD) for logistic regression with separable data in a distributed setting. Building on prior work by [Wu et al., 2024], which showed that a large stepsize improves optimization efficiency in a single-machine setting, this work extends the analysis to multiple machines with multiple local GD steps. Similar to [Wu et al., 2024], the authors demonstrate that a large stepsize benefits the optimization process.
Claims And Evidence: See below.
Methods And Evaluation Criteria: See below.
Theoretical Claims: See below.
Experimental Designs Or Analyses: See below.
Supplementary Material: See below.
Relation To Broader Scientific Literature: See below.
Essential References Not Discussed: See below.
Other Strengths And Weaknesses: See below.
Other Comments Or Suggestions: Overall, I find the paper interesting and well written, but there are a few areas for improvement:
1. **Comparison to the Single-Machine Case:**
When $K=1$ or $M=1$, the problem reduces to the single-machine setting. However, the obtained bound appears worse than that in [Wu et al., 2024] by some factors of $\gamma$. While this issue is briefly mentioned after Corollary 4.3 and in Section 6, it would be valuable to further explore its technical origins. Specifically, identifying which step in the analysis introduces this looseness would provide greater clarity. It seems unlikely that the current bound is tight in terms of $\gamma$.
2. **Exploration of Local Steps ($K>1$) in Theory:**
The discussion on the benefits of local steps is interesting. In the context of logistic regression with separable data, more local steps help local GD enter the stable regime faster, enabling the use of a larger stepsize for a given optimization budget. While this phenomenon is demonstrated in simulations, providing theoretical support would significantly enhance the significance of the paper.
3. **Handling of Heterogeneous Data Distributions:**
The current analysis assumes a uniform margin $\gamma$ across all devices, which simplifies the problem but overlooks data heterogeneity. When performing local steps, local GD is influenced only by the local margin instead of the global margin, which can be much larger than the global margin $\gamma$. A more fine-grained local analysis could reveal additional benefits of local steps and offer new insights into distributed/federated optimization.
I think the paper would be much stronger if the above three issues could be resolved properly. In its current form, I think the paper is on the borderline case. At least the first issue should be addressed during the rebuttal.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. Below we have responded to the points in your review.
1. **Comparison to the single-machine case.** The issue of $\gamma$ dependence stems from the gradient bias $b_r$ in Lemma A.5. Notice that other conditions for entering the stable phase (Lemma A.3, Lemma A.4) only require $F(w_r) \leq O(1/(\eta KM))$, whereas Lemma A.5 requires $F(w_r) \leq O(\gamma/(\eta KM))$. This extra factor of $\gamma$ needed to bound $\lVert b_r \rVert$ creates the worse dependence on $\gamma$ compared with the single-machine case. Note that the gradient bias results from taking local steps before averaging, so it does not appear when $K=1$ or $M=1$. We will also note that while we may not have a tight dependence on $\gamma$ here, it is also not clear that it would be the same as in Wu et al in the case $K,M>1$.
Technically, the requirement $F(w_r) \leq O(\gamma/(\eta KM))$ might be weakened, but only with a more fine-grained trajectory analysis. First, note that the requirement on $F(w_r)$ is used in Equation (114) of Lemma A.5, for the inequality marked $(iv)$. The need for the factor of $\gamma$ arises from $(v)$, where we apply $F(w) \leq \frac{1}{\gamma} \lVert \nabla F(w) \rVert$ (from Lemma B.2). The additional factor of $\gamma$ is needed to cancel the $1/\gamma$ from Lemma B.2. Now, if we had a stronger bound in Lemma B.2 --- say $F(w) \leq \lVert \nabla F(w) \rVert$ --- then we could remove the extra $\gamma$ factor. Unfortunately, the bound $F(w) \leq \lVert \nabla F(w) \rVert$ does not hold for all $w$, but it does hold, for example, when $w = t w_*$, where $t$ is a large scalar. In summary, we might improve the gamma dependence if we knew that Local GD converges near the max-margin solution. Unfortunately, this kind of implicit bias result would require more analysis and is outside the scope of this paper. Even in the single-machine case, the implicit bias of GD for logistic regression is unknown when the learning rate scales linearly in the number of iterations (Wu et al, 2024a). Investigating the implicit bias would require a significant amount of work which we leave as a future direction.
2. **Exploration of local steps in theory.** The question of the benefit of local steps is a fundamental problem in distributed optimization, which we discussed thoroughly in Section 6. We acknowledge that our results do not show an improvement from local steps, though we would like to point out that the same can be said of nearly all results in this line of work (see (Woodworth et al 2020b) and (Patel et al, 2024) for thorough discussions of the literature). Even for a fixed setting like logistic regression, proving the benefit of local steps is nontrivial and is outside the scope of our single paper. We plan to address this fundamental question in follow up works.
3. **Handling of heterogeneous data distributions.** First, we should clarify we make no restrictive assumptions about data heterogeneity. Our analysis handles any heterogeneous dataset that is linearly separable. $\gamma$ is the maximum margin of the combined dataset, but we do not assume that it is the maximum margin of every local dataset.
The question remains whether the complexity can be improved with a fine-grained analysis that considers the local margins instead of just the global one. We believe the answer is no: changes in the local margins alone cannot improve the asymptotic convergence rate, as we explain below.
Theoretically, our analysis shows that once the loss is small, Local GD is essentially GD on the global dataset with some small gradient bias (Lemma A.5 + Theorem A.7). So after this threshold, the convergence rate is determined by properties of the global dataset. The local margins could affect the time it takes to reach this threshold, but after the threshold, the convergence rate is determined by the global margin $\gamma$. So the local margins do not affect the convergence rate as the number of iterations goes to $\infty$.
Experimentally, we designed a dataset to test Local GD while varying the local margins. We use three ways of splitting the global dataset which creates either homogeneous, partially heterogeneous, or totally heterogeneous local margins. The dataset is visualized in Figure 2 of https://anonymous.4open.science/r/25_icml_rebuttal-FEC8/, and the results are shown in Figure 3. The left subplots of Figure 3 show that the losses for each split are slightly different in early iterations, but quickly become nearly identical. The right subplots show that all three splits satisfy $\eta \gamma^2 Kr \cdot F(w_r) \rightarrow 1$ as $r$ increases, so that the asymptotic convergence rate is unaffected by heterogeneity in the local margins. This behavior is consistent across choices of $\eta$ and $K$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Regarding 1, I think it will be useful to include these discussions in the revision so that experts could see where the extra $\gamma$ factor comes from.
Regarding 2, it might be worth expending the related discussions in the paper to better highlight the open issue.
Regarding 3, I was trying to suggest that considering local margin might help improve the phase transition bound, which might lead to some benefit of local step (sorry my wording might not be clear in the first place). But this is very much an open problem. Thank you for the additional simulations.
I will raise my score to 3 to indicate that I am still near the borderline but leaning towards acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We can definitely include the content from points 1 and 2 in the revised edition. Your point 3 is very interesting, and we agree that the local margins may play a role in the transition time. We hope to address this point in future work. | Summary: The authors demonstrate that Local GD for distributed logistic regression converges for any step size $\eta$ > 0 and any communication interval K ≥ 1. Experimental results on both synthetic and real-world data support the theoretical finding that acceleration is possible by permitting nonmonotonic decreases in the objective.
Claims And Evidence: All theorems and corollaries are claimed clearly and provided with proofs.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: The designs of the experiments make sense, but they still lack performance results across different setups.
Supplementary Material: I reviewed the experimental details.
Relation To Broader Scientific Literature: The authors adapt techniques from the analysis of GD with large step sizes for single-machine logistic regression to demonstrate that Local GD for distributed logistic regression converges for any step size $\eta$ > 0 and any communication interval K ≥ 1. Experimental results on both synthetic and real-world data support the theoretical finding that acceleration is possible by permitting nonmonotonic decreases in the objective.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The theoretical proofs are very solid and clear, and the future study mentioned in the paper is interesting.
Other Comments Or Suggestions: Each equation should be labeled with a single number, rather than being numbered on every line.
Questions For Authors: 1. Have you tested more datasets to verify your method?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your efforts in the review process. Below we have responded to your comments about additional experimental setups.
1. **Additional experimental setups**. In the review, you asked "Have you tested more datasets to verify your method?". First, we would like to clarify that the goal of our paper is not to propose a new method that achieves the best possible performance, but rather to develop theory that accurately explains the practical behavior of fundamental machine learning algorithms, i.e. distributed gradient descent for binary classification with the logistic loss. With that in mind, the purpose of the experiments in our main submission is to experimentally verify our theoretical findings, that optimization can indeed be accelerated by choosing a large step size/communication interval.
To complement these results, we have added further experiments with the CIFAR-10 dataset, and we see largely the same behavior as predicted by our theory. The results can be seen in Figure 1 of https://anonymous.4open.science/r/25_icml_rebuttal-FEC8/. For these additional experiments, we used the same evaluation protocol as the experiments of the main submission: Figures 1(a), 1(b) correspond to Figure 1 of the main submission, Figure 1(c) corresponds to Figure 2 of the main submission, and Figure 1(d) corresponds to Figure 3 of the main submission.
These CIFAR-10 experiments further support our theoretical findings. In Figure 1(a) of the linked PDF, larger step sizes/communication intervals lead to faster convergence in the long run, despite the resulting slow/unstable convergence in early iterations. In Figure 1(b), we can see that a larger communication interval $K$ leads to faster convergence when $\eta$ is tuned to $K$. The results in Figure 1(c) are similar to the MNIST results in Figure 3 of the main body: when $\eta K$ is constant, $K=1$ is less stable and slower than other choices of $K$, and all other choices have roughly the same final loss. These results strengthen the evidence that our theoretical findings accurately describe the behavior of Local GD in practice.
Lastly, we added another experiment (in response to reviewer f231) with a synthetic dataset to investigate the effect of heterogeneity among the margins of the local datasets. Please see our response to reviewer f231 for more information on this additional experiment, whose results can also be found in the linked PDF. | null | null | null | null | null | null | null | null |
Memory Efficient Block Coordinate Descent Method for Forward-Only Second-Order Finetuning of LLM Models | Reject | Summary: This paper proposes a memory-efficient optimization method for fine-tuning large language models by integrating a block coordinate descent scheme with Hessian-informed zeroth-order optimization. The authors claim that their method achieves reduced memory overhead while maintaining comparable accuracy to existing techniques, particularly in memory-constrained environments. Experiments are conducted on OPT-1.3B and LLaMA-2-7B.
Claims And Evidence: Weak Theoretical Justification: While the paper includes some theoretical analysis adapted from prior work, it lacks rigorous new theoretical contributions that would convincingly support the efficiency and convergence claims of the proposed method.
Missing Baselines: The comparisons are limited to zeroth-order methods, while omitting strong first-order alternatives like low-memory gradient-based fine-tuning methods. Without these, the significance of the claimed efficiency improvements remains unclear.
Methods And Evaluation Criteria: It makes sense.
Theoretical Claims: There is no theoretical claim in this paper.
Experimental Designs Or Analyses: I check the soundness/validity of all experimental designs or analyses.
Supplementary Material: I review all parts of the supplementary material.
Relation To Broader Scientific Literature: There is no contribution of the paper related to the broader scientific literature. This paper focus on the application aspects.
Essential References Not Discussed: The comparison is primarily focused on zeroth-order methods, ignoring strong first-order baselines like gradient-checkpointed optimizers, low-memory fine-tuning techniques (e.g., LOMO, GaLORE), and better-engineered BCD implementations.
Other Strengths And Weaknesses: **Strengths:**
1. The problem of memory-efficient fine-tuning for LLMs is important, especially for resource-constrained environments.
**Weaknesses:**
1. Limited Novelty: The paper primarily combines existing techniques—HiZOO and BCD—without introducing fundamentally new theoretical insights. The contribution is incremental, as the method is an adaptation rather than a novel algorithmic breakthrough.
2. Unsubstantiated Claims on Memory Efficiency: The claim that the method is a practical, convergence-enhanced alternative to MeZO is not convincingly supported. MeZO is actually more memory-efficient than the proposed method, contradicting the core motivation of the work.
3. Minimal Performance Gains: The performance improvements over MeZO are marginal (70.2 vs. 70.0 in average score), and the convergence rate is nearly identical, as observed in prior evaluations. This raises questions about whether the added complexity of BCD is justified.
4. Weak Theoretical Justification: While the paper adapts theoretical results from HiZOO, it does not provide new theoretical contributions to support the efficiency or convergence guarantees of the proposed approach.
5. Lack of Strong Baselines: The comparison is primarily focused on zeroth-order methods, ignoring strong first-order baselines like gradient-checkpointed optimizers, low-memory fine-tuning techniques (e.g., LOMO, GaLORE), and better-engineered BCD implementations.
6. Limited Scope of Experiments: The evaluation is restricted to two models (OPT-1.3B and LLaMA-2-7B), whereas HiZOO was evaluated on much larger models (up to 66B parameters). This raises concerns about the generalizability of the approach to larger-scale LLMs.
Other Comments Or Suggestions: Please refer to my previous comments.
Questions For Authors: Please refer to my previous questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Dear Reviewer Nphh,**
We sincerely appreciate your feedback. In addressing the comments from all reviewers, we have made efforts to clarify and improve each point raised. Below, we provide a concise summary of the key changes made in response to your concerns and recommendations.
### **Weaknesses:**
1. Technical contribution: At the time of our first submission, no prior work had combined second-order Newton methods, zeroth-order optimization, and block coordinate descent. Through extensive parameter searches and experiments, we demonstrate that activating partial layers in ascending order as the block coordinate descent scheme succeeds and matches MeZO's performance as a second-order-integrated method.
2. Memory Efficiency: We argue that, as a second-order-informed method, there is necessarily a memory-convergence tradeoff, and we minimize its impact. **New experimental results** on OPT-30B demonstrate that our method improves speed and accuracy compared to MeZO, showing breakthroughs in efficiency (refer to our responses to **Reviewers h68x and CqZr**).
| Method | Model/Dataset | Batch Size | Notes | Accuracy | Training Time |
| ------ | --------------- | ---------- | ------------------ | -------- | ------------- |
| B-PDF | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 92.89\% | **7.5 hours** |
| HiZOO | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 90.3\% | 20.8 hours |
| MeZO | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 90.6\% | 13.7 hours |
| B-PDF | OPT-**30B** (SST-2) | 128 | 8×A100 80GB nodes | **93.6\%** | 9.9 hours |
| HiZOO | OPT-66B (SST-2) | 16 | baseline | 93.6\% | - |
| MeZO | OPT-66B (SST-2) | 16 | baseline | 93.6\% | - |
3. Performance Gains: **We can reach 1.83x wall-clock time speedup than MeZO and 2.77x speedup than HiZOO baseline, training opt-30B on SST2.** As shown in our results on larger models (30B) as above, we achieve better accuracy and computational efficiency than MeZO. We will explore block selection strategies and perform thorough parameter searches in future work to support better baselines.
4. Theoretical Justification: For the BCD bound, we proposed a layer selection strategy based on adjusting bandit probabilities via block gradient norms in response to **Reviewer K578**. As a starting point, this may shed light on an efficient block selection method.
5. First-Order Baselines: In response to **Reviewer h68x**, we show experiments comparing memory requirements and explain why first-order methods (even with BCD) are less efficient than zeroth-order methods. We attach the table below. Note that in low-end scenarios, first-order methods remain unsuitable for comparison with zeroth-order methods because they still require significant memory overhead. Even if optimized for consumer-level 24GB GPUs, their memory usage approaches hardware limits, easily leading to unstable training (OOM errors) and small usable batch sizes (1, 2, 4). See our baseline results in the response to **h68x**, whereas our zeroth-order method can run far below memory limits (<8GB) or support much larger batch sizes (128).
| Method | Batch Size | Memory Consumption | Notes |
| ------------------- | ---------- | ------------------ | -------------------- |
| GaLore-AdamW (FP32) | 1 (OOM) | OOM | Failed due to OOM errors even at minimal batch size. |
| GaLore-AdamW (BF16) | 8 | 42,132 MiB | Computational overhead. Small batch size. |
| LOMO | 4 | 39,910 MiB | Stable operation with small batch size. |
| MeZO | 128 | 35,636 MiB | Highly memory-efficient implementation supports large batch sizes. |
| BAdam | 2 | 22,411 MiB | Uses paged optimizer. Causing instability in low-end scenarios. |
Note:
These results illustrate significant differences in memory requirements between first-order and zeroth-order methods.
First-order approaches, even though including memory-efficient or BCD variants, still incur memory overhead from storing states such as activations. When optimizing the first layer in a first-order BCD framework, peak memory usage remains high due to the need to retain activations for gradient computation in subsequent layers.
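To illustrate why zeroth-order methods avoid this activation storage, here is a minimal MeZO-style SPSA sketch on a toy quadratic objective. The trick the sketch demonstrates is real (perturb in place, keep only a seed and two scalar losses, regenerate the perturbation to apply the update); the quadratic loss and all hyperparameters are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def loss(theta):                     # stand-in objective: ||theta||^2
    return float(theta @ theta)

theta = 2.0 * np.ones(5)
lr, eps = 0.01, 1e-3
for step in range(500):
    seed = step                      # store the seed instead of the vector z
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    theta += eps * z                 # in-place perturb for L(theta + eps z)
    loss_plus = loss(theta)
    theta -= 2 * eps * z             # reuse z for L(theta - eps z)
    loss_minus = loss(theta)
    theta += eps * z                 # restore theta
    proj = (loss_plus - loss_minus) / (2 * eps)   # directional-derivative estimate
    z = np.random.default_rng(seed).standard_normal(theta.shape)  # regenerate from seed
    theta -= lr * proj * z           # forward-only update, no activations stored

print(loss(theta) < 0.04)  # True
```

Because both evaluations of `loss` are plain forward passes, peak memory is the parameters plus one perturbation direction (or just the seed), which is the contrast with first-order BCD drawn above.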
6. Scope of Experiments: After extending the parameter scale, our SST-2 result reaches 92.8\%, close to the accuracy reported by HiZOO for 66B models (93.6\%). We added an additional experiment with 8 GPUs (batch size 16 per GPU), achieving 93.6\% accuracy with the fastest speed. This demonstrates the generalizability of our approach to larger-scale LLMs. We attach this result to the "Memory Efficiency" table above.
Thank you for your help in improving our work. We look forward to your feedback. | Summary: The authors propose a new zero-order optimizer for fine-tuning the pre-trained model to the downstream task that incorporates second-order information. The main issue addressed in the study is the infeasible memory consumption of classical optimizers for the fine-tuning process. The main idea is to use a block coordinate descent framework and update only a part of layers' parameters in every iteration. This approach makes the low-memory custom devices appropriate for fine-tuning the LLMs. LLaMa2-7b and OPT-1.3B models are considered for GLUE and SuperGLUE downstream tasks in experiments. The proposed approach leads to a reduction in memory footprint while preserving the same final accuracy.
Claims And Evidence: Most claims in the manuscript are supported by numerical or theoretical evidence. However, I would like to see training loss and test loss for the considered tasks and the corresponding runtime in Figure 3. In addition, the stability analysis for the proposed approach is ignored. I would suggest showing the dependence of the convergence on the batch size used in the gradient estimation from the zero-order information. Using batchsize=1 provides an extremely noisy estimate, I guess.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense and align with the problem stated in Section 3.
Theoretical Claims: The manuscript does not provide any rigorous proof, only the result of modification for the HiZOO theorem. A convergence rate like $O(1/ \sqrt{T})$ looks reasonable, although I did not check the proof line-by-line.
Experimental Designs Or Analyses: The design of the presented numerical experiments is sound and valid for the considered task. The selection of the competitors is also well-motivated and meaningful.
Supplementary Material: I have reviewed the supplementary materials and found them helpful in reproducing the presented experiments. The proof of the main convergence theorem is brief, so I expect it to be correct.
Relation To Broader Scientific Literature: The key contribution of the submitted manuscript is the combination of the zero-order optimization procedure, second-order preconditioner, and block coordinate descent framework. The authors find a practically important setup where such a combination becomes crucial for the overall performance of the fine-tuning process.
Essential References Not Discussed: Missing references to alternative zero-order methods developed for training neural networks: ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization, Xiaoxing Wang, Xiaohan Qin, Xiaokang Yang, Junchi Yan, NeurIPS 2024
Other Strengths And Weaknesses: I see three main weaknesses in this submission.
1. The experiments consider only two medium/small-scale models. So, the robustness of the proposed method in fine-tuning larger models remains unclear. I am sure the memory footprint will be smaller, but will the accuracy be preserved as in the non-block strategy?
2. While many blocking strategies are discussed, the single simplest blocking strategy is tested. I am sure many natural heuristic blocking strategies require the same amount of memory and could provide better results. For example, one can update all even layers and then all odd layers or something similar.
3. This study completely ignores low-precision formats. At the same time, quantization is the natural competitor in reducing the memory footprint during fine-tuning. The larger models in BF16 or even lower-bit formats could be discussed and tested on the mentioned GPUs. The synergy of such a memory-efficient optimizer and low-bit formats could provide more opportunities for fine-tuning huge models in user-level devices.
Other Comments Or Suggestions: No other comments or suggestions.
Questions For Authors: 1. What quantities are presented in Table 3?
2. Why do authors exclude BAdam from the competitors? Its performance could highlight the impact of the inexactness in the gradient estimation with the approximate second-order information.
3. How much the quality degrades if one uses the first-order approximation of the gradient based on $L(\theta)$ and $L(\theta+ \delta)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Dear Reviewer CqZr,**
Thank you for your thoughtful questions and insights.
We appreciate your feedback and will address it as follows:
### **Claims and Evidences:**
Training and test loss for Figure 3 (training loss is smoothed as in the figure):
| Method | Training Loss | Test Loss |
|---------|--------------:|----------:|
| HiZOO | 0.1973 | 0.2410 |
| B-PDF | 0.2040 | 0.2266 |
| MeZO | 0.2308 | 0.2410 |
Stability analysis:
To address your request, we tested under extreme conditions with batch size = 1 (without extensive tuning), observing an accuracy drop: SST-2 acc = 0.8337 (baseline 0.917).
Since using batch size = 1 introduces significant variance, in practice, we have avoided this by employing batch size $\ge$ 16, which also easily fits within memory limits for zeroth-order methods (see also our response to **Reviewer h68x**, Experimental Design).
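As a hedged illustration of this variance argument (a toy quadratic loss with synthetic data, not the actual LLM training setup), one can check numerically that the two-point zeroth-order estimate becomes noisier as the batch size shrinks:

```python
import numpy as np

# Toy sketch: per-sample loss f(theta; x) = (theta - x)^2 with x ~ N(0, 1),
# so the population gradient at theta is 2 * theta. We compare the variance
# of the two-point SPSA-style estimate across batch sizes.
rng = np.random.default_rng(0)
theta, mu = 1.0, 1e-3

def zo_grad(batch_size, trials=4000):
    """Two-point estimate (f(theta+mu*z) - f(theta-mu*z)) / (2*mu) * z."""
    ests = []
    for _ in range(trials):
        x = rng.standard_normal(batch_size)      # minibatch of data
        z = rng.standard_normal()                # random direction (1-D)
        lp = np.mean((theta + mu * z - x) ** 2)  # loss at theta + mu z
        lm = np.mean((theta - mu * z - x) ** 2)  # loss at theta - mu z
        ests.append((lp - lm) / (2 * mu) * z)
    return np.array(ests)

var_bs1 = zo_grad(1).var()
var_bs16 = zo_grad(16).var()
print(var_bs1, var_bs16)  # variance shrinks noticeably as batch size grows
```

Both estimates remain unbiased (mean near the true gradient 2), but the batch-size-1 estimate carries markedly more variance, consistent with the observed accuracy drop.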
Theoretical Claims: We direct you to the randomized BCD framework discussed in our response to **Reviewer K578**.
### **Missing Reference:**
We appreciate your suggestion to include ReLIZO (Wang et al., NeurIPS 2024).
We note that this work improves computational efficiency by modeling the gradient estimation as a QCLP problem and applying a query-reuse strategy in zeroth-order optimization, achieving impressive results across tasks (notably, 93.4\% accuracy on SST-2).
We will cite it as a representative example of computationally efficient zeroth-order techniques.
### **Weaknesses:**
1. As detailed in our response to Reviewer h68x, we **expanded our experiments** by training OPT-30B on the SST-2 task, using 2×A100 nodes.
B-PDF achieves 92.89\% accuracy in 7.5 hours, outperforming HiZOO (90.3\%, 20.8 hours) and MeZO (90.6\%, 13.7 hours). This result validates the scalability and efficiency of our method when scaling up.
| Method | Model/Dataset | Batch Size | Hardware | Accuracy | Training Time |
| ------ | --------------- | ---------- | ------------------ | -------- | ------------- |
| B-PDF | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 92.89\% | 7.5 hours |
| HiZOO | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 90.3\% | 20.8 hours |
| MeZO | OPT-30B (SST-2) | 16 | 2×A100 80GB nodes | 90.6\% | 13.7 hours |
2. We tested some additional strategies: we evaluated the following block selection strategies on OPT-1.3B, SST-2 (we note that hyperparameters are not fully searched in these experiments, so there is room for improvement):
| Strategy | Accuracy |
|-------------------------------------|----------|
| Ascending Order (ours default) | 91.9% |
| Gauss-Southwell-Diagonal | 90.71% |
| Random Reshuffling | 90.71% |
| An Odd-Even Staged Strategy | 91.40% |
*Note: In practice, we found that computing gradient norms introduces heavy computational overhead, making it less efficient than the natural ascending order or random sampling. This matches the results in BAdam [4]. As for the odd-even staged strategy, we update blocks [1, 3, 5, 7] first, followed by [2, 4, 6, 8], and so on, integrating our active block sets with your suggestion.*
> [4] Luo, Qi, Hengxu Yu and Xiao Li. “BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models.” *Neural Information Processing Systems* (2024).
We will add this analysis to the appendix and extend it in future work. Currently, we find that the ascending order remains optimal for balancing performance and simplicity.
3. Regarding the application prospects of combining low-precision formats with memory-efficient optimizers, we fully agree with your point of view. In our current work, we conduct experiments based on MeZO framework with FP16 precision, and in the future we could explore combining our method and lower precision training. In addition, we observed challenges in utilizing Hessian information matrices under low-precision conditions during our experiments, and have applied clamping methods to stabilize numerical computations. We are following advancements in recent work and advanced GPU architectures supporting FP8 quantization, which we believe that quantization methods will significantly enhance training efficiency in the future.
### **Questions:**
1: Quantities in Table 3: The quantities represent test accuracy.
2: Due to low-end system instability on our consumer-grade hardware (RTX 4090 on an old and low-bandwidth motherboard), BAdam caused overheating and crashes.
For comparison, we have tested AdamW-HF (HuggingFace version) accuracy on SST-2, OPT-1.3B: 93.70\% (with BS=8, LR=5e-6, peak allocated memory = 10450 MB, 10k training steps).
3: We conducted an experiment under a MeZO setting, which shows that the accuracy slightly improves (91.7\% to 91.86\%) when using a single-sided perturbation.
(MODEL=opt-1.3b TASK=SST2 BS=16 MODE=ft LR=3e-7 EPS=1e-3 STEPS=20000, running time is 2h57min on an A6000 node)
We sincerely express our gratitude for your guidance in improving our work. | Summary: This paper proposes B-PDF, a memory efficient bcd-newton optimization method for LLM fine-tuning, especially for low-end devices, which integrates block coordinate descent with a zeroth-order Newton-method optimizer. This approach reduces memory overhead by updating parameters and diagonal Hessian information in a layer-wise BCD scheme. Experiments show that the proposed method reduces the memory-intensive bottleneck of the second-order optimization while maintaining performance.
Claims And Evidence: 1. memory efficiency: experiments show that B-PDF reduces memory cost comparing to hizoo on opt-1.3B and llama2-7B.
2. convergence rate: Figure 3 shows that B-PDF converges better than mezo, and matches hizoo’s accuracy with faster wall-clock speed.
3. practical utility: B-PDF can fine-tune llama2-7b on an RTX A6000, a practical case for relatively low-resource deployment.
Methods And Evaluation Criteria: 1. Methods: BCD with Hessian-informed ZO is well motivated to reduce the memory cost and boost convergence, addressing the memory-convergence trade-off via layer-wise updates.
2. Evaluation: GLUE benchmarks are standard, and the low-end settings using small batch sizes seems practical for consumer-level gpus in real-world scenarios.
Theoretical Claims: The convergence proof aligns with the paper’s focus.
Experimental Designs Or Analyses: 1. Measurements on opt-1.3B/llama2-7B are valid.
2. Baselines: missing comparisons with optimizers such as GaLore, which are relevant for memory efficiency and first-order optimization.
Supplementary Material: I reviewed the supplementary materials in details, including implementation, hyperparameter search, visualizations and convergence analysis. They support the paper findings robustly. However, a brief comparison with other optimizers would be better.
Relation To Broader Scientific Literature: Findings: the work rethinks the memory overhead and shows the memory-convergence tradeoff in previous algorithms (hizoo).
Ideas: the work bridges gaps in zeroth-order optimization (mezo, hizoo) and block-wise training (badam, lisa).
Results: the work enables full-parameter fine-tuning with memory efficiency comparable to mezo and lora-like PEFT methods. It demonstrates that second-order information brings faster convergence without sacrificing memory efficiency, and shows practical 1.3B~7B model adaptation on relatively low-performance GPUs (4090 and A6000).
Essential References Not Discussed: These methods could be included as baselines to further prove performance:
1. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
2. zo-adam: Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Other Strengths And Weaknesses: Strengths:
1. Novelty: the problem and background are well motivated. The integration of BCD with Hessian-informed zeroth-order optimization is a sensible contribution to memory-efficient LLM fine-tuning, which avoids backpropagation and reduces storage.
2. Technical impact: the proposed method is creative, resolving a key bottleneck in second-order methods; it can certainly benefit real-world use cases.
3. The paper is well-written and easy to follow.
Weaknesses:
1. Baseline: while comparisons with hizoo and mezo are thorough, including bcd-mezo and recent memory-efficient methods (e.g. galore) would better demonstrate B-PDF’s performance.
2. Scalability: are there test results on models >7B?
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Some efficient baselines like GaLore, bcd-mezo and ZO-Adam are not included. Could B-PDF save more memory than them?
2. How does block selection impact performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Dear Reviewer h68x,**
Thanks for your feedback and valuable suggestions.
Below, we address each of your concerns to improve our manuscript.
### **Experimental Designs and Question 1:**
Baseline Comparisons with GaLore and Other Methods:
We have carefully considered your comments and those from Reviewers CqZr and Nphh, regarding comparisons with first-order memory-efficient baselines such as BAdam, GaLore, and LOMO.
Below, **new experiments** demonstrate the high memory consumption of these methods, conducted on a single RTX A6000 (48GB) with the LLaMA-3-8B model loaded in FP32 format for the SST-2 task (table shown under point 5, First-Order Baselines, in our response to Reviewer **Nphh**):
- GaLore-AdamW: Encountered out-of-memory (OOM) errors even with a batch size of 1. When loaded in BF16 format, it could run with a maximum batch size of 8, consuming 42,132 MiB of memory. However, the initial training steps were significantly slower due to its additional computational overhead.
- LOMO: Achieved a maximum batch size of 4 with 39,910 MiB memory usage.
- MeZO: Supported a maximum batch size of 128 with 35,636 MiB memory consumption.
- BAdam: Required 23.5 GB of memory with a batch size of 2, as reported in the paper [1]. We note that while the official implementation utilizes a paged optimizer to reduce memory pressure, we observed high I/O costs between CPU and GPU memory in our test environment (consumer-grade motherboard and RAM), leading to system instability. This limitation is less pronounced in data center environments with optimized cooling and hardware support, but it highlights challenges for low-end systems.
> [1] Luo, Qi, Hengxu Yu and Xiao Li. “BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models.” Neural Information Processing Systems (2024).
These results illustrate significant differences in memory requirements between first-order and zeroth-order methods.
First-order approaches, including memory-efficient or BCD variants, still incur memory overhead from storing activations for backpropagation.
For example, when optimizing the first layer in a first-order BCD framework, peak memory usage remains high due to the need to retain activations for gradient computation in subsequent layers.
Thus, our original experiments focused on zeroth-order baselines. We will attach the comparison in the supplementary material to enable a more comprehensive understanding of the memory efficiency of first-order methods.
Regarding **ZO-Adam**, while it demonstrates higher accuracy than MeZO in certain scenarios, its memory footprint is substantially larger. As reported by Zhang et al. [2], fine-tuning the full OPT-13B model on the MultiRC dataset with a batch size of 4 requires 64 GB for ZO-SGD and 158 GB for ZO-Adam, exceeding the capacity of typical low-end devices.
> [2] Zhang, Yihua, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu and Tianlong Chen. “Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.” ArXiv abs/2402.11592 (2024).
To ensure comprehensive analysis, we will include detailed comparisons with these methods in the supplementary material. This addition will clarify the trade-offs between memory efficiency and computational requirements across different optimization paradigms.
### **Weaknesses:**
Scalability to Models $>$ 7B Parameters:
While our initial experiments focused on OPT-1.3B and LLaMA-2-7B due to hardware limitations, now we have conducted **an additional experiment** to address scalability concerns and further validate our method. For OPT-30B fine-tuned on the SST-2 dataset with a batch size of 16 (2xA100 80GB nodes),
**B-PDF** achieved 92.89\% accuracy in **7.5 hours** , significantly outperforming HiZOO (90.3\% accuracy [3], 20.8 hours) and MeZO (90.6\% accuracy [3], 13.7 hours).
The improved accuracy and reduced runtime highlight the efficiency of our proposed method in both low-end and high-end environments. While zeroth-order methods traditionally trade accuracy for memory savings, and second-order integration brings significant computing overhead, our method mitigates this trade-off, enabling competitive performance even on larger models.
> [3] Zhao, Yanjun, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian and Ivor Wai-Hung Tsang. “Second-Order Fine-Tuning without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer.” ArXiv abs/2402.15173 (2024).
### **Question 2:**
Impact of Block Selection Strategies:
Due to length limits, please refer to our answer to **Reviewer CqZr** for a detailed discussion on the impact of block selection strategies. We will include this analysis in the supplementary material.
We appreciate your guidance in helping us strengthen this work. | Summary: The paper proposes a zero-order method for fine-tuning large language models (LLMs), utilizing a block coordinate descent approach to reduce memory costs.
In this approach, blocks are defined as layers of the LLM, which are updated individually while the remaining layers are frozen.
To improve convergence speed, the authors incorporate Hessian information, which is estimated in a forward-only manner.
The method's competitiveness is demonstrated by comparing it against other zero-order methods, namely in terms of memory usage and time efficiency.
Claims And Evidence: The authors are not entirely honest when claiming an "improved block coordinate descent scheme" as their contribution.
In practice, Column 1 Lines 282-292, they use a standard BCD method with an ascending order rule.
Methods And Evaluation Criteria: The evaluation is fair
- the authors assess the timing gains and memory savings relative to other zero-order methods
- the accuracy of the method is also being evaluated
Theoretical Claims: Equation (2) does not correctly transcribe the update from [1].
Specifically, there is an issue with the second term on the right-hand side, which causes it to diverge from the standard EMA formulation.
In EMA, the first parameter corresponds to an accumulation of past updates; however, the meaning of the absolute value $| \Sigma_t |$ is undefined in the equation.
There are concerns with Algorithm 1
- There is confusion in the indices $i, s, t$ which makes the algorithm unclear.
- The projected gradient steps should be computed with the Hessian approximation before the update, but this is not done.
- For the weight updates, the same random direction z used to compute the perturbation should be used for the updates, but there is no hint about that in the algorithm.
- The algorithm loops over $\theta_i$ in $\theta_b$, which gives the impression that a batch of blocks is being updated, but this is not clarified.
- The EMA step in line 17 does not correspond to an EMA process as described in the literature.
The convergence proof in Appendix D is invalid. [1], on which the authors base their analysis, provides a proof for the whole-update method, while the authors apply this analysis to BCD.
The proof sketch for full updates and coordinate updates are different, and BCD can lead to cyclic behavior, preventing convergence (see [3], Example 3.1).
Additionally, the role of the blocks is not addressed in the proof, which make it irrelevant to the specific case of BCD.
---
.. [1] Zhao, Yanjun, et al. "Second-order fine-tuning without pain for llms: A hessian informed zeroth-order optimizer." arXiv preprint arXiv:2402.15173 (2024).
.. [2] Tarvainen, Antti, and Harri Valpola. "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results." Advances in neural information processing systems 30 (2017).
.. [3] Wright, Stephen J. "Coordinate descent algorithms." Mathematical programming 151.1 (2015): 3-34.
Experimental Designs Or Analyses: The experimental design is sound.
In addition the authors provide baseline comparison with first-order methods.
Supplementary Material: I have reviewed the experimental setup, namely appendix A, B, C.
I glanced the proof in the appendix D.
Relation To Broader Scientific Literature: From the experimental results, the method provides only minor improvements compared to MeZO, as seen in Table 3.
Additionally, the claim of improving convergence speed is modest, with a maximum improvement of only 7%, according to Table 2.
Essential References Not Discussed: None
Other Strengths And Weaknesses: - Fix line numbers in Algorithm 1
- Avoid bold statement such as the one in Column 2 Line 295 "The consistent use of random vectors and selective parameter perturbation further enhance the method’s memory efficiency."
- in figure 2, color code in the bar chart is confusing especially between "parameters" and "gradients" bins
Other Comments Or Suggestions: None
Questions For Authors: No further questions
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Dear Reviewer K578,**
We sincerely appreciate your thoughtful review and apologize for the confusion caused by the typos and unclear claims. We are grateful for your careful reading and will address each point thoroughly.
### **Theoretical Claims:**
For Equation 2 and the EMA, the first term $\Sigma_{t+1}^{-1}$ represents the stored Hessian.
We will add a hat to indicate that $\hat\Sigma_t$ is the estimated Hessian, as described in line 311 of Algorithm 1, and clarify that the absolute value sign ensures non-negativity. We will remove the term diag() in line 312 since it is already in diagonal form. This revision will address your concern about the unaligned EMA process. (point 5)
Regarding the confusion in Algorithm 1:
- point 1, indices: we will explain them more carefully as below.
- point 4, the index $\pi_b$ means that in total we have $b$ active blocks per step.
We find that selecting more than one block for efficiency works well, and this could be a hyperparameter to search over. In practice, using 4 blocks works effectively.
- The parameter $\mu$ represents the in-place perturbation scale, and we perform forward passes for $\theta$, $\theta+\mu\Sigma^{\frac{1}{2}}z$, and $\theta-\mu\Sigma^{\frac{1}{2}}z$ by stepping $\mu_i$ through the values 0, +$\mu$, and -2$\mu$.
- point 3, The index $s$ refers to the seed, and we use it to sample the same random direction $z$ for both perturbation and updates, which is a key idea in MeZO.
- point 2, we admit that our implementation follows HiZOO's design in computing the Hessian first. In our practice, block switching is frequent, and this implementation sometimes converges faster since the Hessian is initialized as an identity matrix. We will try to ablate its impact in revisions.
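The seed trick mentioned in the points above (regenerating the same random direction $z$ from a stored seed for both the perturbation and the update, so $z$ itself is never stored) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation:

```python
import numpy as np

# Minimal sketch of the MeZO-style seed trick on a toy quadratic loss.
theta = np.array([1.0, -2.0, 0.5])
loss = lambda p: float(np.sum(p ** 2))   # toy loss
mu, lr = 1e-3, 1e-2

seed = 42                                 # store only the seed, not z
z = np.random.default_rng(seed).standard_normal(theta.shape)
l_plus = loss(theta + mu * z)             # forward pass at theta + mu z
l_minus = loss(theta - mu * z)            # forward pass at theta - mu z
proj_grad = (l_plus - l_minus) / (2 * mu) # scalar directional derivative

# Later: reset the generator with the same seed to recover the same z.
z_again = np.random.default_rng(seed).standard_normal(theta.shape)
assert np.array_equal(z, z_again)         # identical direction, zero storage
theta -= lr * proj_grad * z_again         # SGD-style update along z
```

In the actual method the perturbations would be applied in place (stepping by 0, +$\mu$, and -2$\mu$ as described above) to avoid copies; this sketch uses temporaries only for readability.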
For the Convergence Proof for BCD, as appropriately noted, the process different from full-parameter optimization can raise theoretical concerns.
To address this, we provide the analysis to a randomized BCD framework by incorporating a probabilistic block selection mechanism.
Below is the proof sketch:
For brevity, consider the objective function $\mathcal{L}(\theta)$, where the parameters $\theta = [\theta_1, \dots, \theta_D]$ are partitioned into $D$ blocks. In each iteration, a block $i \in \\{1,\dots,D\\}$ is randomly selected with probability $p_i$ to be updated.
At each iteration, the gradient block $ \hat{\nabla} \mathcal{L}\_{t,i} $ is retained with probability $ p_{t,i} $ and dropped otherwise. This sparsification is formalized as: $\hat{\nabla} \mathcal{L}(\theta_t) = \sum_{i=1}^D \frac{\hat{\nabla} \mathcal{L}\_{t,i}}{p_{t,i}} Z_{t,i},$ where $ Z_{t,i} \sim \text{Bernoulli}(p_{t,i}) $. The sparsified gradient $\hat{\nabla} \mathcal{L}(\theta_t)$ is unbiased.
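A quick numerical sanity check of this unbiasedness claim (an illustration only, with made-up per-block values for $g$ and $p$):

```python
import numpy as np

# Monte Carlo check that keeping block i with probability p_i and rescaling
# by 1/p_i leaves the expected gradient unchanged.
rng = np.random.default_rng(0)
g = np.array([3.0, -1.0, 0.5, 2.0])   # illustrative per-block gradient values
p = np.array([0.9, 0.3, 0.2, 0.6])    # per-block keep probabilities

Z = rng.random((20000, p.size)) < p   # Z[t, i] ~ Bernoulli(p_i)
samples = (g / p) * Z                 # sparsified, rescaled gradients
print(samples.mean(axis=0))           # close to g in every coordinate
```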
Under the following assumptions:
- block L-smoothness $||\nabla_i \mathcal{L}(\theta) - \nabla_i \mathcal{L}(\theta')|| \leq L_i ||\theta_i - \theta_i'||$,
- bounded gradient estimation variance $\mathbb{E}[|| \hat{\nabla}\_i \mathcal{L}(\theta) - {\nabla}\_i \mathcal{L}(\theta)||^2]\le \sigma_i^2$,
- Hessian preconditioning constraints $ 0 < \beta_\ell \leq \lambda_{\min}(\Sigma_i) \leq \lambda_{\max}(\Sigma_i) \leq \beta_u $,
- Bregman divergence bound: $B(\theta, \theta') \leq R^2 $ for all $ \theta, \theta'$.
Defining $\Sigma = \text{diag}(p_1 \Sigma_1, \dots, p_D \Sigma_D),$
we can finally prove that
$\mathbb{E} \sum_{t=1}^T \sum_{i=1}^D\left[||\nabla \mathcal{L}(\theta_d^t)||^2_{\Sigma}\right] \le \frac{2 R^2}{\eta}+16\eta L_\infty (\text{tr}(\Sigma_\infty)+\beta_u) \sum_{t=1}^T \sum_{i=1}^D \left( \frac{||\nabla_i \mathcal{L}(\theta^t)||^2_{\Sigma_i}}{p_{i,t}}+\sigma_i^2 \right)$
Fortunately, to the best of our knowledge, an existing approach can solve for $p_{t,i} = \frac{||\hat{\nabla} \mathcal{L}\_{t,i}||}{\sum_{d=1}^D ||\hat{\nabla} \mathcal{L}\_{t,d}||}$ with a bandit trick, and it does not affect the $\mathcal{O}(1/\sqrt{T})$ bound [1].
> [1] Communication-efficient Distributed Learning for Large Batch Optimization. Rui Liu, Barzan Mozafari, PMLR 162:13925-13946, 2022.
Thus, we extend the convergence analysis to a block coordinate descent setting, demonstrating the applicability of our method. This is also consistent with our observation that, while BCD may fail to converge in some non-convex settings, our experiments demonstrate stable convergence in practice. Due to response length limits, we can discuss this further in the final response period, and we welcome your suggestions.
### **Broader Scientific Literature:**
Regarding performance: Initial experiments were limited by hardware. However, with additional computational resources, we have benchmarked on OPT-30B to validate scalability, which achieves better accuracy with less training time for larger models. Please, refer to our response to **Reviewers h68x and Nphh**. We leave hyperparameter tuning and BCD schemes for future work.
Thank you for your constructive feedback on strengthening both our theory and practical validation.
---
Rebuttal Comment 1.1:
Comment: Upon reading the answer, here is my insight:
The authors do not fully address my theoretical concerns, making the response difficult to assess.
- The theoretical results of the paper remain questionable. Providing a complete proof separately would have been better
- Similarly, a revised version of the algorithm would have been better
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer K578,
Thanks for your patience. With the expanded space, we now provide a more comprehensive analysis of the randomized version of our BCD algorithm.
Considering all necessary assumptions from [1], [2].
Suppose the model has $D$ blocks.
Denote $\theta_{t,[i]} = [0,\dots,\theta_{t,i},\dots,0]$ and $g_{t,[i]} = [0,\dots,g_{t,i},\dots,0]$.
At the $t$-th iteration, the $i$-th block in BCD has parameter $\theta_{t,[i]}$.
The bracket notation $[\,]$ also applies to the diagonal Hessian $\Sigma_t$ and random perturbation $u_t$.
Each block has a probability $p_i$ of being selected. At the $t$-th iteration, the sampling indicator $Z_{t,i}$ is drawn from a Bernoulli distribution with parameter $p_i$. We use the gradient estimate:
\begin{equation}
\tilde{g}\_{t,[i]} = \frac{Z_{t,i}}{p_i} \cdot \frac{L\left(\theta_t + \sum_{d=1}^D \mu\Sigma_{t,[d]}^{1/2}u_{[d]}\right) - L\left(\theta_t - \sum_{d=1}^D \mu\Sigma_{t,[d]}^{1/2}u_{[d]}\right)}{2\mu} \cdot \Sigma_{t,[i]}^{1/2}u_{[i]},
\end{equation}
where the term $\frac{Z_{t,i}}{p_i}$ ensures unbiased estimation.
From Taylor expansion:
\begin{equation}
\Delta L = 2\mu\nabla^\top L(\theta_t)\sum_{d=1}^D \Sigma_{t,[d]}^{1/2}u_{[d]} + \mathcal{O}(\mu^2).
\end{equation}
The expectation yields:
\begin{aligned}
\tilde{g}\_{t,[i]} &= \frac{Z_{t,i}}{p_i}\sum_{d=1}^D \Sigma_{t,[i]}^{1/2}u_{[i]}u_{[d]}^\top\Sigma_{t,[d]}^{1/2}\nabla L(\theta_t) + \mathcal{O}(\mu), \\\\
\mathbb{E}[\tilde{g}\_{t,[i]}] &= \Sigma_{t,[i]}\nabla L(\theta_t) + \mathcal{O}(\mu).
\end{aligned}
The update rule is given by:
\begin{equation}
\theta_{t+1,[i]} = \theta_{t,[i]} - \eta_t \cdot \tilde{g}\_{t,[i]}.
\end{equation}
Under the block Lipschitz assumption:
\begin{aligned}
L(\theta_{t+1}) - L(\theta_t)
&\leq \sum_{i=1}^D \left\langle \nabla L(\theta_t), \theta_{t+1,[i]} - \theta_{t,[i]} \right\rangle + \frac{L_\infty}{2}\|\theta_{t+1} - \theta_t\|^2 \nonumber \\\\
&= -\eta_t \sum_{i=1}^D \left\langle \nabla L(\theta_t), \tilde{g}\_{t,[i]} \right\rangle + \frac{L_\infty\eta_t^2}{2}\|\tilde{g}\_t\|^2.
\end{aligned}
Take expectation and according to HiZOO proof [2]:
\begin{aligned}
\mathbb{E}[L(\theta_{t+1})] - \mathbb{E}[L(\theta_t)]
&\leq -\eta_t||\nabla L(\theta_t)||\_{\Sigma_t}^2 + \eta_t\mathcal{O}(\mu||\nabla L(\theta_t)||) + \frac{L_\infty\eta_t^2}{2}\mathbb{E}||\tilde{g}\_t||^2 \\\\
&\leq -\frac{\eta_t}{2}||\nabla L(\theta_t)||\_{\Sigma_t}^2 + \frac{L_\infty\eta_t^2}{2}\mathbb{E}||\tilde{g}\_t||^2.
\end{aligned}
Summing over $T$ iterations:
\begin{aligned}
\sum_{t=1}^T \frac{\eta_t}{2}||\nabla L(\theta_t)||\_{\Sigma_t}^2
&\leq \mathbb{E}[L(\theta_1)] - \mathbb{E}[L(\theta^*)] + \frac{L_\infty\eta_t^2}{2}\sum_{t=1}^T\sum_{i=1}^D \frac{||g_{t,[i]}||^2}{p_i}.
\end{aligned}
The optimal probabilities minimizing the second term are:
\begin{equation}
p_{t,i} = \frac{||g_{t,[i]}||}{\sum_{d=1}^D ||g_{t,[d]}||}, \quad \forall i.
\end{equation}
JointSpar[1] solves this via bandit optimization and achieves $\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$ convergence rate.
> [1] Communication-efficient Distributed Learning for Large Batch Optimization. Rui Liu, Barzan Mozafari, PMLR 162:13925-13946, 2022.
> [2] Zhao, Yanjun, et al. "Second-order fine-tuning without pain for llms: A hessian informed zeroth-order optimizer." arXiv preprint arXiv:2402.15173 (2024).
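To make the update rule concrete, here is a toy sketch on a diagonal quadratic (our own simplification with illustrative hyperparameters and a crude diagonal Hessian proxy, not the authors' implementation) combining block selection, the Hessian-preconditioned two-point estimate, EMA smoothing, and the seed trick. The clipping mirrors the clamping for numerical stability mentioned in the responses:

```python
import numpy as np

rng = np.random.default_rng(0)
D, block = 4, 2                        # 4 parameters, blocks of size 2
theta = rng.standard_normal(D)
A = np.array([1.0, 4.0, 0.5, 2.0])     # L(theta) = 0.5 * sum(A * theta^2)
loss = lambda p: float(0.5 * np.sum(A * p ** 2))
mu, eta, alpha = 1e-3, 0.05, 0.1
init_loss = loss(theta)

Sigma_inv = np.ones(D)                 # diagonal inverse-Hessian state
for t in range(600):
    b = ((t // 20) * block) % D        # ascending-order block selection
    idx = slice(b, b + block)
    seed = int(rng.integers(1 << 30))  # store only the seed, not z
    z = np.zeros(D)
    z[idx] = np.random.default_rng(seed).standard_normal(block)
    S_half = Sigma_inv ** -0.5         # Sigma^{1/2} preconditioner
    l0 = loss(theta)
    lp = loss(theta + mu * S_half * z)
    lm = loss(theta - mu * S_half * z)
    proj_grad = (lp - lm) / (2 * mu)   # scalar directional derivative
    # Crude diagonal Hessian proxy from the three losses, EMA-smoothed
    # and clipped for numerical stability:
    hess = np.abs(lp + lm - 2 * l0) / mu ** 2 * z ** 2
    Sigma_inv[idx] = np.clip(
        (1 - alpha) * Sigma_inv[idx] + alpha * hess[idx], 1e-2, 1e2)
    z2 = np.zeros(D)                   # regenerate z from the stored seed
    z2[idx] = np.random.default_rng(seed).standard_normal(block)
    theta[idx] -= eta * proj_grad * (S_half * z2)[idx]
print(init_loss, loss(theta))          # loss decreases over the run
```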
An improved algorithm version is as below:
**Algorithm 1** Training Pipeline of the Proposed B-PDF
**Input**: Parameters $\theta \in \mathbb{R}^d$, loss function $\mathcal{L}$, perturbation scale $\mu$, learning rate $\eta$, smooth scale $\alpha$
**for** $t = 1, \dots, T$ **do**
**1.** Select block $\theta_{\pi_b} \in \theta$ according to the BCD rule
**2.** **if** a new block is selected **then**
$\Sigma_1 \gets \mathbf{I}\_{|\theta_{\pi_b}|}$ *// Diagonal Hessian initialization*
**3.** Freeze other layers
**4.** Sample a random seed $s$ *// First-time sampling*
**5.** **for** $\mu_i = 0, +\mu, -2\mu$ **do**
**6.** **for** $\theta_i \in \theta_{\pi_b}$ **do**
Sample $z\_i \sim \mathcal{N}\_s(0, \mathbf{I}\_{|\theta_i|})$
$\theta_i \gets \theta_i + \mu_i \Sigma^{1/2}\_{t,i} z\_i$ *// In-place perturbation*
**end for**
$\ell_{\texttt{sign}(\mu_i)} \gets \mathcal{L}(\theta)$ *// forward pass (3x across iterations)*
**end for**
**7.** Compute projected gradient:
projected\_grad $\gets (\ell_{+} - \ell_{-}) \Sigma^{1/2}\_t / 2\mu$
**8.** Update Hessian:
$\hat{\Sigma}\_{t+1} \gets \frac{\Delta \ell}{2 \mu^2} \Sigma^{-1/2}\_{t} z z^\top \Sigma^{-1/2}\_{t}$
**9.** Smooth covariance:
$\Sigma_{t+1}^{-1} \gets (1 - \alpha_t) \Sigma_{t}^{-1} + \alpha_t \left| \hat{\Sigma}\_{t+1} \right|$
**10.** Reset random number generator with seed $s$
**11.** **for** $\theta_i \in \theta_{\pi_b}$ **do**
Sample $z\_i \sim \mathcal{N}\_s(0, \mathbf{I}_{|\theta_i|})$
$\theta_i \gets \theta_i - \eta_t \cdot$ projected\_grad $\cdot z_i$
**12.** **end for**
**end for** | null | null | null | null | null | null |
Analytical Lyapunov Function Discovery: An RL-based Generative Approach | Accept (poster) | Summary: This paper proposes to use transformers and reinforcement learning (RL) to construct local analytical Lyapunov functions for high-dimensional non-polynomial systems. The proposed framework consists of three components: 1) a symbolic transformer to generate candidate Lyapunov functions; 2) a numerical verifier for finding counterexamples; and 3) a risk-seeking policy gradient algorithm to optimize the symbolic transformer based on the Lyapunov risk. Furthermore, to improve the training efficiency, a genetic programming component is also included in the training framework. Experimental results on dynamical systems with up to 10 dimensions show that the proposed algorithm successfully found the analytical Lyapunov function in most cases, and even discovered new Lyapunov functions for the 2-bus lossy system.
## update after rebuttal
The rebuttal addresses my concerns, and I think the paper can benefit from the additional results provided in the rebuttal. I have also read other reviewers' comments and the authors' responses. The comments on the scalability of the SOS methods raised by Reviewer uz6L were also my concerns on the additional experimental results, but I think the authors' reply is fair. Overall, I think this is a good paper that provides valid insights on learning-based Lyapunov function discovery. I have increased my score.
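As background for the numerical verifier component mentioned in the summary, a minimal counterexample search of that kind might look like the following sketch (a toy illustration with an assumed stable system $\dot{x} = -x$, not the paper's implementation):

```python
import numpy as np

# Sample states and flag any x where the Lyapunov conditions fail:
# V(x) > 0 and dV/dt = grad(V)(x) . f(x) < 0 away from the origin.
rng = np.random.default_rng(0)
f = lambda x: -x                          # assumed stable dynamics x' = -x
V = lambda x: np.sum(x ** 2)              # candidate V(x) = ||x||^2
gradV = lambda x: 2 * x

counterexamples = []
for _ in range(1000):
    x = rng.uniform(-1, 1, size=2)
    if np.allclose(x, 0):
        continue                          # conditions only hold for x != 0
    if V(x) <= 0 or gradV(x) @ f(x) >= 0: # violated Lyapunov condition
        counterexamples.append(x)
print(len(counterexamples))               # no violations: candidate passes
```

In the paper's framework, any counterexamples found this way would feed back into the Lyapunov risk that trains the symbolic transformer.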
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand.
Theoretical Claims: I checked the correctness of any proofs for theoretical claims. They are standard results from the literature, and there are no issues.
Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs or analyses. Below are some potential improvements that could be made.
1. The baselines of the experiments can be improved. The proposed framework specifically targets **analytical** Lyapunov functions, but the introduced baselines are for finding **neural** Lyapunov functions. It would be great if more baselines for analytical Lyapunov functions were introduced, e.g., sum-of-squares methods (at least for the polynomial systems).
2. For the experiment to demonstrate the scalability of the proposed framework on the 10-D system, it would be more interesting to see if the ground truth Lyapunov function is not a simple form like $\sum_{i=1}^{10}x_i^2$ since this is generally the first analytical form that a human would try. Potentially, the authors could change the variables a bit to design a more difficult ground truth Lyapunov function, which would introduce more impressive results.
Supplementary Material: I reviewed the appendix.
Relation To Broader Scientific Literature: The key contributions of the paper are strongly related to the construction of Lyapunov functions using machine learning techniques. This is crucial for verifying the stability of dynamical systems and developing stable controllers for dynamical systems.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Other Strengths:**
1. The writing of the paper is clear and easy to follow.
2. All parts of the proposed framework are ablated in the experiments.
3. The newly discovered Lyapunov function part is impressive.
Other Comments Or Suggestions: I suggest the authors use `\citep` and `\citet` carefully. The current citation format is messy.
Questions For Authors: 1. In Figure 2, where does the $x_1$ in the bottom-right green box (the input to the encoder) come from?
2. Can the authors discuss the completeness of the proposed framework? What if the algorithm is applied to an unstable system?
3. Is it possible to add more experiments considering my comments in **Experimental Designs Or Analyses**?
I will be willing to increase the score if the questions are addressed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank reviewer s9Wv for the valuable time and constructive feedback. We provide point-by-point responses below.
**Q1: It would be great if more baselines for analytical Lyapunov functions were introduced, e.g., sum-of-squares methods (at least for the polynomial systems).**
Thanks for the valuable suggestion. We have added a comparison to sum-of-squares methods as a baseline for both polynomial and non-polynomial systems, as detailed below.
Polynomial System: For a given $n$-dimensional polynomial (poly) dynamics $f(x)$ and a degree $2d$, to check the global asymptotic stability of $f(x)$, the SOS method aims to find a poly $V(x)$ of degree $2d$, such that 1) $V(x)-\sum_{i=1}^{n}\sum_{j=1}^{d}\epsilon_{ij}x_i^{2j}$ is SOS, where $\sum_{j=1}^{d}\epsilon_{ij}>\gamma,\forall i=1,...,n$ with $\gamma>0$, and $\epsilon_{ij}\geq0$ for all $i$ and $j$, and 2) $-\frac{\partial V}{\partial x}f(x)$ is SOS [1].
For local stability analysis, consider a ball of radius $r$ centered at the origin, $\mathcal{B}_r(0)$, which can be represented by the semialgebraic set
$S=\\{x:g(x,r)\geq0\\}$, where $g(x,r)=r-\sum_{i=1}^{n}x_i^2$.
We require that the stability condition holds in $S$. Retaining the same optimization objective and constraints on $V(x)$ as before, a modified constraint on Lie derivative is imposed: $-\frac{\partial V}{\partial x}f(x)-s(x)g(x,r)$ is SOS for some SOS poly $s(x)$. If such an $s(x)$ exists, we can certify local stability.
We developed our code based on the *findlyap* function from SOSTOOLS (MATLAB) and issue-16 of SOSTOOLS' official GitHub repo to examine the SOS method on the poly systems in Appendix F. The table below summarizes the results: SOSTOOLS identifies valid Lyapunov functions for the systems in Appendix F.1, F.2, and F.3 (up to 3-D). For example, $V(x)=0.86x_1^2-0.60x_1x_2-6.63\times10^{-7}x_1x_3+0.90x_2^2+4.49\times10^{-7}x_2x_3+0.79x_3^2$ certifies the global stability of the 3-D system in Appendix F.2.
## Table: Performance of SOS Method on Poly Systems
| **System**| **Degree 2d** | **Runtime** | **Region**|
|---|---|--|---|
| App. F.1| $2$|$0.697$s|$\mathcal{B}_1(0)$|
| App. F.2| $2$|$0.832$s|Global|
| App. F.3 - I|$2$|$0.497$s|Global|
| App. F.3 - II|$4$|$2.509$s|Global|
However, for the higher-dimensional systems considered in Appendix F.4, F.5, \& F.6, SOSTOOLS fails to certify global stability - as expected, since these systems are inherently not globally stable. Furthermore, it is unable to verify local stability due to computational limitations arising from the complexity of the $s(x)g(x,r)$ term in the Lie derivative constraint, which grows quadratically with system dimension. For the system in Appendix F.4, we have $g(x,1)=1-\sum_{i=1}^6x_i^2$ containing 7 terms, and a degree-2 SOS poly $s(x)$ has at most 36 terms. Thus, the constraint on the Lie derivative would have hundreds of terms, which is unsolvable by SOSTOOLS.
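As a quick numerical sanity check on the quadratic candidate quoted above for the 3-D system, one can verify positive definiteness of its Gram matrix (this checks only $V(x)>0$, not the Lie-derivative condition; the matrix below is assembled from the quoted coefficients, with off-diagonal entries equal to half the cross-term coefficients):

```python
import numpy as np

# V(x) = x^T P x for the quoted degree-2 candidate.
P = np.array([
    [ 0.86,      -0.30,      -3.315e-7],
    [-0.30,       0.90,       2.245e-7],
    [-3.315e-7,   2.245e-7,   0.79    ],
])

# All eigenvalues of the symmetric matrix P positive => V is positive
# definite on all of R^3.
min_eig = np.linalg.eigvalsh(P).min()
```

The near-zero off-diagonal entries also show why the $x_3$ coordinate is effectively decoupled in the returned certificate.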
Please refer to Response to Reviewer uz6L (Q1) for SOS method on non-poly systems.
**Q2: It would be more interesting to see if the ground truth Lyapunov function is not a simple form like $\sum_{i=1}^nx_i^2$.**
Consider the synthetic dynamics adapted from Appendix F.4 \& G.2 with interactions between two subsystems:
$$
\dot{x}_1=- x_1+0.5x_2-0.1x_5^2,
$$
$$
\dot{x}_2=-0.5x_1-x_2+0.1x_8,
$$
$$
\dot{x}_3=-x_3+0.5x_4-0.1x_1^2,
$$
$$
\dot{x}_4=-0.5x_3-x_4,
$$
$$
\dot{x}_5=-x_5+0.5x_6,
$$
$$
\dot{x}_6=-0.5x_5-x_6+0.1x_2^2,
$$
$$
\dot{x}_7=x_8,
$$
$$
\dot{x}_8=-\sin(x_7)\cos(x_7)-x_8-\sin(x_9)\cos(x_9)-0.1x_2,
$$
$$
\dot{x}_9=x_8-x_9.
$$
To properly handle the trigonometric terms in $\dot{x}_8$, the Lyapunov function for these dynamics cannot be of a simple form such as $\sum_{i=1}^{n}x_i^2$ and must include trigonometric terms. Setting the state space $\mathcal{D}=\\{x\in\mathbb{R}^9| |x_i|\leq1.5,\forall i=1,\cdots,9\\}$, our method successfully identifies a valid Lyapunov function $V=\sum_{i=1}^6x_i^2+\sin(x_7)^2+x_8^2-\cos(x_9)+1$, which passes formal verification following the settings in Section 5. This example and the power system examples in Appendix G.3 \& G.5 demonstrate our method's capability to recover Lyapunov functions with complex structures.
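As a lightweight numerical cross-check (random sampling, not a formal proof, using the dynamics and $V$ exactly as stated above), one can confirm $V>0$ and $L_fV<0$ at random points of $\mathcal{D}$:

```python
import numpy as np

def f(x):
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    return np.array([
        -x1 + 0.5*x2 - 0.1*x5**2,
        -0.5*x1 - x2 + 0.1*x8,
        -x3 + 0.5*x4 - 0.1*x1**2,
        -0.5*x3 - x4,
        -x5 + 0.5*x6,
        -0.5*x5 - x6 + 0.1*x2**2,
        x8,
        -np.sin(x7)*np.cos(x7) - x8 - np.sin(x9)*np.cos(x9) - 0.1*x2,
        x8 - x9,
    ])

def V(x):
    return np.sum(x[:6]**2) + np.sin(x[6])**2 + x[7]**2 - np.cos(x[8]) + 1.0

def gradV(x):
    g = 2.0 * x.copy()                      # d/dx_i of x_i^2 (i <= 6) and x_8^2
    g[6] = 2.0*np.sin(x[6])*np.cos(x[6])    # d/dx_7 of sin(x_7)^2
    g[8] = np.sin(x[8])                     # d/dx_9 of -cos(x_9)
    return g

rng = np.random.default_rng(0)
violations = 0
for _ in range(5000):
    x = rng.uniform(-1.5, 1.5, 9)
    if np.linalg.norm(x) < 1e-6:
        continue
    if not (V(x) > 0.0 and gradV(x) @ f(x) < 0.0):
        violations += 1
```

A sampling check like this of course complements rather than replaces the SMT-based formal verification described in Section 5.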
**Q3: Messy citation format.**
We thank the reviewer for the helpful suggestion. We will solve the format issue in the final version.
**Q4: In Figure 2, where does the $x_1$ in the bottom-right green box come from?**
Consider the binary tree in the top-right corner, when generating the last token $x_2$, its parent is '+', and its sibling is $x_1$. As a result, the $x_1$ shows up in the bottom-right green box.
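To make the parent/sibling context concrete (with an assumed, minimal token convention for illustration, not the paper's exact implementation), here is how (parent, sibling) for the next token can be read off a partially generated prefix sequence:

```python
ARITY = {'+': 2, '*': 2, 'sin': 1, 'cos': 1}  # leaves (x1, x2, ...) have arity 0

def next_token_context(prefix):
    """Return (parent, sibling) for the NEXT token to be generated,
    given the prefix-order tokens produced so far."""
    stack = []  # open nodes: [token, filled_children, arity]
    for tok in prefix:
        if stack:
            stack[-1][1].append(tok)           # tok becomes a child of the top node
        if ARITY.get(tok, 0) > 0:
            stack.append([tok, [], ARITY[tok]])
        while stack and len(stack[-1][1]) == stack[-1][2]:
            stack.pop()                        # subtree complete, close it
    if not stack:
        return None, None                      # expression already complete
    parent, filled, _ = stack[-1]
    return parent, filled[0] if filled else None

# Generating x1 + x2: after tokens ['+', 'x1'], the next token (x2) has
# parent '+' and sibling 'x1', matching the green box in Figure 2.
ctx = next_token_context(['+', 'x1'])
```

For a composite sibling subtree, only its root token is reported, e.g. after `['+', '*', 'x1', 'x1']` the context is `('+', '*')`.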
**Q5: Can the authors discuss the completeness of the proposed framework? What if the algorithm is applied to an unstable system?**
Please refer to response to reviewer uz6L (Q1) for the completeness of proposed method and reviewer ve2p (Q3) for the analysis on unstable systems.
[1] A. Papachristodoulou, and S. Prajna. A tutorial on sum of squares techniques for systems analysis. In Proceedings of the IEEE American Control Conference, 2005.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! The rebuttal addresses my concerns, and I think the paper can benefit from the additional results provided in the rebuttal. Please include them in the final submission. I have also read other reviewers' comments and the authors' responses. The comments on the scalability of the SOS methods raised by Reviewer uz6L were also my concerns on the additional experimental results, but I think the authors' reply is fair. Overall, I think this is a good paper that provides valid insights on learning-based Lyapunov function discovery. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful review and positive feedback! We are glad that the additional results addressed your concerns and improved the paper. We will definitely include these results in the final paper. Thanks again for your time and helpful suggestions. | Summary: 1. Similarly to other lines of work, the authors train a new model for a single nonlinear dynamical system.
2. The key innovation lies in the symbolic formulation of the problem: instead of training a fixed model M (which would serve as the Lyapunov candidate) as in previous lines of work, they work with a symbolic transformer.
3. They employ an RL loop to discover local Lyapunov candidates. A good component introduced by the authors is the SHGO algorithm.
4. The most practically significant results are that (a) the approach is not constrained by dimension, and (b) it discovered new local Lyapunov functions.
Claims And Evidence: The new framework can effectively discover valid analytical Lyapunov functions. The authors support the claim by considering different dynamical systems for which a solution was not previously known.
Table 1 provides data on runtime, verification time, discovered Lyapunov functions, stability type (local or global asymptotic stability), and success rates for various systems.
Methods And Evaluation Criteria: Adding the symbolic component was really missing from previous line of work where they parametrized Lyapunov function as a non-linear neural network.
Theoretical Claims: There is no particular theoretical claim made by authors.
Experimental Designs Or Analyses: Authors compared the new framework with standard baselines (ANLC and FOSSIL 2.0).
Supplementary Material: N/A
Relation To Broader Scientific Literature: The current scientific literature either relies on a parametrized model (Lyapunov candidate) and uses backpropagation + falsification to find a local Lyapunov function, or trains a general model and asks the model to guess a global Lyapunov candidate.
The key contributions are the symbolic parametrization and the use of RL to successfully train the model.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Given the symbolic answer provided by the model, can this help to enlarge the stability area given you are not overfitting on a particular locality domain?
2. It is not clear how general the approach can be: due to the nature of the symbolic representation, the candidate is forced to be the same function over the entire domain. Can this be problematic?
3. Did you find some systems where the algorithm cannot find a Lyapunov candidate?
4. To get a better sense of the proposed improvement, would it be possible to sample random dynamical systems (similar to Alfarano et al) and see how many you can recover compared to Alfarano, ANLC and FOSSIL 2.0?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer ve2p for the valuable time and constructive feedback. We provide point-by-point responses below.
**Q1: Given the symbolic answer provided by the model, can this help to enlarge the stability area given you are not overfitting on a particular locality domain?**
Thanks for the insightful question.
Firstly, our approach implicitly promotes generalization beyond the local sample region. Since the symbolic transformer directly operates on analytical expressions of the dynamics rather than finite samples from the state space, the resulting Lyapunov functions naturally generalize beyond the sampled region. Empirically, as demonstrated by the 6-D polynomial system (Appendix F.4), our method successfully identified Lyapunov functions valid across expanded state-space domains. Additionally, the Lyapunov function obtained for the 6-D quadrotor (Appendix G.4) is globally valid.
Furthermore, to explicitly enlarge the stability region, a promising future direction is to characterize the region of attraction (ROA) of the learned symbolic Lyapunov function and incorporate it into the reward function of risk-seeking policy gradient. Related efforts to enlarge the ROA in learning-based Lyapunov function search include Chang et al. (2019), Wu et al. (2023), and Yang et al. (2024). The corresponding references can be found in the reference section of the original manuscript.
**Q2: It's not clear how much general can the approach be: due to the nature of symbolic representation, this forces the candidate to be always the same function in the entire domain. Can this be problematic?**
As mentioned in Section 3.1, Line 156 of our manuscript, "Without loss of generality, we assume the origin to be the equilibrium point." For a general nonlinear system with an equilibrium not at the origin, or with more than one equilibrium, our approach applies in the following way.
Suppose $\dot{x}=f(x),x\in\mathbb{R}^n$ has two equilibrium points, $x_1^*,x_2^*\in\mathbb{R}^n,f(x_1^*)=f(x_2^*)=0$. For each equilibrium, define a change of variables. First, for equilibrium point $x_1^*$, define $\tilde{x}_1=x-x_1^*$. The dynamics in the transformed coordinate, $\dot{\tilde{x}}_1=f(\tilde{x}_1+x_1^*):=f_1({\tilde{x}}_1)$, has an equilibrium at the origin. We can apply the proposed method to find a local Lyapunov function for $f_1$ near the origin, which corresponds to a local Lyapunov function for $f(x)$ near the equilibrium point $x_1^*$. A similar procedure can be applied to $x_2^*$ to obtain a local Lyapunov function for $f(x)$ near the equilibrium point $x_2^*$.
In summary, our method applies to general nonlinear systems. Users first compute the equilibrium points of the original nonlinear system. For each equilibrium, they define the change of variables and transform the system dynamics to place the equilibrium point at the origin. Then, they apply the proposed approach to find a local Lyapunov function for each transformed system separately, which corresponds to a local Lyapunov function near the respective equilibrium. We will include the above discussion in the final version.
We hope the above explanation clarifies this question. Should there be any additional question, we would be happy to address them in the discussion period.
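A minimal sketch of the change-of-variables recipe, using a hypothetical 1-D system (not from the paper) with its equilibrium at $x^*=1$:

```python
import numpy as np

# Hypothetical scalar dynamics with equilibrium x* = 1 (not at the origin).
def f(x):
    return -(x - 1.0)**3

x_star = 1.0

# Change of variables x_t = x - x*: the shifted dynamics f1 has its
# equilibrium at the origin, as the method assumes.
def f1(x_t):
    return f(x_t + x_star)

# V(x_t) = x_t^2 is a local Lyapunov function for f1 near the origin:
# V > 0 away from 0 and V_dot = 2*x_t*f1(x_t) = -2*x_t**4 < 0.
ok = (abs(f1(0.0)) < 1e-12 and
      all(xt**2 > 0 and 2*xt*f1(xt) < 0
          for xt in np.linspace(-0.5, 0.5, 100) if abs(xt) > 1e-9))
```

Mapping $V$ back through $x_t = x - x^*$ gives the local Lyapunov function $(x-1)^2$ for the original system around its equilibrium.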
**Q3: Did you find some systems where the algorithm cannot find a Lyapunov candidate?**
Thanks for the valuable question. Our framework will fail to return a valid Lyapunov function if the dynamical system is unstable. The framework may also encounter difficulties on highly complex dynamics. For example, the success rate for high-dimensional systems is not consistently 100\%, indicating occasional failures within a predefined number of training epochs under some random seeds. Importantly, such failures do not imply system instability. As shown in Table 1, one can successfully find valid Lyapunov functions from other random seeds (with different network initializations). In comparison, for an unstable inverted pendulum without control, using the same experimental settings as for the simple pendulum (Appendix G.1), our framework fails to recover an expression satisfying the Lyapunov conditions over 200 epochs of training under different initializations.
**Q4: To get a better sense of the proposed improvement, would it be possible to sample random dynamical systems (similar to Alfarano et al) and see how many you can recover compared to Alfarano, ANLC and FOSSIL 2.0?**
Thanks for the constructive suggestion. To complete the comparison, we contacted the authors of Alfarano et al. (2024), who conducted the evaluation of their pre-trained model on our test systems. Due to constraints related to global stability and the dimensionality of their training data, their model can only successfully find Lyapunov functions for systems in Appendix F.1, F.2, F.3, \& G.1. These systems have dimensions lower than 6 and the found Lyapunov functions offer global stability guarantees. We will include these results in the experiment section of the final paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for these answers! For Q2, I was referring to the fact that you are forcing the candidate Lyapunov function representation to be defined in R^n, so you are excluding other possible candidates in this way.
---
Reply to Comment 1.1.1:
Comment: We really appreciate reviewer ve2p for the followup clarification. We are not sure if we correctly understand the original question "Due to the nature of symbolic representation, this forces the candidate to be always the same function in the entire domain. Can this be problematic?" and the followup clarification "For Q2, I was referring to the fact that you are forcing the candidate Lyapunov function representation to be defined in $\mathbb{R}^n$, so you are excluding other possible candidates in this way". Thus, we tried to provide a response based on our two interpretations of the question. If our response does not fully address your question, please let us know and we are happy to provide further clarification.
Our first interpretation of the question is that the reviewer may be concerned that the Lyapunov function form is constrained to be the same across the whole domain, which may be restrictive for complex nonlinear systems with multiple equilibrium points. That is why, in the original response, we provided a clarification about how to apply our method to systems with multiple equilibrium points, and the recipe for identifying *different Lyapunov functions* around the different equilibrium points. Thus, our method is not limited to the same Lyapunov function for the entire domain, but can be used for identifying different local Lyapunov functions around the different equilibrium points.
However, if that is not what the reviewer is asking about, our second interpretation of the question is: for a dynamical system $\dot{x} = f(x), x \in \mathbb{R}^n$, should the Lyapunov function candidate i) be defined on $\mathbb{R}^n$, or ii) can it be defined on some $\mathbb{R}^m, m < n$, or iii) can it be defined on some $\mathbb{R}^m, m > n$?
Based on the definition of Lyapunov function (Nonlinear Systems, Khalil, 2002)[Theorem 4.1], for a dynamical system in $\mathbb{R}^n$, a Lyapunov function is a continuous differentiable function defined as $V: D \to \mathbb{R}$, where $D \subseteq \mathbb{R}^n$. If $D = \mathbb{R}^n$, $V$ is a global Lyapunov function. If $D$ is a subset of $\mathbb{R}^n$, i.e. $D \subset \mathbb{R}^n$, then $V$ is a local Lyapunov function, which only certifies the stability within the region $D$. Thus, case i) and case ii) are possible, and case iii) is not possible.
Next, we would like to clarify that our method can represent Lyapunov functions in both case i) and case ii). Note that in our method, the region $D$ is specified by the user and is provided as an input to the transformer-based reinforcement learning pipeline. $D$ can be either the entire state space $\mathbb{R}^n$, i.e. $D = \mathbb{R}^n$ (case i)), or a subset of $\mathbb{R}^n$, i.e. $D \subset \mathbb{R}^n$ (case ii)), depending on the task. Furthermore, in our symbolic library, the different dimensions of the state variable are included as different tokens, e.g., $\{x_1, ..., x_n\}$, together with the operation tokens. Thus, the Lyapunov function candidates produced by our transformer model may consist of all of $x_1, ..., x_n$ or only a subset of them.
However, we do want to note that for Lyapunov asymptotic stability, it is required that $V$ is a positive definite function with $V(0) = 0$ and $V(x) > 0, \forall x \in D \backslash \\{0\\}$, and that the Lie derivative is negative definite: $L_f V(0) = 0, L_f V(x) < 0, \forall x \in D \backslash \\{0\\}$. Therefore, when $D$ is a set in $\mathbb{R}^n$ (not in a lower-dimensional subspace), the Lyapunov function should consist of all of $x_1, ..., x_n$. Otherwise, it will not meet the positive definiteness requirement and will require methods beyond Lyapunov stability theory (such as LaSalle's invariance principle) - which is beyond the scope of this work.
Claims And Evidence: The authors claimed to address the challenge: "Can neural networks effectively discover valid analytical Lyapunov functions directly from complex system dynamics?"
It is partially supported by the experimental results, which show an improvement in scalability compared to ANLC and FOSSIL. However, all the benchmarks are addressable by classical SDP-based Lyapunov function synthesis techniques, e.g., using [1] to convert ODEs with elementary functions to polynomial ODEs and using SOSTools with Sedumi to address them. Classical SDP-based techniques can typically handle systems up to about 10 dimensions; e.g., in the demos provided by SOSTools, sosdemo5 is an 8-dimensional example using SDP, though it is not for Lyapunov stability analysis. Thus, the given experiments cannot fully support the claim.
Similarly, the authors claimed in the introduction that "despite the progress of SOS methods, several theoretical results (Ahmadi et al., 2011; Ahmadi, 2012; Ahmadi & El Khadir, 2018) on asymptotic systems may not agree with a polynomial Lyapunov function of any degree. In addition, SOS methods suffer numerical sensitivity issues in practice." No experiment is given to demonstrate that the proposed technique outperforms SDP/SOS on sensitivity issues. In addition, the authors did not provide a proof of completeness of their approach. Thus, it is not fair to single out the incompleteness of the SDP/SOS techniques.
Finally, except for the classical SDP/SOS techniques, there are existing learning-based techniques, e.g., [2], that can address harder systems (hybrid systems) with up to 23 dimensions to generate analytical (Lyapunov-like) barrier functions. Considering the fundamental similarity between the Lyapunov function and barrier function, it is necessary to compare with these works.
[1] Liu, Jiang, et al. "Abstraction of elementary hybrid systems by variable transformation." International Symposium on Formal Methods. Cham: Springer International Publishing, 2015.
[2] Zhao, H., Liu, B., Dehbi, L., Xie, H., Yang, Z., & Qian, H. (2024). Polynomial Neural Barrier Certificate Synthesis of Hybrid Systems via Counterexample Guidance. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 43(11), 3756-3767.
Methods And Evaluation Criteria: Overall, the proposed technique follows the paradigm of data-driven or counter-example guided abstraction refinement (CEGAR) techniques.
However, there is a concern regarding Section 4.2. Section 4.2 introduces an intermediate soft verification -- I refer to it as soft verification as it does not provide hard guarantees and an additional SMT-based verification step is needed. It is not clear why this step is necessary. In existing approaches, no soft verification is used during the training, for instance, Chang et al.(2019), and hard verification is conducted when the training converges. Any specific reason to introduce this step into the framework?
Theoretical Claims: This paper does not involve theoretical claims.
Experimental Designs Or Analyses: As mentioned in the Claims And Evidence, it is expected to see the direct comparison with SOS/SDP methods over the given benchmarks. It is also expected to see the experiment on higher dimensional systems (>10), which is beyond the capacity of SOS/SDP methods. It helps demonstrate the advantages of the proposed technique.
Supplementary Material: The authors provided codes for two examples, which are runnable.
Relation To Broader Scientific Literature: None.
Essential References Not Discussed: There is a large amount of work [2] on the analytical neural barrier function learning that needs to be cited, considering the relation between barrier function and Lyapunov function. In [2], they handled a 13D continuous system and a 23D hybrid system, which is a significant result compared to the examples considered in this work.
Other Strengths And Weaknesses: The idea of using a symbolic transformer is quite interesting, and a deeper analysis and discussion of this part would be welcome, e.g., convergence and (statistical) completeness.
Other Comments Or Suggestions: As mentioned in Other Strengths And Weaknesses, a deeper analysis and discussion of the symbolic transformer would be welcome, e.g., convergence and (statistical) completeness. While the whole framework follows the classical paradigm, it is suggested to focus mostly on the symbolic transformer in this work.
Questions For Authors: See previous sections.
Thanks for the additional effort. I have gone through the additional response. My concern is fairly addressed, with some reservations about the local robustness. I have updated my rate. I suggest the authors further compare the latest SOS-based techniques on local robustness in the later version.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1 and Q2: Justify the claimed advantage over SOS techniques. Sensitivity issues of SOS. Completeness of their approach.**
Thanks for this valuable advice. We tested SOS using SOSTOOLS on polynomial (poly) and non-polynomial (non-poly) systems. Below is a summary of the setup and results.
Polynomial systems: SOS techniques successfully identify Lyapunov functions for poly systems up to 3-D (Appendix F.1, F.2, \& F.3) but fail to retrieve valid local Lyapunov functions for the systems of dimension $\geq6$ in Appendix F.4, F.5, \& F.6. Please refer to our response to reviewer s9Wv (Q1) for the detailed setup and results.
Non-polynomial systems: Following [3], we apply the SOS method to two non-poly systems: the simple pendulum and the 3-D trig dynamics (Appendix G.1 \& G.2). Both are recast into poly form by introducing new variables and the necessary (in)equality constraints. SOSTOOLS identifies valid Lyapunov functions for both cases (see Table). For the pendulum, it succeeds in under 20s. However, for the locally stable 3-D trig system, the addition of one state introduces four extra constraints, and the resulting formulation leads to over an hour of solving time, in contrast to our method's 157s.
## Table: Performance of SOS Method on Non-Poly System
| **System**| **Degree 2d** | **Runtime** | **Region**|
|-|-|-|-|
| App. G.1|2|19.58s|Global|
| App. G.2|2|6163s|$\\{x\in\mathbb{R}^3\|\|x_i\|\leq1.5\\}$|
To apply SOS methods on non-poly system $\dot{z}=f(z),z\in\mathcal{D}_1$, define $x_1=z$ as the original states and $x_2$ as the new variables. Let $x=(x_1,x_2)$. The system dynamics can then be written in rational poly forms: $\dot{x}_1=f_1(x),\dot{x}_2=f_2(x),$ with constraints: $x_2=F(x_1),G_1(x)=0,G_2(x)\geq0,$
where $F,G_1,G_2$ are vectors of functions.
Define $g(x)$ as the collective denominator of $f_1,f_2$, and the local region of interest as a semialgebraic set: $$\\{x\in\mathbb{R}^{n+m}|G_D(x)\geq0\\},$$ where $G_D(x)$ is a vector of poly functions designed to match the original state space.
Let $x_{2,0}=F(0)$. Suppose there exist poly functions $V(x)$, $\lambda_1(x),\lambda_2(x)$, and SOS poly $\sigma_i(x),i=1,2,3,4$ of appropriate dimensions, s.t.
$$V(0,x_{2, 0})=0,\\;(1)$$
$$ V(x)-\lambda_1^T(x)G_1(x)-\sigma_1^T(x)G_2(x)-\sigma_3^T(x)G_D(x) -\phi(x)\in\text{SOS},\\;(2)$$
$$-g(x)(\frac{\partial V}{\partial x_1}(x)f_1(x)+\frac{\partial V}{\partial x_2}(x)f_2(x))-\lambda_2^T(x)G_1(x)-\sigma_2^T(x)G_2(x)-\sigma_4^T(x)G_D(x)\in\text{SOS},\\;(3)$$
where $\phi(x)$ is some scalar poly with $\phi(x_1,F(x_1))>0,\forall x_1\in\mathcal{D}_1\backslash \\{0\\}$. If (1), (2), and (3) hold, then $z=0$ is stable.
Compared with ours, the SOS method on non-poly systems has a few limitations: a) it requires substantial domain expertise for recasting ($G_D,F,G_1,G_2$) and manual design of coefficient constraints (ensuring $V(0)=0,V(x)>0$); b) additional state variables and rapidly growing constraint complexity hinder scalability—constraints (1), (2), and (3) are more complex than those in polynomial systems, which already struggle at dimension $\geq6$; c) the approach does not extend to certifying asymptotic stability.
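To make the recasting step concrete, here is a sketch on an assumed damped-pendulum form $\dot z_1=z_2,\ \dot z_2=-\sin z_1-z_2$ (an illustration, not the exact system in Appendix G.1): lifting $s=\sin z_1,\ c=\cos z_1$ yields polynomial dynamics with the algebraic invariant $s^2+c^2=1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lifted state w = (z1, z2, s, c) with s = sin(z1), c = cos(z1);
# the right-hand side is polynomial in the lifted variables.
def recast(t, w):
    z1, z2, s, c = w
    dz1 = z2
    dz2 = -s - z2           # -sin(z1) - z2, now polynomial in (z2, s)
    return [dz1, dz2, c*dz1, -s*dz1]

w0 = [1.0, 0.0, np.sin(1.0), np.cos(1.0)]
sol = solve_ivp(recast, (0.0, 10.0), w0, rtol=1e-9, atol=1e-9)
z1, s, c = sol.y[0], sol.y[2], sol.y[3]

# The lifted trajectory tracks sin/cos of z1 and preserves the invariant.
drift = max(np.max(np.abs(s - np.sin(z1))), np.max(np.abs(s**2 + c**2 - 1.0)))
```

This also illustrates the limitations listed above: the lift adds two states and the invariant must be carried as an explicit constraint in the SOS program.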
As for completeness, symbolic transformers have shown strong empirical performance in expression search, supported by recent work (Alfarano et al., 2024; Holt et al., 2023). However, unlike other constructive Lyapunov function methods, if our method fails to return a valid Lyapunov function, this does not imply that the system is unstable. Formal completeness proofs remain an open question for future research. We will clearly state this limitation in the final paper.
Sensitivity issues are noted in Dawson et al. (2023b). Since our work lacks a thorough analysis of it, we agree with the reviewer and will remove the claim.
**Q3: soft verification**
We introduce soft verification to improve training efficiency and candidate quality. Specifically, our framework employs SHGO to find the minimizer of the Lyapunov function $V$ and the maximizer of the Lie derivative $L_f V$. We sample around these points to find counterexamples, which avoids the frequent timeouts observed with SMT-based verification during training and improves the quality of Lyapunov candidates (Appendix H.2). Formal SMT verification is applied post-training. Similar strategies include adaptive sampling (Grande et al., 2023) and projected gradient descent (Yang et al., 2024) to reduce verification costs.
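A minimal sketch of this SHGO-based counterexample search on a toy 2-D system (the system and both candidates are illustrative assumptions, not the paper's benchmarks): SHGO minimizes $-L_fV$ over the domain, so a positive maximum of $L_fV$ exposes a counterexample around which to sample.

```python
import numpy as np
from scipy.optimize import shgo

def f(x):                      # toy stable dynamics: x1' = x2, x2' = -x1 - x2
    return np.array([x[1], -x[0] - x[1]])

def max_lie_derivative(gradV, bounds):
    res = shgo(lambda x: -(gradV(x) @ f(x)), bounds)
    return -res.fun, res.x     # worst-case L_f V and its maximizer

bounds = [(-1.0, 1.0)] * 2

# Valid candidate V = x1^2 + x2^2: L_f V = -2*x2^2 <= 0, so the maximum is 0.
good, _ = max_lie_derivative(lambda x: np.array([2*x[0], 2*x[1]]), bounds)

# Invalid candidate V = x1^2: L_f V = 2*x1*x2 is positive in the box;
# the maximizer is the counterexample handed back to training.
bad, counterexample = max_lie_derivative(lambda x: np.array([2*x[0], 0.0]), bounds)
```

The soft check only flags numerically found violations; as stated above, the SMT verifier still provides the formal guarantee after training.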
**Q4: Discussion of [2]**
This paper uses NN poly expansions to reformulate barrier function search from nonconvex bilinear problems to CEGAR training with LMI verification. It handles hybrid systems up to 3-D and continuous systems up to 23-D, achieving scalability by using SOS only for verification. However, it is limited to polynomial dynamics and candidates. We will include [2] in the final paper.
[3] P. Antonis, and S. Prajna. Analysis of non-polynomial systems using the sum of squares decomposition. Positive polynomials in control. Springer Berlin Heidelberg. 2005.
---
Rebuttal Comment 1.1:
Comment: Thanks for the effort.
I still have concerns about the supplementary experiments. The results of the supplementary experiments by the authors showed that the SOS relaxation with SDP can only scale to 3D examples, which is not quite convincing, as in the tutorial about SOS 20 years ago, Papachristodoulou and Prajna already showed a 4D example [a], 12D examples were given in the paper [b] a decade ago, and even 22D examples were addressed by improved SOS (DSOS, SDSOS [b]). May I know what solver was used? Please try MOSEK as the backend SDP solver.
I suggest the authors conduct two more experiments over the 10D example (5-link pendulum system) and the 12D example (6-link pendulum system) in [b], as we already know SOS can address these two. In [b], the authors tried to find the ROA, so the V function in the optimization is the local Lyapunov function.
[a] Papachristodoulou, Antonis, and Stephen Prajna. "A tutorial on sum of squares techniques for systems analysis." In Proceedings of the 2005, American Control Conference, 2005., pp. 2686-2700. IEEE, 2005.
[b] Majumdar, Anirudha, Amir Ali Ahmadi, and Russ Tedrake. "Control and verification of high-dimensional systems with DSOS and SDSOS programming." In 53rd IEEE Conference on Decision and Control, pp. 394-401. IEEE, 2014.
---
Reply to Comment 1.1.1:
Comment: We thank reviewer uz6L for constructive feedback. Here we provide point-by-point responses to address your concerns.
1. Scalability of the SOS methods:
First of all, we would like to clarify the correctness and the rationale of the SOS comparison results on the test cases. The 6-D, 8-D, and 10-D polynomial test systems in our Appendix F.4-F.6 are locally stable but not globally stable. Identifying Lyapunov functions for local stability is significantly more challenging for the SOS method due to the added complexity of the $s(x)g(x,r)$ terms, which complicate the optimization constraints on the Lie derivative. Consequently, SOSTOOLS is only able to find Lyapunov functions for the polynomial systems in Appendix F.1-F.3 (2-D and 3-D), and becomes infeasible for the high-dimensional test systems (6-D, 8-D, and 10-D). Detailed results are given in our response to reviewer s9Wv (Q1).
Furthermore, we added experiments with the SOS method on globally stable polynomial systems in response to the reviewer's question. We tested our implementation on the globally stable 4D system (Example 7 from [a]) and an additional globally stable synthetic 10D polynomial system. Using SOSTOOLS with the Sedumi and MOSEK solvers, we successfully computed degree-4 Lyapunov functions for the 4D system in 10.104 seconds with Sedumi and 9.441 seconds with MOSEK, and for the 10D system in 95.878 seconds with Sedumi and 15.54 seconds with MOSEK. These results illustrate that SOS methods can indeed scale effectively up to 10 dimensions for globally stable systems. The dynamics of the synthetic 10D polynomial system used in the test are:
$$\dot{x}_1=-x_1+0.5y_1+0.1x_1x_4^2,$$
$$\dot{y}_1=-0.5x_1-y_1+0.2y_1^3y_5^2,$$
$$\dot{x}_2=-x_2+0.5y_2,$$
$$\dot{y}_2=-0.5x_2-y_2,$$
$$\dot{x}_3=-x_3+0.5y_3,$$
$$\dot{y}_3=-0.5x_3-y_3,$$
$$\dot{x}_4=-x_4+0.5y_4- 0.1x_1^2x_4,$$
$$\dot{y}_4=-0.5x_4-y_4,$$
$$\dot{x}_5=-x_5+0.5y_5,$$
$$\dot{y}_5=-0.5x_5-y_5-0.2y_1^4y_5.$$
2. Fair comparison
For a fair comparison considering the non-polynomial systems, we followed your original suggestion to recast non-polynomial systems into equivalent polynomial forms. We tested the recasting approach with the SOS optimization for non-polynomial systems on both the 2D simple pendulum (Appendix G.1) and the 3D Trig dynamics (Appendix G.2). Although SOS-based methods handle these recast systems, the approach reveals practical limitations. Recasting requires significant domain expertise, and the additional variables and complicated constraints it introduces substantially increase computational demands. For instance, the recast 3D Trig dynamics required more than 6000 seconds (Sedumi) / 5800 seconds (MOSEK) for SOS optimization, while our proposed method discovered a valid Lyapunov function in just 157 seconds.
Regarding the suggested 5-link (10D) and 6-link (12D) pendulum examples from [b], we note **two critical differences** from our experimental settings:
(1) Reference [b] employs polynomial approximations of non-polynomial chaotic systems, reducing complexity at the expense of potential modeling errors. Our work, however, deals explicitly with original nonlinear dynamics without approximations. Thus, we regard the recasting approach as a more appropriate benchmark.
(2) Additionally, the examples in [b] focus on computing the region of attraction (ROA) by optimizing $\rho$ with a fixed pre-defined Lyapunov function (cost-to-go from LQR controller). This pre-selection using domain knowledge significantly simplifies the optimization task, whereas our method requires no expert-driven Lyapunov function selection. For completeness, the optimization in (4) [b] is
$$\max_{\rho, L(x)} \rho,$$
$$s.t.\; (x^Tx)(V(x)-\rho)+L(x)\dot{V}(x)\in DSOS_{2N,6},$$
where DSOS means the diagonally dominant sum of squares. We envision that the success of the method highly depends on the selection of $V(x)=x^TSx$.
Furthermore, when reference [b] optimizes both the Lyapunov function and ROA simultaneously, it addresses only a 4D system, aligning with our previous results.
3. Polynomial Systems without Polynomial Lyapunov Functions
An explicit example of a 2-D polynomial system is presented by Ahmadi \& El Khadir (2018): the system is globally asymptotically stable but does not admit a (global) polynomial Lyapunov function. As a result, SOS techniques cannot certify the global stability of this system. In contrast, our framework is not limited by this constraint. As a proof-of-concept, we applied our approach to this system using three different random initializations. Our approach successfully identified a valid Lyapunov function $V(x)=x_2^2+\log(x_1^2+1)$, with an average runtime under 185 seconds.
We sincerely thank the reviewer for the valuable feedback. We hope the above new experimental results and explanations help clarify the questions. | Summary: This paper presents a novel and promising approach to discovering analytical Lyapunov functions for nonlinear dynamical systems using reinforcement learning and transformer-based generative models. The work addresses a fundamental challenge in control theory with significant practical implications. The framework consists of three key components: (1) a symbolic transformer that generates candidate Lyapunov functions, (2) a global-optimization-based numerical verifier that checks Lyapunov conditions and provides counterexamples, and (3) a risk-seeking policy gradient algorithm that optimizes the transformer parameters. The approach is enhanced with genetic programming for expert guidance.
## update after rebuttal: I have decided not to change my assessment and maintain my score.
Claims And Evidence: Primary claim: The RL-based framework can discover valid Lyapunov functions for complex systems
Evidence:
1. The method achieves 100% success rate on most low-dimensional systems and maintains 60-80% success on higher-dimensional systems
2. The framework successfully handles non-polynomial systems with trigonometric terms
Secondary Claim 1: The approach discovers previously unknown Lyapunov functions
Evidence:
1. For the 4-D lossy power system, they discover two distinct valid Lyapunov functions
2. The authors claim these are the first analytical Lyapunov functions for certifying local stability of a 2-bus lossy power system
Secondary Claim 2: Risk-seeking policy gradient improves discovery performance
Evidence:
1. Ablation study (referenced in section 5.5) shows risk-seeking with α=0.1 achieves 66.67% success rate compared to 33.33% with α=0.5 and 0% with standard policy gradient (α=1)
Methods And Evaluation Criteria: The paper doesn't use standard benchmark datasets but rather a collection of dynamical systems of varying complexity. It compares against a couple of baselines (ANLC and FOSSIL). Formal verification is done on discovered Lyapunov functions using various methods.
Theoretical Claims: I did not check for the correctness of any proofs or theoretical claims.
Experimental Designs Or Analyses: The experiments are quite thorough and the design seems valid. The evaluation metrics are clear and the ablation studies are well designed.
Limitations of the experiments:
1. Alfarano et al. (2024) was excluded from direct comparison due to its focus on global Lyapunov functions
2. Limited to only two neural network-based baselines
3. No comparison with traditional symbolic regression or optimization-based methods
Supplementary Material: No, I did not review the supplementary material.
Relation To Broader Scientific Literature: The topic is very much related to learning based search and can be applied to many areas of sciences and not just control theory. The approach could be adapted to discover invariants and correctness certificates for software verification. Instead of dynamical systems, the framework could analyze program state transitions and generate invariants that prove program correctness. Automated theorem proving could be another area of application.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: Weakness:
1. The method is quite complicated and contains multiple parts. It's difficult to understand which part provides the most gains.
Other Comments Or Suggestions: Could not find typos.
Questions For Authors: I am curious about the exploration-exploitation trade-off in your approach: Since the space of potential analytical expressions grows exponentially with expression complexity, how did you address the exploration challenge in your RL framework?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer SwEU for the valuable time and constructive feedback. We provide point-by-point responses below.
**Q1: Alfarano et al. (2024) was excluded from direct comparison**
To complete the comparison, we contacted the authors of Alfarano et al. (2024), who evaluated their pre-trained model on our test systems. Due to constraints related to global stability and the dimensionality of their training data, their model can only successfully find Lyapunov functions for the systems in Appendices F.1, F.2, F.3, \& G.1. These systems have dimensions lower than 6, and the found Lyapunov functions offer guarantees of global stability. We will include these results in the experiment section of the final paper.
**Q2: No comparison with traditional symbolic regression or optimization-based methods.**
1. We conducted a comparison to the Genetic Programming (GP) algorithm, a representative symbolic regression method, on the 6-D polynomial dynamics; see Appendix H.3 of the manuscript. Ours achieves a 100\% success rate within 20 epochs, while GP alone failed all trials. As shown in Figure 7, although GP initially improved generation quality (measured by the Lyapunov risk reward) rapidly from random initialization, its evolutionary exploration failed to identify any valid Lyapunov functions due to insufficient incorporation of system dynamics.
2. We added the sum-of-squares (SOS) method as an optimization-based baseline. In our experiments, SOS techniques successfully identify Lyapunov functions for polynomial systems of dimension up to 3 (Appendices F.1, F.2, \& F.3) but fail to retrieve any valid local Lyapunov functions for the polynomial test systems of dimension $\geq6$ in Appendices F.4, F.5, \& F.6. Due to the character limit, please refer to our response to Reviewer s9Wv (Q1) for the detailed comparison setup and results.
3. In addition, we applied the SOS method to non-polynomial systems by recasting the non-polynomial dynamics into polynomial dynamics. It successfully identifies a valid Lyapunov function for the pendulum system (Appendix G.1) but suffers from scalability issues for complex high-dimensional non-polynomial systems and requires substantial domain expertise in the recasting process and in the design of (in)equality constraints. Please refer to our response to reviewer uz6L (Q1) for detailed results and analysis.
We will highlight the comparisons to classic symbolic regression methods (GP) and add the comparison to optimization-based SOS method in the final version.
**W1: It's difficult to understand which part provides the most gains.**
Thanks for the valuable feedback. The backbone of our framework is a symbolic transformer trained via a risk-seeking policy gradient algorithm to efficiently discover Lyapunov functions, which provides the most gains. To clarify the gains of other parts of the model design, we performed detailed ablation studies in the appendices. The global optimization module is deployed to evaluate the generated expressions and guide the training, as shown in Appendix H.2 (Figure 6). The GP module provides expert guidance to accelerate training, illustrated in Appendix H.3 (Figure 7 \& 8). Together, the global optimization and GP modules assist the training of the symbolic transformer.
**Q4: Exploration-exploitation trade-off in your approach:**
Thanks for this insightful question. In our framework, exploration occurs at the candidate expression generation step and GP's evolutionary operations, while risk-seeking policy gradient and expert guidance loss are responsible for the exploitation.
As symbolic tokens in candidates are sampled according to some learned conditional probability distribution (the output of the decoder), and GP algorithms employ mutation, crossover, and selection operations to introduce randomness for the search of higher quality candidates, exploration is encouraged in these two steps.
For exploitation, the risk-seeking policy gradient optimizes around high-quality expressions, reinforcing high-reward candidates to focus the search on promising regions. The expert guidance loss also facilitates exploitation by encouraging the learned distribution to match the best-known candidates. We tuned the risk-seeking hyperparameter $\alpha$ to achieve the trade-off. In practice, we find $\alpha=0.1$ achieves the best performance; details are available in Appendix H.1.
HiRemate: Hierarchical Approach for Efficient Re-materialization of Neural Networks | Accept (poster) | Summary: The paper introduces HiRemate, a hierarchical framework for neural network re-materialization to reduce memory usage during training. The core idea involves recursively partitioning the computation graph into manageable subgraphs, solving each with optimized strategies, and merging solutions hierarchically.
Claims And Evidence: - Comparisons with Checkmate on small graphs (Table 2) show close performance, but no theoretical guarantees or bounds on suboptimality. Actually, for 5-layer GPT2, H-ILP already has a >3% performance gap compared to the optimum achieved by Checkmate.
- The problem statement in Section 3 seems to be highly simplified. Do you assume a topological order of the computational graph here? How do you handle dependencies between layers/operators?
Methods And Evaluation Criteria: - Hierarchical decomposition is sensible for scalability, but the trade-off between partition granularity and solution quality is not rigorously analyzed.
Theoretical Claims: - As mentioned above, comparisons with Checkmate on small graphs (Table 2) show close performance, but no theoretical guarantees or bounds on suboptimality.
Experimental Designs Or Analyses: - Please include comparisons with dynamic re-materialization methods (e.g., DTR [A]) so that readers can more effectively evaluate the benefits of the hierarchical approach, especially considering that statically planning the schedule for a Transformer model with 2500 nodes takes 2.5 hours, which may be too expensive.
- Comparisons with Checkmate are limited to small graphs, and Rockmate is only tested on sequential models. Please consider including RNN or SSM experiments.
Supplementary Material: - Appendix E/F: Extended experiments validate generality but lack diversity in model types (e.g., no RNNs or recent State Space Models).
Relation To Broader Scientific Literature: It introduces a hybrid method that integrates hierarchical and solver-based approaches to address a key challenge in re-materialization.
Essential References Not Discussed: [A] Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock, "Dynamic Tensor Rematerialization", ICLR, 2021.
Other Strengths And Weaknesses: See the above sections.
Other Comments Or Suggestions: See the above sections.
Questions For Authors: See the above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and detailed feedback. Below we address each point of the review in turn.
**On Lack of Theoretical Guarantees and Suboptimality Gap**
We agree that HiRemate does not provide formal guarantees on optimality. This is also true of Rockmate and TW-Remat, which, like HiRemate, are designed for scalability rather than provable optimality.
Our focus is on solving large, complex computation graphs that are intractable for methods like Checkmate. The observed gap in small graphs (e.g., the 3% difference on GPT-2 5-layer) reflects this trade-off. HiRemate is designed to let users balance solution quality and solving time by:
- Adjusting the number of memory budget levels used in lower-level ILPs,
- Controlling partition granularity (larger subgraphs often improve quality but require more time),
- Using mixed solvers (e.g., DP on sequential parts, ILP only where needed),
- Optionally incorporating user-defined partitioning strategies.
These controls allow users to tune the solution process as needed. While we currently lack a formal analysis of suboptimality bounds, we are conducting further experiments to better characterize this trade-off in practice.
**On Problem Statement and Dependency Handling**
We clarify that HiRemate does not assume a topological order in the original graph. We extract the forward-backward graph using `torch.export()` at the level of torch/aten operations. The graph is simplified using the **rkgb** tool (Zhao et al., 2023) to remove view operations and isolate meaningful compute and data nodes (Appendix C.1).
This graph forms the input to our partitioning procedure (Section 3.1), which does not require topological ordering. A valid topological order is computed only within each subgraph before solving the H-ILP problem, as required for the ILP formulation -- similar to Checkmate.
The generated schedule consists of forward `aten` calls, backward Autograd calls, and memory deallocations (see Appendix C.2). Dependencies are fully respected by construction and execution is sequential.
**On Partition Granularity vs. Solution Quality**
HiRemate is designed to enable trade-offs between granularity and schedule quality. We evaluate this effect on encoder-decoder Transformers in Figure 4, which shows how deeper hierarchies (finer partitions) impact both solving time and memory usage.
A fully rigorous analysis would require evaluating several partitioning strategies across multiple architectures. We agree this is an important direction and are currently running additional experiments, although they were not yet complete at the time of submission.
**On Comparison to Dynamic Re-materialization (e.g., DTR)**
Static and dynamic re-materialization strategies differ mainly in how they handle the structure of the computational graph. Static re-materialization is well-suited for models with fixed computation graphs, where memory usage can be precomputed and optimized for predictable and efficient execution. On the other hand, dynamic strategies are better adapted to models with variable or input-dependent control flow, where runtime decisions enable on-the-fly memory management. While dynamic approaches offer more flexibility, they generally come with higher runtime overhead and less predictable behavior. Static strategies, in contrast, exceed dynamic methods in memory efficiency when applied to well-structured models. Quantifying these differences is complex because most dynamic strategies generally incorporate other algorithmic ingredients (typically paging) as in POET (https://proceedings.mlr.press/v162/patil22b/patil22b.pdf) or MegTaiChi (https://dl.acm.org/doi/pdf/10.1145/3524059.3532394), which makes comparison difficult.
We see integration or benchmarking against these methods as important future work.
**On Checkmate and Rockmate Comparisons**
Checkmate produces optimal schedules but scales poorly with graph size. We include comparisons where tractable (e.g., small GPT models), but for larger graphs, Checkmate does not complete in reasonable time. HiRemate is designed to provide approximate but scalable solutions for those larger models. In practice, we use Checkmate when feasible, and HiRemate otherwise.
Rockmate is designed for block-sequential architectures. When applied to non-sequential graphs (e.g., U-Nets), it treats the entire model as a single block, falling back to Checkmate behavior. As such, on non-sequential models, Rockmate effectively reduces to a limited form of Checkmate and does not offer meaningful additional comparison.
**Regarding RNNs and SSMs** Please, refer to the Answer to Reviewer YaUT | Summary: The paper presents a novel hierarchical framework to optimize memory usage during neural network training. They recursively partitions large computation graphs and apply optimized solvers at multiple levels to significantly reduces memory consumption while maintaining efficient training performance.
Claims And Evidence: The claims are well-supported by experimental results. The paper provides empirical evidence demonstrating significant memory savings with minimal computational overhead across multiple neural network architectures.
Methods And Evaluation Criteria: The proposed HiRemate framework and its evaluation criteria are well-aligned with the problem of optimizing memory usage in deep learning training.
Theoretical Claims: The paper provides a rigorous theoretical analysis of its algorithm
Experimental Designs Or Analyses: The paper includes exhaustive comparisons with state-of-the-art baselines, demonstrating its effectiveness across multiple architectures and hierarchy depths.
Supplementary Material: .
Relation To Broader Scientific Literature: HiRemate extends prior work on ILP-based re-materialization methods such as Checkmate and TW-Remat, which optimize memory usage by selectively recomputing activations. Unlike these approaches, which struggle with large computation graphs due to scalability issues, HiRemate introduces a hierarchical partitioning strategy that allows it to efficiently scale to larger models.
Essential References Not Discussed: The related work seems exhaustive.
Other Strengths And Weaknesses: Strength
- The paper provides a rigorous theoretical analysis of its algorithm
- The paper includes exhaustive comparisons with state-of-the-art baselines, demonstrating its effectiveness across multiple architectures and hierarchy depths.
- The proposed method is easily integrable with PyTorch, enhancing usability and enabling practical adoption with minimal modifications.
- The intuition behind the scheduling algorithm design is well-explained and easy to follow
Weakness
- While the hierarchical ILP solver improves scalability, it still incurs significant computational overhead for very large graphs. The reliance on multiple ILP optimizations per subgraph introduces bottlenecks, making it less practical for real-time or iterative optimization scenarios.
- The framework primarily targets static computational graphs, limiting its applicability to modern AI applications that involve dynamic graphs, such as reinforcement learning, adaptive architectures, and some transformer-based models.
Other Comments Or Suggestions: Please refer to Weaknesses.
Questions For Authors: Please refer to Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and detailed feedback. Below we address each point of the review in turn.
**On Solver Overhead and Scalability**
Indeed, H-ILP introduces a solving time overhead. However, this can be mitigated in the following ways:
* Solving separate subgraphs can be performed independently and in parallel, providing efficient parallelization (moreover, for subgraphs of the same structure we run the solver optimization only once).
* The complexity analysis of Section 3.3 shows that the total solving time of all ILP optimizations is expected to grow linearly with the number of nodes in the graph, ensuring a reasonable scaling of the framework.
* For long training runs (e.g., months/weeks), even several hours of preprocessing is negligible.
* For cases where fast solving is required, faster heuristics (like TW-Remat) or Dynamic Programming solvers can replace ILP optimization subproblems, keeping ILP only for the several top-level optimization problems. Configuration options to run with different solvers are already implemented in HiRemate, and its modular design is straightforward to extend with more custom solvers and partitioning strategies.
**On Limitation to Static Computation Graphs**
HiRemate is designed for neural networks whose forward-backward computation graph remains constant across input samples. This includes many widely used architectures, such as GPTs, encoder-decoder Transformers, ResNets, UNets, ... For these networks, HiRemate extracts the full computation graph once before training and generates a schedule that meets memory constraints by selectively recomputing intermediate activations rather than storing them. The resulting schedule is reused throughout training.
This static-graph assumption covers a wide class of models used in practice. For architectures where the graph structure can change depending on runtime input, HiRemate can still be applied or extended in many useful cases:
- *RNN-style architectures*: These apply the same computation block multiple times. If the number of steps is fixed, the computation can be unrolled into a static graph, which HiRemate can optimize directly. Thanks to the repeated sequential structure, the higher levels of HiRemate’s hierarchy can use a faster dynamic programming solver instead of the ILP-based one. If the number of steps varies, multiple schedules can be precomputed for different lengths and selected at runtime. In all cases, HiRemate identifies and reuses schedules for similar subgraphs, which reduces redundant computation during schedule generation.
- *SSM-based models (e.g., S4 or Mamba like)*: These models use state-space blocks that in most cases can be expressed in either a recurrent or convolutional form (e.g., S4D, Mamba2). The convolutional version, typically used during training due to its better compatibility with modern hardware, results in a static computation graph that HiRemate can process directly. In particular, we checked that the current version of our framework is compatible with S4D from `state-spaces/s4` and Mamba2Easy from `state-spaces/mamba2`. If the recurrent form is used instead, the situation is similar to RNNs, and the same adaptations apply.
- *Architectures with limited graph variation*: Some models include conditional branches or mode switches that result in a few possible computation graph variants. If these variations are known in advance and limited in number, HiRemate can precompute a separate schedule for each case. At runtime, the appropriate schedule is selected depending on the input or control signal.
- *Architectures with highly dynamic graphs*: A class of models changing the graph structure at runtime in many complex ways. These models fall outside the scope of HiRemate’s current design. In such cases, dynamic heuristics should be explored, their combination with static approaches is an interesting direction for further research.
Even for models with highly dynamic structure, it is often possible to apply HiRemate locally, at the level of individual `nn.Module` blocks whose behavior is static. This enables good block-wise memory/time trade-offs and can still lead to meaningful savings. HiRemate’s modular design supports partial integration into larger training pipelines, allowing users to optimize memory usage in parts of a model where static graphs are available. In particular, this is relevant for many kinds of mixture-of-experts architectures.
In summary, HiRemate automatically generates memory-efficient training schedules for static computation graphs and can be adapted to models with repeated or mildly varying structure. It handles large graphs through hierarchical partitioning, supports the use of different solvers across subgraphs, and integrates well with PyTorch without requiring changes to model code. These properties make it practical for many real-world training pipelines and adaptable to a wide range of architecture types.
Claims And Evidence: The proposed method is a modular framework that delivers practical benefits over recent competing approaches.
Methods And Evaluation Criteria: Experiments measure peak memory usage and iteration time during training for HiRemate in comparison to recent related work. Across a variety of architectures and batch sizes, HiRemate appears to consistently outperform (faster iteration time at any fixed peak memory) TW-Remate and Rockmate.
Theoretical Claims: -
Experimental Designs Or Analyses: See methods and evaluation above.
Supplementary Material: The appendix covers additional technical and experimental details.
Relation To Broader Scientific Literature: Appears to be a consistent, but incremental improvement over recent work.
Essential References Not Discussed: -
Other Strengths And Weaknesses: With HiRemate in use, scaling batch size appears to (approximately) linearly increase both iteration time and peak memory usage (e.g., as in Figures 11 and 12). Within a limited regime, this same behavior would be expected of a naive strategy that simply reduces batch size in order to fit within a memory budget; i.e., to achieve the same effect of one large batch, two smaller batches could be run in series, accumulating gradients for both before updating parameters. Such a strategy would not pay any penalty for recomputation, but effectiveness would depend on whether the smaller batches fill the available parallel computational resources. How does this naive strategy compare to HiRemate? Specifically, at what point (what batch size) does HiRemate become more efficient than treating a larger batch as multiple smaller batches? Understanding the trade-off in comparison to this naive strategy seems crucial to making a practical argument for using HiRemate.
Other Comments Or Suggestions: See strengths and weaknesses above.
Questions For Authors: See strengths and weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and detailed feedback. Below we address a question regarding gradient accumulation comparison.
Gradient accumulation reduces memory usage by splitting a large batch into smaller sub-batches and accumulating gradients across them. This strategy avoids recomputation but comes with trade-offs that HiRemate addresses differently:
**When batch size 1 exceeds memory** (e.g., high-resolution inputs or very deep models), gradient accumulation does not help. HiRemate enables training in these settings by recomputing activations selectively.
**Low GPU utilization with small batches**: Accumulation processes sub-batches sequentially, often leaving GPU cores underutilized. HiRemate allows for larger effective batches under the same memory budget, improving throughput.
Increased GPU utilization with HiRemate is also worth applying in the case of network architectures whose computation is not dominated by large matrix multiplications, or more generally whose main computational operations are not well optimized for the hardware.
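As a side illustration (a toy sketch, not an experiment from the rebuttal), the identity that gradient accumulation relies on is that averaging the gradients of equal-size sub-batches reproduces the full-batch gradient exactly, trading memory for sequential passes without changing the update:

```python
# Toy sketch (illustrative only): for a linear model y = w*x with
# mean-squared-error loss, accumulating gradients over equal-size
# sub-batches reproduces the full-batch gradient exactly.

def grad(w, xs, ys):
    # d/dw of the mean squared error over the batch
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.5
xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]

full = grad(w, xs, ys)                  # one large batch of 4
g1 = grad(w, xs[:2], ys[:2])            # sub-batch 1
g2 = grad(w, xs[2:], ys[2:])            # sub-batch 2
accumulated = (g1 + g2) / 2             # average of sub-batch gradients

assert abs(full - accumulated) < 1e-12  # identical gradient
```

Note the equivalence holds only for losses that average uniformly over samples and for equal sub-batch sizes; otherwise the per-sub-batch gradients must be weighted by sub-batch size.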
We agree that an empirical comparison with gradient accumulation would strengthen the practical case for HiRemate, and we plan to include such experiments in future work. | null | null | null | null | null | null | null | null |
Understanding Input Selectivity in Mamba: Impact on Approximation Power, Memorization, and Associative Recall Capacity | Accept (poster) | Summary: This paper provide theoretical justifications for selective SSM layer (S6) in Mamba architecture. They show
1. S6 has better expressiveness than S4D layer
2. S6 suffers from exponential memory decay
3. 1-layer Mamba (with S6) solves MQAR tasks with the SSM mixer. MQAR is an information retrieval task that normally requires >2 transformer layers to solve.
The authors use sufficient numerical results to back up their theoretical findings.
## update after rebuttal
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes. I check the lemmas and theorems and their proofs.
Experimental Designs Or Analyses: yes. I check all verification experiments and their figure and tables.
Supplementary Material: Yes, I skimmed through both the proofs and the code. I did not run the code, and I did not check the proofs line-by-line.
Relation To Broader Scientific Literature: This work connects to both Transformers and Mamba; the literature is covered in Sec. 2.
Essential References Not Discussed: not that I can think of
Other Strengths And Weaknesses: ### Strengths
* Clarity: the language and formatting of this paper are of very high quality.
* Originality: the results and proposed method are original. There are lots of mamba papers, but this one is very refreshing yet solid.
* Significance: I believe this work is significant. The theory echos closely with practice and provides practically useful insights for practitioners.
### Weaknesses
* (minor): SD4 is mentioned without definition in abstract (and the 1st page).
Overall, a good theory paper with well-executed numerical support. I lean toward acceptance.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Common Response
We refer the reviewer to the Common Response in the Rebuttal to WpKK.
## Individual Response
***SD4 is mentioned without definition in abstract (and the 1st page).*** \
We thank the reviewer for pointing out the mistake: this has been corrected in the text. We further thank the reviewer for their favorable review, and remain ready to address any question or suggestion for improvement they may have. | Summary: This paper analyzes the flexibility of MAMBA showing that the S6 layer can i) it can project the input into Haar wavelet basis, ii) counteract memory decay, iii) solve multi-query associative recall (MQAR) problem proposed by Arora et al. tasks. While this is mostly a theoretical paper, the authors demonstrate applicability of their theory on approximation of discontinuous functions and the counteraction of memory decay via the KEEP n-TH task, requiring memorization the n-th token in a sequence. For the full Mamba model, they confirm empirically that the model sizes prescribed theoretically by their analytical solutions to the MQAR and INDUCTION HEADS are tight in practice.
## Update after rebuttal
I did not see anything to alter my score. I maintain a positive impression of the work.
Claims And Evidence: These results are proven rigorously.
Methods And Evaluation Criteria: The empirical studies add a little bit as they show that the theoretical bounds are not too loose.
Theoretical Claims: I have read the proof sketches in the main text but not the detailed proofs in the supplementary material.
Experimental Designs Or Analyses: I have looked at the experiments but not delved too deeply into them, this being mostly a theoretical paper.
Supplementary Material: Mostly section D, with additional experimental detail.
Relation To Broader Scientific Literature: For several years now, there has been theoretical work probing which tasks can be solved by transformers. The work contrasting transformers and state space models mostly focused on what transformers can do and state space models cannot (see for example Jelassi et al., 2024). However, as new state space models start to show competitive performance, theoretical interest in the capacity of state space models increases. This work is a useful addition to such literature.
Essential References Not Discussed: I am not in a position to comment on it.
Other Strengths And Weaknesses: The paper is quite clearly written.
Other Comments Or Suggestions: None!
Questions For Authors: None!
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Common Response
We refer the reviewer to the Common Response in the Rebuttal to WpKK.
## Individual Response
We thank the reviewer once again for their positive feedback, and remain open to include any suggestion for improvement, or answer any question they might have.
---
Rebuttal Comment 1.1:
Comment: I would like to maintain my score. Best of luck! | Summary: This paper aims to understand the effect of gating in Mamba models in terms of function approximation power, long-term memory, and associative recall capabilities. Both theoretical derivations and empirical results are provided.
Claims And Evidence: Three major claims as outlined in the paper:
* S6 layer is more expressive than S4D because S6 can represent projections onto Haar wavelets, which presumably reflects the capability of approximating discontinuous functions.
* S6 (akin to RNN models) suffers from memory decay.
* There exists an analytical solution to MQAR using Mamba architecture, which reveals the advantage of input selectivity.
All claims are supported by theoretical derivations, along with some constructed tasks to show the tightness.
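As background for the first claim: the Haar system is the standard orthonormal wavelet basis, and a discontinuous step function projects onto it exactly, which is what makes it a natural expressivity target. A generic NumPy illustration of one such projection coefficient (this is an illustration of the Haar basis itself, not the paper's construction):

```python
import numpy as np

def haar(j, k, t):
    """Haar wavelet psi_{j,k}(t) = 2^{j/2} psi(2^j t - k) on [0, 1)."""
    s = 2.0**j * t - k
    return 2.0**(j / 2) * np.where((0 <= s) & (s < 0.5), 1.0,
                                   np.where((0.5 <= s) & (s < 1.0), -1.0, 0.0))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
f = (t >= 0.5).astype(float)              # discontinuous step function
c = np.mean(f * haar(0, 0, t))            # Riemann-sum projection onto psi_{0,0}
# c = -0.5: a single Haar coefficient already captures the jump exactly
```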
Methods And Evaluation Criteria: The empirical evaluation relies on benchmarks of MQAR which is widely used to assess associative recall capability.
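For context, the MQAR task format (Arora et al.) is a sequence of key-value pairs followed by queried keys; the model must output the value bound to each queried key. A generic sketch of this format (the paper's exact sampling protocol may differ, and the function name is ours):

```python
import random

def make_mqar_example(num_pairs, num_queries, key_vocab, val_vocab, seed=0):
    """One MQAR sequence: interleaved key-value pairs, then queried keys;
    the target for each query is the value bound to that key earlier."""
    rng = random.Random(seed)
    keys = rng.sample(key_vocab, num_pairs)          # distinct keys
    vals = [rng.choice(val_vocab) for _ in keys]
    binding = dict(zip(keys, vals))
    context = [tok for pair in zip(keys, vals) for tok in pair]
    queries = rng.sample(keys, num_queries)
    targets = [binding[q] for q in queries]
    return context + queries, targets

seq, targets = make_mqar_example(4, 2, list(range(100)), list(range(100, 200)))
# seq = [k1, v1, k2, v2, k3, v3, k4, v4, q1, q2]; targets = values for q1, q2
```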
Theoretical Claims: I have checked the function approximation analysis and sensitivity analysis in sec 4.1 and 4.2. Below are questions on whether the analysis can be generalized:
* Does Mamba2 fit into the function approximation analysis?
* Does the analysis shed light on the expressivity differences between Mamba1 and Mamba2?
Experimental Designs Or Analyses: The MQAR and Induction Head experiments are well-suited for differentiating the expressive power of S6 and S4D.
Supplementary Material: Yes, I skimmed the proof on approximation power.
Relation To Broader Scientific Literature: The submission connects to the broad research effort on sub-quadratic/efficient architectures of LLMs. While most existing work focuses on empirical evaluations of pretraining results, this work provides a valuable perspective in understanding a core design element (i.e., gating) of Mamba models, and might inspire follow-up work on better parameterization of gating.
Essential References Not Discussed: NO.
Other Strengths And Weaknesses: NO.
Other Comments Or Suggestions: No.
Questions For Authors: See the question before.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Common Response
We are grateful for the positive comments from reviewers on our paper, particularly regarding its **clarity**
(kLHf: *“this paper is of very high quality”*; R4Me: *“quite clearly written”*),
**soundness** (kLHf: *“very refreshing yet solid”*; R4Me: *“These results are proven rigorously”*),
and **impact** (kLHf: *“I believe this work is significant. The theory echoes closely with practice and provides practically useful insights for practitioners”*; R4Me: *“a useful addition to such literature”*; WpKK: *“this work provides a valuable perspective in understanding a core design element...and might inspire follow-up work”*).
We answer the remaining questions from the reviewers in the Individual Response section. Additionally, if the reviewers have any other remarks or recommendations that can help further improve the quality of the paper, we remain at their disposal to address them.
## Individual Response
***Does Mamba2 fit into the function approximation analysis?*** \
Indeed it does, and the reviewer is right that it is worth highlighting more clearly. We do briefly mention (Sec3, after Eq(7)) how the Mamba2 layer is a simplification of Mamba that prescribes a state matrix parameterized by a single scalar $\boldsymbol{\Lambda}=\lambda \boldsymbol{I}$. Substituting this into (8) would remove the dependency of $\lambda$ on the hidden-state component $n$; that is, we would have $\lambda_n \equiv \lambda$ for all $n=1, \ldots, N$ in the integral. Nonetheless, for the proof in Sec4.1, it suffices to set $\lambda_n = -1$ for all $n$ (see Line 605), so the proof still holds even in the Mamba2 framework. A similar reasoning also applies to Sec4.2, where again for Mamba2 $\lambda_n \equiv \lambda$ would be constant over $n=1, \ldots, N$, but it still does not affect the overall derivation. Following the reviewer's remark, in the main text we have added a note on the validity of both Thm1 and Lem1,2, for Mamba2.
***Does the analysis shed light on the expressivity differences between Mamba1 and Mamba2?*** \
The analysis in Sec4 (see also the response above) highlights that, *from the point of view of function expressivity alone*, if we consider *only the SSM-layer*, then there is no expressivity difference between Mamba and Mamba2 on approximating Haar Wavelets: Mamba2's simplification of setting $\lambda_n \equiv \lambda$ across state dimension does not hinder this ability. \
Nonetheless, the performance difference between Mamba and Mamba2 has been clearly shown empirically. Motivated by this, we extended our analysis to *the whole architecture* in Sec5. Our results in Thm2,3 highlight that, thanks to its per-SSM-parameter short-convolution layers, Mamba2 can recover more parameter-efficient solutions than Mamba for the MQAR synthetic task, hinting at a possible explanation for its superior performance. We believe analyzing other, more complex tasks to be an interesting next step in shedding more light on the expressivity differences between Mamba and Mamba2. \
We thank the reviewer for raising this point. We took this chance to update the manuscript to highlight more clearly the implications that Thm2 and Thm3 have on the relative parameter-efficiency of Mamba and Mamba2 in solving MQAR. | null | null | null | null | null | null | null | null |
Learning Attribute-Aware Hash Codes for Fine-Grained Image Retrieval via Query Optimization | Accept (poster) | Summary: This paper presents a novel learn-to-hash method for large-scale fine-grained image retrieval. It introduces a query learning mechanism that can capture nuanced attribute-level information, making each bit of the hash code interpretable. The paper also introduces auxiliary branches during the training process to improve model performance. Experiments on five datasets demonstrate the effectiveness of the method.
Claims And Evidence: This paper analyzes the pairwise loss used in its method from the perspective of cosine similarity, arguing that when the number of categories exceeds the feature dimensions, the model may not learn distinguishable features. In addition to the theoretical analysis, the authors also provide visual results of the loss landscape, clearly illustrating that it is more difficult to achieve lower loss values when the number of dimensions is small.
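The dimensional limitation described here can be illustrated with a standard construction: $C$ unit vectors can all be mutually separated down to pairwise cosine similarity $-1/(C-1)$ only when the feature dimension is at least $C-1$ (a regular simplex), so when classes outnumber dimensions this optimum is unreachable. A minimal NumPy sketch of the simplex construction (a generic illustration of the phenomenon, not the paper's exact bound):

```python
import numpy as np

def centered_simplex(C):
    """Return C unit vectors (rows) whose pairwise cosine similarities
    all equal -1/(C-1), the minimum achievable with dimension >= C-1."""
    V = np.eye(C) - 1.0 / C                        # center the standard basis
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize rows
    return V

C = 5
V = centered_simplex(C)
G = V @ V.T                                        # cosine (Gram) matrix
off_diag = G[~np.eye(C, dtype=bool)]
# every pairwise similarity equals -1/(C-1) = -0.25; these vectors span
# only a (C-1)-dimensional subspace, so fewer dimensions cannot achieve it
```

When the number of classes $C$ exceeds the feature (or bit) dimension by more than one, no configuration of unit vectors attains this bound, which matches the low-bit difficulty the analysis describes.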
Methods And Evaluation Criteria: I think the method proposed in the paper is meaningful for the application of fine-grained image retrieval.
Theoretical Claims: This paper leverages an auxiliary branch to increase the number of dimensions, alleviating the low-bit optimization challenge. Experiments regarding the hyperparameter N show that the experimental results are consistent with the theoretical analysis.
Experimental Designs Or Analyses: The experiments conducted on five fine-grained datasets, covering various scenarios such as birds, airplanes, and food, demonstrate that the proposed method significantly outperforms previous approaches in low-bit scenarios. This underscores its broad applicability. Additionally, the ablation study further validates the effectiveness of each module.
Supplementary Material: The authors did not provide separate supplementary material; I reviewed all the content in the appendix.
Relation To Broader Scientific Literature: This paper innovatively models the hash problem as a set prediction task inspired by DETR. By leveraging learnable queries, the method not only enhances interpretability but also demonstrates improved retrieval capabilities. This contribution explores the intersection of hashing techniques and cross attention mechanisms, offering a fresh perspective that could inspire further research in both hashing and attribute-based image retrieval.
Essential References Not Discussed: No Essential References Not Discussed
Other Strengths And Weaknesses: Strengths:
1. This paper is well organized, and the figures and tables are presented clearly.
2. This paper addresses the important and practical task of large-scale fine-grained retrieval, presenting an effective method that enhances retrieval accuracy while also offering intriguing interpretability. The entire method is well-motivated.
3. This paper analyzes why retrieval performance is poor in low-bit hash code scenarios from the perspective of cosine similarity and proposes an efficient approach that introduces an additional auxiliary branch during the training process as a solution.
4. The authors provide many visualizations to help readers understand the method.
Weaknesses:
1. Figure 3 illustrates the curves at different values of c, and the authors further support their analysis with the visual results in Figure 4. However, Figure 4 only provides visualization for the specific case of C=200. Does the trend of the loss landscape at different values of c align with that in Figure 3? More visual results should be included.
2. Section 3.3 mentions that without the auxiliary branch, the model may not learn discriminative features. Are there any experimental results to support this observation? The authors could consider plotting the t-SNE result of X_out to observe the feature distribution across different categories.
Other Comments Or Suggestions: There is a spelling error in line 302.
Questions For Authors: Please refer to Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Comment1:** *Figure 3 illustrates the curves at different values of c, and the authors further support their analysis with the visual results in Figure 4. However, Figure 4 only provides visualization for the specific case of C=200. Does the trend of the loss landscape at different values of c align with that in Figure 3? More visual results should be included.*
**Reply1:**
We sincerely appreciate your constructive feedback. In response to your insightful suggestion, we have supplemented the analysis with additional visualization cases across multiple c-values (C = 200, 292, 555). These extended results consistently demonstrate that:
1. The loss landscape exhibits a strong correlation with $\mu$, aligning well with the trends shown in Figure 3.
2. Larger class numbers coupled with lower feature dimensions indeed lead to:
- Inaccessible lower local minima
- Less smooth loss landscape
This phenomenon aligns with our conjecture on the limitation of large class numbers with low feature dimensions. The visualization results can be found at: https://anonymous.4open.science/r/rebuttal_for_ICML2025-89B6/figure1.png
---
**Comment2:** *Section 3.3 mentions that without the auxiliary branch, the model may not learn discriminative features. Are there any experimental results to support this observation? The authors could consider plotting the t-SNE result of X_out to observe the feature distribution across different categories.*
**Reply2:**
We sincerely appreciate your insightful suggestion. To validate our observation, we have conducted t-SNE visualization on the feature embeddings $X_{out}$. The results demonstrate that when training with the auxiliary branch, the feature distributions exhibit:
1. Larger inter-class distances: Distinct categories form more separated clusters.
2. Smaller intra-class variations: Samples within the same category show tighter aggregation.
In contrast, the model without the auxiliary branch produces features with substantially overlapping distributions across different categories. This qualitative evidence strongly supports our claim that the auxiliary branch enhances feature discriminability. Visualization results can be found at: https://anonymous.4open.science/r/rebuttal_for_ICML2025-89B6/figure2.png
---
**Comment3:** *There is a spelling error in line 302.*
**Reply3:**
We sincerely appreciate your thorough review. We have carefully addressed the identified issues and conducted further proofreading of the manuscript to ensure its quality.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the author's rebuttal as it has effectively addressed my concerns. Therefore, I would like to increase my score to 4. | Summary: In this paper, the authors propose a query optimization-based fine-grained image hashing method, which enables the generated hash bits to exhibit attribute-aware characteristics. From the perspective of cosine similarity, the challenges in generating effective low-bit hash codes are analyzed. Based on this analysis, the performance of the model is particularly enhanced for the low-bit case through the incorporation of auxiliary branches. Expensive experiments were conducted, showing that the proposed method achieves superior retrieval performance, and a single bit demonstrates interpretability.
Claims And Evidence: From the perspective of cosine similarity, this paper provides an analysis of why retrieval performance is poor in low-bit hashing scenarios within a pairwise setting. The authors also visualize the loss landscape, providing empirical evidence for their theory. Furthermore, the authors claim that the proposed method can generate attribute-aware hash codes. They showcase the results through qualitative comparisons of one-bit retrieval results and heatmaps. Additionally, the authors explain the attribute diversity of the query learning mechanism.
Methods And Evaluation Criteria: The method proposed in the paper is meaningful for the application of fine-grained image retrieval. By focusing on attribute-level information and optimizing low-bit hash codes, it offers a robust and interpretable framework for this task.
Theoretical Claims: The theoretical proof in the paper seems to be reasonable.
Experimental Designs Or Analyses: The authors compare the results of different methods on five fine-grained datasets and present a lot of visualizations. However, a major contribution of the paper is the introduction of auxiliary branches. The key experiments regarding the hyperparameter N are only conducted on one dataset, which makes this part of the experimentation insufficient.
Supplementary Material: The authors did not submit any supplementary materials.
Relation To Broader Scientific Literature: The paper presents an advanced method that generates attribute-aware hash codes optimized through queries for large-scale fine-grained image retrieval. Compared to previous works such as AGMH and SEMICON, this method achieves superior retrieval performance.
Essential References Not Discussed: No essential references not discussed
Other Strengths And Weaknesses: Strengths:
1. The proposed method achieves superior retrieval performance and can generate attribute-aware hash codes.
2. The introduction of the auxiliary branch and the corresponding analysis is interesting.
3. The authors provide many visualization results.
Weaknesses:
1. The paper lacks a comparison with the recent method ConceptHash [1].
2. The paper discusses the limitation of large class numbers with low feature dimensions. However, the experiments related to the hyperparameter N, which are directly relevant to this analysis, were only conducted on one dataset, which is very insufficient.
3. The authors provide a detailed description of the motivation behind the auxiliary branch design, but only a brief description is given regarding the specific operations.
Ref:
[1]Ng, K. W., Zhu, X., Song, Y.-Z., and Xiang, T. Concepthash: Interpretable fine-grained hashing via concept discovery. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshop, pp. 1211–1223, 2024.
Other Comments Or Suggestions: 1. The Related Work section on 'Set Prediction and Parallel Decoding' could be divided into two parts.
Questions For Authors: 1. For key questions, please refer to the Weaknesses section.
2. What is the difference between the Query Optimization mentioned in the paper and the c-vectors optimization in CMBH [1]?
Ref:
[1]Chen, Z.-D., Zhao, L.-J., Zhang, Z.-C., Luo, X., and Xu, X.-S. Characteristics matching based hash codes generation for efficient fine-grained image retrieval. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 17273–17281, 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment1:** *The paper lacks a comparison with the recent method ConceptHash*
**Reply1:**
| Method | bits | CUB-200 | Aircraft | Stanford Cars |
|--------------|------|---------|----------|---------------|
| ConceptHash | 16 | 83.45 | 82.76 | 91.70 |
| | 32 | 85.27 | 83.54 |**92.60**|
| | 64 | 85.50 | 84.05 | 93.01 |
| Ours$^+$ | 16 | **84.12** | **84.16**|**92.12** |
| | 32 | **86.02** | **84.26**| 92.40 |
| | 64 |**86.76** | **85.76**|**93.46** |
**Table 1: Comparison of Retrieval Accuracy (%mAP) with ConceptHash. $^+$ denotes that our method is trained with classification and uses the ViT-Base backbone.**
Due to the different experimental settings between ConceptHash and our experiments (both in the main paper and appendix), we made adjustments to our own method and re-conducted the experiments. The results are shown in Table 1, demonstrating that our method is comparable to ConceptHash.
---
**Comment2:** *The paper discusses the limitation of large class numbers with low feature dimensions. However, the experiments related to the hyperparameter N, which are directly relevant to this analysis, were only conducted on one dataset, which is very insufficient.*
**Reply2:**
| $N$ | 1 | 2 | 4 | 6 | 8 | 12 |
|----------|--------|--------|--------|--------|--------|--------|
| CUB200 | 33.33 | 52.44 | 69.46 | 71.57 | 72.19 | 72.36 |
| Aircraft | 39.43 | 65.70 | 74.51 | 75.20 | 78.47 | 78.38 |
| Food101 | 43.79 | 66.16 | 71.91 | 69.67 | 70.69 | 71.08 |
| NABirds | 7.48 | 11.01 | 18.88 | 24.17 | 28.13 | 32.08 |
| VegFru | 18.16 | 27.76 | 39.08 | 49.66 | 69.76 | 71.68 |
**Table 2: Comparison results for hyperparameter N. Results are based on five commonly used benchmark datasets under the 12-bit setting.**
Thank you for your suggestion. We have provided more experimental results in Table 2. The results on different datasets show that as $N$ increases from 1 to 6, the retrieval performance improves rapidly. Once $N$ exceeds 6, the performance stabilizes, which follows the same trend as the change of $\mu$ shown in Figure 3.
---
**Comment3:** *The authors provide a detailed description of the motivation behind the auxiliary branch design, but only a brief description is given regarding the specific operations.*
**Reply3:**
For the specific implementation of the auxiliary branch, we provide a detailed description. Given a query $q_i \in \mathbb{R}^d$, we divide $q_i$ evenly into multiple parts. We then perform a circular shift with step sizes of $1, 2, 3, ..., N-1$. For each of the $N-1$ different step sizes, we extend the original $q_i$ into $N-1$ new queries, $\hat{q}_i^j$, where $j = 1, 2, 3, ..., N-1$ indexes the different step sizes and all queries share the same parameters. We refer to this extension process as query transformation. We initialize a total of $k$ learnable queries. After the query transformation operation, each query is passed to the decoder for computation according to Equation 3. Finally, we obtain $\hat{h} \in \mathbb{R}^{N \times k}$, which is optimized using the loss function defined in Equation 6.
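The circular-shift extension described above can be sketched in NumPy. This is our reading of the rebuttal's description, a minimal sketch assuming $d$ is divisible by $N$ (the function and variable names are ours, not the authors'):

```python
import numpy as np

def query_transformation(q, N):
    """Extend one d-dim query into N-1 variants by circularly shifting
    its N equal segments with step sizes 1..N-1; all variants share the
    original parameters of q, only the segment order changes."""
    d = q.shape[0]
    assert d % N == 0, "assumes d divisible by N"
    segments = q.reshape(N, d // N)
    variants = [np.roll(segments, j, axis=0).reshape(d) for j in range(1, N)]
    return np.stack(variants)  # shape (N-1, d)

q = np.arange(8.0)                      # toy query, d = 8
extended = query_transformation(q, N=2)
# extended[0] is q with its two halves swapped: [4, 5, 6, 7, 0, 1, 2, 3]
```

Each of the $N-1$ shifted queries is then passed to the decoder alongside the original, yielding the $\hat{h} \in \mathbb{R}^{N \times k}$ logits optimized by the auxiliary loss.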
**Comment4:** *The Related Work section on 'Set Prediction and Parallel Decoding' could be divided into two parts.*
**Reply4:**
We sincerely appreciate your valuable suggestion regarding the organization of the related work section. We have restructured this part to improve clarity.
---
**Questions1:**
*What is the difference between Query Optimization mentioned in the paper and c-vectors optimization in CMBH?*
**Reply1:**
Our approach differs from CMBH in the following ways:
1. CMBH’s contribution focuses on better extracting subtle image features but does not improve the process of generating hash codes. This distinction means that the hash codes generated by CMBH are not interpretable, while our approach generates hash codes that can indicate whether an image possesses a certain visual attribute.
2. We primarily follow the pairwise setting, which means that we cannot use the classification task for training. In contrast, CMBH’s optimization process is designed around a classification task, which cannot be used in the pairwise setting.
Claims And Evidence: The paper claims that the proposed method can capture attribute-level information from images. The authors provide single bit retrieval results, demonstrating that the generated hash code can indicate nuanced visual attributes that can distinguish different fine-grained categories. The visualization results of the heatmaps also show that different queries can focus on different parts of the image.
Methods And Evaluation Criteria: The proposed method is well-suited for the problem of fine-grained image retrieval. It not only achieves superior retrieval performance but also brings interpretability to the generated hash codes.
Theoretical Claims: The proof of the lower bound for μ in the paper seems to be free of issues.
Experimental Designs Or Analyses: During the query learning process, no attribute-level annotations were provided. Interestingly, the authors present corresponding quantitative results and visualizations that showcase the attribute-aware characteristics of the generated hash codes and further analyze the diversity of query learning.
Supplementary Material: There is no supplementary material submitted by the authors.
Relation To Broader Scientific Literature: The innovation of this paper is reflected in its practicality, as it combines semantic decoupling of fine-grained hashing with a lightweight model. This approach effectively addresses the performance limitations of existing methods (such as ExchNet and SEMICON) in low-bit scenarios. By introducing a learnable query mechanism, the proposed method differentiates from the attribute-aware learning framework of A²-NET, achieving automated attribute decoupling of complex global image features.
Essential References Not Discussed: The related references are discussed.
Other Strengths And Weaknesses: Strengths:
1. The paper utilizes a query learning mechanism with learnable queries to generate attribute-aware hash codes, achieving not only performance improvements but also enhanced interpretability.
2. The explanation from the perspective of cosine similarity is clearly exemplified by Figures 3 and 4. This fresh perspective substantiates its novelty.
3. The experiments in this paper are sufficient, demonstrating significant performance improvements, especially in low-bit scenarios.
Weaknesses:
1. The subtle feature extractor employs a multi-scale framework in conjunction with a multi-head self-attention (MHSA) mechanism. This method is widely adopted in deep learning, particularly for image feature extraction. For instance, architectures like Mobile-Former and ConViT exemplify this trend. While the design is reasonable, it does not offer significant innovation.
Other Comments Or Suggestions: 1. Line 58 should use 'demonstrates,' and line 75 should use 'are'.
Questions For Authors: 1.How many layers of cross attention does the decoder part consist of?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Comment1:** *The subtle feature extractor employs a multi-scale framework in conjunction with a multi-head self-attention (MHSA) mechanism. This method is widely adopted in deep learning, particularly for image feature extraction. For instance, architectures like Mobile-Former and ConVit exemplify this trend. While the design is reasonable, it does not offer significant innovation.*
**Reply1:**
We would like to emphasize that:
1. We model the hash problem as a set prediction problem, where each element in the set represents a bit of the hash code that can indicate a visual attribute. Specifically, our method uses $ k $ learnable queries to directly decouple distinguishable visual attributes from complex feature representations to generate the $ k $-bit hash code.
2. Previous fine-grained retrieval methods have overlooked the low-bit problem. We provide an analysis from the perspective of cosine similarity and design a query transformation strategy that effectively alleviates this issue.
3. Regarding the design of the subtle feature extractor, our main objective is to use a lightweight and simple strategy to extract fine-grained features. Overly complex feature extraction modules also lead to larger model parameters and greater computational overhead. Some previous methods have primarily focused on the design of this module, while neglecting improvements in the hash code generation process.
Overall, the first two key contributions have never been explored in previous work. At the same time, using our subtle feature extractor makes the method more lightweight and reduces computational overhead.
---
**Comment2:** *Line 58 should use 'demonstrates,' and line 75 should use 'are'.*
**Reply2:**
We sincerely appreciate your thorough review. We have carefully addressed the identified issues and conducted further proofreading of the manuscript to ensure its quality.
---
**Questions1:**
*How many layers of cross attention does the decoder part consist of?*
**Reply1:**
The decoder consists of a single layer. Its lightweight design ensures that the introduction of auxiliary branches does not incur significant computational overhead.
---
Rebuttal Comment 1.1:
Comment: I have carefully read the authors' rebuttal and the feedback has well addressed my questions and concerns. Thus, I would like to raise the score to 4. Thanks. | Summary: This paper presents a query optimization-based attribute-aware hash code generation method. First, a hybrid convolution and attention structure is utilized to obtain rich representations. Second, unlike other works that simply use fully connected layers to generate hash codes, this paper leverages a decoder and a set of learnable queries to automatically decouple different attribute features. Third, the paper incorporates an auxiliary branch to help alleviate the challenges of complex landscape optimization. Quantitative experimental results demonstrate the high retrieval performance of the method, while qualitative results show that the learned bits exhibit visual attribute-level interpretability. Additionally, the method is relatively lightweight, resulting in low computational overhead in practical applications.
Claims And Evidence: The claims made in the submission are clearly supported by convincing evidence. The author provides a well-rounded and methodologically sound approach. The motivation and the method of this paper are clearly described, and the extensive experimental results offer rigorous validation, demonstrating the superior efficacy and robustness of the proposed method across diverse datasets and scenarios.
Methods And Evaluation Criteria: The large-scale fine-grained retrieval problem studied in this paper is a fundamental and practical task, and the proposed method can be treated as an effective solution.
Theoretical Claims: I have checked the proofs concerning the theoretical claims. The analysis and proofs are both reasonable.
Experimental Designs Or Analyses: The authors conducted extensive experiments to compare the performance of the proposed method with other methods. Additionally, the authors performed experimental analysis on the method itself, which helps readers better understand the proposed method.
Supplementary Material: The authors did not include supplementary material with their submission.
Relation To Broader Scientific Literature: This paper proposes an efficient attribute-aware hash code generation method. Unlike the stepwise attention proposed by AGMH for better feature extraction, this paper uses a decoder to decouple the extracted features, allowing the generated hash bits to keep key information from the images. This idea contributes a solution to the field of fine-grained image retrieval. Additionally, the introduction of an auxiliary branch is quite interesting. It incurs only a small amount of extra computational overhead, helping the model to alleviate the challenges of optimization in low-bit scenarios while enhancing performance.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. This paper is well written and easy to follow.
2. The motivation is clear, and the method is reasonable. The generated hash codes are associated with distinguishable attribute-level information, which brings interpretability to the retrieval process. Qualitative experimental results show that the proposed method offers interpretability.
3. The experimental design is well-structured, and Section 4.3 provides a thorough analysis of the experiments, offering a comprehensive evaluation of the proposed method.
4. The proposed method demonstrates strong performance while requiring less computational overhead.
Weaknesses:
1. One of the main contributions of the paper is the introduction of an additional auxiliary branch. However, the description of this operation is too brief and does not sufficiently clarify its specific implementation. It is recommended that the authors provide a more detailed explanation of the auxiliary branch, including the interaction between queries and features, so that readers can better understand the significance and practical implications of this contribution.
Other Comments Or Suggestions: 1. Figure 7 could provide a more detailed description.
Questions For Authors: What is the relationship between the two decoders in Figure 4?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment1:** *One of the main contributions of the paper is the introduction of an additional auxiliary branch. However, the description of this operation is too brief and does not sufficiently clarify its specific implementation. It is recommended that the authors provide a more detailed explanation of the auxiliary branch, including the interaction between queries and features, so that readers can better understand the significance and practical implications of this contribution.*
**Reply1:**
We sincerely appreciate your valuable feedback. Without the auxiliary branches, the learnable queries interact with the extracted features in a fixed manner. For instance, each query can be split into $N$ segments. When $N=2$, the first segment consistently interacts with features corresponding to the first half of the image channels, while the second segment interacts with features from the latter half. However, when an auxiliary branch is introduced, the interaction pattern undergoes a significant change. In the case where $N=2$, the first segment of the query can interact with features from either the first half or the latter half of the image channels, and the same applies to the second segment. The introduction of auxiliary branches expands the receptive range of queries across the channel dimension. Since different channels typically encode distinct semantic information or visual features, this enhancement allows the optimized queries to better capture visual attributes across different images.
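The fixed versus flexible pairing described above can be sketched numerically. The shapes and the bilinear interaction below are our own illustration, not the authors' actual implementation:

```python
import numpy as np

# Toy sketch of the query-feature interaction described above (shapes and
# the bilinear form are illustrative, not the authors' implementation).
rng = np.random.default_rng(0)
N, seg_dim, grp_dim = 2, 4, 8
q_segs = rng.normal(size=(N, seg_dim))    # learnable query split into N segments
f_grps = rng.normal(size=(N, grp_dim))    # image channels split into N groups
W = rng.normal(size=(seg_dim, grp_dim))   # shared interaction weights

# Fixed pairing (no auxiliary branch): segment i only sees channel group i.
fixed_scores = np.einsum("ns,sg,ng->n", q_segs, W, f_grps)

# With an auxiliary branch: every segment may interact with every group,
# widening the queries' receptive range over the channel dimension.
flex_scores = np.einsum("ns,sg,mg->nm", q_segs, W, f_grps)
```

Note that the diagonal of the flexible score matrix recovers the fixed pairing, so the auxiliary branch strictly enlarges the set of possible interactions.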
---
**Comment2:** *Figure 7 could provide a more detailed description.*
**Reply2:**
We sincerely appreciate your valuable feedback. In Figure 7, we present the visualization results of heatmaps across different datasets. Each row corresponds to a distinct learnable attribute query ($q_i \in \mathbb{R}^d$). The heatmaps generated by different $q_i$ demonstrate that the learned queries effectively focus on different parts of the objects. For instance, on the $CUB200$ dataset, certain queries attend to the body, while others focus on the beak of a bird. On the $Aircraft$ dataset, some queries focus on the wings, while others focus on the tail of the airplane. These visualizations qualitatively illustrate that our proposed query learning achieves strong interpretability.
---
**Questions1:**
*What is the relationship between the two decoders in Figure 4?*
**Reply1:**
Figure 4 shows the visualized results of the loss landscape under different conditions.
In Figure 5, the two decoders share the same weights. | null | null | null | null | null | null |
Optimizing Temperature for Language Models with Multi-Sample Inference | Accept (poster) | Summary: The authors introduce a data-free way to automatically find the best sampling temperature for multi-sample generation in LLMs. By detecting a sharp “turning point” in the model’s token-level entropy, they find a good balance between quality and diversity.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Research on LLMs is extensive; understanding the impact of temperature is a significant contribution to the field.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: #### Strengths
- The paper is clearly written and well presented.
- It compares multiple LLMs, identifying critical turning points for each. Additionally, it demonstrates that the same LLM can behave differently depending on whether it has been specialized for a given task.
#### Weaknesses
See Questions.
Other Comments Or Suggestions: See Questions.
Questions For Authors: 1. It is not entirely clear why the authors do not include a comparison to a baseline that has access to labels. Although I understand that TURN does not require labels, selecting the optimal temperature typically occurs before deployment, where labels are commonly available. Could you provide a rationale for excluding this baseline? Additionally, could you elaborate on scenarios where selecting the optimal temperature is necessary but labels are not accessible?
2. Do reasoning models follow the same paradigm? It would be helpful if you could provide results using available reasoning models (e.g., Gemma-3, Qwen).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer tFFz,
Thank you for your thoughtful feedback and for recognizing the significance of our work. We appreciate your positive remarks on our approach and presentation. Our detailed responses are as follows:
> *It is not entirely clear why the authors do not include a comparison to a baseline that has access to labels. Although I understand that TURN does not require labels, selecting the optimal temperature typically occurs before deployment, where labels are commonly available. Could you provide a rationale for excluding this baseline? Additionally, could you elaborate on scenarios where selecting the optimal temperature is necessary but labels are not accessible?*
>
We compared TURN to grid search using the test set as validation, which serves as a strong upper-bound baseline that has access to labels. Our results show that TURN correlates very highly with grid search (Figure 1B) and even outperforms the best fixed-temperature baseline (Table 2).
In real-world scenarios, labeled data is often scarce or expensive to obtain. For example, in domains like robotics or drug discovery, labeled samples are difficult to collect. Furthermore, when training models that go beyond human performance or require super alignment (e.g., for scientific discovery or working with new or unsolved math problems), ground-truth labels may no longer exist, making label-free methods essential.
> *Do reasoning models follow the same paradigm? It would be helpful if you could provide results using available reasoning models (e.g., Gemma-3, Qwen).*
>
Thank you for your insightful question about whether reasoning models follow the same paradigm. Evaluating temperature sensitivity in reasoning models is indeed a valuable consideration. At the time of our original submission, the first well-known open-source reasoning model, DeepSeek-R1, was released on January 20, 2025—approximately 10 days before the ICML deadline. This timing prevented us from including such an evaluation in our initial work.
Since then, we have conducted an additional experiment using **DeepSeek-R1-Distill-Qwen-7B** on the MATH dataset. Below are the results (content length = 1000, sample size = 128, majority voting):
| Temperature | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | 1.1 | 1.3 | 1.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 0.847 | 0.860 | 0.855 | **0.874** | 0.860 | 0.830 | 0.830 | 0.800 |
Our findings suggest that temperature has a relatively low impact on the accuracy of **DeepSeek-R1-Distill-Qwen-7B** on the MATH dataset, with performance remaining consistently high across the tested range (peaking at 0.874 at T=0.7). This stability could be attributed to the model being heavily optimized for the MATH dataset and then overfitting it, or the dataset is relatively simple for reasoning models. Due to time constraints, we are unable to explore additional reasoning models or datasets, but we recognize this as a promising direction for future work.
We appreciate your thoughtful review and welcome any further questions or suggestions.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing additional results on reasoning on MATH (DeepSeek R1). While a more in-depth study would indeed have been beneficial, I find the current work valuable overall. Therefore, I am maintaining my recommendation for acceptance. | Summary: The authors explore how to automatically determine the optimal temperature for large language models (LLMs) in multi-sample inference settings, without relying on labeled validation data. The authors analyze temperature’s role in balancing diversity and quality in generated samples and propose TURN (Turning Point Temperature Selection), an entropy-based method for automatic temperature tuning. A key insight is the Entropy Turning Point (EntP)—the temperature at which the log-entropy curve shifts from concave to convex—which strongly correlates with the best-performing temperature found via grid search. Through extensive experiments across diverse models and tasks (e.g., mathematical reasoning and code generation), TURN demonstrates robust generalization and outperforms fixed-temperature baselines, offering a practical solution for optimizing LLM inference.
Claims And Evidence: TURN estimates optimal temperatures without relying on external labels by using token-level entropy. This is a clear advantage over existing methods that require validation data. Also empirical correlations (Fig. 4) between EntP and grid-searched optimal temperatures, showing strong alignment. However, while the results are consistent across datasets, the explanation of why EntP works theoretically is based on a stochastic process model, which is a simplification of real LLM behavior.
Methods And Evaluation Criteria: The method leverages token-level entropy to estimate an optimal temperature without requiring labeled validation data, which is a practical and scalable approach. The evaluation is conducted on two benchmark datasets (MATH for reasoning and MBPP for code generation), which are reasonable choices since they rely heavily on multi-sample aggregation strategies like majority voting and best-of-N. The experiments systematically test different models, including general-purpose and task-finetuned variants, to assess robustness. Key metrics—hit rate, temperature gap, and performance drop—provide a clear measure of how well TURN approximates the best temperature compared to grid search.
Theoretical Claims: There is no theorem that is proved.
Experimental Designs Or Analyses: The observed correlation between Entropy Turning Points (EntP) and optimal temperatures supports the method’s validity. The stochastic process model is an interesting theoretical tool to explain entropy behavior, but it remains a simplified approximation that may not capture all aspects of LLM sampling dynamics. The sample efficiency analysis (Table 3) is a strong aspect, demonstrating that TURN requires relatively few samples to make accurate predictions.
Supplementary Material: Yes, Part B.2
Relation To Broader Scientific Literature: The paper's key contributions align with broader research on temperature calibration in language models, multi-sample inference strategies, and self-assessment metrics for optimization. Prior work has established that temperature tuning significantly impacts generation quality, affecting the trade-off between diversity and coherence (Holtzman et al., 2019; Renze & Guven, 2024)
Essential References Not Discussed: Not strictly related but also a multi-sample or more specifically multi-domain scaling approach: Robust Calibration with Multi-domain Temperature Scaling. Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan.
Other Strengths And Weaknesses: I am concerned about the novelty and evaluation here because temperature scaling is common in calibration. Now the paper only measures the entropy turning point and accuracy, it is unclear whether this can apply to canonical setting in calibration - optimizing for a temperature using a holdout validation set to see how the model is better calibrated for decoding. The evaluation does not include a wider range of metrics such as fluency, diversity etc as in the adaptive calibration paper. It is crucial because having a small temperature would
Another concern is that, per [1], tuning the temperature does not affect model performance much once the model is large enough.
[1] The Effect of Sampling Temperature on Problem Solving in Large Language Models. Matthew Renze, Erhan Guven
Other Comments Or Suggestions: Please include more takeaways in the Fig. 1 legend, as the concept of EntP is introduced late.
Questions For Authors: Usually for math and coding, a good temperature is low, e.g. 0.5, so how does the found optimal temperature vary across different tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer JAaC,
Thank you for acknowledging our efforts. Our detailed responses are as follows:
> ***Essential References Not Discussed:** Not strictly related but also a multi-sample or more specifically multi-domain scaling approach: Robust Calibration with Multi-domain Temperature Scaling. Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan.*
>
Thank you for pointing out this reference. *Yu et al.* present a method for training a temperature prediction model with labeled data applicable across multiple tasks for confidence calibration. We will cite and discuss it in our paper. The key distinction is that our study focuses on **accuracy improvement in multi-sample inference** using **label-free temperature selection** in **natural language processing**, whereas the referenced work centers on **confidence calibration** in **computer vision**.
> *I am concerned about the novelty and evaluation here because temperature scaling is common in calibration. Now the paper only measures the entropy turning point and accuracy, it is unclear whether this can apply to canonical setting in calibration - optimizing for a temperature using a holdout validation set to see how the model is better calibrated for decoding.*
>
While temperature tuning is a well-studied technique in the context of calibration, the novelty of our approach lies in its methodology. As noted by other reviewers (3z2D, NxtV, TgFe, and 3FXt), our method, TURN, determines temperatures based on the **intrinsic probability distribution, without requiring a validation set**. In contrast, traditional temperature calibration relies on **validation data**.
It is also worth highlighting that we include **grid search on test sets** as a baseline in our evaluation. This serves as an **upper bound** for achievable performance. TURN exhibits a **high correlation** between its predicted temperatures and those identified via grid search (Figure 1B), and it **outperforms fixed-temperature baselines** (Table 2).
> *The evaluation does not include a wider range of metrics such as fluency, diversity etc as in the adaptive calibration paper. It is crucial because having a small temperature would*
>
Our evaluation is focused on **multi-sample inference** strategies like *best-of-N* and *majority voting*, primarily in domains such as math and code, where **accuracy** is an appropriate metric. These tasks are less suited for evaluating fluency or diversity, which are more relevant in open-ended generative tasks.
As for your reference to an "*adaptive calibration*" paper, though without a specific citation, we identified the relevant works:
*[1] Xie et al. Calibrating Language Models with Adaptive Temperature Scaling*
*[2] Huang et al. Calibrating Long-form Generations from Large Language Models*
However, we did not find fluency or diversity as metrics in them. If a different work was intended, we would appreciate a specific reference so we can address it more thoroughly.
> *Another concern is that, per [1], tuning the temperature does not affect model performance much once the model is large enough.*
>
>
> *[1] The Effect of Sampling Temperature on Problem Solving in Large Language Models. Matthew Renze, Erhan Guven*
>
Thank you for raising this point. We add an evaluation of a relatively large model (Qwen2.5-32B-Instruct) on MATH at two temperatures using majority voting (sample size = 16):
| Temperature | 0.1 | 0.7 |
| --- | --- | --- |
| Accuracy | 0.825 | **0.870** |
Notably, increasing the temperature from 0.1 to 0.7 improved performance by 4.5%, a significant margin given the already high baseline.
While *Renze & Guven* focus on **single-output accuracy**, our study evaluates **multi-sample inference**, where **sample diversity** and **accuracy** are crucial. Thus, our findings do not contradict their conclusions.
> *Please include more takeaways in the Fig. 1 legend as the concept of EntP is introduced late.*
>
Thank you for the constructive suggestion. We have revised Figure 1 and its caption to better introduce and clarify the EntP concept. The updated version can be found at https://drive.google.com/file/d/1O9DOpEW0z3GRjID6AQeUHT-Z869Tyrqe/view?usp=sharing
> *Usually for math and coding, a good temperature is low, e.g. 0.5, so how does the found optimal temperature vary across different tasks?*
>
You make a great point. When generating only a few samples (e.g., 4), lower temperatures (0–0.3) often perform better (see Figures 12 and 13). However, with more samples (e.g., 32), low temperatures lead to repetitive outputs, while higher temperatures promote diversity, which is beneficial in multi-sample settings. Additionally, the optimal temperature depends on the task and aggregation methods. For example, best-of-N favors diversity since only one strong candidate is needed, leading to a higher optimal temperature compared to majority voting.
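The contrast between the two aggregation methods discussed above can be sketched with toy data (answers and scores below are made up, not real model outputs):

```python
from collections import Counter

# Toy illustration of the two aggregation strategies discussed above.
samples = ["12", "12", "7", "12", "9"]   # final answers from N sampled generations
scores = [0.3, 0.4, 0.9, 0.2, 0.5]       # e.g. verifier scores used for best-of-N

majority = Counter(samples).most_common(1)[0][0]
best_of_n = samples[max(range(len(samples)), key=scores.__getitem__)]

# Majority voting rewards agreement across diverse samples, while
# best-of-N only needs a single high-scoring candidate to succeed,
# which is why it tends to tolerate (and benefit from) higher temperatures.
```

Here majority voting selects "12" (three votes), while best-of-N selects "7" (highest score), illustrating how the two strategies can prefer different temperature regimes.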
We are looking forward to hearing your further comments!
---
Rebuttal Comment 1.1:
Comment: Thank you for adding the math result and the new figure. The rebuttal mostly addressed my concerns. I am not quite convinced by the notion of "intrinsic probability distribution" though because the approach did not modify model weights etc to get actionable insights. I increased my score by one. | Summary: This paper introduces TURN, an entropy-based approach for automatically determining optimal sampling temperatures for large language models using multi-sample aggregation strategies. The authors identify that different models require different temperature settings based on their training and observe an "entropy turning point" that strongly correlates with optimal temperature values. Their method outperforms fixed-temperature baselines without requiring labeled validation data.
Claims And Evidence: The paper presents two main claims:
1. The optimal temperature varies across models and correlates with training-task similarity
2. The entropy turning point (EntP) can predict optimal temperature without labeled data
Both claims are supported by extensive experimental evidence across multiple models (13 models on two distinct tasks). The correlation between model-task similarity and optimal temperature is compellingly demonstrated.
Methods And Evaluation Criteria: The methodology is sound and the evaluation criteria appropriate. The authors analyze temperature's impact on model performance using three well-defined metrics: Hit Rate, Temperature Gap, and Performance Drop. The comparison against fixed-temperature baselines is comprehensive and fair. One limitation is the restriction to majority voting and best-of-N strategies. Exploring weighted voting would strengthen the work.
Theoretical Claims: The stochastic process model provides a theoretical foundation for the observed entropy spike, but lacks rigorous mathematical proof. While intuitive and empirically grounded, the connection between the toy model and actual LLM behavior could be more formally established.
Experimental Designs Or Analyses: The experiments are thorough, examining 13 models across two distinct tasks. The temperature range exploration is methodical, and the sample size analysis demonstrates TURN's efficiency. However, the work would benefit from 1) testing on more diverse domains beyond mathematical reasoning and code generation, 2) including newer/different aggregation methods (e.g., weighted majority voting)
Supplementary Material: The supplementary materials are comprehensive, containing detailed experimental results, visualizations, and implementation details. The heatmaps and entropy curves for all tested models provide valuable context.
Relation To Broader Scientific Literature: The authors adequately position their work within the literature on sampling strategies for LLMs. However, connections to related fields like ensemble methods in traditional ML could be strengthened.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
- Novel insight connecting entropy characteristics to optimal temperature
- Method requires no labeled validation data
- Consistent performance gains across diverse models
Weaknesses:
- Limited task diversity (only math and code)
- Minimal discussion of computational overhead for entropy calculation
Other Comments Or Suggestions: - Investigate scaling laws of optimal temperature with model size would be great
- Analyze whether the optimal temperature varies by difficulty level within tasks.
Questions For Authors: 1. How does TURN perform on generative tasks with more subjective quality metrics?
2. Have you extended your method to tuning other sampling parameters like top-p
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 3FXt,
We appreciate your insightful recommendations. Our point-by-point responses are detailed below:
> *Minimal discussion of computational overhead for entropy calculation*
>
Table 3 in our paper presents the computational overhead associated with sampling. The results indicate that as few as 40 samples are sufficient for temperature estimation. For entropy computation, we leverage vLLM, which supports efficient inference and returns token probabilities. Calculating the entropy of 40 samples at one temperature setting for an 8B model takes approximately 2 minutes on an A6000 GPU.
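As a rough illustration of the quantity being computed, token-level entropy at a given temperature can be derived from per-token logits or log-probabilities. The shapes and values below are hypothetical, not the paper's exact pipeline:

```python
import numpy as np

def token_entropy(logits, temperature):
    """Mean per-token entropy of softmax(logits / T) over one generation."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))  # (seq_len, vocab) -- hypothetical values
low, high = token_entropy(logits, 0.5), token_entropy(logits, 1.5)
# Raising the temperature flattens the distribution, so `high` > `low`,
# and both stay below the uniform-distribution bound log(vocab).
```

In practice the logits would come from the serving engine's returned log-probabilities rather than being sampled at random.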
> *Investigate scaling laws of optimal temperature with model size would be great*
>
Thank you for the insightful suggestion. Due to the limited rebuttal period, we may not be able to complete this analysis, but we acknowledge its importance and leave it for future work.
> *Analyze whether the optimal temperature varies by difficulty level within tasks.*
>
Thank you for this valuable suggestion. We extended our analysis to test whether the optimal temperature varies by difficulty level on MATH. The table below reports the accuracy across difficulty levels (Model: Llama3.1-8B-Instruct, sample size: 128):
| | T=0.1 | T=0.3 | T=0.5 | T=0.7 | T=0.9 | T=1.1 |
| --- | --- | --- | --- | --- | --- | --- |
| level 1 | 0.90 | **0.91** | **0.91** | 0.90 | 0.88 | 0.86 |
| level 2 | 0.70 | **0.76** | **0.76** | **0.76** | **0.76** | 0.70 |
| level 3 | 0.68 | 0.70 | **0.73** | 0.69 | 0.68 | 0.68 |
| level 4 | 0.45 | 0.51 | 0.54 | **0.59** | 0.46 | 0.36 |
| level 5 | 0.21 | 0.31 | 0.27 | **0.32** | 0.28 | 0.21 |
While no consistent trend is observed in the optimal temperature across difficulty levels, temperature selection becomes more critical for harder problems. For example, the performance difference between T=0.1 and the optimal temperature is 1% for level-1 problems, but rises to 11% for level-5 problems. This highlights the increasing importance of temperature tuning in complex scenarios.
> *1) testing on more diverse domains beyond mathematical reasoning and code generation
Limited task diversity (only math and code)
How does TURN perform on generative tasks with more subjective quality metrics?*
>
Thank you for these suggestions. We primarily focus on problem-solving domains like math and code, where answers from language models can be aggregated and verified objectively. These domains have demonstrated the most success with multi-sample inference.
Reward models or human evaluations would be required for generative tasks with subjective metrics. However, these evaluation tools may suffer from reward hacking or be expensive. Therefore, to minimize external factors and focus on the language models themselves, we currently limit our experiments to domains with objective quality metrics.
> *2) including newer/different aggregation methods (e.g., weighted majority voting)*
>
Thank you for the insightful comment. We initially did not experiment with weighted majority voting for two reasons:
1. Introducing a reward model could introduce biases, such as reward hacking.
2. If the reward model were perfect (i.e., assigning +1 for a correct answer and 0 for an incorrect one), best-of-N and weighted majority voting would yield identical results.
Additionally, to address this within the given time constraints, we conducted an experiment using the log-scale probability of generation as the reward, minimizing external bias. We then applied weighted majority voting and report the results below (Model: Mistral-7B-SFT, Sample Size: 128, Dataset: MATH).
| Temperature | 0.2 | 0.4 | 0.6 | 0.8 | 0.9 | 1.1 | 1.3 | 1.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Majority Voting | 0.408 | 0.450 | 0.465 | 0.473 | 0.463 | 0.465 | 0.470 | 0.465 |
| Weighted Majority Voting | 0.400 | 0.453 | 0.468 | 0.465 | 0.458 | 0.463 | 0.470 | 0.463 |
Empirically, our results indicate that weighted majority voting, when using a reward model with similar performance as the generator, has minimal impact on performance. The optimal temperature range remains almost the same. Furthermore, since entropy estimation in TURN is independent of aggregation methods, TURN remains applicable in this setting.
Future work could explore whether a stronger reward model would provide meaningful benefits in this context.
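One plausible form of the weighted scheme above, with sequence probabilities (exponentiated total log-probabilities) as vote weights, can be sketched as follows. The answers and log-probabilities are invented, and the authors' exact weighting may differ:

```python
from collections import Counter, defaultdict
import math

# Hypothetical sketch of weighted majority voting using sequence
# probabilities as vote weights (values are made up for illustration).
answers = ["4", "4", "5", "4", "5"]                  # final answers per sample
logprobs = [-12.0, -15.0, -9.0, -14.0, -10.0]        # total sequence log-probs

weights = defaultdict(float)
for ans, lp in zip(answers, logprobs):
    weights[ans] += math.exp(lp)      # weight each vote by sequence probability

plain_winner = Counter(answers).most_common(1)[0][0]   # unweighted vote
weighted_winner = max(weights, key=weights.get)
# The two schemes can disagree: here the minority answer wins the weighted
# vote because its samples carry far higher sequence probability.
```

With a reward model of similar quality to the generator, as in the experiment above, such weight differences tend to be small, which is consistent with the near-identical accuracies reported in the table.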
> *Have you extended your method to tuning other sampling parameters like top-p*
>
We appreciate this suggestion. We conducted experiments involving other common sampling parameters (Model: Llama3.1-8B-Instruct, sample size: 128, Dataset: MATH):
| Sampling setup | T=0.5 | T=0.7 | T=0.9 | T=1.1 |
| --- | --- | --- | --- | --- |
| None | 0.625 | **0.650** | 0.597 | 0.482 |
| top-p=0.9 | 0.645 | **0.653** | 0.635 | 0.623 |
| top-k=20 | 0.650 | **0.653** | 0.613 | 0.563 |
These findings suggest that while alternative sampling parameters can slightly reduce temperature sensitivity, they do not significantly impact the optimal temperature.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their effort. Since my questions are adequately addressed, I will keep my rating at 4. | Summary: The paper tries to find the optimal temperature on various question answering tasks.
They notice that the optimal temperature is higher for models that are fine-tuned on the task, and lower for more general models.
Motivated by that, they set out to find a way to set the temperature that doesn't require a validation set.
To do that, they look at the entropy of generations. They notice that as you increase the temperature, there is a point at which entropy explodes. They hypothesise that this is the point at which performance breaks down, which does seem reasonable.
They therefore set the temperature using this point, which they find by looking at the inflection point of log(entropy).
They show modest performance improvements using their temperature selection scheme.
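The turning-point idea the summary describes can be sketched with synthetic data: find where log(entropy) switches from concave to convex as temperature grows. The entropy values below are invented, and the paper's exact fitting procedure may differ:

```python
import numpy as np

# Synthetic illustration: locate the temperature at which log(entropy)
# changes curvature from concave to convex (the entropy turning point).
temps = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5])
entropy = np.array([0.02, 0.10, 0.20, 0.30, 0.40, 0.60, 1.50, 6.00])

log_ent = np.log(entropy)
second_diff = np.diff(log_ent, n=2)       # discrete second derivative
idx = int(np.argmax(second_diff > 0))     # first sign flip: concave -> convex
turning_temp = temps[idx + 1]             # +1 recenters the second difference
print(turning_temp)                       # -> 0.9
```

On real models, the entropies would be estimated by sampling at each candidate temperature, and the temperature just before the entropy explosion would be selected.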
Claims And Evidence: The claims made are appropriate to the evidence. They include data such as plots of temperature vs token entropy for Llemma-7b (Fig. 1a; is this a typo?), and plots of the best accuracy, as obtained from a grid search, against the accuracy at the turning point (Fig. 1b). Figure 2a analyses performance vs temperature and number of samples, while 2b looks at the midpoint of the optimal temperature range vs the number of samples for Mistral-7B-Instruct.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs are appropriate to the claims made, and thorough, e.g. Table 1 looks at multiple models on the MATH and MBPP datasets.
Supplementary Material: No.
Relation To Broader Scientific Literature: Yes.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Overall, I think the paper has a number of interesting insights, and should be published.
Some other thoughts:
* It would be good to see performance before/after rather than performance drops (as in Table 1). These performance drops are quite difficult to interpret.
* The performance drops look relatively modest in the Tables, but very large in Figure 4A. What's going on here?
* I wonder whether what's going on is less "temperature is related to closeness of the task", and more "fine-tuned models can in general tolerate higher temperatures than pre-trained models". Not sure if you did this, but one way to disambiguate would be to take a model fine-tuned on one task, and look at the optimal temperature for a different task.
* Figure 1A is basically repeated as Figure 4A. I wasn't able to understand Figure 1 on my first reading, so I'd recommend dropping Figure 1.
* Also, Figure 1A/4A has two different y-axes, which is usually regarded as bad-practice. But I can see why it makes sense here.
* Perhaps the one weakness of the paper is that they find lots of interesting phenomena, such as the explosion in entropy at a particular temperature. But there isn't that much analysis of these phenomena. e.g. is there some easy way to see the generations go "off the rails"? Is it evident in samples?
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer TgFe,
Thank you for your supportive review and thoughtful suggestions. Please find our detailed responses below:
> *It would be good to see performance before/after rather than performance drops (as in Table 1). These performance drops are quite difficult to interpret.*
>
We appreciate this suggestion. Our goal in presenting performance drops was to highlight how close our estimated temperature comes to the true optimal temperature. The complete results are in **Appendix D.** We will add more explanation about performance drops in our revision.
> *The performance drops look relatively modest in the Tables, but very large in Figure 4A. What's going on here?*
>
Thank you for pointing this out. We believe there may have been a mix-up in the figure reference—**Figure 4A** is unrelated to performance drops. The figure showing drops in accuracy is **Figure 4B**, which visualizes the sensitivity of model performance to temperature.
To clarify:
- **Table 1** compares the performance of our **estimated temperature** to the **best temperature**, resulting in small performance drops due to the accuracy of our estimation.
- **Table 2** shows performance drops when using a **fixed temperature** (e.g., T=0.1), which can lead to significant degradation (e.g., up to 15.5%) when the temperature is poorly suited to the task.
- **Figure 4B** aligns with these findings, illustrating how performance deteriorates when the temperature is too low or too high.
> *I wonder whether what's going on is less "temperature is related to closeness of the task", and more "fine-tuned models can in general tolerate higher temperatures than pre-trained models". Not sure if you did this, but one way to disambiguate would be to take a model fine-tuned on one task, and look at the optimal temperature for a different task.*
>
This is a very insightful observation. Our primary claim is that **the optimal temperature correlates with task closeness**—that is, models perform best with higher temperatures when the test task closely matches their training data. However, we added experiments and found that **task-finetuned models applied to unrelated tasks tend to underperform**, making temperature analysis in these **task-mismatch** settings less meaningful. For example, *CodeLlama-8B*, fine-tuned on code tasks, achieves only **8% accuracy** on the MATH dataset. In such cases, the model’s poor task fit overshadows any insights from temperature tuning.
> *Figure 1A is basically repeated as Figure 4A. I wasn't able to understand Figure 1 on my first reading, so I'd recommend dropping Figure 1.*
>
Thank you for this constructive feedback. We agree that **Figure 1A** may be hard to grasp before reading the introduction. While it provides early motivation, we recognize the potential redundancy with **Figure 4A**. In our revision, we have refined Figure 1 with clearer captions and context, which can be found at [link](https://drive.google.com/file/d/1O9DOpEW0z3GRjID6AQeUHT-Z869Tyrqe/view?usp=sharing).
> *Perhaps the one weakness of the paper is that they find lots of interesting phenomena, such as the explosion in entropy at a particular temperature. But there isn't that much analysis of these phenomena. e.g. is there some easy way to see the generations go "off the rails"? Is it evident in samples?*
>
This is an excellent point. When the temperature exceeds a certain threshold, we frequently see sharp **entropy spikes**, which come from **erratic or degenerate outputs**. For example, consider the following excerpt from a high-temperature generation on a MATH problem:
> **Step 1: Step 1:** In order to find the probability that the product of the two numbers is a multiple of 5, we will find the total number of possible outcomes, and the number of outcomes that are a multiple of 5.
… …
Step 5: Case 1: The first integer is a 5. In that case, the second integer has a **$1\\in\\frac{6}{6}$** probability to be the second integer. Thus the number of ways this can occur is $6$.
Step 6: Case 2: The second integer is a 5. In that case, the second integer has a **$1\\in\\frac{6}{6}$** probability to be the first integer. Thus the number of ways this can occur is $6$.
Step 7: Therefore, the probability to obtain the desired product is **$$1-\\frac{6+6}{36}=\\boxed{\\frac{5}{6}}$$**
The answer is: \\\\boxed{\\frac{5}{6}}
**In the provided example, the answer to each step in Question 1 is a numerical value or a mathematical expression that … ...**
>
In this generation, we observe several breakdowns:
- **Repetitions**, such as “Step 1: Step 1”
- **Logical and numerical inaccuracies**
- **Unprompted continuation into irrelevant content**
These symptoms clearly illustrate how poor temperature selection, particularly at extremes, can degrade output quality. They also **complement** our entropy-based analysis qualitatively, reinforcing the idea that high-entropy generations often go “off the rails.” | Summary: This paper proposes a method to estimate the right temperature LLM. The method relies on "multi-sample aggregation strategies" which has the advantage to spare the costly and task-specific validation data. The idea is based on an extensive analysis of the impact of temperature temperature while varying model architectures, datasets, task types, ... The outcome is a novel entropy-based metric for automated temperature optimization, which allows the model to outperform the fixed-temperature baselines.
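For intuition on why entropy rises with temperature, here is a toy sketch (an illustration only, not the paper's entropy metric): dividing the logits by T flattens the softmax, so the entropy of a fixed next-token distribution grows as T increases.

```python
import numpy as np

def temperature_entropy(logits, T):
    """Shannon entropy (in nats) of softmax(logits / T)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                          # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# A peaked next-token distribution: one dominant logit.
logits = [8.0, 2.0, 1.0, 0.5, 0.0]
for T in (0.3, 0.7, 1.0, 1.5):
    print(f"T={T}: entropy={temperature_entropy(logits, T):.3f}")
```

At low T the distribution is nearly deterministic (entropy near 0); as T grows the tail tokens gain mass, which is the regime where degenerate continuations become likely.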
Claims And Evidence: - The paper is well-structured and easy to follow.
- The problem tackled is relevant and interesting.
- The identification of the concave-to-convex shift in entropy as a function of temperature and its connection to model accuracy is particularly insightful.
- The proposed method enables temperature tuning without requiring labeled validation data.
Overall, the key question is whether the proposed method demonstrates a clear advantage over a fixed-temperature baseline. This is the central issue because, in practice, a quick online search often yields general temperature recommendations for a given model, and these tend to fall within the presented optimal range. For instance, Llama models commonly use a temperature of 0.7 (see the technical reports), which generally aligns with the authors' reported optimal range across tasks. This is why clearer formatting and additional comparisons could help make this point more evident.
Methods And Evaluation Criteria: - The experiments appear somewhat disorganized. See remarks for specific concerns.
- The improvements achieved by the method over a fixed temperature setting are marginal, especially given the additional computational costs of generating multiple samples. See remarks for further discussion.
- The claim that adaptive beta does not require specific tuning lacks sufficient empirical support, as its generality is demonstrated on only one additional dataset.
- The evaluation process is computationally expensive, and it remains unclear how many samples are actually needed to reliably estimate the optimal temperature.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See the remark above
Supplementary Material: I did not look at it, if any.
Relation To Broader Scientific Literature: Seems OK
Essential References Not Discussed: No
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: See the remark above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer NxtV,
Thank you for your valuable and positive feedback. We are pleased to hear that you find our research problem compelling and appreciate the novelty of our proposed algorithm. Please find our detailed responses below:
> *For instance, Llama models commonly use a temperature of 0.7 (see the technical reports), which generally aligns with the authors' reported optimal range across tasks. This is why clearer formatting and additional comparisons could help make this point more evident.*
>
You're absolutely right that T=0.7 is often used as a default for general-purpose models like LLaMA, and in many cases, it performs reasonably well. However, when examining the optimal temperature across different models and tasks, we observe clear deviations from this default, particularly for task-finetuned and pretraining-stage models.
- **Task-finetuned models** often require **higher temperatures** to encourage greater diversity in generation.
- **Pretraining-stage models**, by contrast, tend to perform better with **lower temperatures** to reduce deviation and improve focus.
For example:
- On the **MATH** dataset, the optimal temperature for *LLaMA3.2-3B* is **T = 0.3**, which results in a **3% performance gain** over the default T = 0.7.
- Conversely, the optimal temperature for *DeepSeek-Math-7B-Instruct* is **T = 1.0**, outperforming T = 0.7 by **2%**.
These results highlight the importance of adaptive temperature selection across varying model-task combinations. We’ve improved the formatting in our final submission to make these comparisons clearer and more accessible.
> *The improvements achieved by the method over a fixed temperature setting are marginal*
>
One of our method's key strengths is its ability to operate **without any labeled validation data**, making it directly applicable to real-world test sets. As Reviewer 3z2D also noted, even modest gains can be valuable, particularly given this level of generality and ease of use.
Our method achieves an **average performance improvement of 0.75%** over the best fixed temperature across various models and tasks. Notably, **individual models** see gains of up to **2–3%**, which can be significant in competitive or high-stakes settings.
> *The claim that adaptive beta does not require specific tuning lacks sufficient empirical support, as its generality is demonstrated on only one additional dataset.*
>
Thank you for highlighting this concern. To further assess the generality of adaptive beta, we conducted an additional experiment on the GSM8k dataset. Due to the time constraints of the rebuttal period, we evaluated only one model (Llama3.1-8B-Instruct, temperature grid size=0.1).
| GSM8k | Majority Voting | Best-of-N | adaptive beta |
| --- | --- | --- | --- |
| Llama3.1-8B-Instruct | 0.9 | 1.0 | 0.1 |
Our results show that the adaptive beta calculated for GSM8k (0.1) is consistent with that of MATH (0.092, reported in Appendix C), providing initial evidence that adaptive beta can generalize to another dataset. However, we acknowledge that its behavior may vary across different aggregation functions, and we leave a more comprehensive investigation for future work.
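For reference, the two multi-sample aggregation strategies discussed here can be sketched as follows (hypothetical helpers, not the authors' code; `best_of_n` assumes some external per-sample score such as a reward-model or self-confidence score):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among sampled generations."""
    return Counter(answers).most_common(1)[0][0]

def best_of_n(answers, scores):
    """Return the answer from the sample with the highest external score."""
    return max(zip(answers, scores), key=lambda pair: pair[1])[0]

samples = ["5/6", "11/36", "11/36", "1/6", "11/36"]
print(majority_vote(samples))                          # "11/36"
print(best_of_n(samples, [0.2, 0.9, 0.7, 0.1, 0.6]))   # "11/36"
```

Both aggregators benefit from sample diversity, which is why the optimal temperature (and the adaptive beta weighting it) can differ between them.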
> *The evaluation process is computationally expensive, and it remains unclear how many samples are actually needed to reliably estimate the optimal temperature.*
>
We address this concern in Section 5.4 (Table 3) of the paper, where we analyze how sample size affects temperature estimation. Our findings show that the method is **robust even with small sample sizes**. For instance, with only **40 samples**, the estimated optimal temperature shows a **variance of only 0.005** and a **performance drop of only 0.2%**. Meanwhile, calculating the entropy for 40 samples at a temperature interval costs approximately 2 minutes on an A6000 GPU for an 8B model. This makes the method practical and cost-efficient, even in low-resource scenarios. | Summary: The paper investigates the role of temperature when using best-of-n and majority-voting inference strategies. The authors discover an intriguing phenomenon coined "The Entropy Turning Point" that can accurately predict the change-point between generating diverse high-quality samples and generating low-quality samples. This phenomenon is used to devise an algorithm for automatic temperature selection. In addition, the authors investigate the optimal temperature for pre-trained, instruction-tuned, and domain-fine-tuned LLMs.
Claims And Evidence: For the most part:
- Regarding the proposed distance between the model's train data and a given task, to support the claim that it is indeed an informative distance measure, it'd be interesting to see both code and math models on each of the Figure 3 plots. A good distance measure should identify that code models are further than instruct models from the math task and vice versa.
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: - The improvements with the temperature chosen by TURN are relatively modest in comparison to the empirically popular t=0.7. However, given the simplicity of adopting the method, even modest improvements are welcome.
- Best-of-n and majority voting are typically used with CoT-style prompts to achieve the best performance. Could you please provide additional details regarding the prompts used in the experiments? If CoT was not used, I recommend conducting additional experiments with CoT to validate the method in a more practical setting.
Supplementary Material: Yes, partially.
Relation To Broader Scientific Literature: As test-time scaling gains popularity, choosing temperature is an important consideration.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Please see my questions regarding CoT-prompting and the proposed distance. I'd be happy to increase the score if these questions are addressed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 3z2D,
Thank you for the insightful suggestions. We are glad you find our research problem important and provide positive reviews. Please find our detailed responses below:
> *it'd be interesting to see both code and math models on each of the Figure 3 plots. A good distance measure should identify that code models are further than instruct models from the math task and vice versa.*
>
To further investigate this, we conducted additional experiments comparing the distances between different model types and tasks. Specifically, we measured:
- The distance between the **code-finetuned model** (*code-llama-7b-hf-instruct*) and the **MATH** task,
- The distance between the **math-finetuned model** (*OpenMath2-Llama3.1-8b*) and the **MBPP (code)** dataset,
- And the corresponding distances for a **general-purpose model** (*Llama-3.1-8B-Instruct*) for comparison.
The results, shown in the table below, align with expectations: the code-finetuned model is furthest from the MATH task, while the math-finetuned model is furthest from the MBPP task. The general-purpose model falls in between for both. This result implies the robustness of our distance metric.
| Distance | OpenMath2-Llama3.1-8b (finetuned on math) | Llama-3.1-8B-Instruct (general purpose) | code-llama-7b-hf-instruct (finetuned on code) |
| --- | --- | --- | --- |
| MATH | **0.0867** | 0.1847 | 0.2152 |
| MBPP (code) | 1.0159 | 0.1901 | **0.1477** |
> *Could you please provide additional details regarding the prompts used in the experiments? If CoT was not used, I recommend conducting additional experiments with CoT to validate the method in a more practical setting.*
>
For the **MATH** task, we **did** use chain-of-thought (CoT) prompting. Specifically, we adopted prompts from the official codebase for models already finetuned on the MATH training set with typical CoT reasoning formats (e.g., *llama-7b-sft-metamath-hf*). For general-purpose instruction-tuned models, we added a **four-shot CoT prompt** containing step-by-step reasoning examples.
For the **coding task**, models were prompted to directly generate code solutions, following the default configuration from the *bigcode-evaluation-harness* benchmark.
We hope these clarifications and additional experiments address your concerns. Thank you again for your thoughtful feedback — we welcome any further questions or suggestions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. My questions have been addressed and I've updated my score. | null | null |
Improved Algorithm for Deep Active Learning under Imbalance via Optimal Separation | Accept (poster) | Summary: The paper proposes DIRECT, an algorithm for deep active learning under the dual challenges of class imbalance and label noise. DIRECT reduces the multi-class problem to a set of one-dimensional agnostic active learning subproblems by identifying an “optimal separation threshold” for each class. Annotating examples close to these thresholds is intended to yield a more balanced and informative labeled set. DIRECT is designed to support parallel annotation while still benefiting from guarantees drawn from classic active learning results. Experiments on several imbalanced datasets (with and without label noise) claim that DIRECT saves over 60% of the annotation budget compared to state-of-the-art methods and over 80% compared to random sampling.
## update after rebuttal
Thank you for your response. I appreciate that most of my concerns have been addressed. In light of these improvements, I will raise my rating.
Claims And Evidence: The paper makes strong claims regarding label efficiency improvements, robustness to label noise, and scalability via parallel annotation. Experimental results on multiple datasets generally support these claims. However, some aspects remain less clearly supported:
The absence of a direct comparison with SIMILAR—even though SIMILAR also addresses imbalanced data—is a gap that makes it difficult to assess the relative benefits of DIRECT in similar settings.
Methods And Evaluation Criteria: The methodological approach is innovative, particularly the reduction of the active learning problem to a one-dimensional thresholding task. The evaluation criteria—balanced accuracy and annotation cost—are appropriate for imbalanced learning scenarios. Nevertheless, the paper does not clearly explain the training procedure in each active learning iteration. For example, it is ambiguous whether a balanced data loader is used during retraining, which is crucial for mitigating imbalance during model updates.
Theoretical Claims: The theoretical contribution builds on existing agnostic active learning results (e.g., from ACED) to justify that the probability of misidentifying the optimal threshold decays exponentially with the annotation budget. While the reduction to a one-dimensional problem is elegant, some concerns persist:
(1) The proofs (provided in the appendix) assume that the behavior of deep neural network outputs is amenable to a threshold classifier analysis. In practice, the effect of label noise and imbalance on such outputs might be more complex.
(2) A more detailed discussion on how the theoretical guarantees translate into deep learning contexts would strengthen the paper’s claims.
Experimental Designs Or Analyses: The experimental section is extensive and evaluates DIRECT under various noise levels and across different architectures (ResNet-18 and CLIP ViT-B32). The experiments support the claim of improved label efficiency. However:
(1) The starting point for active learning (e.g., the initialization strategy and subsequent retraining details) is not thoroughly clarified, raising questions about reproducibility.
(2) As mentioned, the experiments lack a direct comparison with SIMILAR which handles imbalanced datasets effectively.
Supplementary Material: The supplementary material contains additional experimental results, detailed proofs, and analyses (including the time complexity comparison with GALAXY and BADGE).
Relation To Broader Scientific Literature: The paper builds on and extends several strands of active learning research:
(1) It leverages classic agnostic active learning theory to tackle deep learning challenges under imbalance and noise.
(2) It directly builds on prior work such as GALAXY.
(3) The reduction to one-dimensional threshold learning represents a creative attempt to bridge theory and practice in active learning for deep neural networks.
Essential References Not Discussed: The paper has cited relevant works.
Other Strengths And Weaknesses: Strengths:
(1) Aims to address a very practical and challenging scenario in real-world applications.
(2) The reduction to a one-dimensional active learning problem is novel and well-motivated.
(3) The ability to support parallel annotation is a significant practical advantage.
Weaknesses:
(1) Experiments results that connected empirical observations with theoretical understanding could help.
Other Comments Or Suggestions: Expanding on the computational cost analysis and other limitation discussion in the main text (or summarizing key points from Appendix C) would strengthen the practical implications of the work.
Questions For Authors: (1) Training Strategy: How is the model retrained at each active learning iteration? Specifically, is a balanced data loader used during retraining to mitigate class imbalance, and how does this affect performance?
(2) Comparison with SIMILAR: Given that SIMILAR also targets imbalanced datasets, can you provide a direct experimental comparison against the settings in the SIMILAR paper's figures, or a detailed discussion of how DIRECT's performance differs in similar settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Experimental Settings Compared to SIMILAR**
Our settings actually closely mirror the settings in SIMILAR. In SIMILAR, the rare class setup is very close to the long tail distribution setups in our paper. SIMILAR’s setting reduces the number of examples in some of the classes to form rare classes. Our long tailed setup also reduces the number of examples in different classes, but following a long tail distribution. In fact, the later half of the classes receive way less examples than the largest class, effectively making them the rare classes. We chose the long tail version for experiments since this setting has been much more widely adopted in deep learning literature than the construction in SIMILAR. Also, real world data often follow long-tailed distribution in practice. In our experiments, we included CIFAR-10LT and CIFAR-100LT in the original draft. Also, as suggested by Reviewer 7FVV, we also conducted extra experiment for ImageNet-LT (https://ibb.co/fVq3wsYJ) and iNaturalist datasets (https://ibb.co/F4fqyfdQ).
Our other setup, adopted from GALAXY, combines multiple classes into a single "other" or "out-of-distribution" class, which directly mirrors the out-of-distribution setting in SIMILAR. This is also known as the open-set classification setting, which is widely studied in the deep learning literature.
**Model Retraining Strategy**
As mentioned in our paper, we use a reweighting strategy by weighting the loss of each example by the inverse class frequencies. Reweighting is usually preferred over resampling strategies (balanced dataloader suggested by the reviewer) for deep learning as we want to reduce the repetition of examples during neural network training to avoid overfitting.
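The inverse-class-frequency reweighting described above can be sketched as follows (an illustration, not the authors' training code; the weights would typically be passed to a per-class weighted cross-entropy loss):

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-class loss weights proportional to inverse class frequency,
    normalized so the weights average to 1."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    w = 1.0 / np.maximum(counts, 1.0)   # guard against empty classes
    return w * num_classes / w.sum()

# An imbalanced labeled pool: class 0 dominates.
labels = np.array([0] * 90 + [1] * 9 + [2] * 1)
print(inverse_frequency_weights(labels, 3))  # rarest class gets the largest weight
```

Unlike resampling with a balanced data loader, this scheme never repeats minority examples within an epoch, which is the overfitting concern mentioned above.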
**Complete Time Complexity Analysis**
First, as noted in Appendix C of our paper, the dominating computational cost has always been neural network training and inference, which take up more than 90% of the total computational cost.
As for data selection algorithms, let $K$ be the number of classes, $N$ be the pool size and $B_{\text{train}}$ be the batch size, $D$ be the penultimate layer embedding dimension and $T$ be the number of batches. Below, we detail the computation cost of data selection of each algorithm we consider.
* DIRECT: $O(T(KN\log N + B_{\text{train}}N))$
* GALAXY: $O(T(KN\log N + B_{\text{train}}KN))$
* BADGE: $O(TB_{\text{train}}N(K + D))$
* Margin sampling / most-likely-positive / confidence sampling: $O(TKN)$
* Coreset: $O(T^2B_{\text{train}}ND)$
* SIMILAR: $O(TB_{\text{train}}ND)$
* Cluster Margin: $O(N^2\log N + TN(K + \log N))$
* BASE: $O(TN(D+B_{\text{train}}))$
**Theoretical Guarantee**
We are not sure what you mean by "In practice, the effect of label noise and imbalance on such outputs might be more complex." As the agnostic active learning algorithm specifically targets the label-noise scenario and the optimal 1-D threshold classifier is defined to address the imbalance issue, we believe our theoretical argument applies exactly to this 1-D reduction setting. Concretely, the agnostic active learning procedure in Algorithm 2 is exactly trying to recover the empirical risk minimizer (ERM) when freezing all of the neural network weights and looking only at the sigmoid/softmax score space. The agnostic active learning algorithm has been proven to be noise robust and to identify the ERM efficiently.
To bridge the theoretical guarantee and the effect for deep learning, in deep active learning, almost all algorithms are trying to balance between querying uncertain examples and diverse examples. In our case, we utilize the margin scores as an uncertainty measure. For diversity, we want to provide better data coverage in our annotation. This has been traditionally done in the representation space using penultimate layer embeddings or gradient embeddings. However, as we show in our paper, these methods (such as BADGE and Coreset) underperform DIRECT in imbalance scenarios. DIRECT focuses on class diversity, combined with uncertainty sampling, labeling a more balanced dataset of uncertain examples (i.e. ones around the optimal separation threshold). Our theoretical guarantee directly translates into how well we are ensuring class-balancedness of the annotated examples. With an accurate threshold, our annotation around the threshold would result in higher class-diversity in annotated examples. | Summary: ## update after rebuttal
After reading the rebuttal and the other reviews, the reviewer maintains the initial recommendation.
The paper introduces DIRECT, a new algorithm for deep active learning under class imbalance and label noise. The main contribution is a reduction of the imbalanced classification problem into a set of one-dimensional agnostic active learning problems, allowing the algorithm to identify optimal separation thresholds for each class. DIRECT selects the most uncertain examples near these thresholds for annotation, ensuring a more class-balanced and informative labeled set. The authors claim that DIRECT significantly reduces annotation costs by over 60% compared to state-of-the-art active learning methods and over 80% compared to random sampling, while maintaining robustness to label noise.
Claims And Evidence: Refer to other part.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand, but there are some limitations:
1. The authors use fine-tuning of ResNet-18 on imbalanced datasets but do not explain why fine-tuning is preferred over training from scratch. Additionally, the pre-training details of the model are not provided, which could be important for reproducibility.
2. The experiments do not include commonly used imbalanced datasets like ImageNet-LT and iNaturalist, which could provide additional validation of the algorithm's effectiveness.
3. The paper does not use the more common method of constructing imbalanced datasets by creating a long-tailed distribution[1]. Furthermore, it is unclear whether the test datasets are imbalanced or uniformly distributed, which could affect the evaluation of the algorithm's performance.
[1] Long-tail learning via logit adjustment
Theoretical Claims: The theoretical claims in the paper are partially supported. The proofs in the appendix establish the equivalence between the ERM solution based on the learner's output and the "best threshold" on the same training set. However, this does not directly address the key claim that this solution serves as the optimal threshold for active learning on unlabeled data. The theoretical justification for the algorithm's robustness to label noise is also lacking.
Experimental Designs Or Analyses: Yes. Refer to other part.
Supplementary Material: The supplementary material was reviewed, and it includes additional details on the experimental setups, theoretical proofs, and further results.
Relation To Broader Scientific Literature: The key contributions of the paper are well-situated within the broader literature on active learning and class imbalance.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The approach of combining class separation thresholds with one-dimensional active learning is innovative and provides a fresh perspective on tackling these issues.
2. The experimental results are comprehensive and demonstrate significant improvements over existing methods.
3. The paper is well-organized.
Weaknesses
1. The authors claim that DIRECT addresses both class imbalance and label noise issues, and the experiments include extensive evaluations under noisy settings. However, the algorithm itself does not appear to include any explicit mechanism designed to handle label noise. This is, in my opinion, the most significant weakness of the paper.
2. Recent research has shown that majority and minority classes often have different learning speeds during training. The proposed method does not seem to account for this, which makes the estimation of thresholds for minority classes particularly challenging.
3. There is some confusion in the notation used in the paper. In Section 4.1, the label space is described as [0,1], while in Equation 1, the labels y take values of 1 and 2. This inconsistency is puzzling and could lead to misunderstandings.
4. The authors do not clearly explain the difference between active learning for imbalanced datasets and general active learning. Intuitively, for an imbalanced problem, labeling samples from the minority class should yield higher benefits than labeling samples from the majority class. While Definition 4.1 seems intuitively effective, the authors do not provide a detailed explanation or experimental results to demonstrate the characteristics of the samples selected by active learning.
Other Comments Or Suggestions: Refer to weakness and questions.
Questions For Authors: 1. Why is the method called DIRECT? Is it an abbreviation? The paper does not seem to provide an explanation for the name.
2. Is the optimal separation threshold proposed in Definition 4.1 a novel contribution, or is it based on prior work?
3. Intuitively, in multi-class classification, $\hat{p}_i^k$ can take negative values, which seems unreasonable for probabilities.
4. Could the authors explain the motivation behind this paper? Specifically, in active learning, what makes certain unlabeled samples more beneficial to label, and how does this principle change when the training data is imbalanced?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the insightful and detailed review. We make the following clarifications to address your concerns.
**Handling Label Noise by Agnostic Active Learning Algorithm**
The large body of classic literature of agnostic active learning studies exactly the active learning under label noise scenarios (please see [1-3] as examples). These papers study sample complexities in identifying the optimal hypothesis when annotations are noisy. DIRECT applies an instance of the agnostic active learning algorithm in finding the optimal separation threshold, which inherits the noise robustness with theoretical guarantees to be minimax and instance optimal (shown in [1] and [3]). We will include some of these discussions in our paper.
[1] Balcan, M. F., ... & Langford, J. (2006, June). Agnostic active learning. ICML.
[2] Dasgupta, S., ... & Monteleoni, C. (2007). A general agnostic active learning algorithm. NeurIPS.
[3] Katz-Samuels, ... & Jamieson, K. (2021). Improved algorithms for agnostic pool-based active classification. ICML.
**Motivation and Justification behind DIRECT**
In active learning, almost all algorithms are trying to balance between querying uncertain examples and diverse examples. Uncertainty can come in many forms, such as entropy score, margin score, epistemic uncertainty, or even more recent methods such as local model smoothness and influence functions. In our paper, we choose to use margin sampling as it is computationally efficient, and has been shown to perform no worse than other more advanced methods in various benchmarking efforts [4-5]. We would also note that our method can use any of the scoring methods above in place of margin score.
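Margin-based uncertainty sampling, as used here, can be sketched in a few lines (a hypothetical illustration, not the authors' implementation):

```python
import numpy as np

def margin_scores(probs):
    """Margin = top-1 minus top-2 predicted probability; smaller = more uncertain."""
    sorted_p = np.sort(probs, axis=1)
    return sorted_p[:, -1] - sorted_p[:, -2]

def select_most_uncertain(probs, budget):
    """Indices of the `budget` examples with the smallest margins."""
    return np.argsort(margin_scores(probs))[:budget]

probs = np.array([
    [0.95, 0.03, 0.02],   # confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.48, 0.47, 0.05],   # most uncertain by margin
])
print(select_most_uncertain(probs, 2))  # picks rows 2 and 1
```

Any other scoring function (entropy, epistemic uncertainty, influence) could be swapped into `margin_scores` without changing the selection logic, which is the modularity noted above.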
The other beneficial philosophy is diversity in sampling. In other words, we want to provide better data coverage in our annotation. This has been traditionally done in the representation space using penultimate layer embeddings or gradient embeddings. However, as we show in our paper, these methods (such as BADGE and Coreset) underperform DIRECT in imbalance scenarios. DIRECT focuses on class diversity, combined with uncertainty sampling, labeling a more balanced dataset of uncertain examples (i.e. ones around the optimal separation threshold).
As we demonstrated in our experiments, we include plots for both accuracy and number of minority class labels, where the latter clearly shows DIRECT labels more class-diverse sets of examples.
Furthermore, to address your concerns, the optimal separation threshold is a novel proposal in our paper in the context of active learning.
[4] Zhang, J., Chen, Y., ... & Nowak, R. D. (2023). Labelbench: A comprehensive framework for benchmarking adaptive label-efficient learning. DMLR Journal.
[5] Bahri, D., ... & Rostamizadeh, A. (2022). Is margin all you need? An extensive empirical study of active learning on tabular data.
**Different Learning Speed of Minority Class**
We think the different learning speeds of minority and majority classes actually reinforce the motivation for DIRECT. Firstly, as DIRECT labels a more class-balanced set of examples, the learning speed of minority classes will be much faster than under other active learning algorithms. Furthermore, as we adaptively label more examples while identifying the optimal separation threshold (Algorithm 2), these additional labels can significantly help in estimating the threshold. Without them, as you suggested, the decision boundary would be very inaccurate, which is exactly what the other active learning algorithms experience. In other words, the slow learning speed of minority classes results in a poor uncertainty-threshold estimate (Figure 2a), and DIRECT mitigates this issue by adaptively labeling to find the optimal separation threshold instead of relying only on the current decision boundary.
**Additional Experiments Suggested by The Reviewer**
We actually do have the standard long-tail experiments in our paper, for CIFAR-10LT and CIFAR-100LT. In addition, we have conducted extra experiments under the LabelBench benchmark for the ImageNet-LT (https://ibb.co/fVq3wsYJ) and iNaturalist (https://ibb.co/F4fqyfdQ) datasets. In both cases, we clearly see DIRECT dominating the other active learning algorithms.
**Training Details**
Our pretrained ResNet-18 is the standard checkpoint in the PyTorch library. As pretrained models are now widely available, we believe fine-tuning of neural networks provides a better signal for practice.
**Notation Issues**
In our paper the label space is defined as $[K] = \{1, 2, \ldots, K\}$. In the binary case, the label space is therefore $\{1, 2\}$. In Section 4.1, as mentioned right after the $[0, 1]$, $\widehat{p}$ maps to sigmoid scores, which is different from the label space.
We just like the name DIRECT for our algorithm.
The margin score $\widehat{p}_i^k$ is indeed confusing as it is not a probability. We will change the notation to $\widehat{s}_i^k$ instead.
---
Rebuttal Comment 1.1:
Comment: Since the authors failed to adequately address most of our concerns, particularly regarding label noise and theoretical claims, we have decided to maintain our original rating.
---
Reply to Comment 1.1.1:
Comment: We would like to further clarify how the label noise is handled by the agnostic active learning algorithm in addition to what we mentioned above.
In our setting, there is an underlying conditional probability function $P(y_i \mid x_i)$ for each data example $x_i$. We first consider a dimensionality reduction $x_i \rightarrow s_i$, where $s_i$ is the real-valued sigmoid score of $x_i$. This gives us an ordered set of 1-dimensional features $\{s_i\}_{i=1}^n$. This map induces a distribution $P(y_i \mid s_i)$ for all $s_i \in S$, and this probability therefore encodes the label noise.
In addition, we also have a set of classifiers, 1-dimensional threshold classifiers $\{h_j\}$ on the real line. Our goal is to find the hypothesis that minimizes the probability of error w.r.t. $P(y_i|s_i)$. In our notation, $j^\star$ corresponds to the threshold classifier that minimizes the empirical error w.r.t. $P(y_i|s_i)$.
This is precisely what our agnostic active learning algorithm does. The key point is that we make no assumptions about $P(y_i \mid x_i)$, and hence none about $P(y_i \mid s_i)$, so the algorithm handles any possible noise model expressed by $P(y_i \mid s_i)$.
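For concreteness, here is a minimal sketch of the underlying idea (our own simplification for illustration, not the adaptive Algorithm 2 itself): given 1-D sigmoid scores and labels in {1, 2}, find the threshold classifier that minimizes empirical error, predicting class 1 below the threshold and class 2 above it, with no assumption on the noise model.

```python
# Hedged sketch (a simplification, not Algorithm 2): pick the 1-D
# threshold classifier h_j minimizing empirical error on (score, label)
# pairs with labels in {1, 2}, making no assumption about label noise.

def best_threshold(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    y = [labels[i] for i in order]
    n = len(y)
    total_twos = sum(1 for v in y if v == 2)
    best_j, best_err = 0, float("inf")
    twos_left = 0  # label-2 examples to the left of the current split
    for j in range(n + 1):
        # errors: 2-labeled points predicted 1, plus 1-labeled points predicted 2
        err = twos_left + (n - j) - (total_twos - twos_left)
        if err < best_err:
            best_j, best_err = j, err
        if j < n and y[j] == 2:
            twos_left += 1
    return best_j, best_err

scores = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
labels = [1, 1, 2, 1, 2, 2]  # one noisy label at score 0.6
print(best_threshold(scores, labels))  # (2, 1): split after two points, one error
```

The exhaustive scan here stands in for the adaptive labeling of Algorithm 2, which estimates the same minimizer with far fewer labels.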
We will include some of this discussion in the final version of our paper.
---
Summary: This paper studies active learning under both class imbalance and label noise. An improved algorithm for agnostic active learning is proposed, referred to as DIRECT, which can be considered an advanced version of GALAXY. Various experiments validate the properties of DIRECT, showing its superiority under class imbalance and label noise. The results are promising.
## update after rebuttal
Claims And Evidence: Using the objective function of agnostic active learning to find optimal separation points, and annotating the unlabeled data points near these points, can improve active learning on imbalanced datasets. The experimental results show this claim is achieved.
Methods And Evaluation Criteria: The proposed algorithm is conventional and sound in active learning for assessing uncertainty. Performance is usually measured by prediction accuracies, which are well presented in the paper.
Theoretical Claims: None
Experimental Designs Or Analyses: The experimental design and analysis are logical and acceptable for studying active learning. If possible, two points should be supplemented.
1) The effect of the number of classes, plus more detailed analyses such as a confusion matrix showing the prediction accuracy per (majority/minority) class.
2) A sensitivity analysis ranging from balanced to severely imbalanced settings on the toy dataset.
These experiments can provide more insight into the proposed algorithm.
Supplementary Material: Only code
Relation To Broader Scientific Literature: The study is closely related to efficient learning and obtaining more representative data points, which is essential in the development of AI.
Essential References Not Discussed: None
Other Strengths And Weaknesses: DIRECT can be applied to the problem of agnostic active learning. When label noise exists, the advantages of DIRECT can be dominant.
The proposed algorithm's merits are small for balanced data with little noise. However, more robustness to the imbalance ratio may be required in some cases due to limited prior knowledge.
Other Comments Or Suggestions: Please see the comments on Experimental Designs Or Analyses
Questions For Authors: Q1: Can you consider any other metrics (Eqn. 2) in the multi-class problem, such as the use of mean instead of maximum probabilities of remaining classes?
Q2: Can you provide the computational costs for all algorithms considered in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for providing the insightful review. We address your concerns below.
**Using a different scoring in Eqn. 2**
As you suggested, we could indeed subtract the mean instead of the max of the per-class softmax scores. Since the softmax scores sum to one, the mean is simply a constant for all examples, which makes this scoring equivalent to the confidence score. We fully agree that different scores can be used to rank the examples. For confidence scores, however, some of our initial experiments found them to be less effective than margin sampling. Notably, margin sampling has been shown to be a superior scoring function to confidence and entropy in several previous large-scale benchmarking papers (e.g., [1] and [2]). We will nevertheless add a future-work direction on testing scoring methods beyond these.
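To make the equivalence concrete, a small self-contained check (our own illustration, not Eqn. 2 itself): because the softmax scores sum to one, "top probability minus the mean of the remaining K-1 probabilities" is a monotone function of the top probability alone, so it induces exactly the same ranking as plain confidence.

```python
# Illustration: top-prob minus the mean of the other K-1 probabilities
# equals (K*top - 1)/(K - 1), a monotone function of the top probability,
# so it ranks examples identically to the confidence score.

def confidence(p):
    return max(p)

def top_minus_mean_of_rest(p):
    top, k = max(p), len(p)
    return top - (1.0 - top) / (k - 1)  # mean of the remaining K-1 probs

probs = [
    [0.50, 0.30, 0.20],
    [0.40, 0.35, 0.25],
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],
]
by_conf = sorted(range(len(probs)), key=lambda i: confidence(probs[i]))
by_mean = sorted(range(len(probs)), key=lambda i: top_minus_mean_of_rest(probs[i]))
print(by_conf == by_mean, by_conf)  # True [3, 1, 0, 2]
```

The same conclusion holds if one subtracts the mean of all K scores, which is the constant 1/K.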
**Computation cost of all algorithms**
First, as a reminder (Appendix C of our paper), the dominant computational cost has always been neural network training and inference, which account for more than 90% of the total computational cost.
As for data selection algorithms, let $K$ be the number of classes, $N$ be the pool size and $B_{\text{train}}$ be the batch size, $D$ be the penultimate layer embedding dimension and $T$ be the number of batches. Below, we detail the computation cost of data selection of each algorithm we consider.
* DIRECT: $O(T(KN\log N + B_{train}N))$.
* GALAXY: $O(T(KN\log N + B_{train}KN))$
* BADGE: $O(TB_{train}N(K + D))$
* Margin sampling/most likely positive/confidence sampling: $O(TKN)$
* Coreset: $O(T^2B_{train}ND)$
* SIMILAR: $O(TB_{train}ND)$
* Cluster margin: $O(N^2\log N + TN(K + \log N))$
* BASE: $O(TN(D+B_{train}))$
[1] Zhang, J., Chen, Y., Canal, G., Mussmann, S., Das, A. M., Bhatt, G., ... & Nowak, R. D. (2023). Labelbench: A comprehensive framework for benchmarking adaptive label-efficient learning. arXiv preprint arXiv:2306.09910.
[2] Bahri, D., Jiang, H., Schuster, T., & Rostamizadeh, A. (2022). Is margin all you need? An extensive empirical study of active learning on tabular data. arXiv preprint arXiv:2210.03822.
**Sensitivity Analysis**
Thank you for the suggestion. We think our experiments already cover a wide range of imbalance ratios, as shown in Table 1, even when fixing the model to ResNet-18, and we have shown that our algorithm is superior across all of these ratios. Interestingly, when comparing against random sampling, we see that DIRECT saves an increasing amount of annotation cost as the dataset becomes more imbalanced. We will definitely add this observation to our paper.
**More Detailed Analysis**
Thank you for the suggestion! We think this is indeed beneficial. We have to rerun some of the experiments for this. We will send over the analysis during the discussion period as soon as possible, once we get these numbers.
---
Rebuttal Comment 1.1:
Comment: The authors' response resolves many issues. Thanks for your reply. However, I'll keep my score since more insightful experiments and analyses are required for a better paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for giving us a chance to supplement our further findings, and we have conducted an additional experiment as you suggested. Please see [this plot](https://ibb.co/Y7d3fh0B) where we plot the average accuracy in blocks of classes for the ImageNet-LT dataset. Specifically, the class indices are arranged so that class #1 is the most frequent class while class #1000 is the least frequent class. We can see DIRECT outperforms baseline algorithms on less frequent classes (301-1000), which explains the overall better performance of DIRECT in our experiments, despite having slightly worse accuracies on more frequent classes. This corroborates our balancedness result in our original paper, where we see DIRECT labels more samples from rare classes.
Overall, DIRECT achieves our goal, improving on the vast majority of the (rare) classes while sacrificing only a slight performance drop on a small number of the most frequent classes.
Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
Accept (poster)
---
Summary: The proposed ACBench framework is designed to systematically evaluate the effects of compression on both agent capabilities and large language models (LLMs). It tests agent capabilities across key areas such as action execution, workflow generation, long-context understanding, and real-world application performance. For LLMs, it evaluates efficient rank, top-K ranking consistency, and energy-based analysis to measure the impact on model efficiency and output reliability. Additionally, ACBench analyzes the impact of different compression approaches, providing insights into their trade-offs and suitability for various tasks. This framework serves as a robust tool for optimizing compressed models while maintaining performance and practicality.
Claims And Evidence: Yes
Methods And Evaluation Criteria: YES
Theoretical Claims: Theoretical claims are poor for this article.
Experimental Designs Or Analyses: 1. It seems that ACBench mainly categorizes existing benchmarks, while the newly proposed Action Execution aspect lacks an evaluation-metric setup.
2. The Workflow Generation and Real-World Application evaluations seem to overlap with the Embodied AI tasks.
Supplementary Material: Partly, some results of the ACBenchmark
Relation To Broader Scientific Literature: Integration of several Benchmarks and systematically propose the evaluation aspects for agentic behaviors.
Essential References Not Discussed: 1. The compression methods are limited to only five, namely GPTQ, AWQ, SmoothQuant, SparseGPT, and Wanda (for quantization and pruning).
2. Other compression methods such as LoRA and distillation are not systematically analyzed. (The paper uses results from distilled and original models to evaluate distillation, but not thoroughly.)
3. Larger models are not evaluated (only sizes from 1.5B to 7B are included).
Other Strengths And Weaknesses: Strength
1. The paper conducts thorough and comprehensive experiments and provides ample elaboration on the methods and benchmarks it uses.
2. The paper compares different models across different fields and uses multiple metrics to evaluate the agents' abilities.
Weakness
1. The benchmarks may need categorization by the capability each one tests. The analyzed results seem disconnected from the stated capabilities (for example, you mention that core capabilities such as multi-step planning, long-context coherence, and adaptive reasoning should be evaluated; how does each benchmark relate to these core aspects?).
2. Some expressions are vague. For example, "Knowledge Distillation Shows Unexpected Performance Characteristics" is not a good summary sentence, as it does not state what the characteristic is.
3. What is the relationship between the degradation compression causes in LLMs and the degradation it causes in agentic behaviors? You seem to analyze them separately, but more correlation analysis is expected.
4. The Optimal Compression Strategy Recommendations proposed in Chapter 4 should be presented after all aspects have been evaluated.
5. The framework stresses the importance of "multi-turn" conversation testing, but this is not highlighted in the subsequent experiments.
Other Comments Or Suggestions: Please refer to the section Other Strengths and Weaknesses
Questions For Authors: Please refer to the section Other Strengths and Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **Q1**: About Theory.
> Theoretical claims are poor for this article. What is the relationship between the degradation of the compression has on LLMs and the degradation of the compression has on the Agentic Behaviors?
>
***Ans for Q1***: This paper is primarily an empirical study, as indicated by the title. To address this, we have developed a concise framework to explain how quantization errors propagate in sequential decision-making (agent scenarios). For details, please refer to this [anonymous link](https://anonymous.4open.science/r/ICML_ACBench_Rebuttal-B4DA/).
**Q2**: About the setup for Action Execution
> Action Execution aspect, lacks evaluation metrics setup.
>
***Ans for Q2***: The Action Execution metrics (Sec. 4) adopt T-Eval's setting, evaluating **Plan** (similarity/LIS), **Reason/Understand** (Sentence-BERT), **Retrieve** (exact match), **Instruct** (format/parameter accuracy), and **Review** (classification accuracy).
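To illustrate the LIS-based Plan scoring, here is a hedged sketch in the spirit of T-Eval's metric (our simplification: exact string matching of plan steps instead of embedding similarity; the benchmark's actual implementation differs in details). The score is the length of the longest increasing subsequence of matched gold indices, normalized by the gold plan length.

```python
# Hedged sketch of an LIS-style plan-order score (simplified: exact
# matching rather than the similarity matching used in T-Eval).
import bisect

def lis_length(seq):
    # patience-sorting LIS: tails[i] = smallest tail of an
    # increasing subsequence of length i+1
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def plan_order_score(predicted, gold):
    gold_index = {step: i for i, step in enumerate(gold)}
    matched = [gold_index[s] for s in predicted if s in gold_index]
    return lis_length(matched) / len(gold) if gold else 0.0

gold = ["search", "filter", "summarize", "report"]
pred = ["search", "summarize", "filter", "report"]  # two steps swapped
print(plan_order_score(pred, gold))  # 0.75: three steps in correct relative order
```

A plan with the right steps in the wrong order is thus penalized even when every individual step is correct.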
**Q3**: About the Embodied AI tasks.
> The evaluation in Workflow Generation and Real-World Application seem to have overshadowing parts about the Embodied AI task.
>
***Ans for Q3***: In this paper, we employed **ScienceWorld** and **AlfWorld** as they inherently require structured, multi-step reasoning, making them ideal for evaluating *workflow generation*. BabyAI, while useful for basic instruction-following, lacks the complexity needed for higher-level planning assessment. Refer to Tab.8,9 and [anonymous link](https://anonymous.4open.science/r/ICML_ACBench_Rebuttal-B4DA/) for more details.
**Q4**: About the compression methods
> The compression methods are limited to only five compression methods. LoRA and Distillation are not systematically analyzed.
>
> The passage use results from distilled model and original models to evaluate the distillation methods but not thoroughly analyzed.
>
> Larger Models (only contains size from 1.5B to 7B) are not evaluated.
***Ans for Q4***: As presented in Sec.2.2 and Sec.8, we prioritize these compression methods for the following reasons:
1. **Practical Impact & Compatibility**: The selected methods (GPTQ, AWQ, SmoothQuant, SparseGPT, Wanda) are foundational and widely adopted in serving systems, which are supported by vLLM[3] and SGLang[4]. They are critical for real-world high-throughput serving (e.g., 10–24× speedups over HuggingFace).
2. **Distillation/LoRA**: While we include distilled models (e.g., R1-Distill series) in baseline comparisons, a systematic evaluation of distillation or LoRA lies beyond our focus on post-training compression. These techniques inherently require retraining or architectural modifications.
3. **Model Scale**: We evaluate up to **Qwen2.5-32B** (Tab.9), but larger models (e.g., 70B) are prohibitively expensive (>1 month/run on 8 GPUs). Memory constraints in long-context agent tasks further limit scaling.
**Q5**: About the evaluation
> Four capabilities should be evaluated multi-step planning, long-context coherence, adaptive reasoning. How is each Benchmark related to these core aspects?
>
***Ans for Q5***: Thank you for raising this important point. We explicitly connect each benchmark to the core capabilities in the following ways:
- **Multi-step planning** is evaluated in **Workflow (Sec. 5)** and **Real-World Applications (Sec. 7)**.
- **Long-context coherence** is assessed in **Long-Context (Sec. 6)**.
- **Adaptive reasoning** is demonstrated in both **Workflow (Sec. 5)** and **Real-World Applications (Sec. 7)**.
In lines 35-52, we have already defined these tasks. We will further enhance clarity in the revised manuscript.
**Q6**: About the writing.
> Some expressions are vague. For example, “Knowledge Distillation Shows Unexpected Performance Characteristics” is not a good summary sentence to conclude without showing what characteristic is.
>
***Ans for Q6***: To address this, we have revised the statement to:
> *"Knowledge distillation from a reasoning model leads to performance degradation in agent scenarios."*
**Q7**: About the Optimal Strategy:
> Optimal Compression Strategy Recommendations proposed in Chapter 4 should be analyzed after all the aspects are evaluated
>
***Ans for Q7***: To align with the feedback, we propose refining the section title to **"Compression Strategy Recommendations for Action Execution"**. For overall guidance, please refer to **Q3 to Reviewer DuJV**.
**Q8**: About the "Multi-Turn".
> The framework stresses the importance of “Multi-Turn” conversation testing, but it is not highlighted in the following experiments
>
***Ans for Q8***: Thank you for your feedback. Multi-turn interaction is implicitly integrated into the experimental design:
- In Sec. 5, tasks in WorfBench inherently involve multi-turn interactions, as agents must dynamically plan and execute actions.
- In Sec. 7, benchmarks like Jericho and PDDL explicitly test multi-turn reasoning; for instance, agents must navigate 10+ conversational turns to solve complex puzzles.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, your reply has answered my doubts to some extent. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vD1Z,
Thank you for your thoughtful feedback and for revising the scores following our rebuttal. We sincerely appreciate the time and effort you dedicated to evaluating our work. Your suggestions and response mean a great deal to us.
Best regards,
Authors of #6400
---
Summary: This is a very interesting paper that studies agentic capabilities in LLM compression. The authors have carefully selected a series of evaluation benchmarks that cover practical scenarios of agent manipulation to assess the performance drop after compression.
Claims And Evidence: Yes, the claims are well supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed evaluation criteria are entirely reasonable.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: The experimental design is extensive and well-structured, providing a detailed evaluation of all aspects of agentic abilities. It also tests across a variety of language models.
Supplementary Material: I reviewed the appendix; it is lengthy yet well-structured, containing all the detailed information along with excellent experimental visualizations.
Relation To Broader Scientific Literature: This paper provides a valuable window for readers to understand the influence of compression on agentic workflows and serves as a guide on which compression methods to choose. It contains many valuable insights.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper is excellent. Well done!
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 5
---
Rebuttal 1:
Rebuttal: Dear Reviewer Sovg,
Thank you for your thorough and constructive review of our work. We sincerely appreciate your recognition of the experimental rigor and practical relevance of our evaluation benchmarks, as well as your encouraging feedback. Your insights strongly support our goal of providing actionable guidance on the trade-offs in LLM compression for agentic workflows.
We are pleased that the detailed results and visualizations in the appendix proved useful—ensuring methodological transparency was a key priority for us. Should you have any further suggestions or require additional clarifications for the final version, we would be happy to incorporate them.
Thank you once again for your time and thoughtful evaluation. We look forward to any additional comments you may have.
Best regards,
Authors of #6400
---
Summary: Large language models (LLMs) have significantly advanced areas such as code synthesis and multi-agent collaboration; however, their practical deployment remains constrained due to substantial computational and memory requirements. Compression techniques, including pruning and quantization, effectively reduce model size but frequently neglect crucial agent capabilities like planning, coherence, and tool integration.
This paper proposes the Agent Compression Benchmark (ACB), aiming to comprehensively assess the effects of compression methods—pruning (SparseGPT, Wanda) and quantization (GPTQ, AWQ)—on LLMs. The benchmark specifically evaluates:
1. Agent Capabilities: Action execution, long-context coherence, and tool integration.
2. Model Impact: Assessed through ERank, Top-K Ranking Correlation, and energy-based analyses.
3. Compression Comparison: Evaluates a range of models (from <7B up to 32B parameters) to provide insights for selecting optimal compression strategies without significantly compromising agent performance.
Claims And Evidence: The primary goal of this paper—to extensively evaluate compression impacts across multiple dimensions—is clearly presented, and the paper thoroughly quantifies these effects across various models and settings. However, the claim that the proposed metrics (ERank, Top-K Ranking Correlation, and energy-based methods) significantly enhance understanding of compression impacts is less convincing, as their practical utility remains unclear.
Methods And Evaluation Criteria: The selected models, datasets, and metrics are comprehensive and well-chosen. Nevertheless, the paper does not clearly articulate how the proposed metrics meaningfully contribute to a broad quantitative analysis. More explanation is needed on how these metrics translate into practical insights across diverse workloads.
Theoretical Claims: The theoretical justifications and statistical analyses provided by the authors are reasonable and supported effectively by motivational diagrams.
Experimental Designs Or Analyses: As noted in the methods section, the experiments are comprehensive and rigorous. However, the connection between the experimental outcomes and the practical implications of the proposed metrics requires additional clarification.
Supplementary Material: The authors have compiled a valuable set of supplementary materials, offering extensive quantitative evaluations across a range of compression techniques and models.
Relation To Broader Scientific Literature: The significant contribution of this paper lies in presenting a curated benchmark suite for comprehensively evaluating different facets of model compression. However, the paper lacks deeper insights and guiding principles on how practitioners might effectively leverage these extensive observations to make informed decisions.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: See "Other Comments or Suggestions" below.
Other Comments Or Suggestions: I find it challenging to clearly understand how the proposed metrics (ERank, Top-K Ranking Correlation, and energy-based metrics) correlate with the specific benchmarks' performance metrics. Certain compression techniques perform well in specific scenarios but underperform in others. Although trends in the proposed metrics are observable, it is unclear how practitioners should interpret these results practically to predict or understand performance across diverse workloads.
Questions For Authors: Could you elaborate on the significance of your proposed metrics to the main contributions of the paper? Specifically, why are these metrics important, and how can practitioners leverage them effectively to make informed decisions when choosing appropriate compression techniques for novel tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **Q1**: About the practical utility of the proposed metrics:
> the proposed metrics significantly enhance understanding of compression impacts is less convincing, as their practical utility remains unclear.
**Ans for Q1**: We would like to address each metric:
- **ERank**: Diff-eRank [1] is a theoretically grounded method based on information theory and geometric principles. It analyzes differences between base and trained models using eRank. We extend it to quantized LLMs and show ERank's effectiveness with experimental results (extending Tab. 1 of Diff-eRank), where higher ERank values correlate with better model performance (see the table below).
- **Top-K Ranking Correlation**: As detailed in Q1 to Reviewer K5g3, this metric provides meaningful insights by focusing on the model's behavior regarding top-k token ranking. The correlation analysis captures important aspects of the model's predictive distribution.
- **Energy-based Analysis**: We find that compressed models' energy distributions gradually align with uncompressed ones over timesteps (Fig.14), showing that while quantization disrupts LLM representations, parameter redundancy compensates for the loss. Aggregated energy reflects this recovery and correlates with performance (see below table).
| OPT | 125M | 1.3B | 2.7B | 6.7B |
| ------------ | ------ | ------ | ------ | ------ |
| ACC | 0.276 | 0.332 | 0.370 | 0.360 |
| delta Loss | 5.734 | 6.138 | 6.204 | 6.258 |
| Diff-ERank | 1.410 | 2.140 | 2.338 | 2.280 |
| ERank (4bit) | 15.462 | 15.589 | 13.898 | 17.877 |
| Energy | 2.738 | 2.746 | 2.631 | 2.883 |
[1] Lai Wei et al., Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models, NeurIPS 2024.
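For concreteness, the effective rank these metrics build on can be sketched as the exponential of the Shannon entropy of the normalized singular-value spectrum (the standard definition; shown here on hand-picked spectra rather than real model activations, which would come from an SVD of hidden representations):

```python
# Hedged illustration of effective rank: exp of the entropy of the
# normalized singular-value spectrum. A flat spectrum gives the full
# dimension; a peaked (nearly rank-1) spectrum gives a value near 1.
import math

def effective_rank(singular_values):
    total = sum(singular_values)
    ps = [s / total for s in singular_values if s > 0]
    entropy = -sum(p * math.log(p) for p in ps)
    return math.exp(entropy)

flat = [1.0, 1.0, 1.0, 1.0]      # energy spread over 4 directions
peaked = [100.0, 0.1, 0.1, 0.1]  # almost rank-1 representation

print(round(effective_rank(flat), 3))   # 4.0: fully spread spectrum
print(effective_rank(peaked))           # close to 1: collapsed spectrum
```

Intuitively, a compression method that collapses the representation spectrum lowers this value, which is why higher post-quantization ERank can track better downstream performance.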
**Q2**: About the connection between the proposed metrics and traditional metrics.
> the connection between the experimental outcomes and the practical implications of the proposed metrics requires additional clarification. How the proposed metrics correlate with the specific benchmarks' performance metrics.
***Ans for Q2***: Our proposed three metrics serve as complementary tools to traditional benchmarks, offering causal insight into performance variations. Specifically, they help explain why certain models achieve better or worse results on standard metrics like perplexity (PPL) or accuracy. Also, please refer to our response to **Reviewer K5g3 (Q1)**, where we provide additional experiments demonstrating how **Top-K Ranking Correlation** reflects changes in traditional benchmark performance. You can check the [anonymous link](https://anonymous.4open.science/r/ICML_ACBench_Rebuttal-B4DA/) for the top-k ranking results.
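One simple way such a top-k comparison could be instantiated (an assumed form for illustration; the paper's exact definition may differ) is the overlap of the top-k next-token index sets under the original and compressed models' logits:

```python
# Hedged sketch (assumed form, not necessarily the paper's exact metric):
# overlap of the top-k token rankings of an uncompressed and a
# compressed model at a single decoding step.

def top_k(logits, k):
    return sorted(range(len(logits)), key=lambda i: -logits[i])[:k]

def top_k_overlap(logits_a, logits_b, k):
    a, b = set(top_k(logits_a, k)), set(top_k(logits_b, k))
    return len(a & b) / k

original  = [4.0, 3.5, 1.0, 0.5, 0.2]  # toy next-token logits
quantized = [3.9, 3.6, 0.4, 1.1, 0.2]  # small perturbation from compression

print(top_k_overlap(original, quantized, k=2))  # top-2 sets agree: 1.0
print(top_k_overlap(original, quantized, k=3))  # tokens 2 and 3 swap: 2/3
```

Averaging such a statistic over positions and prompts yields a single score whose degradation can be compared against drops in standard benchmark accuracy.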
**Q3**: **Guiding Principles for Practitioners**
> The paper lacks deeper insights and guiding principles on how practitioners might effectively leverage these extensive observations to make informed decisions.
***Ans for Q3***: We have refined our guidelines to provide clearer, actionable insights for practitioners:
1. For Specific Agent Capabilities:
- If targeting a single capability such as tool use, directly consult the task-specific results in Sections 4-7 to select the optimal compression method.
- E.g., for tool use, AWQ preserves JSON-structured outputs better than GPTQ (Fig. 3); for long-context tasks, AWQ surpasses GPTQ in most cases.
2. For General-Purpose Agent Deployment:
- Model Choice > Compression Method: Base model capability is critical. For instance, Qwen2.5 outperforms R1-Distill-Qwen2.5 in four capabilities (Tab. 8). Prioritize high-quality base models first.
- Default to AWQ: AWQ shows stable performance across all benchmarks (workflow, tool use, long-context).
- If Quantization is Infeasible: Use Wanda (outperforms SparseGPT in 80% of cases).
- Avoid R1-Distill for Agents: Despite its reasoning strengths, it fails in real-world agent tasks (Fig. 7). Use quantized base models instead.
3. For hybrid scenarios (e.g., workflow + tool use): Start with Qwen2.5-7B (strong base) → Apply AWQ.
You can refer to [anonymous link](https://anonymous.4open.science/r/ICML_ACBench_Rebuttal-B4DA/) for guideline flow chart.
**Q4**: About the importance:
> Could you elaborate on the significance of your proposed metrics to the main contributions of the paper?
***Ans for Q4***:
These metrics are foundational to our three key contributions:
- ERank reflects how information is compressed from the perspective of information theory. **See Q1 to Reviewer DuJV**.
- Top-K Ranking Correlation gives an intuitive view of how compression influences the sampling process. We also added experiments showing that top-k ranking correlation can predict downstream performance. **See Q1 to Reviewer K5g3**.
- Energy helps us understand what happens during the decoding stages, where, as time goes on, the distribution shifts toward the uncompressed one and becomes stable. **See Q1 to Reviewer DuJV**.
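As an illustration of the kind of per-timestep statistic such an energy-based analysis could track (an assumed form: the standard log-sum-exp energy of a logit vector; the paper's exact definition may differ):

```python
# Hedged sketch (assumed form): the log-sum-exp "energy" of a logit
# vector, E(z; T) = -T * log(sum_i exp(z_i / T)), computed with the
# usual max-subtraction trick for numerical stability. Tracking it per
# decoding timestep shows how a compressed model's output distribution
# evolves relative to the uncompressed model's.
import math

def energy(logits, temperature=1.0):
    m = max(logits)  # subtract max before exponentiating for stability
    return -temperature * (m / temperature
                           + math.log(sum(math.exp((z - m) / temperature)
                                          for z in logits)))

print(round(energy([0.0, 0.0]), 3))  # -log(2) for a uniform 2-way logit vector
print(energy([10.0, 0.0]))           # dominated by the single large logit
```

Comparing such per-step values between compressed and uncompressed models over a generation trace is one way to visualize the gradual alignment of their distributions.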
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanation and the additional experiment. I do not have further questions, but given my lack of expertise on the specific compression topics, I will keep the weak accept rating and leave further judgements to AC and other reviewers.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer DuJV,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Your valuable insights and suggestions are greatly appreciated and will certainly help us improve our paper.
Best regards,
Authors of #6400
---
Summary: The authors introduce ACBench (Agent Compression Benchmark), a benchmark designed to evaluate how compression techniques (quantization and pruning) affect the agentic capabilities of large language models (LLMs), such as multi-step planning, workflow generation, tool use, and long-context understanding. They assess compression across three dimensions: the effect on agentic capabilities, internal model changes (using ERank, Top-K Ranking Correlation, and Energy-based Analysis), and the comparative effectiveness of compression methods (quantization vs. pruning). The authors argue that traditional benchmarks neglect real-world, multi-turn scenarios, making ACBench relevant for practical deployment. Their findings show that quantization preserves structured tasks well but significantly degrades performance on complex real-world applications, whereas pruning typically performs worse. Finally, they also found that distilled reasoning models performed poorly on agentic tasks.
## update after rebuttal
Thank you for the rebuttal. I have upped my score.
Claims And Evidence: The paper’s core claim—that current benchmarks inadequately capture the performance impacts of compression on agentic capabilities—is convincingly supported by extensive experiments. However, the surprising underperformance of distilled models lacks deeper investigation or explanation, leaving uncertainty about whether this degradation arises specifically from compression or reflects a broader difficulty models have with multi-step tasks.
Methods And Evaluation Criteria: The proposed benchmark (ACBench) and evaluation criteria clearly address the identified gap by focusing explicitly on agentic capabilities—such as multi-step planning and long-context understanding—which current benchmarks neglect. The use of novel metrics (ERank, Top-K Ranking Correlation, Energy-based Analysis) to analyze internal model changes is well justified, although these metrics are not explicitly validated against established measures. It would be valuable to directly compare these novel metrics with traditional benchmarks and simpler metrics (such as perplexity, single-turn accuracy, or standard ranking correlations) to confirm whether they provide distinct additional insights. Additionally, the lack of explicit correlation analysis with simpler, existing benchmarks limits the ability to judge ACBench’s unique value, leaving open questions about whether observed performance differences arise specifically from compression or reflect more general model limitations on multi-step tasks.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental design is comprehensive and sound, evaluating a broad range of compressed models and techniques across diverse agentic tasks. However, a key limitation is that the experiments do not explicitly separate baseline model limitations from compression-specific degradation, nor do they compare or correlate performance systematically with simpler, traditional benchmarks.
Supplementary Material: NA
Relation To Broader Scientific Literature: The key contribution of this paper—introducing the ACBench to systematically evaluate compression effects on agentic capabilities—addresses a specific gap in the broader literature. Prior research extensively studied compression techniques (quantization, pruning, distillation) mainly on traditional NLP benchmarks like GLUE or perplexity-based tasks. However, this literature rarely investigates how these techniques impact more complex, interactive, and multi-step tasks. ACBench builds upon recent work on evaluating and benchmarking LLMs’ agentic capabilities (such as workflow generation, long-context comprehension, and tool use), explicitly linking the compression literature to the emerging field of interactive agents. By doing so, the paper connects two previously distinct streams of research—model compression efficiency and agentic AI performance—offering insights particularly relevant for practical LLM deployments in resource-constrained, real-world scenarios.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: I like the general idea of the paper, but as expressed above, I am concerned about: 1) the lack of explicit validation and comparison of the novel metrics (ERank, Top-K Ranking Correlation, Energy-based Analysis) against simpler, existing benchmarks and metrics, leaving their unique added value unclear; and 2) the insufficient exploration and explanation of why distilled reasoning models underperform specifically on agentic tasks, raising ambiguity about whether this is due to compression techniques or more fundamental limitations of models in multi-step reasoning scenarios.
Other Comments Or Suggestions: Figure 5 is hard to read, and Figure 6 seems incomplete?
Questions For Authors: 1) Can you clarify if (and how) the novel metrics (ERank, Top-K Ranking Correlation, Energy-based Analysis) were validated against simpler, traditional benchmarks and metrics (e.g., perplexity, accuracy, simpler correlation metrics)?
2) Can you elaborate on why distilled reasoning models unexpectedly underperform specifically on agentic tasks? Is this issue related directly to compression methods, or is it indicative of more general difficulties these models have with multi-step reasoning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thorough comments and recognition of our work. We appreciate that you acknowledge our experiments are “comprehensive and sound”. Please see our responses to your questions and concerns below.
**Q1**: About the metrics.
> Can you clarify if (and how) the novel metrics (ERank, Top-K Ranking Correlation, Energy-based Analysis) were validated against simpler, traditional benchmarks and metrics (e.g., perplexity, accuracy, simpler correlation metrics)?
>
***Ans for Q1***:
***Ans for Q1***:
As explained in the **Abstract and Introduction (lines 97-108)**, our proposed metrics are **not meant to replace** traditional evaluation metrics. Instead, they focus on compression explainability by revealing how compression influences LLM behavior and causes degradation. While traditional metrics only reflect the overall performance decay after compression, they cannot show **how compression specifically affects model behavior.** For example, our Topk Ranking Consistency metric focuses on the inference behavior of language models: whether quantization/pruning causes token ranking dislocation. For instance, in an instruction fine-tuning scenario, quantization can reverse the probability order of the "of course" and "sorry" tokens, directly changing the affective tendency of the output.
Besides, inspired by this question, we explored **whether our metrics can be used not only to explain the effects of compression, but also to guide the choice of compression method.** We first conducted experiments on InternLM2.5-20B:
| InternLM2.5 20B | PPL | Hotpot QA | TriviaQA | MultiNews | Lcc | SciWorld | Topk R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AWQ | 7.61 | 36.52 | 84.69 | 25.82 | 61.68 | 13.81 | 87.29 |
| GPTQ | 7.59 | 18.53 | 56.04 | 24.21 | 56.62 | 15.21 | 84.15 |
| Mag(Un) | 10.39 | 31.59 | 79.26 | 26.2 | 49.72 | 3.58 | 49.44 |
| SparseGPT(Un) | 7.65 | 34.21 | 84.43 | 26.11 | 49.42 | 8.98 | 57.36 |
| Wanda(Un) | 7.87 | 32.65 | 87.44 | 25.87 | 58.49 | 9.79 | 58.98 |
Then, we compute the **correlation of the traditional metrics (accuracy, PPL) with Topk ranking**, and find that the ranking consistency between PPL and Topk ranking is relatively high, meaning that Topk ranking can reflect the evaluation performance to some extent.
| Pearson | PPL | HotpotQA | TriviaQA | MultiNews | Lcc | SciWorld | Topk R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PPL | 1 | 0.098 | 0.088 | 0.417 | -0.551 | -0.848 | -0.636 |
| Topk R | -0.636 | -0.323 | -0.457 | -0.664 | 0.756 | 0.928 | 1 |
Compared with PPL, our Topk ranking achieves generally better correlations, indicating stronger predictive capability for downstream tasks.
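For concreteness, the entries in the Pearson table can be reproduced from the columns of the first table with the standard sample Pearson correlation. A minimal pure-Python sketch (values copied from the InternLM2.5-20B table above; the computation is a standard formula, not the authors' original pipeline):

```python
import math

def pearson(xs, ys):
    # Standard sample Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Columns of the InternLM2.5-20B table (rows: AWQ, GPTQ, Mag, SparseGPT, Wanda)
ppl      = [7.61, 7.59, 10.39, 7.65, 7.87]
sciworld = [13.81, 15.21, 3.58, 8.98, 9.79]
topk_r   = [87.29, 84.15, 49.44, 57.36, 58.98]

print(round(pearson(ppl, topk_r), 3))       # -0.636, as in the Pearson table
print(round(pearson(sciworld, topk_r), 3))  #  0.928, as in the Pearson table
```

This recovers, e.g., the PPL-vs-Topk R entry (-0.636) and the Topk R-vs-SciWorld entry (0.928) reported above.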
**Q2: About deeper insight of the unexpected degradation**
> Can you elaborate on why distilled reasoning models unexpectedly underperform specifically on agentic tasks? Is this issue related directly to compression methods, or is it indicative of more general difficulties these models have with multi-step reasoning?
>
***Ans for Q2***: The underperformance stems from two key factors:
- **Knowledge Gap in R1:** Prior DeepSeek V3/R1 lacked agentic capabilities (like tool use) until the recent V3 0324 version. Therefore, the distillation process could not transfer agentic knowledge from the teacher R1 to the student Qwen. The distillation process mainly focuses on reasoning and dialogue skills (evidenced by Qwen2.5 32B's improvement from 50.0% to 72.6% on AIME2024) [1].
- **Capacity Tradeoff in Distillation**: The distilled models tested have limited capacity. When distillation prioritizes core reasoning skills (math), agentic skills (function calling) may be deprioritized. From the information bottleneck perspective, we formulate it as:
$$
\mathcal{L} _ {\text{distill}} =I(\theta_S; y_{\text{reason}})-\beta I(\theta_S; y_{\text{agent}})+\lambda\|\theta_S\|
$$
where $I(\theta_S; y_{\text{reason}})$ maximizes reasoning performance, $\beta I(\theta_S; y_{\text{agent}})$ represents the penalty on agentic skill retention, and $\lambda\|\theta_S\|$ enforces model compactness. So, for capacity-constrained LLMs, maximizing reasoning performance often suppresses agentic capabilities due to competition for parameter space. This also suggests that larger models can mitigate this trade-off: under the Wanda (2:4) benchmark, Qwen2.5-14B experienced an 11.4% performance degradation, while Qwen2.5-7B saw a more significant drop of 17% (from Table 7).
[1] DeepSeek AI, DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
**Q3:** About Figures
> Figure 5 is hard to read, and Figure 6 seems incomplete?
>
***Ans for Q3***: We apologize for the clarity issues. For Fig. 5, we improved readability by using darker colors. For Fig. 6, we removed the redundant metric and added the missing one (SmoothQ) to ensure completeness. You can check the [anonymous link](https://anonymous.4open.science/r/ICML_ACBench_Rebuttal-B4DA/) for the updated figures.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses and the clarification in Q2. The additional results wrt Q1 are especially interesting. I have updated my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer K5g3,
Thank you for your feedback and for raising your score following the rebuttal. We truly appreciate your time and consideration. Your continued support means a great deal to us. Your insights have been instrumental in refining our work; thanks again for inspiring us (in Q1).
Best regards,
Authors of #6400 | null | null | null | null | null | null |
Prediction-Powered E-Values | Accept (poster) | Summary: The authors propose to combine e-values to prediction-powered inference. The result is an e-value that combines in its definition both observed data as well as prediction of an auxiliary model. This approach is shown to improve the power of the testing procedure as long as the model is good enough. The author then provide three example applications encompassing mean estimation, change point detection and casual discovery. In all these cases, the PPI e-values are have better statistical power.
Claims And Evidence: The claims follow straightforwardly from the properties of e-values and PPI, and they are correct and clearly stated. The authors also do a good job of explaining the implications of the theoretical results.
Methods And Evaluation Criteria: Yes. All the considered evaluation criteria are well-suited to measure the statistical power of the testing procedure.
Theoretical Claims: Yes, I have checked the theoretical claim about the growth rate and size of confidence intervals. They seem all correct.
Experimental Designs Or Analyses: Yes and they seem all correct.
Supplementary Material: Only the proof of the above mentioned theorems
Relation To Broader Scientific Literature: I think the paper misses a critical reference: *Active, anytime-valid risk controlling prediction sets* by Ziyu Xu, Nikos Karampatziakis and Paul Mineiro, NeurIPS 2024. In that paper the authors "describe how to use predictors (i.e., the machine learning model for which we provide risk control guarantees) to further improve the utility of our RCPSes by estimating the expected risk conditioned on the covariates", which to me appears very similar to the approach presented in this paper.
Essential References Not Discussed: Active, anytime-valid risk controlling prediction sets from Ziyu Xu, Nikos Karampatziakis and Paul Mineiro in NeurIPS 2024.
Other Strengths And Weaknesses: I think the paper is very well-written, and the author does a great job of providing theoretical claims along with their practical implications. The idea of combining e-values and PPI is valid, and I appreciate the analysis of the growth rate of the proposed scheme. The experiments are well-designed, significant, and effectively support the paper’s claims.
From my perspective, the main weakness of the paper is that the same idea has already been presented in *Active, Anytime-Valid Risk-Controlling Prediction Sets* by Ziyu Xu, Nikos Karampatziakis, and Paul Mineiro. See the section on "Variance reduction through prediction", in which the same PPI-based e-process is proposed. This work was originally posted on arXiv in June 2024, so I am unsure whether it qualifies as concurrent work. At the very least, it should be mentioned, and the differences between the two should be discussed.
Other Comments Or Suggestions: My main suggestion is to include and clarify the similarities and difference from prior art.
## update after rebuttal
I decided to keep my score considering the overlap with existing work "Active, Anytime-Valid Risk-Controlling Prediction Sets".
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for pointing out the paper of [Xu et al.]. It is definitely a work we need to cite, and we will do so. That said, our contribution differs from theirs.
1. Our construction is significantly more general than theirs. In equation (8) they construct what would be our prediction-powered e-process in the specific case of the mean-estimation e-process of [Waudby-Smith and Ramdas, 2021]; this particular construction can alternatively be seen as the standard mean-estimation e-process applied to the expectation constructed in [Zrnic & Candès, 2024]. Our construction, in contrast, is valid for virtually any e-value, which is much broader. For example, it is not at all clear how their construction could be leveraged for causal discovery, whereas ours is applicable in a straight-forward manner.
2. The context is fairly different. In their paper, the primary concern is the construction of predictive sets that are endowed with a strong notion of validity. In our paper, we are concerned with statistical inference.
3. Beyond constructing the e-values, we also investigate the statistical power of our procedure (growth rate, etc.) in the general case, which is something that [Xu et al.] do only in their restricted setting and for a lower bound on the growth rate (not the exact growth rate, as we do). Also note that our analyses on the power are not exactly trivial, as evidenced by the fact that stronger (but less compact) statements are left to the supplementary material.
Overall, given these differences, we believe that our work is a significant contribution, and we hope the reviewer agrees.
---
Rebuttal Comment 1.1:
Comment: Thank you for considering my comments. I agree with the points discussed above, and I believe that, in light of our discussion, the contribution of the paper would be best restated as extending and refining the ideas presented in Xu et al. Would the authors agree with this perspective?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We agree that our results can be seen as a more general framework from which the work of [Xu et al.] can be derived as a special case. This connection certainly justifies the citation of their work, which we will add. Thank you for bringing this interesting paper to our attention. | Summary: This paper introduces prediction-powered e-values. Its primary contribution is extending prediction-powered inference (PPI) beyond Z-estimation problems (e.g. inference of means) to the broader class of inference problems solvable via e-values. The authors show that their method retains key benefits of classic e-values, such as anytime-validity. Moreover, unlike classic PPI, their method allows updating the predictive model throughout the inference process. Empirical validation demonstrates its effectiveness in diverse scenarios, consistently yielding more precise inference.
Claims And Evidence: The theoretical claims are convincingly supported.
The empirical evidence could be improved; see Methods And Evaluation Criteria.
Methods And Evaluation Criteria: Some aspects of the experimental setup could be improved:
- Choice of labeling proportion (Sections 3.1 and 3.2):
The labeling budgets chosen (1% and 0.5%) are overly restrictive and somewhat arbitrary. An ablation study varying these proportions would improve the understanding of the method's effectiveness across different budget constraints.
- Qualitative nature of causal discovery evaluation (Section 3.4):
The causal graph experiments currently rely on a single visualized random graph, making conclusions overly qualitative. To properly validate their method, the authors could sample multiple random graphs systematically and provide quantitative metrics (e.g. structural Hamming distance) averaged across these graphs.
Theoretical Claims: I did not check the correctness of any proof.
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: See: Essential References Not Discussed
Essential References Not Discussed: I believe there are essential related works missing from the current submission. In particular, the literature on semiparametric inference and missing data is highly relevant but currently omitted.
The prediction-powered e-value construction (lines 148-156) strongly resembles doubly robust estimators, introduced by:
- J. M. Robins, A. Rotnitzky, and L. P. Zhao, *“Estimation of regression coefficients when some regressors are not always observed,”* Journal of the American Statistical Association, 89(427), pp. 846–866, 1994.
- J. M. Robins and A. Rotnitzky, *“Semiparametric efficiency in multivariate regression models with missing data,”* Journal of the American Statistical Association, 90(429), pp. 122–129, 1995.
The main conceptual difference in the "modern" PPI framework is that the imputation model is trained externally on large independent datasets (see e.g. Section 5 in [1] and Appendix A.2 in [2] for a discussion). However, in this submission, the authors train the imputation model on the available labeled data, making their approach essentially an extension of classic semiparametric inference into a sequential setting with e-values.
Explicitly discussing these connections would clarify the paper's position in relation to the established literature.
[1] Xu, Zichun, Daniela Witten, and Ali Shojaie. "A Unified Framework for Semiparametrically Efficient Semi-Supervised Learning." arXiv preprint arXiv:2502.17741 (2025).
[2] De Bartolomeis, P., Abad, J., Wang, G., Donhauser, K., Duch, R. M., Yang, F., & Dahabreh, I. J. (2025). Efficient Randomized Experiments Using Foundation Models. arXiv preprint arXiv:2502.04262.
Other Strengths And Weaknesses: - I believe the algorithm is not described in sufficient detail. Specifically, the procedure for estimating and sequentially updating the predictive model is not clearly discussed, despite being central to the performance of the method. Providing details on how these models are initially trained and then updated (e.g., online vs. batch updates, frequency of updates) would be essential for understanding and replicating the approach.
- An important limitation not explicitly discussed by the authors is that if the predictive model yields very inaccurate predictions (e.g. misspecified model), their proposed method could actually result in worse statistical efficiency or lower power compared to simply using the labeled data alone. A brief discussion or exploration of this potential issue—either theoretically or empirically—would strengthen the paper.
Other Comments Or Suggestions: - typo line 87: ... over the distribution of the Y∗.s
Questions For Authors: - Your prediction-powered e-value appears structurally very similar to doubly robust estimators. Can you explicitly clarify how your approach differs from (or relates to) these classic doubly robust/AIPW methods, particularly since you train the predictive (imputation) models online using labeled samples rather than externally obtained data?
- You chose labeling budgets of 1% (diabetes prevalence estimation) and 0.5% (risk monitoring). What motivated these particular choices, and how sensitive are your empirical conclusions to these specific labeling budgets? Could you provide some evidence of robustness across different budget values?
- You currently provide causal discovery results on a single random DAG. Could you run your method across multiple random graphs and provide quantitative performance metrics (e.g., structural Hamming distance)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We'd like to thank the reviewer for their review. Please refer to our responses below.
**Labeling budgets:** We agree. The choices of 1% and 0.5% are indeed a bit arbitrary, and were made mainly so as to be a reasonably large reduction in the number of collected labels, while keeping the underlying number of samples reasonably large. We will add experiments varying the budget in the camera-ready version. That said, the currently presented results aren't particularly sensitive to these sampling budgets, though if the number of labelled samples (not the proportion, the absolute number) goes under a certain threshold, then our method does struggle a bit (most likely because the predictive model deteriorates).
**Evaluation of causal discovery setting:** The reviewer raises a good point. We will add comparisons of the average structural Hamming distance over several sampled random graphs to the camera-ready version. We have already run some first simulations, with favorable results: over 40 sampled graphs, the baseline using only labelled data obtains a mean structural Hamming distance of 12.15, whereas our method obtains 6.7; the best possible one (i.e., if we used all data available, as in Figure 4) is 6.4.
**Connections to the semiparametric inference and missing data literature:** Indeed, our prediction-powered e-values are essentially an AIPW-like estimator applied to the e-values. This construction was inspired from [Zrnic & Candès, 2024], which were in turn inspired by the usual doubly-robust estimators literature (cf. right after their equation (1)). We will make this connection more explicit in our paper. That said, perhaps the key difference between our method and usual semiparametric inference is that, rather than applying the AIPW-like estimator to the data, we apply them to the e-values -- i.e., directly to our measure of evidence. By doing so, we avoid asymptotics, and inherit all the properties of the original e-values, since all we require is that the e-value be at most one in expectation. The ability to update the predictor sequentially comes for free and is fairly easy to prove, with no significant restrictions (e.g. regularity conditions) necessary.
It is not clear whether our method can be seen as a more classic doubly-robust estimation for general e-values (as [Xu et al.] and [De Bartolomeis et al.] do for some previous PPI procedures). In the particular case of the e-value for the mean [Waudby-Smith and Ramdas, 2021], one can look at the resulting PPI e-value as the e-value for the mean applied on the AIPW-like estimator of [Zrnic & Candès]. But this observation does not seem to generalize to more complex e-values (e.g., those of https://arxiv.org/abs/2305.00143, or ones arising from p-to-e calibration).
**Details on the use of the algorithm in the experiments:** In the experiments, the sequential updating of the predictive model was done by simply appending new available data and retraining on all (labelled) data prior, every 100 collected samples. We will add this to the experiments section.
**Discussion on cases where PPI e-values would be underpowered:** Thank you for the suggestion, we will add this. | Summary: This paper proposes an extension of "prediction powered inference" to e-values, a general class of statistics that are used in many methods outside of the estimation. The authors show theoretical results justifying the validity of their prediction powered e-value, and its validity under active sampling, as well results for its growth rate. They also provide a comprehensive set of experiments showing the applications of prediction powered e-values to change point detection, continuous monitoring, and causal discovery.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes --- specifically for Theorem 2.1 --- seems correct.
Experimental Designs Or Analyses: Yes, all 4 experiments. No particular issues
Supplementary Material: Sections A.1 and B.
Relation To Broader Scientific Literature: This paper extends active prediction powered inference methods to work with e-values. There has been similar work (mathematically) in the e-value literature in the past for off policy estimation/doubly robust estimation/active risk control, but never applied in a black-box fashion to e-values directly.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The experiments are quite substantial and illustrate a suit of real world use cases, which is nice.
Other Comments Or Suggestions: - Is it possible to tie the growth rate result in Theorem 2.2 to a property of the predictor, e.g., MSE?
- One potential drawback is that if one's prediction are quite off, the resulting PPI e-value can be much smaller than the original true e-value --- is there a way of mitigating this and falling back to sampling every true e-value if the predictor is bad?
- A nitpick, but the original prediction-powered inference reduced the variance of the estimator based on the accuracy of the predictor, even without active sampling. For prediction-powered e-values, it seems like the advantage lies solely in the active setting, with no benefit in the "semi-supervised" setting where a fixed batch of true e-values is collected. Is there any way to also use prediction-powered e-values in the "semi-supervised" setting?
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We'd like to thank the reviewer for their insightful comments.
**Growth rate as a function of the MAE of the predictor:** Yes, we can; in particular, we can show that the Wasserstein distance in Theorem 2.2 is upper bounded by the MAE of the predictor:
$$\begin{aligned}
W(\mu\_i(X\_i) || Y\_i)
&= \sup_{\\|\phi\\|\_{Lip} \leq 1} | \mathbb{E}[\phi(\mu\_i(X\_i))] - \mathbb{E}[\phi(Y\_i)] |
\\\\ &= \sup_{\\|\phi\\|\_{Lip} \leq 1} | \mathbb{E}[\phi(\mu_i(X\_i)) - \phi(Y\_i)] |
\\\\ &\leq \sup_{\\|\phi\\|\_{Lip} \leq 1} \mathbb{E}[| \phi(\mu\_i(X\_i)) - \phi(Y\_i) |]
\\\\ &\leq \sup_{\\|\phi\\|\_{Lip} \leq 1} \mathbb{E}[| \mu\_i(X\_i) - Y\_i |]
\\\\ &= \mathbb{E}[| \mu\_i(X\_i) - Y\_i |].
\end{aligned}$$
This is an interesting result, and we will add it to the camera-ready version of the paper. Thank you for the suggestion!
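As a quick numerical illustration of this bound (a sketch added here, not taken from the paper; the Gaussian label distribution and biased-noisy "predictions" are arbitrary choices for the example): in one dimension, the empirical 1-Wasserstein distance between two same-size samples is the mean absolute difference of their *sorted* values (the optimal coupling), so it can never exceed the empirical MAE, which uses the natural (prediction, label) pairing.

```python
import random

random.seed(1)
n = 10_000
y = [random.gauss(0.0, 1.0) for _ in range(n)]       # true labels Y_i
mu = [yi + random.gauss(0.3, 0.5) for yi in y]       # biased, noisy predictions mu(X_i)

# Empirical 1-Wasserstein distance: mean |difference| under the sorted
# (optimal) coupling; the MAE instead uses the paired (prediction, label) coupling.
w1 = sum(abs(a - b) for a, b in zip(sorted(mu), sorted(y))) / n
mae = sum(abs(a - b) for a, b in zip(mu, y)) / n
```

Here `w1 <= mae` holds deterministically, mirroring the chain of inequalities above: the Wasserstein supremum is attained by the best coupling, and the paired coupling is just one candidate.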
**Fallback to non-PPI e-value in case of lack of power:** There probably does not exist a procedure that allows the practitioner to 'peek' at the final prediction-powered e-value $E^{ppi}_n$ and the non-prediction-powered one $E_n$, and choose which to use. However, if at any point over the course of the inference the practitioner believes that the prediction-powered e-values are under-powered (e.g., because not using them would have been more efficient so far), then they can start selecting $\pi_i(X_i) = 1$, which reduces the prediction-powered e-value to the non-prediction-powered one.
Alternatively, since we are working with e-values, any convex combination (parametrised by some scalar $\eta_i \in [0, 1]$) of the e-value components, $\eta_i e^{ppi}_i + (1 - \eta_i) e_i(Y_i)$, yields a valid e-value, as long as the $\eta_i$ are predictable (and thus independent of $e^{ppi}_i$ and $e_i(Y_i)$). So rather than increasing the probability of sampling the label, one could also decrease such an $\eta_i$.
**Semi-supervised setting:** Indeed, our construction is focused on an active setting, and extending it to a "semi-supervised" setting is non-trivial. Nevertheless, here is one possible way of doing it (though probably suboptimal): at each step, choose $\pi_i$ to approximate the probability of whether you have access to the corresponding label. Sample $\xi_i\sim Bern(\pi_i(X_i))$. Then you have a couple of cases:
1. If $\xi_i = 0$ (i.e., don't want to use the label), we continue as normal.
2. If $\xi_i = 1$ (i.e., want to use the label) and we have access to the label, we continue as normal.
3. If $\xi_i = 1$ (i.e., we want to use the label) but we _don't_ have access to the label, then rather than contributing $e^{ppi}_i$ to the e-value, we contribute $e^{ppi-missing}_i:=\frac{a_i-(1-\pi_i(X_i))b_i}{\pi_i(X_i)}$; this works because, whatever the value of $Y_i$, $e^{ppi-missing}_i \leq e^{ppi}_i$.
We believe that a more in-depth exploration of "semi-supervised" procedures such as this one is best suited to its own paper (e.g., how do we retain good performance even when case (3.) is somewhat frequent?), and so we leave it to future work. | Summary: The paper proposes a methodology for converting any e-value based inference procedure into a prediction-powered counterpart. More concretely, they consider the setting where we have a data stream $(X_i, Y_i, \pi_i, \xi_i)_{i=1}^\infty$ where $X_i$ is cheap data, $Y_i$ is expensive labelled data, and $\xi_i \sim Bern(\pi_i(X_i))$ indicates whether we have access to $X_i$ or not. Additionally, we have access to a predictive model $\mu_i$ at each step.
In this model, they key contribution is that given any specification of a sequence of e-values on the expensive data, $E_n \doteq \prod_{i=1}^n e_i(Y_i)$, they show one can construct a corresponding prediction-powered sequence $E_n^{ppi} \doteq \prod_{i=1}^n e_i^{ppi}(Y_i)$ by defining $e_i^{ppi} \doteq e_i (\mu_i(X_i)) + [e_i(Y_i) - e_i(\mu_i(X_i))] \cdot \frac{\xi_i}{\pi_i(X_i)}$.
The authors supplement this contribution with a lower bound on the expected growth rate that is in terms of that using the true data minus the expected Wasserstein distance between the predictions of the model and the true values $Y_i$. They extend their results to building anytime-valid prediction-powered confidence sequences, as well as to general e-value based algorithms.
Finally, they supplement their theoretical contributions by a series of experiments showing the empirical gains achievable (faster null rejection, smaller interval widths, etc.).
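The core construction above can be made concrete with a minimal self-contained simulation (an illustrative sketch: the simple betting e-component, the uniform label distribution, the prediction-noise model, and the constant labelling probability `PI` are all assumptions for this example, not choices from the paper):

```python
import math
import random

random.seed(0)
M0, LAM, PI = 0.5, 0.5, 0.5  # null mean, bet size, constant labelling probability

def e_component(y):
    # Betting e-component for H0: E[Y] = M0, with Y in [0, 1];
    # under the null its expectation is exactly 1.
    return 1.0 + LAM * (y - M0)

factors = []
for _ in range(2000):
    y = random.uniform(0.2, 1.0)                       # expensive label, true mean 0.6
    mu = min(1.0, max(0.0, y + random.gauss(0, 0.1)))  # imperfect cheap prediction
    xi = 1 if random.random() < PI else 0              # was this label collected?
    # e_i^ppi = e_i(mu(X_i)) + [e_i(Y_i) - e_i(mu(X_i))] * xi / pi(X_i)
    factors.append(e_component(mu) + (e_component(y) - e_component(mu)) * xi / PI)

# Validity: averaging the component over xi alone recovers e_i(Y_i) exactly,
#   PI * [e(mu) + (e(y) - e(mu)) / PI] + (1 - PI) * e(mu) = e(y),
# so E[e_i^ppi] <= 1 whenever E[e_i(Y_i)] <= 1 under the null.
log_E = sum(math.log(f) for f in factors)  # log of E_n^ppi; grows since E[Y] != M0
```

With these particular `LAM` and `PI`, every component is strictly positive, so the product e-process is well-defined; in general, $\pi_i(X_i)$ must be constrained so that $e^{ppi}_i \geq 0$, as noted above.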
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I skimmed through the proofs in the appendix and they appear correct.
Experimental Designs Or Analyses: I found the experimental analyses to be thorough and well-done. I appreciated the inclusion of different types of inference tasks, the baseline considered and the presentation.
Supplementary Material: I skimmed the proofs.
Relation To Broader Scientific Literature: This paper is very fundamentally connected to the broader scientific literatures of sequential hypothesis testing via betting and the more recent line of work in prediction-powered inference. They provide extensive references and even apply their method to more niche/particular developments such as using e-values for change-point detection. In my opinion, this is a situation where combining two lines of work really enriches both respective lines.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: Both prediction-powered inference and inference with e-values are relatively recent developments in statistical inference that are particularly appealing and relevant — proposing an elegant way to bring them together is in my opinion a very useful contribution.
Beyond allowing for prediction-powered inference with e-values, this methodology allows for continuous updating of the predictive model used during inference, which I find to be particularly neat and practical (as batch predicition-powered inference required a predictive model fixed a priori).
One potential weakness is that the modeling of missingness as $Bern(\pi_i(X_i))$ is a bit odd and hard to map onto all practical scenarios where one may want to employ prediction-powered inference. For example, sampling $\xi \sim Bern(\pi_i(X_i))$ may not be possible, and/or it may be unclear whether we could have access to $\pi_i(X_i)$ in order to construct our e-value in many applications.
Other Comments Or Suggestions: List of typos:
- on line 57, left column, i think it should be “allow” rather than “allows”
- on line 175, right column, change “inputted” to “imputed”
- on line 225, left column, it should be “thorough” not “through”
- on line 357, right column, “extremely quicker” is not grammatically correct
Questions For Authors: Could you comment more on modeling missingness through $\pi_i(X_i)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We'd like to thank the reviewer for their comments.
**Regarding modelling missingness through $\pi_i(X_i)$:** Our construction is focused on an active/experimental setting in which we can actually sample $\xi_i \sim Bern(\pi_i(X_i))$ and collect $Y_i$ when $\xi_i=1$. Extending it to a "semi-supervised" setting is non-trivial. Nevertheless, here is one possible way of doing it (though probably suboptimal): at each step, choose $\pi_i$ to approximate the probability of whether you have access to the corresponding label (this is akin to e.g. a propensity score). Sample $\xi_i\sim Bern(\pi_i(X_i))$. Then you have a couple of cases:
1. If $\xi_i = 0$ (i.e., don't want to use the label), we continue as normal.
2. If $\xi_i = 1$ (i.e., want to use the label) and we have access to the label, we continue as normal.
3. If $\xi_i = 1$ (i.e., we want to use the label) but we _don't_ have access to the label, then rather than contributing $e^{ppi}_i$ to the e-value, we contribute $e^{ppi-missing}_i:=\frac{a_i-(1-\pi_i(X_i))b_i}{\pi_i(X_i)}$; this works because, whatever the value of $Y_i$, $e^{ppi-missing}_i \leq e^{ppi}_i$.
We believe that a more in-depth exploration of "semi-supervised" (i.e., that rely solely on observational data) procedures such as this one is best suited to its own paper (e.g., how do we retain good performance even when case (3.) is somewhat frequent?), and so we leave it to future work.
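Concretely, the three-case routing above could be sketched as follows (a hypothetical sketch: `e_ppi` stands in for the labelled PPI increment defined in the paper, and the $\xi_i = 0$ branch is left as a placeholder; only `e_ppi_missing` follows the formula given here):

```python
def e_ppi_missing(a, b, pi):
    # case (3) fallback increment: (a_i - (1 - pi_i(X_i)) * b_i) / pi_i(X_i),
    # which lower-bounds e^ppi_i whichever the value of Y_i
    return (a - (1.0 - pi) * b) / pi

def step_increment(xi, a, b, pi, label_available, e_ppi):
    # xi ~ Bern(pi) is the label-request indicator; `e_ppi` is a hypothetical
    # stand-in for the labelled PPI increment defined in the paper
    if xi == 0:
        return 1.0  # placeholder for case (1): "continue as normal" without a label
    if label_available:
        return e_ppi(a, b)           # case (2): label requested and available
    return e_ppi_missing(a, b, pi)   # case (3): label requested but missing

# e.g. with pi = 0.5, a = 1.0, b = 0.4: (1.0 - 0.5 * 0.4) / 0.5 = 1.6
assert abs(e_ppi_missing(1.0, 0.4, 0.5) - 1.6) < 1e-12
```

The point of the fallback is that it is valid regardless of the unobserved label, at the cost of a smaller increment when case (3) fires often.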
Also, thank you for pointing out the typos, we will correct them! | Summary: This paper extends the ideas of prediction-powered inference to e-values. The contribution is focused but nice, since as the authors note, prediction-powered e-values allow for a broader set of possible inference techniques and guarantees (anytime validity in particular). The core methodology involves leveraging the same inverse propensity weighting tricks used in AIPW and Active Statistical inference to the sequence of components of an e-value (with additional constraints on $\pi(X_i)$ to ensure that $E_n^{ppi}$ is non-negative for all $n \in \mathbb{N}$), and then theoretically showing that the ppi e-value is a valid e-value. The remainder of the empirical section gives several compelling experiments that show the versatility and effectiveness of prediction-powered e-values.
Claims And Evidence: The theoretical claims are all clear and well-presented. I appreciate the focus on crisp statements in the main text, with additional generalizations in the Appendix.
There are, however, two claims that I believe need more support:
- Contribution 3 cites "massive (often 100x-1000x) reductions in data acquisition costs", but from the empirical results it is not clear where this is supported. Please also see the "Questions for Authors" section of this review about the labeling and descriptions of the results figures.
- It is repeatedly mentioned that a strength of sequential prediction-powered e-values is the ability to update the underlying prediction model and data collection rule over the course of the inference process. However, for at least the experiment in 3.1, this is also possible in Active Statistical Inference (Zrnic & Candes, 2024), which relies only on having martingale increments (and uses the martingale CLT). It seems the fairest comparison would be to use the same setup used in "Active prediction-powered (ours)", but with the estimator in Zrnic & Candes. Similarly, it is unclear if the vanilla prediction-powered approach uses lambda-tuning or not?
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Core results (Theorem 2.1 & Proposition 2.3) were checked, and auxiliary results (Theorem 2.2, Proposition 2.3/2.4/2.6) appear to be correct.
Experimental Designs Or Analyses: The experimental designs in all of the four case studies appear sound for demonstrating the applicability of the proposed method. As noted earlier, a comparison to Active Statistical Inference (Zrnic & Candes, 2024), which allows for updates to the predictive model and adaptive sampling, is missing from the experimental section.
Supplementary Material: Reviewed the appendix.
Relation To Broader Scientific Literature: The paper builds upon the existing literature on prediction-powered inference, particularly the work of Angelopoulos et al. (2023) and Zrnic & Candes (2024). It expands this area by applying the concept to e-values, which have gained popularity as a flexible alternative to p-values --- with impactful applications to sequential and post-hoc inference. The paper also connects to the literature on e-values in various applications like hypothesis testing, confidence sequences (Waudby-Smith & Ramdas, 2020), change-point detection (Shin et al., 2022; Shekhar & Ramdas, 2023), and causal discovery. The key contribution is in bridging these two active areas of research.
Essential References Not Discussed: The related work appears to be discussed appropriately throughout the introduction and main text.
Other Strengths And Weaknesses: The main issue with statistical inference using e-values is typically a loss in power. Therefore I am surprised by the improvement over PPI in Fig. 1, though the authors claim this is due to data splitting vs. online model updates. The paper should be comparing on fairer terms to Active Statistical Inference (Zrnic & Candes, 2024) in Fig. 1.
Other Comments Or Suggestions: See above.
Questions For Authors: While the presentation is generally very good, a couple of things were unclear:
1) What is $m$ on page 5 for the definition of $E_n^{(\theta)}$?
2) What is $n$ in the experiments in Fig. 1? I understand that the number of labelled examples is 1% of that. Similarly, what $n$ is used for vanilla PPI, as that seemed to be different following data splitting.
3) What is the x axis in Fig. 2?
4) Can you provide an explanation of the change point detection algorithm in Shekhar & Ramdas in the Appendix.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive remarks, addressed as follows:
**Sequential updating and [Zrnic & Candès, 2024]:** The reviewer raised a good point. We will add comparisons in Section 3.1 to the method of [Zrnic & Candès, 2024]. We've run a preliminary experiment on this and indeed, their method (the sequential variant) performs much more similarly to ours (their martingale-CLT-based interval is slightly smaller than our nonasymptotic e-value-based one: with fixed label sampling probability, ours is [0.127, 0.155], while theirs is [0.128, 0.153]; for reference, vanilla PPI is [0.129, 0.172]).
**Reductions in acquisition costs:** In all the experiments, we stipulate a particular labelling budget in the form of $\pi_{inf}$. For example, in Section 3.1 we have $\pi_{inf} = 1\%$ (right under equation (2)) and in Section 3.2 we have $\pi_{inf} = 0.5\%$ (right after equation (4)). These quantities denote the percentage of samples we will label, and it is from them that we obtained the 100x-1000x statistic: $1/1\% = 100$ and $1/0.5\% = 200$. As for the 1000x number, it corresponded to a previous version of Section 3.2 that we ended up not including in the paper, in which we had $\pi_{inf} = 0.1\%$. The results were quite satisfactory, but we increased $\pi_{inf}$ so that it was less arbitrary and more in line with the other values of $\pi_{inf}$ in the paper. After this change, it seems we forgot to update the introduction; we will fix this for the camera-ready version. We will also clearly present these numbers in the experiments section, as well as report the exact number of labels used in practice (though the actual number was generally very close to the stipulated budget).
**As for the remaining questions:**
- $m$ on page 5: that is supposed to be $\theta$, we'll fix that.
- $n$ in the experiments in Fig. 1: the full dataset has 253680 samples; for the only-labelled baseline we use 253680 × 1% = 2537 of them; for our method we follow the sampling (which also labels about 1% of samples); and for vanilla PPI we also collect 1% of labels, splitting them 50%-50% between training the model and the statistical inference, since we want both a good model (hence a large training split) and little loss in the statistical inference (hence a large inference split). We will also include experiments varying this split's proportion, though it does not seem to improve vanilla PPI significantly.
- The x axis in Fig. 2 is time, i.e., the course of the inference procedure. The first samples of the experiment are to the left, the later samples are to the right. We will make this more clear in the camera-ready version.
- We will also add an explanation of the change-point detection algorithm of Shekhar&Ramdas in the Appendix, thank you for the suggestion. | null | null | null | null |
Compositional Generalization via Forced Rendering of Disentangled Latents | Accept (poster) | Summary: The paper develops theoretical and empirical results as to why a disentangled representation at the input does not necessarily lead to OOD compositional generalization.
First, the paper demonstrates the failure of common generative architectures (decoder-only CNN / MLP) to
perform compositional generalization, even when given disentangled input. This is demonstrated on a task of
generating synthetic images of a 2D Gaussian bump, centered at the (x, y) input coordinates. They then suggest
and test an explanation for this failure - that the input becomes re-entangled in deeper layers. This is shown
quantitatively using kernel and manifold analyses. The paper suggests that, instead of generalizing, the models memorize training data.
Finally, two methods are suggested to improve the compositional generalization: architectural modifications with
regularization, and additional training on curated data.
Claims And Evidence: Strengths:
* Studying the impact of factorization is a very important task, since a breakthrough in compositional embedding can break many limits in current learnable approaches.
* The toy example is analyzed extensively and thoroughly, in terms of memorization, manifold analysis, and augmentation.
* The paper’s demonstration of the memorization of the in-distribution training data is presented clearly and is
justified using the binary kernel factorization. Likewise, the presentation of the re-entanglement using the Jacobian
tensor is valid.
* The low-rank approach is an interesting way to force factorized structure and prevent memorization. In addition,
augmenting the training data allows improving compositional generalization of current models without the need
to modify architecture.
Weaknesses:
* It appears problematic to draw holistic, general insights from a single, limited toy example (this is indeed mentioned in the Limitations section, but it may not be a simple limitation so much as a biased *general* insight). Experiments may be biased for various reasons, such as: 1) the relationship between the size of the network and the amount/variability of the training data may strongly bias the network toward memorization instead of generalization; 2) relying on a CNN-based network for a task with strong locality, where a spatially aware architecture would be preferable, may also be problematic. It can shift the network toward memorization, since the architecture is less adequate for solving the problem. For instance, more complicated networks, specifically spatially aware ones (like a transformer with a positional-encoding stage), might lead to different conclusions.
* The authors show that subsequent layers re-entangle the representation, however it is not proven to be the cause
of failure for OOD generation. The paper does not test the less constrained hypothesis, that it is sufficient for the
representation to be merely invertible into the original factors. The requirement of strict factorization seems too
constraining.
* The generality and applicability of the suggested methods are unclear without tests on more complicated datasets. In addition, generating additional curated data is domain-specific and can be very expensive and demanding for real datasets.
* The paper’s claim - that standard architectures fail to achieve compositional generalization despite being provided with explicitly factorized inputs - implicitly assumes that a factorized input should be sufficient for such generalization. However, prior work (e.g., Montero 2021) has already shown that compositional generalization depends on more than just disentanglement at the input level. Consequently, the paper refutes a weaker claim than what is typically asserted.
Methods And Evaluation Criteria: * Nowadays VAEs fall far short of diffusion models in terms of generative capability. I think it is theoretically important to understand compositional generalization in general, but investigating it on CNNs reduces the impact of the findings. (I noticed this is mentioned in the Limitations section as well; I agree with this limitation and mention it here as a weakness. It does not "cancel" the insights, but it tones them down.)
* The ablation study covers various combinations of the loss terms, but all on the same architecture. Since the tuning is only on the loss, it could easily be applied to other architectures (which is in fact a pro), so I would expect that type of ablation study.
Theoretical Claims: The methodology and reasoning here do not seem to take into account all possible cases for the final conclusions; see above.
Experimental Designs Or Analyses: Insufficient: a toy task and a single architecture that is not well-suited to the problem; see above.
Supplementary Material: OK
Relation To Broader Scientific Literature: OK
Essential References Not Discussed: Related Work should contain a section on loss terms for compositionality.
Other Strengths And Weaknesses: Minor weaknesses:
* The paper does not list the architectures of the CNN and MLP used in Section 3.1.
* VAEs play a central role in generation in general and in compositionality research in particular, and hence the paper is lacking by not including them among its tested architectures.
* The visualizations use too small a font size for the amount of information shown; an extreme example is (e) in Fig. 4.
* Line 309 right column, a sentence started but not finished, "both results show"
* It is not clear that MSE loss is preferable for this task. If the network is required to place a bump at (x, y) but places it at (x+d, y+d), the same MSE loss is reached regardless of the distance from the required location (once the bumps no longer overlap). I think an earth-mover metric (an optimal-transport loss in general) would be more appropriate here.
* Why do the authors decide on this specific task? Why generative task, and under generative ones, why this specific bump task? I did not find intuitive explanation in the paper why this task is preferable for some reason.
* Inconsistencies in capitalization. For example, subsection 3.4 capitalizes only the first word, while subsections 4.1, 4.2, and others capitalize each word.
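The MSE point above can be illustrated with a minimal 1D sketch (hypothetical toy code, not the paper's setup): once two narrow bumps stop overlapping, their MSE is essentially constant in the displacement, whereas a 1D Wasserstein distance keeps growing with it.

```python
import math

GRID = range(100)

def bump(center, sigma=2.0):
    # 1D Gaussian bump on a fixed grid (a stand-in for the paper's 2D bumps)
    return [math.exp(-((x - center) ** 2) / (2 * sigma ** 2)) for x in GRID]

def mse(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def wasserstein1(u, v):
    # W1 between the bumps viewed as normalized 1D mass distributions
    # (sum of |CDF difference| with unit grid spacing)
    su, sv = sum(u), sum(v)
    cu = cv = w = 0.0
    for a, b in zip(u, v):
        cu += a / su
        cv += b / sv
        w += abs(cu - cv)
    return w

target = bump(20)
# Once the bumps no longer overlap, MSE is blind to how far off the bump is...
assert abs(mse(target, bump(60)) - mse(target, bump(80))) < 1e-12
# ...while the transport distance keeps growing with the displacement.
assert wasserstein1(target, bump(80)) > wasserstein1(target, bump(60))
```

Here W1 grows roughly linearly with the offset (about 40 vs. 60 grid units), while the two MSE values agree to numerical precision.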
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Architecture Choice (Size & Type)**
- **Data–Model Size and Memorization:** In our experiments, the network does not memorize when we alter the data (e.g., 1D stripes with few conjunctions), even with significantly fewer samples. Thus, we do not see evidence that a “small dataset vs. large model” alone must lead to memorization. Instead, we find that memorization arises when factorization is not encouraged (via e.g. regularization, data augmentation), and it exhibits the “superposition” strategy—activating all relevant seen patterns simultaneously.
- **CNNs/MLPs as Foundational Backbones:** We tested both CNNs and MLPs (the backbone of many generative models, from VAEs, UNet-based Diffusion models, to transformers). Both show manifold distortion and OOD failure, suggesting that “warping” in the presence of incomplete data coverage is fairly universal. We also tried positional encoding with CNN/MLP (omitted for brevity) and observed similar OOD failures.
- **Transformers & Spatially-aware Models:** When we experimented with Transformer-based architectures for the 2D bump task, we found a notable difference: standard Transformer encoder layers with self-attention across pixel tokens performed worse in out-of-distribution bump placements compared to a simpler stack of per-token linear layers and non-linearities. Since each pixel token already has direct access to (x,y) context via positional embeddings, pooling or mixing tokens through self-attention seems unnecessary and can inadvertently entangle factors, degrading OOD generalization. Conversely, a purely feed-forward per-token approach preserves the simple, distance-like mappings needed for forming a 2D bump, yielding better compositional generalization.
**Factorization vs. Mere Reversibility**
- **Why Not Just Invertibility?** We appreciate the suggestion that invertibility might ensure compositional generalization. However, we emphasize compositionality at the manifold level, as single-sample invertibility alone may still allow factors to become entangled across the manifold, hindering new combinations. For instance, invertible models like normalizing flows don't inherently show better compositionality, suggesting that invertibility by itself is insufficient.
- **Jacobian‐Based Manifold Factorization:** Our Jacobian‐based metric checks how well the manifold locally preserves the directions ∂/∂x vs. ∂/∂y. If these directions become “cross‐wired,” the network can no longer compose new factor pairs effectively—despite potentially having a globally invertible map. In short, local entanglement disrupts factor reuse.
- **Partial Rather Than Strict Factorization:** We do not require each neuron or layer to exclusively encode one factor. Rather, we need that the overall manifold maintain approximately independent axes for each factor. Empirically, when the representation heavily mixes them, the model reverts to memorizing ID combinations rather than systematically composing new ones.
Although an invertible transformation can theoretically map individual samples in or out, robust compositional generalization requires more than local invertibility—it requires the activations' distribution to avoid entangling distinct factors, thus enabling reuse in novel combinations. Our Jacobian-based metric tracks this manifold-level factorization, emphasizing sufficient (rather than strict) factorization across the manifold for OOD generalization. We will clarify this distinction to avoid confusion.
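As a toy illustration of such a Jacobian-based check (hypothetical code, not the paper's actual metric), one can finite-difference a map f(x, y) and measure the cosine between its ∂/∂x and ∂/∂y columns; a factorized map keeps them orthogonal, while an entangled one mixes them:

```python
def jacobian_columns(f, x, y, eps=1e-5):
    # finite-difference columns df/dx and df/dy of a toy map f(x, y) -> vector
    fx = [(p - m) / (2 * eps) for p, m in zip(f(x + eps, y), f(x - eps, y))]
    fy = [(p - m) / (2 * eps) for p, m in zip(f(x, y + eps), f(x, y - eps))]
    return fx, fy

def mixing(f, x, y):
    # |cosine| between the two Jacobian columns: ~0 when the factors stay separated
    fx, fy = jacobian_columns(f, x, y)
    dot = sum(a * b for a, b in zip(fx, fy))
    nx = sum(a * a for a in fx) ** 0.5
    ny = sum(b * b for b in fy) ** 0.5
    return abs(dot) / (nx * ny)

factorized = lambda x, y: [x, x ** 2, y, y ** 3]            # x- and y-features kept apart
entangled = lambda x, y: [x * y, x + y, x - y, x * y ** 2]  # features mix both factors

assert mixing(factorized, 1.0, 0.5) < 1e-6
assert mixing(entangled, 1.0, 0.5) > 0.1
```

The factorized toy map has exactly orthogonal Jacobian columns everywhere, whereas the entangled one has a clearly nonzero cross-axis cosine at a generic point.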
**Applicability to More Complex Datasets & Real Data**
See our discussion for Reviewer 3 (iydz).
**Novelty & Positioning Relative to Prior Work**
See our response for Reviewer 3 (iydz).
**Discussion on Loss, Architecture Details, & Additional Points**
- **Loss Functions:** We appreciate the note on Earth‐Mover distance. While it may be more spatially appropriate for “bump” images, MSE is simpler and general. Off‐manifold images, which need not look like bumps, might not benefit from EMD.
- **CNN/MLP vs. VAEs:** Our “decoder‐only” approach effectively isolates the portion of generative models responsible for mapping factorized latents to pixel space, which is also present in VAEs and diffusion models. We wanted to remove as many confounding factors as possible to highlight the re‐entanglement phenomenon.
- **Why This Specific Bump Task?** We wanted a minimal environment with known factorization. This clarity allows us to discover the superposition and warping phenomena in an unambiguous way that might otherwise be obscured in complex tasks.
- **Minor Stylistic Issues:** We will address all writing and visualization issues raised by the reviewer.
We appreciate the reviewer’s feedback and will clarify this in our revision. Our findings underscore that factorized inputs alone don't guarantee compositional generalization unless factor separation is maintained throughout the network’s forward pass. We hope these insights inform robust model designs and data strategies more broadly.
---
Rebuttal Comment 1.1:
Comment: In light of the rebuttal response to my review and to the other reviewers the contribution is now more clear and I raise my rank. | Summary: The authors investigate why disentanglement is not sufficient for compositional generalization. First they observe that models are unable to reconstruct simple bumps in unseen locations in visual space from fully factorized latents. They use this as evidence that disentanglement alone is not enough for compositional generalization. They then proceed to show how different forms of regularization and data augmentation can help models to achieve better generalization. These include penalizing the entropy and variance of the singular value of filters in the transformation from representation to input and modifying how data is presented so that models learn about each factor independently.
Claims And Evidence: Yes, the claims made in the article are well supported by empirical simulations.
Methods And Evaluation Criteria: They do, though as is often the case in ML/DL, there is a hope to see insights in toy datasets translated to more complex datasets. In this case the datasets used are very simple (just gaussian bumps), so it is unclear if these insights translate to other settings.
Theoretical Claims: There were no proofs in the manuscript.
Experimental Designs Or Analyses: The experimental design and analysis is sound.
Supplementary Material: I didn't.
Relation To Broader Scientific Literature: They are, but also not very novel. My feeling is that the authors put too much emphasis on what they have found (which is not novel) when they should put more emphasis on their analysis. Specifically, the idea that just having fully disentangled representations does not help compositional generalization was already pointed out in Montero et al., 2021 (which they discuss, making this omission a bit puzzling. See the last section in that article.). Additionally, their design where a central pattern is excluded from reconstruction was already explored in [1], though admittedly it appears in the appendix. Finally, the insight that enforcing some factorization in output space is required to perform compositional generalization was already hypothesized in [2] (final section). Thus the article sits in a weird position where it retreads some points that have already been made, reaching similar conclusions even if they do so via different methods and perspectives (which I believe is still valuable).
- [1] Watters, N., Matthey, L., Burgess, C. P., & Lerchner, A. (2019). Spatial broadcast decoder: A simple architecture for learning disentangled representations in vaes. arXiv preprint arXiv:1901.07017.
- [2] Montero, M., Bowers, J., Ponte Costa, R., Ludwig, C., & Malhotra, G. (2022). Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation. Advances in Neural Information Processing Systems, 35, 10136-10149.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: The main strength of the article is that it provides a more thorough account of some findings/insights that have been previously found. Specifically, I believe that viewing compositional generalization failures in generative models from the perspective of transport operators is a valuable one. But since both the findings and insights are not novel it is unclear how useful this will ultimately be.
Other Comments Or Suggestions: I think my main suggestion is to shift the tone of the article from one where the authors claim to make a fundamental discovery (which for better or worse has already been made), to one where they characterize it from a transport perspective. Specifically, they can ask how layer depth and the like affect this issue and then discuss their regularization techniques/data augmentation. I would actually say that this last part feels more like curriculum learning, which has some interesting links to cognitive science and how we learn about concepts in isolation before learning how they interact with each other. Then I would try to translate their insights to more sophisticated datasets (we don't need ImageNet, but there are plenty of datasets out there that are used to explore disentangled representations). Be aware, though, that this idea of regularization has been tried before and in general does not lead to good results (see references), so the authors need to make a clear case as to why their approach is fundamentally, not just mathematically, different. Finally, I agree that architectural constraints are good, as shown in [4]. The question would be, are there general architectural motifs that apply across modalities/concepts?
[1] Kim, H., & Mnih, A. (2018, July). Disentangling by factorising. In International conference on machine learning (pp. 2649-2658). PMLR.
[2] Burgess, C. P., Higgins, I., Pal, A., Matthey, L., Watters, N., Desjardins, G., & Lerchner, A. (2018). Understanding disentangling in $\beta $-VAE. arXiv preprint arXiv:1804.03599.
[3] Zhu, X., Xu, C., & Tao, D. (2021, July). Commutative lie group vae for disentanglement learning. In International Conference on Machine Learning (pp. 12924-12934). PMLR.
[4] Montero, M. L., Bowers, J. S., & Malhotra, G. (2024). Successes and Limitations of Object-centric Models at Compositional Generalisation. arXiv preprint arXiv:2412.18743.
Questions For Authors: I have no questions apart from the ones above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Novelty and Positioning Relative to Prior Work**
We fully agree that claiming the insufficiency of disentanglement is not novel (also see response to Reviewer 1, vKC8). Our primary contribution lies instead in providing a detailed mechanistic explanation of why disentanglement fails—specifically, through manifold warping (due to topological deformation) and superposition-based memorization. We thank the reviewer for highlighting Montero et al. (2022), who primarily attribute compositional failures to encoder limitations rather than decoder issues. In contrast, we focus exclusively on decoder failures. Montero et al. observe that models succeed in non-interactive but struggle with interactive composition, consistent with our findings that the default composition mode is superposition-based, suitable only for non-interactive scenarios. They suggest interactive composition requires learning causal interactions between factors; notably, our proposed methods (embedding regularization and data augmentation) explicitly encourage learning these causal interactions, which we demonstrate to be effective empirically.
In our revision, we will amend the title and abstract to clearly highlight our contribution as providing deeper diagnostic insights into compositional generalization failures, explicitly acknowledging prior work and positioning our study accordingly.
**Applicability to More Complex Datasets and Real-World Data**
We agree that translating insights from our synthetic study to more complex, real-world datasets is an important future direction. Indeed, as previously emphasized by Montero et al., interactive compositionality cannot be addressed by a one-size-fits-all solution, and accordingly, we do not claim to propose a universal remedy. While our specific approach may not directly generalize to complex datasets, it highlights two valuable exploratory directions: (1) training modular, output-level embedding filters dedicated to each disentangled input dimension, and (2) dataset augmentation with isolated factors of variation. Our simplified setting was specifically chosen to clearly illustrate underlying mechanisms of compositional failure (such as models resorting to memorized ID data for OOD generalization—a nontrivial yet unsuccessful mode).
In our revision, we will explicitly acknowledge that generalizing our diagnostic insights to more complex datasets remains open and valuable. We appreciate the reviewer’s suggestion of explicitly exploring common disentanglement datasets and will discuss this as a key direction for future research.
**Relationship to Curriculum Learning and Cognitive Science**
We greatly appreciate this insightful connection. Indeed, our data curation method—introducing each factor independently—does resonate closely with curriculum learning strategies in cognitive science. In the revised manuscript, we will explicitly acknowledge this parallel, highlighting that isolating concepts before combining them is a recognized effective strategy both in human learning and potentially in artificial neural networks. We will cite relevant cognitive science literature as suggested, discussing this interesting link.
**Regularization Techniques and Architectural Constraints**
We thank the reviewer for this important caution. Our regularization method differs from prior work by explicitly targeting the singular-value structure of the decoder’s weight matrix to mitigate topological deformation (warping). Nevertheless, we recognize and appreciate the reviewer’s caution about prior regularization strategies’ limited success in broader domains. We will clarify this distinction explicitly in our revision, emphasizing that while our targeted regularization approach is effective in our controlled setting, its broader applicability is an open and important question.
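One plausible way to realize such penalties on the singular-value spectrum (a hypothetical sketch; the paper's exact regularizer may differ) is to compute the variance and entropy of a weight matrix's singular values, here in closed form for a 2×2 matrix:

```python
import math

def singular_values_2x2(a, b, c, d):
    # closed-form singular values of [[a, b], [c, d]] via the eigenvalues of A^T A
    t = a * a + b * b + c * c + d * d   # trace(A^T A)
    det = (a * d - b * c) ** 2          # det(A^T A)
    disc = max(t * t - 4 * det, 0.0)
    lam1 = (t + math.sqrt(disc)) / 2
    lam2 = max((t - math.sqrt(disc)) / 2, 0.0)
    return math.sqrt(lam1), math.sqrt(lam2)

def sv_penalties(a, b, c, d):
    # variance and entropy of the (normalized) singular-value spectrum
    s1, s2 = singular_values_2x2(a, b, c, d)
    mean = (s1 + s2) / 2
    var = ((s1 - mean) ** 2 + (s2 - mean) ** 2) / 2
    total = s1 + s2
    ent = -sum(p * math.log(p) for p in (s1 / total, s2 / total) if p > 0)
    return var, ent

# An isometry has equal singular values: zero variance, maximal entropy (ln 2)...
var0, ent0 = sv_penalties(1, 0, 0, 1)
assert abs(var0) < 1e-12 and abs(ent0 - math.log(2)) < 1e-12
# ...while a rank-1 map concentrates all energy in one direction.
var1, ent1 = sv_penalties(1, 1, 1, 1)
assert abs(var1 - 1.0) < 1e-12 and abs(ent1) < 1e-12
```

Depending on the sign with which each term enters the loss, such penalties can push the spectrum toward balanced (near-orthogonal) or concentrated (low-rank) filters.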
Regarding architectural constraints, we strongly agree that identifying general architectural motifs applicable across modalities and concepts is a promising direction. We will explicitly discuss this in the revised manuscript, noting that the identified failure modes and solutions (such as factor-specific architectural constraints) could be generalized across other data modalities.
**Terminology and Clarity**
We thank the reviewer for this valuable suggestion. We will comprehensively revise the manuscript to adopt a more precise and appropriate tone, clearly positioning our paper as providing mechanistic diagnostic insights rather than fundamental novelty in identifying the insufficiency of disentanglement. We agree this will significantly enhance clarity and ensure accurate positioning within the existing literature.
We sincerely thank the reviewer again for their detailed comments, constructive criticisms, and thoughtful suggestions. These revisions will substantially improve the clarity and positioning of our paper, and we appreciate the reviewer’s help in guiding this important improvement. | Summary: The paper investigates conditions under which a neural network learns to generalize "compositionally". The setting involves learning to generate a 2D "bump function". A key result is that a "disentangled" representation is not sufficient to ensure compositional generalization. The authors then describe data curation and regularization strategies that appear, in their synthetic setting, to be sufficient for compositionality.
## update after rebuttal: As described in my comment later in the thread, I am keeping my initial rating.
Claims And Evidence: The claims seem reasonably supported, and I appreciated the extensive use of visualizations to analyze the results. The setting is simple enough that the results seem fairly transparent—that's a plus, and kudos to the authors for creating such an easy-to-analyze model.
If I have any reservations, it's that the specific definition of "compositional generalization" (as described briefly in the body of the paper, and more extensively in Appendix A) seems potentially too narrow. When I look at the function learned in Figure 1d, for example, it actually seems like a fairly clever OOD generalization: the network appears to have learned that (1) center pixels are empty, (2) there must be bumps at the given x- and y-coordinates; (3) the centroid of the bumps appears at the given x- and y-coordinates. And this generalization appears "compositional" in the sense that it is superposing (adding) separate solutions for x- and y-coordinates.
Methods And Evaluation Criteria: The methods seem reasonable for the given toy problem. One issue is that I didn't immediately see exact details on some of the networks used (e.g. for the bump functions). It's not clear to me there are sufficient details to reproduce the authors' experiments precisely.
Theoretical Claims: Generally the claims made sense to me. However, I have to admit I don't completely follow what's going on with equation 8, and would like to see more detail.
Experimental Designs Or Analyses: The experimental designs made sense, but note the question above about experimental detail.
Supplementary Material: n/a
Relation To Broader Scientific Literature: The related work sections seems good: in particular, I appreciate that it was focused and did not bring in irrelevant details.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- The results from the initial toy model are not particularly surprising in retrospect, but are a very nice illustration that generalization doesn't always happen in the way that one would expect. I don't know of this example in the literature, and it seems like a good addition.
- The regularization described in section 4 appears to be a simple way to encourage compositional generalization; I'm not 100% sure I understood it, but if I did, it seems potentially useful.
Weaknesses:
- I would have liked a more careful description of why this particular definition of "compositional" was chosen
- Not clear there's enough detail to replicate the experiments exactly.
Other Comments Or Suggestions: I don't think you need to keep adding quotes around the word "superpose".
Questions For Authors: Can you say more about equation 8?
Is code available for this? If so, it would help in reproducing the results; apologies if I missed this in the text.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and encouraging feedback on our work. We greatly appreciate the insightful comments and constructive questions raised, and we respond to each point concisely below.
**Definition of Compositionality and OOD Generalization**
We thank the reviewer for this accurate observation. Indeed, compositional generalization, or “OOD generalization” more generally, is a very broad term. For a task that involves extrapolating far from the training regime (as studied in our work), a central challenge is the lack of well-defined “ground truth.” Thus, we adopted a setting where OOD generalization explicitly involves combining two independently varying factors, representing perhaps the simplest compositional scenario. Interestingly, even in this minimal setup, our results illustrate that neural networks struggle without additional interventions, highlighting broader challenges in compositional generalization. There are indeed many other forms of compositionality; see our response to Reviewer 3 (iydz) regarding novelty for a discussion of interactive vs. non-interactive compositionality.
In terms of the model's superposition strategy: while the model indeed learned a highly nontrivial, compositional solution, that solution operates at the activation level rather than the pixel level, which leads to failure of the desired generalization performance and reveals that the model is memorizing the ID data rather than breaking them down into composing factors. Our definition of compositionality and our performance metrics aim at assessing the model’s ability to construct novel compositions from factors of independent variation. In the superposition case, the model fails to construct novel compositions correctly.
**Experimental Details and Reproducibility**
We acknowledge the importance of reproducibility and will provide precise architectural and training details in the revised manuscript. Specifically, we will clarify the architecture of the neural networks used for generating bump functions, including exact layer sizes, activations, optimization algorithms, hyperparameters, and training protocols. Additionally, we will release the code to ensure complete reproducibility and facilitate further exploration of our findings. We have explored a plethora of different input formats and model depths. The architecture details for a sample CNN and an MLP taking bump-encoding inputs are given below:
| Architecture | Input Dimensions | Output Dimensions | Layers | Hidden Layer Size | Parameter Count (example) | Activation | Optimizer & LR |
|--------------|---------------------------|-------------------|--------------------------------------------------|-------------------|---------------------------|------------|----------------|
| **CNN** | 56 (reshaped to 56×1×1) | 1×28×28 | ConvTranspose2d (upsampling from 7×7 → 28×28) | 64 channels | ~315K (4 hidden layers) | ReLU | AdamW (1e-3) |
| **MLP** | 56 | 1×28×28 | Fully-connected Linear layers | 256 units | ~272K (3 hidden layers) | ReLU | AdamW (1e-3) |
We will include this summary table in the revised manuscript to clearly communicate the architectural details and facilitate reproducibility.
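As a concrete illustration of the toy task itself (not the authors' code), a minimal plain-Python sketch of how a 2D Gaussian bump target can be composed from two 1D Gaussians; the canvas size (28) and width `sigma` are illustrative assumptions:

```python
import math

def gaussian_1d(center, size=28, sigma=2.0):
    """1D Gaussian bump evaluated at each pixel index."""
    return [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(size)]

def bump_2d(x, y, size=28, sigma=2.0):
    """size x size target image built as the outer product of two 1D Gaussians,
    so the 2D bump factorizes exactly into x- and y-components."""
    gx = gaussian_1d(x, size, sigma)
    gy = gaussian_1d(y, size, sigma)
    return [[gy[row] * gx[col] for col in range(size)] for row in range(size)]

# A bump centered at column x=10, row y=20 peaks at exactly that pixel.
img = bump_2d(10, 20)
```

This pixel-level factorization is the "correct" compositional map the rebuttal refers to: a model that learns it generalizes to any (x, y), including OOD combinations.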
**Clarification of Equation (8)**
We thank the reviewer for pointing out this potential source of confusion. The vectors and eigenvalues referenced in Equation (8) are specifically used to construct the weight matrix within our network. This formulation enables targeted regularization of different components of the weight matrix more efficiently, promoting factorization and preventing representational warping. This is especially helpful since a separate 2D embedding matrix is dedicated to each dimension in the factorized input, which encourages the model to learn pixel-level factors. Indeed, the low-rank regularization of these embedding matrices encouraged the model to find “stripe-like” factors as shown in Fig. 3b. In the revised manuscript, we will elaborate further on Equation (8), explicitly explaining the mathematical intuition behind this approach and clarifying how it aids compositional generalization.
**Minor comments**
We will revise the manuscript to get rid of quotation marks around superposition accordingly.
In summary, we sincerely thank the reviewer again for these valuable comments, which significantly help enhance our manuscript. We are confident that incorporating these suggestions will improve the clarity, rigor, and accessibility of our paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the additional information, which will improve the paper. Note that I am keeping my rating the same, because the main issue, in my view, is the nature of the toy task, and the assumptions about what a "correct" OOD generalization is. As described in my initial review, the solution the network found is justifiable even at the pixel level; this is an ambiguous task. Perhaps for a human, the fact of a bump being "connected" is extremely salient, so a generalization that doesn't preserve connectedness of the output seems somehow "wrong". But there's really no reason to think connectedness should be salient to a learner in this context.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the helpful feedback! While we agree that "connectedness" is a subjective measure of success, it reveals whether the model has learned the compositional causal models underlying the data. If the model has learned to construct data via the correct compositional maps (e.g., composing two 1D Gaussians), then it will generalize, meaning it constructs ID and OOD data in the same fashion. Our toy task setting is well suited to demonstrating the failure mechanisms precisely because of its simplicity; this would have been impossible in a more complex setting. Indeed, as you mentioned, the model does something unexpected and nontrivial when asked to compositionally generalize: it memorizes the ID data. Further, we showed that with techniques such as regularization and data augmentation, we can bias the model toward learning compositional solutions. Hence, we are convinced that such toy studies are of value for understanding disentanglement and compositionality.
Claims And Evidence: 1. Factorization alone, independent of input encoding formats, is not sufficient for comp. gen.
- This claim is mostly well supported by the data, especially Fig. 1b-c.
- More comprehensive numbers (e.g., MSE ID/OOD for different encodings in a table) would have been nice to see.
- The different input encodings are not exhaustive, e.g., what about simple (normalized) scalar inputs instead of 1-hot encoding?
- Does the shape of the ID/OOD region play any role? E.g., if the ID region was a diagonal strip as in [3] (see below), correlation of factors in the training data might be an additional issue.
2. Failures are due to memorization and "superposition" of seen data points.
- This claim is somewhat supported by the plots in Fig. 1b-d.
- The kernel perspective from Sec. 3.3 and Fig. 1e should give a more comprehensive perspective, but it is somewhat unclear to me what Fig. 1e depicts, see below.
3. Manifold warping ruins factorization
- This claim is clear and well supported by the evidence in Fig. 2.
4. Architecture/regularization can encourage compositional generalization.
- This claim is mostly supported by the visualizations in Fig. 2 and especially the ablation in Fig. 2d.
- However, the visualizations in Fig. 2b,c are for one specific model only. A more comprehensive comparison, e.g., in terms of average ID/OOD performance over multiple models with/without regularization in a table could give increased certainty that the results hold in general.
5. Dataset augmentations can encourage comp. gen.
- As with 4, this claim is mostly supported by the results, but more general results averaged over multiple models would increase confidence in the visualizations.
Methods And Evaluation Criteria: The proposed evaluations make sense for the most part. However, I have some questions about the evaluation setup in Sec. 4.2.:
- Was the fixed coordinate still an input to the network?
- I find the stripes somewhat problematic since, by definition, their superposition results in the Gaussian blob. If the model simply memorizes and superimposes factors as before, this would mostly "solve" the OOD case, as we can see in Fig. 7b with $p=0$%.
- What could this augmentation look like for other types of compositional data? I'm having a hard time imagining an analogy to the 1d stripes for factors such as "color", "shape", "size" in, e.g., a sprites setting.
- How does this compare to model performance when restricting the ID set to x/y combinations with 1 fixed coordinate, but where the output is still a bump (i.e., the dataset consists solely of bumps on the left with varying y, or bumps on the bottom with varying x)? I find this to be there more widely applicable data augmentation that also provides the model with independent influence of each factor.
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: The analyses are mostly clear, except for:
- Sec. 3.3/Fig. 1e: What does "agreement between the binary factorized kernel and the similarity matrix between the OOD and ID generated samples" mean? It would be helpful if this could be formalized in an equation. What regions of the complete plots in Fig. 8 does Fig. 1e correspond to? What regions in Fig. 1e correspond to ID/OOD? From the y-axis label it seems like the entire plot corresponds to OOD samples?
Supplementary Material: I checked App. D,E for additional details.
Relation To Broader Scientific Literature: In my understanding, the main and titular observation that disentangled representations are not sufficient for comp. gen. is well known in the literature and has been shown in multiple prior works, including Wiedemer et al., 2023 (cited in the paper) as well as [1,3,4] (see below).
Wiedemer et al., 2023, and [3,4] specifically show theoretically that compositional inputs _alone_ are not enough and additional assumptions in the form of regularizations, architectural constraints, or conditions on the training data are required.
That said, the additional kernel and transport perspectives on how disentangled representations might be "diluted" throughout a model, and the (to my understanding) novel regularization scheme in Sec. 4.1. are still interesting, but the main claim should be adapted to properly reflect prior work, which might also have to be reflected in the title.
Essential References Not Discussed: - [1] _Montero et al., 2022, Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation_. This is a follow up to Montero et al., 2021, which the authors cited. It investigates whether failure to generalize compositionally is largely due to a model's inability to disentangle inputs (the encoder), or its inability to generate new compositions (the decoder) and finds decoder errors to be most prominent.
- [2] _Schott et al., 2022, Visual Representation Learning Does Not Generalize Strongly Within the Same Domain_. This paper is similar to Montero et al., 2021, but focuses on the encoder rather than the decoder
- [3] _Wiedemer et al., 2023, Provable Compositional Generalization for Object-Centric Learning_. Similar to [1], this paper shows decoder errors to be responsible for failures to generate compositionally, which can be overcome by additional architectural constraints. The paper discusses how models that disentangle latent factors can be guaranteed to robustly compose them, albeit with a focus on object-centric learning methods.
- [4] _Lachapelle et al., 2023, Additive decoders for latent variables identification and cartesian-product extrapolation_. Like [3], the authors explore compositional generalization in object-centric learning.
Specifically, [1], [3], and [4] hint at the finding that factorization must be actively maintained, e.g., via additional architectural constraints.
Other Strengths And Weaknesses: As outlined above, I believe the main contribution (in its current phrasing) is not novel, however, additional insights provided in Secs. 3.3-4.2 are still interesting.
Other Comments Or Suggestions: - I would suggest picking either "disentangled" _or_ "factorized" and using it consistently throughout
- unclear how the OOD coordinates in Fig. 1e come to be
- Fig. 2a-b, e-f and Fig. 4e are too small to read. There is white space to both sides, a different arrangement of plots might be possible.
- Page 6, left, first paragraph ends in incomplete sentence
Questions For Authors: Please refer to the questions above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Input Encoding Types**
We appreciate the reviewer's suggestion regarding the comprehensiveness of input encoding formats. Indeed, we have included both rate-based (scalar) and population-based encodings (e.g., 1-hot, Gaussian bumps, and positional encoding), as depicted in Fig. 1. These variations produced qualitatively similar outcomes. We will explicitly clarify this range of encodings and add a table summarizing ID/OOD MSE comparisons for clarity in the revision.
**Shape of the OOD Region**
We agree the shape of the ID/OOD region could influence results. To address this concern, we previously conducted experiments comparing "circular" and "square" OOD regions, which showed no qualitative differences. Regarding correlated factors (e.g., diagonal strips), our primary focus was on the simpler case of independent factors, as even this simpler scenario already presented significant challenges for compositional generalization. We will clarify this reasoning explicitly in our revised manuscript.
**Quantitative Averaging Over Multiple Runs**
We confirm that quantitative results shown in Figures 3 and 4 are indeed averaged over multiple experimental runs. To enhance transparency and reader confidence, we will include a clearly labeled table of these averaged ID/OOD performance metrics in the revised manuscript.
**Fixed Coordinate Inputs**
We appreciate the opportunity to clarify the input setup. The "fixed" coordinate for the stripes was provided explicitly as -1 (off-canvas), ensuring clear separation between stripe and bump datasets. This detail will be clearly stated in the revised manuscript.
**Potential Issues with Stripe Augmentation**
We appreciate the reviewer’s insightful concern about the stripe augmentation potentially promoting superposition-based memorization. The key significance of our stripe augmentation experiment lies precisely in demonstrating compositional generalization: the model generates correct 2D bumps in OOD regions where it never previously observed bumps during training. Even if stripes are memorized individually, the model successfully leverages this information to construct completely novel, unseen bump configurations. Thus, the stripes serve as a spatial scaffold, facilitating pixel-level signals necessary for learning factorized variations within entirely unseen regions. We will highlight this clearly in our manuscript.
**Generalizing Data Augmentation Strategy**
Please refer to our response to Reviewer 3 (iydz) regarding the same point.
**Fixed Coordinate Bumps Experiment**
We appreciate the reviewer’s insightful suggestion. While practically feasible, this approach might not encourage compositional generalization effectively because each Gaussian bump inherently includes both coordinates, regardless of the dataset's internal variation. The key challenge for the model is still establishing the correspondence between different input dimensions and pixel-level output variations. We will explicitly discuss this consideration in our manuscript revision, clarifying the challenge presented by compositional product structures.
**Clarification on Figure 1e and Section 3.3**
We apologize for any ambiguity regarding Fig. 1e. To clarify, Fig. 1e visualizes similarity between representations of OOD-generated samples (sorted by coordinates) and an idealized binary kernel (similarity=1 if coordinates match exactly, 0 otherwise). The "agreement" quantifies how closely the learned similarity matrix aligns with this ideal binary factorized kernel. Specifically, Fig. 1e illustrates overlap between Gaussian bump images centered at coordinates (x,y) and (x′,y’), comparing the ideal binary kernel (top panel) to actual model outputs (bottom panel). Fig. 1e corresponds to the top-left sections within matrices shown in Fig. 8. We will clearly state equations defining "agreement," explicitly describe the data subset (OOD samples), and provide a detailed caption to eliminate confusion.
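To make the clarified "agreement" notion concrete, here is a hedged plain-Python illustration (the paper's exact metric may differ; the mean-absolute form below is an assumption) of the ideal binary factorized kernel and an agreement score between it and a similarity matrix:

```python
def binary_kernel(coords):
    """Ideal kernel: similarity 1 if two samples share the same (x, y)
    coordinates, 0 otherwise."""
    n = len(coords)
    return [[1.0 if coords[i] == coords[j] else 0.0 for j in range(n)]
            for i in range(n)]

def agreement(sim, ideal):
    """Mean agreement between a learned similarity matrix and the ideal
    binary kernel (1.0 means a perfect match entry-by-entry)."""
    n = len(sim)
    total = sum(1.0 - abs(sim[i][j] - ideal[i][j])
                for i in range(n) for j in range(n))
    return total / (n * n)

coords = [(1, 1), (1, 2), (2, 2)]   # hypothetical (x, y) sample coordinates
ideal = binary_kernel(coords)
perfect = agreement(ideal, ideal)    # a matrix identical to the ideal agrees fully
```

In Fig. 1e terms, `sim` would be the similarity matrix of the model's OOD-generated samples (sorted by coordinates), and agreement measures how closely it matches `ideal`.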
**Other References**
We thank the reviewer for highlighting important references and will ensure their inclusion and proper discussion in the revised manuscript.
**Regarding Novelty and Main Contribution**
We agree with the reviewer’s insightful point that the observation "disentanglement alone is insufficient for compositional generalization" is not novel. Our primary contribution is identifying and explaining the failure mechanism (superposition and manifold warping), providing mechanistic insights into why factorized latent representations fail to generalize compositionally. Please refer to our response to Reviewer 3 (iydz) regarding the same point. To clearly communicate this, we will revise our manuscript title and abstract accordingly.
**Additional Comments and Suggestions**
We will revise the manuscript accordingly.
Once again, we sincerely thank the reviewer for these valuable suggestions, which will undoubtedly strengthen our paper's clarity and impact. | null | null | null | null | null | null |
Observation Interference in Partially Observable Assistance Games | Accept (poster) | Summary: The paper studies POAG where human and AI assistant only have partial observations. It shows that an optimal assistant who aims to maximize human reward needs to take observation-interfering actions, defined as an action showing a subset of information to human, for 3 purposes:
1. Communicate AI’s private information
2. Query human’s private information
3. Help Boltzmann-rational humans
## update after rebuttal
My assessment has not changed after the rebuttal from the authors -- I believe that this paper makes a useful contribution to the field.
Claims And Evidence: Section 3 sets up POAG, section 4 supports purpose 1, section 5 supports purpose 2, section 6 supports purpose 3, section 7 runs an experiment evaluating the pros and cons of interfering for purposes 1 and 3.
Methods And Evaluation Criteria: The authors use a very simple but effective hypothetical example to illustrate the framework. There is no real-world data to evaluate the ideas.
Theoretical Claims: The theoretical claims appear sound to me.
Experimental Designs Or Analyses: The experiments in Section 7 were intuitive and easy to follow.
Supplementary Material: I did not review this section.
Relation To Broader Scientific Literature: The paper provides a useful contribution to the literature by reviewing the conditions in which it is rational for AI to withhold information from a human user. This runs counter to the prevailing idea that the AI should reveal all available information. This paper might lead to interesting extensions where the AI might withhold not only information about the world state but also other information such as AI explanations (as this type of information often leads people to make inferior decisions).
There are some conceptual similarities of this work to off-switch games (this is not an essential reference though):
Hadfield-Menell, D., Dragan, A., Abbeel, P., & Russell, S. (2017). The off-switch game. In Workshops at the thirty-first aaai conference on artificial intelligence
Essential References Not Discussed: NA
Other Strengths And Weaknesses: See comments above
Other Comments Or Suggestions: 1) major typo: line 232 if A can communicate with A H
2) minor typo: line 104 Lehrer extend(s), line 380 utilities and 1 and 0
3) p. 6, the paragraph that starts with "At first sight" is a bit unclear. Why present this unlikely human strategy?
4) I did not follow the logic of the values of the utilities (e.g., 1,0,4,7) on p. 7, the paragraph that starts with "Intuitively, both the tldr and man pages allow the human to choose optimally"?
Questions For Authors: 1) why didn’t the paper show an experiment for purpose 2?
2) For the experiment in section 7.1, there is no shared information correct? Would the presence of shared information change anything?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Thanks + response
Thank you for your review. We are glad you think our work “provides a useful contribution to the literature” with technical details that are “sound,” “effective,” and “intuitive and easy to follow.” We appreciate the questions you ask about some of our examples and experiments.
## 1.
> There are some conceptual similarities of this work to off-switch games (this is not an essential reference though): Hadfield-Menell, D., Dragan, A., Abbeel, P., & Russell, S. (2017). The off-switch game. In Workshops at the thirty-first aaai conference on artificial intelligence
Thank you for sharing this reference. We agree it shares conceptual similarities, and we will cite it.
**Action 1.1: We will add a citation to Hadfield-Menell’s et al.’s “The off-switch game.”**
## 2.
> why didn’t the paper show an experiment for purpose 2?
While we were limited by space constraints, we think this is an exciting direction for future work. Thus:
**Action 2.1: We will discuss experiments for purpose 2 as a direction for future work in the “limitations and future work” section of our paper.**
## 3.
> major typo: line 232 if A can communicate with A H
We will fix this typo.
> minor typo: line 104 Lehrer extend(s), line 380 utilities and 1 and 0
We believe “Lehrer extend” is correct (see [rule 20 here](https://www.cs.columbia.edu/~hgs/etc/writing-bugs.html#:~:text=Note%20that%20%22et,%22shows%22.)) and will fix the line 380 typo.
> p. 6, the paragraph that starts with "At first sight" is a bit unclear. Why present this unlikely human strategy?
We were worried that Example 5.1 might be misconstrued as a counterexample to Theorem 4.2. We included this “At first sight” paragraph in an attempt to clarify why Example 5.1 is still consistent with Theorem 4.2.
**Action 3.1: To clarify this, we will do the following:**
* Remind the reader of the content of Theorem 4.2: “which states that non-interfering optimal policy pairs always exist”.
* At the end of the “At first sight” paragraph, add: “The key point of Example 5.1 is therefore that---while there is _some_ optimal policy pair without observation interference---there is no plausible optimal policy pair that avoids observation interference. More specifically, we use the notion of acting naively (Definition 3.7) to express this notion of plausibility and rule out the above policy. We thus obtain the following proposition, which states that in some POAGs, if we want to play an optimal policy pair and we want H to be able to act naively, then A has to interfere with observations.”
> I did not follow the logic of the values of the utilities (e.g., 1,0,4,7) on p. 7, the paragraph that starts with "Intuitively, both the tldr and man pages allow the human to choose optimally?
We agree that the logic of the utilities for Example 6.2 should be clearer.
**Action 3.2: To address this, we plan to discuss the utilities in Example 6.2 itself, rather than after it.**
* In Example 6.2, we will spell it out in more detail as follows: “Specifically, the worse flag always yields a utility of 0. The better flag either yields a utility of 1 or a utility 7.”
* We will also expand the parenthetical “i.e., the exact state: which flag is better and whether its utility is 1 or 7”.
* And at the end of the example, write, “Therefore, the expected utility of the better flag is 4 (and the utility of the worse flag is 0).“
> For the experiment in section 7.1, there is no shared information correct? Would the presence of shared information change anything?
That is correct: there is no shared information. The presence of shared information would not necessarily change anything. As long as A has private information that is useful to the human, then it can still have an incentive to interfere with observations for purpose 1. And as long as A can make H’s decision easier by interfering, then A can still have an incentive to interfere with observations for purpose 3. This holds true even in the presence of shared information.
## What do you think?
Thank you again for taking the time to review the paper and providing helpful feedback! **Do the above actions provide clarity to your questions?** If not, what further clarification or modifications could we make to improve your score? | Summary: This work studies a two player decentralised POMDP called Partially Observable Assistance Games. In this game, the authors study cases where it might be beneficial to one of the player (called assistant) to "interfere" with the observations of the other player (called human). They also identify situations, where such an undesirable interference should not happen (e.g. without any private information, or with a free communication channel).
For a different definition of observation interference though (at policy level vs action level), they provide a result guaranteeing the existence of interference free optimal policies.
Lastly, this work shows that observation interference might also be optimal in the case of irrational or 'naive' human player.
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Studying situations where hiding some private information, even in collaborative games, is of strong interest in my opinion. I think this is even more general than this "assistance game" motivation.
My main concern is about the clarity of the provided mathematical framework. Notably, there are a lot of definitions, notations and objects at stake and it is hard to understand the real meaning of all of them.
As an example, the real difference between Definition 3.2 and Definition 4.6 is unclear to me (notably because it does not affect state transition), even after reading the illustrating example. In consequence, I don't understand why this should lead to a significant difference in the optimal policy pairs, given by Theorem 4.7 vs Proposition 4.4.
After looking more carefully at the appendix, this seems more due to some subtleties in the definitions than a fundamental difference between these definitions.
Probably related is my impression that the POMDP framework might be overcomplicated for the messages that the authors want to convey. Indeed, when looking at the mathematical definition of the example provided along the paper, this example corresponds to a degenerate POMDP (the stationary distribution cycles in a dummy terminal state). I thus feel that a much simpler problem formulation, e.g., in a normal form game, could have led to the same kind of conclusions. As a consequence, the messages of this paper are hard to grasp, due to an unnecessary (in my opinion) level of complexity in the setting, which requires the introduction of numerous definitions and notations.
Instead, I would have preferred a much simpler setting that draws similar conclusions.
Additionally, the example given along the paper indeed seems a necessary part to grasp all the presented concepts. However, I find this specific example unclear at some points, and it didn't really help me understand in the end (I particularly did not understand the example after Theorem 4.7).
Lastly, I am not sure of the relevance of this setting to RLHF, and I would need more explanation to really think this setting could apply to RLHF.
---------------------
# Minor comments
It seems in the paper that the assistant first takes its action, the human observes it, and then takes an action based on the assistant's action. I did not see this clearly stated in the paper though.
Definition 2.1: please give the full name for DecPOMDP at least once
Line 132: for what utilities are they Nash Equilibria?
I don't get how Propositions 2.2 and 2.3 are actual propositions. They are just probabilistic statements (no need for a formal environment here in my opinion).
In definition 3.1, what is the definition for a stochastic function?
Line 231 (right): I think there is a typo, it should be "if A can communicate with H"
Questions For Authors: What is so specific to POMDP in the final conclusions drawn in this work? It seems to me that all the statements are somehow consequences of statements for stateless games (eg normal form game)
See other questions above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Thanks + response
Thank you for your review. We are glad you find our setting “of strong interest,” and we appreciate your help with the clarity and simplicity of our paper.
## 1.
> real difference between Definition 3.2 and Definition 4.6 is unclear… I did not understand the example after Theorem 4.7.
Thank you - we want to be sure we clearly communicate these important points. We will:
**Action 1.1: Extend our discussion on line 290 of the example after Theorem 4.7 to more thoroughly explain how Definition 4.6 differs from Definition 3.2.** The revised discussion will read:
“We now revisit Example 4.3. For observation tampering under our earlier Definition 3.2, H simply knows that A has taken the *action* to suppress some versions of cuda. However, H does not know anything about A’s *policy*. For all H knows, A’s policy could be to randomly suppress cuda versions or to always suppress the same cuda version. Thus, suppressing any version is strictly less informative for H than the list of all available versions. This is why Definition 3.2 calls suppressing versions ‘tampering at the action level.’
The key difference with Definition 4.6 is that H knows A’s policy. Suppose that A’s policy $\pi^\mathbf{A}$ is to suppress exactly the versions of cuda that are incompatible with the other software in the environment. Because H knows that A suppressed the incompatible cuda versions, seeing the filtered list tells H which versions of cuda are compatible! Although suppressing versions is strictly less informative under Definition 3.2 (when H doesn’t know A’s policy), suppressing versions provides H with new information under Definition 4.6 (when H knows A’s policy). Accordingly, $\pi^\mathbf{A}$ is interfering with observations at the action level *but not at the policy level*.”
**Action 1.2: Move the discussion of the example after Theorem 4.7 (which is currently on Line 290) upward to line 280 so that it immediately follows Definition 4.6.** This way, the reader will immediately see an explanation of the intuition behind Definition 4.6 before Theorem 4.7 (which relies on Definition 4.6).
**Question 1.3: Do these revisions provide clarity? If not, please let us know, and we will work to clarify further.**
## 2.
> What is so specific to POMDP? … POMDP[s] might be overcomplicated…. A [much simpler] normal form game, could have led to the same conclusions.
We require a framework with private information and sequential play, which normal-form games lack. Observation interference (both Definitions 3.2 and 4.6) needs A to choose first, then H to observe something dependent on A's choice.
We study causes of observation interference which need:
* For Communication (Section 4): A observes privately → A acts → H observes → H chooses (Example 4.1)
* For Querying (Section 5): A acts → H observes → H acts → A observes → A acts (Example 5.1)
While extensive-form games could use fewer "dummy" states, they are similarly complex to DecPOMDPs, which are standard in the assistance games literature (Hadfield-Menell et al., 2016; Shah et al., 2020).
**Question 2.1: Does this clarify why we can’t simplify our setup?**
## 3.
> I would need more explanation to really think this setting could apply to RLHF.
**Action 3.1: At the start of Section 4.1, we will add the following explanation:**
“We can model RLHF within the POAG framework as follows:
* The assistant’s goal in RLHF is to satisfy the human’s preferences. In a POAG, this corresponds to the shared reward function $R$ which has a parameterization $\theta$ that only H knows.
* In RLHF, the assistant rolls out trajectories, and the human picks which trajectory is preferred. A POAG can model this by letting H observe pairs of trajectories explored by A but only giving H a binary action (to choose which trajectory H prefers).
* A’s final RLHF policy maximizes an estimate of $R$ based on a dataset of H’s preference comparisons (Lang *et al.* (2024), Proposition 4.1). In the POAG framework, A can compute this policy based on A’s observations of H’s binary actions.”
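As a toy sketch of the last bullet (the 2-d linear-reward setup and all names are our own illustration, in the spirit of Bradley-Terry preference estimation as in Lang et al.'s Proposition 4.1): H privately knows $\theta$, A only observes H's binary preference actions, yet A can recover $\theta$ up to a positive scale:

```python
import math
import random

random.seed(0)

def reward(theta, traj):
    # Shared reward R is linear in trajectory features, parameterized by theta.
    return sum(t * x for t, x in zip(theta, traj))

# H privately knows theta; A only observes H's binary preference actions.
theta_true = [1.0, -2.0]

# A rolls out pairs of trajectories (toy 2-d feature vectors).
pairs = [([random.random(), random.random()],
          [random.random(), random.random()]) for _ in range(500)]

def h_choice(pair, beta=5.0):
    # H's binary action: a Boltzmann-rational pick of the preferred trajectory.
    r0, r1 = reward(theta_true, pair[0]), reward(theta_true, pair[1])
    p1 = 1.0 / (1.0 + math.exp(-beta * (r1 - r0)))
    return 1 if random.random() < p1 else 0

data = [(pair, h_choice(pair)) for pair in pairs]

# A estimates theta by gradient ascent on the Bradley-Terry log-likelihood.
theta_hat = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0]
    for (t0, t1), choice in data:
        diff = [b - a for a, b in zip(t0, t1)]
        p1 = 1.0 / (1.0 + math.exp(-sum(w * d for w, d in zip(theta_hat, diff))))
        for i in range(2):
            grad[i] += (choice - p1) * diff[i]
    theta_hat = [w + lr * g / len(data) for w, g in zip(theta_hat, grad)]

# theta_hat now recovers the signs (and roughly the ratio) of theta_true.
```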
## 4.
For the minor comments, we will:
* Add to line 115 “POAGs inherit the generality of DecPOMDPs: POAGs can model games where H acts first, where A acts first, or where H and A act simultaneously.”
* Add the full name for DecPOMDP to Line 111.
* Add a clarification so that line 132 reads: “... Nash equilibria for the shared reward function R.”
* Add this definition of a stochastic function: “a function mapping observations to random variables over observations”.
* Fix the line 231 typo.
> how Propositions 2.2 and 2.3 are actual propositions
Appendix A needs these propositions to have names in order to refer to them and prove them. To clarify, we will add “(See Appendix A for proofs.)” to line 148.
## What do you think?
Thank you again for your review. **Do the above actions address your concerns?** If not, what further clarifications or modifications could we make to improve your score?
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answer. The proposed modifications for the revised version will surely help in the clarity of the paper and especially in understanding its relation with RLHF. | Summary: Paper studies the conditions under which an agent has an incentive to perform observation interference (take an action that returns a partial state observation to the human) even when the goals are aligned. Taking such an action seems, at surface level, adversarial and counterintuitive. However, the paper discusses conditions such as when the human makes decisions based on immediate reward, when the agent wants to reveal some private information, or when the human is irrational and restricting observation is a way of forcing them to be rational. All of these come at the cost of destroying state information, which creates the trade-off.
Claims And Evidence: yes, the paper is great to read.
Methods And Evaluation Criteria: yes.
Theoretical Claims: yes, and the general reasoning discussed in the paper is helpful to build intuition for the theory.
Experimental Designs Or Analyses: yes.
Supplementary Material: section E and G.
Relation To Broader Scientific Literature: yes, this is very relevant to assistant games.
Essential References Not Discussed: discussed.
Other Strengths And Weaknesses: I enjoyed reading the paper.
I have a few concerns :
1. What is the practical applicability of the work? Are there domains beyond the curated examples where the results of the paper can be applied? What is the main impact of these results in such domains?
2. What happens if A has multiple private information?
3. Are actions like attempting to open a door, for an agent trying to expose that the door is locked, a valid example of the agent exposing private information (let's say the human could never have been in the room)? If so, this action doesn't alter the environment but exposes private information. Therefore, in the presence of both such actions, observation-interfering and signaling, which would be preferred?
Other Comments Or Suggestions: see above comments.
Questions For Authors: see above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Thanks + response
Thank you for your review. We are glad you consider our work “very relevant” to the broader scientific literature and that “the general reasoning discussed in the paper is helpful to build intuition for the theory.” We appreciate the questions you raise about the practical applicability of our work along with the example scenarios that you pose.
## 1.
> What is the practical applicability of the work? Are there domains beyond the curated examples where the results of the paper can be applied? What is the main impact of these results in such domains?
Thank you for raising these important questions about the practical applicability of our work. To address these questions:
**Action 1.1: We will include a discussion of the practical applicability of our work in the “Impact Statement” section of our paper.** We propose the following text:
“AI assistants are being developed and deployed in settings where humans can only partially observe what’s happening. For example, AI assistants including ChatGPT, Claude, and Gemini can search the web while only returning summaries to users ([OpenAI, 2024](https://openai.com/index/introducing-chatgpt-search/); [Anthropic, 2025](https://www.anthropic.com/news/web-search); [Google, 2024](https://blog.google/products/gemini/google-gemini-deep-research/)). Moreover, the sorts of AI models powering these assistants are processing increasingly long inputs. Whereas the original ChatGPT model could only process 4096 input tokens, today’s Gemini 1.5 Pro can process 2,000,000 input tokens—which is roughly 100,000 lines of code, or 16 novels of average length in English ([Google, 2025](https://ai.google.dev/gemini-api/docs/long-context)). In the future, we anticipate that AI assistants will be deployed at increasing scale, independently taking more actions on behalf of users and processing increasingly long context lengths. We thus expect that over time, humans will have less and less ability to directly observe everything that’s happening.
The goal of our work is to lay a theoretical foundation for understanding when AI assistants have an incentive to interfere with human observations. Our results create a nuanced picture, suggesting that not all observation interference is inherently bad. In practice, we expect that AI assistants will exhibit observation interference for a mix of good and bad reasons. With this theory, our goal is to help practitioners disentangle these different incentives for observation interference when they emerge in practice.”
## 2.
> What happens if A has multiple private information?
While we prove all our “negative” results (results showing that observation tampering can happen) with simple examples in which both A and H make only a single observation, all our positive results apply to any POAG, and thus allow settings in which both A and H observe multiple times!
**Action 2.1: To clarify how general our setting is, we will add the following sentence to Section 2.1:**
“While all the examples in this paper are quite simple, note that the POAG setup, and thus all our positive results, is very general, allowing both H and A to observe private information multiple times, to take actions that influence both the state and each other’s observations, and so on.”
## 3.
> Are actions like, attempt to open door, for an agent trying to expose that the door is locked, a valid example for agent exposing private information, (let's say human could never have been in the room). If so, this action doesn't alter the environment but exposes private information. Therefore, in the presence of both such actions - observation interfering and signaling which would be preferred?
Thank you for proposing this great “locked door” example.
**Action 3.1: We will add the locked door example to our paper after Definition 4.6, with this text:**
“While in our examples, we will mostly consider actions that in some sense act directly on H’s observations, Definition 4.6 also considers the informational effects of physical actions. For example, if A (visibly) tries to open a door that A knows to be locked, then this reveals to H that the door is locked. Consequently, not trying to open the door (when A knows it to be locked) is an instance of observation interference in the sense of Definition 4.6. While having the same (null) effect on the state of the world, trying to open the door provides H with more information about the world.”
## What do you think?
Thank you again for taking the time to review the paper and providing helpful feedback. **Do the above actions address your questions about the paper?** If not, what further clarification or modifications could we make to improve your score? | Summary: This paper investigates observation interference by AI assistants in partially observable assistance games (POAGs), where both the AI and the human have limited information. The authors demonstrate that an optimal assistant may have incentives to interfere with observations to communicate private information, query human preferences when the human acts naively, and help the human make better decisions when the human is irrational. Defining observation interference as providing less informative signal, they show that while action-level interference may sometimes be necessary, policy-level interference is never required. This finding suggests that although observation interference involves sacrificing some information, it can benefit the human by facilitating the communication of more critical information. Experiments further explore the trade-offs influenced by the amount of the AI's private information and the degree of the human's rationality.
Claims And Evidence: Please refer to Strengths And Weaknesses.
Methods And Evaluation Criteria: Please refer to Strengths And Weaknesses.
Theoretical Claims: Please refer to Strengths And Weaknesses.
Experimental Designs Or Analyses: Please refer to Strengths And Weaknesses.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Please refer to Strengths And Weaknesses.
Essential References Not Discussed: Please refer to Strengths And Weaknesses.
Other Strengths And Weaknesses: **[Strengths]**
- This paper is well-organized and easy to follow, presenting formal definitions, solid proofs, examples, and experimental results.
- This paper addresses a crucial aspect of the human-AI value alignment problem by considering the more realistic scenario of partial observability. The theoretical analysis of observation interference and the three distinct incentives may have important implications for building trustworthy AI systems.
- Although the authors acknowledge some minor limitations in the definition of observation interference in Appendix G, the definitions, theoretical claims, and preliminary results appear mathematically sound.
**[Weaknesses]**
- The analysis of different incentives for observation interference often relies on specific assumptions about human behavior, such as naivety or adherence to the Boltzmann rationality model. The extent to which these assumptions accurately capture real human behavior across different contexts could affect the practical relevance of the findings. Additionally, the communication channels appear to assume that the human and the AI agent can perfectly understand each other, which may not hold in more complex scenarios.
- While the experimental model provides valuable insights, the paper would benefit from empirical validation with human subjects to test its theoretical predictions.
- The gap between theoretical optimality and practical implementation raises questions about the design of AI systems. A more in-depth discussion of how these insights could direct the development of AI assistants may enhance the paper.
Other Comments Or Suggestions: N/A
Questions For Authors: - Consider the computational complexity (NEXP-hard) of finding optimal policies in POAGs, what are your initial thoughts on how the theoretical insights from this paper could be practically applied to the design and implementation of real-world AI assistants, particularly those operating in complex and partially observable environments?
- Have you considered the ethical implications of designing AI systems that may optimally interfere with observations? What constraints might be appropriate to ensure that such interference does not lead to problematic patterns of interaction?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Thanks + response
Thank you for your review. We are glad you consider our work to "address a crucial aspect of the human-AI value alignment problem" and "have important implications for building trustworthy AI systems." We appreciate the discussion points you raised.
## 1.
> The analysis… relies on specific assumptions… [such as] the Boltzmann rational model.
In this first paper on observation interference in human-AI interaction, we seek to lay a theoretical foundation using simple, well-known human models. Our results consider several models, including optimal play, decisions based on immediate outcomes, and Boltzmann rationality. Many published papers consider only the Boltzmann model, which is standard in the literature. For example, Hong Jun Jeon et al. (NeurIPS 2020) consider 11 feedback types that all assume Boltzmann rationality. Similarly, Ramachandran and Amir (IJCAI 2007) and Laidlaw and Dragan (ICLR 2022) exclusively study this model. We hope our paper will be judged by a standard consistent with other published work.
**Action 1.1: In "Limitations and Future Work," we will add discussion of human modeling assumptions, suggesting additional models as a direction for future work.**
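For concreteness, the standard Boltzmann-rational choice model referenced above can be sketched in a few lines (the function name and default `beta` value are illustrative, not from the paper):

```python
import math

def boltzmann_policy(q_values, beta=1.0):
    """P(action) is proportional to exp(beta * Q(action)); beta -> infinity
    recovers optimal play, beta -> 0 recovers a uniformly random choice."""
    weights = [math.exp(beta * q) for q in q_values]
    z = sum(weights)
    return [w / z for w in weights]
```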
## 2.
> the communication channels appear to assume that the human and the AI agent can perfectly understand each other
We agree that communication is not a fully general solution to observation interference. But we don't view this as a weakness. If anything, our nuanced study would be less relevant if communication was a fully satisfying solution.
**Action 2.1: We will add the following paragraph on practical limitations of unbounded communication channels after Theorem 4.5:**
"One could argue that in practice, an unrestricted communication channel between A and H could usually be made available. However, Theorem 4.5 ignores various real-world obstacles. For one, it considers communication that incurs no cost, but realistically communication costs the human time and effort. Second, the optimal policy pair requires A to send information in a way that H can reliably understand and act on. We should expect that in practice, A and H sometimes cannot understand each other. Therefore, despite Theorem 4.5, we think observation interference is of broad practical relevance, even where A can, e.g., send text messages to H."
## 3.
> the paper would benefit from empirical validation with human subjects
We agree this would be valuable for future work, but we also believe that would be a different kind of paper. This first paper on the topic is about laying theoretical foundations.
**Action 3.1: In "Limitations and Future Work," we will discuss empirical validation with human subjects as a valuable future direction.**
## 4.
> Consider the computational complexity (NEXP-hard) of finding optimal policies in POAGs, how can the theoretical insights from this paper be practically applied to the real world?
**Action 4.1: We will add the following discussion to our paper:**
"How might our results apply given that finding optimal policies in POAGs is NEXP-hard?
Most of our paper is descriptive, characterizing when observation tampering could happen. Complexity considerations could affect these results in either direction. It's easy to construct environments where finding good observation-interfering policies is computationally intractable but constructing good non-interfering policies is easy; and vice versa. In practice, complexities of the environment can be orthogonal to incentives to interfere. For instance, a real-world version of the CUDA example is complex (A assesses complicated software compatibility issues), but the decision whether to interfere with observations is easy. We believe our characterizations remain useful even in complex environments (where we can't expect optimal policies), although we can't make as definitive claims as we can about optimal policies.
We have discussed allowing communication between H and A. A complexity-theoretic argument favors this solution: If H and A share all private information, the game effectively turns from a DecPOMDP into a POMDP. Solving POMDPs is PSPACE-complete and thus likely easier than solving DecPOMDPs."
## 5.
> the ethical implications of designing AI systems that may optimally interfere… What constraints might be appropriate?
Please see Action 1.1 in our response to Reviewer fCAi, where we discuss implications. Our theory reveals that optimal assistants must sometimes interfere with observations, so interference is a nuanced issue. Appropriate constraints might include transparency about when interference occurs, alignment with user goals, and user controls to override interference when desired. As you suggest, future work with human subjects is needed to refine these constraints in practice.
## What do you think?
Thank you again. **Do the above actions address your concerns?** If not, what further changes could we make to improve your score? | null | null | null | null | null | null |
One Diffusion Step to Real-World Super-Resolution via Flow Trajectory Distillation | Accept (poster) | Summary: The paper introduces FluxSR, a novel one-step diffusion model for real-world image super-resolution (ISR), leveraging flow trajectory distillation (FTD) to distill a multi-step diffusion model into a one-step model. The authors propose several innovations, including TV-LPIPS as a perceptual loss and attention diversification loss (ADL) to reduce high-frequency artifacts. The method achieves promising performance in both quantitative and qualitative evaluations, outperforming existing one-step and multi-step diffusion-based Real-ISR methods.
Claims And Evidence: The paper provides ample empirical evidence to support its claims, including quantitative results, qualitative comparisons, ablation studies, and theoretical justifications. The proposed method, FluxSR, demonstrates its improvements over existing approaches in terms of image quality, computational efficiency, and artifact reduction. However, the paper also acknowledges some limitations, such as the computational cost of training and the presence of periodic artifacts, which could be addressed in future work.
Methods And Evaluation Criteria: The paper employs a comprehensive set of methods and evaluation criteria to demonstrate the effectiveness of FluxSR.
- The proposed FTD, TV-LPIPS perceptual loss, and ADL are key innovations that contribute to the model's superior performance.
- The evaluation includes both quantitative metrics and qualitative comparisons, along with thorough ablation studies to validate the contributions of each component.
- The results show that FluxSR achieves SOTA performance in real-world ISR with only one diffusion step.
Theoretical Claims: The paper provides a solid theoretical foundation for the proposed FluxSR model, supported by flow matching theory, mathematical formulations, and empirical evidence. The key innovations (FTD, TV-LPIPS perceptual loss, and ADL) are well-justified and contribute to the model's superior performance in real-world ISR. The theoretical claims are validated through extensive experiments, ablation studies, and visual comparisons, demonstrating the effectiveness of FluxSR.
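As background for the flow-matching framing (a generic toy sketch, not the paper's FTD itself): along a straight, rectified trajectory $x_t = (1-t)x_0 + t x_1$, the ideal velocity field is the constant $x_1 - x_0$, so a single Euler step over $t \in [0, 1]$ reaches the data point exactly, which is what makes one-step inference possible:

```python
def euler_one_step(x0, velocity):
    # One-step generation: a single Euler step of dx/dt = v(x, t) over t in [0, 1].
    return [x + v for x, v in zip(x0, velocity)]

x0 = [0.5, -1.0]                        # "noise" sample (toy 2-d)
x1 = [2.0, 3.0]                         # "image" sample
v = [b - a for a, b in zip(x0, x1)]     # ideal straight-line velocity
x1_hat = euler_one_step(x0, v)          # recovers x1 exactly
```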
Experimental Designs Or Analyses: The paper employs a comprehensive set of experimental designs and analyses to validate the effectiveness of the proposed FluxSR model. The experiments include quantitative evaluations, qualitative comparisons, ablation studies, and comparisons with non-diffusion methods. The results demonstrate that FluxSR achieves SOTA performance in real-world ISR with only one diffusion step, while also addressing key challenges such as computational efficiency and artifact reduction. The ablation studies provide insights into the contributions of individual components (FTD, TV-LPIPS, ADL), and the visual comparisons highlight the model's ability to generate realistic and detailed images.
Supplementary Material: The supplementary material provides additional implementation details, visual results, and comparisons with GAN-based methods, further validating the effectiveness of the proposed FluxSR model. The visual comparisons highlight the model's ability to generate realistic and detailed images, while the quantitative comparisons with non-diffusion methods demonstrate its superior performance in perceptual quality metrics.
Relation To Broader Scientific Literature: The paper builds on and advances existing research in image super-resolution, diffusion models, and flow matching theory. By introducing flow trajectory distillation (FTD), TV-LPIPS perceptual loss, and attention diversification loss (ADL), the authors address key challenges in real-world SR, such as computational efficiency, artifact reduction, and perceptual quality. The proposed FluxSR model achieves SOTA performance with only one diffusion step, providing a new direction for real-world SR research.
Essential References Not Discussed: The references are relatively comprehensive.
Other Strengths And Weaknesses: - Strengths
1) The proposed flow trajectory distillation (FTD) is a novel approach that effectively bridges the gap between noise-to-image and LR-to-HR flows, preserving the generative capabilities of the teacher model while enabling efficient one-step inference.
2) The method achieves impressive results, outperforming existing one-step and multi-step diffusion-based methods across multiple datasets. The qualitative results demonstrate that FluxSR generates more realistic and detailed images compared to other SOTA methods.
3) By reducing the inference steps to one, FluxSR significantly reduces computational overhead and inference latency, making it more practical for real-world applications.
- Weaknesses
1) While the method reduces inference steps, the training process still requires significant computational resources, particularly due to the use of large models like FLUX.1-dev. This could limit its accessibility for researchers with limited resources.
2) Although the authors propose ADL and TV-LPIPS to address high-frequency artifacts, the paper acknowledges that periodic artifacts are not entirely eliminated. This suggests room for further improvement in artifact reduction.
3) The method relies heavily on the pre-trained FLUX.1-dev model, which may limit its generalization to other domains or tasks. The paper does not explore how well the method performs when applied to different types of image degradation beyond the tested datasets.
Other Comments Or Suggestions: None.
Questions For Authors: 1) Could the authors provide more details on the scalability of FluxSR? For instance, how does the method perform on larger or more diverse datasets, and what are the implications for real-time applications?
2) Given that periodic artifacts are still present, are there any plans to further refine the model to address this issue? Could the authors explore additional regularization techniques or architectural changes to mitigate these artifacts?
3) While the paper compares FluxSR with other diffusion-based methods, it would be beneficial to include a more detailed comparison with non-diffusion-based approaches, especially in terms of computational efficiency and real-world applicability.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Response to Reviewer mAgS (denoted as R5)
**Q5-1:** Could the authors provide more details on the scalability of FluxSR? For instance, how does the method perform on larger or more diverse datasets, and what are the implications for real-time applications?
**A5-1:** We generate 10k noise-image pairs on the 220k-GPT4Vision-captions-from-LIVIS dataset as new training data. After training with this larger dataset, there are no significant changes in the quantitative results. However, we observe that the model trained with the larger dataset generates fewer high-frequency artifacts in the actual generated images. For real-time applications, further optimization, such as model pruning, quantization, or other efficiency techniques, would help improve real-time efficiency without sacrificing performance.
**Q5-2:** Are there any plans to further refine the model to address this issue? Could the authors explore additional regularization techniques or architectural changes to mitigate these artifacts?
**A5-2:** Yes, we increase the weights of the anti-artifact losses. Specifically, we increase the weight of TV-LPIPS to 2 and the weight of ADL to 0.2. In practice, we do not observe the high-frequency artifacts anymore, which demonstrates the effectiveness of the proposed anti-artifacts losses. We will include these results into the paper.
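The exact TV-LPIPS formulation is not spelled out in this exchange; on one plausible reading, it couples a perceptual (LPIPS) term with a total-variation term that penalizes spurious high-frequency content such as the periodic artifacts discussed above. A minimal anisotropic TV term on a grayscale array (pure Python; the LPIPS part is omitted):

```python
def total_variation(img):
    # Anisotropic total variation of a 2-D grayscale image (list of rows):
    # the sum of absolute differences between horizontally and vertically
    # adjacent pixels. High TV flags high-frequency (e.g. periodic) content.
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv
```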
**Q5-3:** While the paper compares FluxSR with other diffusion-based methods, it would be beneficial to include a more detailed comparison with non-diffusion-based approaches, especially in terms of computational efficiency and real-world applicability.
**A5-3:** Thank you for your suggestion. We compare the computational efficiency and quantitative comparison of FluxSR with non-diffusion methods in the table below. Compared to non-diffusion methods, FluxSR produces significantly better visual results with higher image quality. Although FluxSR has higher computational complexity, it generates more realistic images. In highly degraded scenarios, if the user aims to generate high-quality images and is not too strict about inference speed, we believe FluxSR holds greater value.
| Methods | RealSR-JPEG | BSRGAN | ESRGAN | Real-ESRGAN | SwinIR | FeMaSR | FluxSR |
|-----------------|-------------|--------|--------|-------------|--------|--------|---------|
| **Inference time / s** | 0.042 | 0.042 | 0.042 | 0.042 | 0.200 | 0.082 | 0.228 |
| **MACs / T** | 0.294 | 0.294 | 0.294 | 0.294 | 0.478 | 0.476 | 11.71 |
| **# Params / B**| 0.017 | 0.017 | 0.017 | 0.017 | 0.027 | 0.033 | 11.99 |
*Complexity Analysis*
| Method | RealSR-JPEG | BSRGAN | ESRGAN | Real-ESRGAN | SwinIR | LDL | FeMaSR | **FluxSR** |
|--------------|-------------|--------|--------|-------------|--------|-------|--------|------------|
| **MUSIQ** | 50.54 | 65.58 | 42.37 | 63.22 | 63.82 | 63.22 | 64.88 | **70.75** |
| **MANIQA** | 0.2927 | 0.3887 | 0.3100 | 0.3892 | 0.3818 | 0.3897| 0.4017 | **0.5495** |
| **TOPIQ** | 0.4118 | 0.5812 | 0.3864 | 0.5368 | 0.5306 | 0.5358| 0.5736 | **0.6670** |
| **QAlign** | 3.4416 | 3.9730 | 2.9680 | 4.0442 | 3.9661 | 4.0038| 4.0162 | **4.2134** |
*RealSet65*
**Q5-4:** The paper does not explore how well the method performs when applied to different types of image degradation beyond the tested datasets.
**A5-4:** We evaluate FluxSR on the face restoration task. Although no face-specific training set was used, our method still achieves good results. The specific quantitative results are shown in the tables below.
| Method | RestoreFormer++ | VQFR | CodeFormer | DAEFR | PGDiff | DifFace | DiffBIR | OSEDiff | OSDFace | **FluxSR** |
|------------------|-----------------|--------|------------|--------|--------|---------|---------|---------|---------|------------|
| **MUSIQ** | 71.484 | 70.906 | 74.001 | 72.698 | 68.599 | 65.116 | 72.272 | 69.322 | 73.935 | **75.908** |
| **MANIQA** | 0.4902 | 0.4909 | 0.5034 | 0.4934 | 0.4460 | 0.4189 | 0.5839 | 0.4713 | 0.5162 | **0.6765** |
| **ClipIQA** | 0.6950 | 0.6769 | 0.6918 | 0.6696 | 0.5653 | 0.5737 | 0.7441 | 0.6321 | 0.7106 | **0.7604** |
*WebPhoto-Test*
| Method | RestoreFormer++ | VQFR | CodeFormer | DAEFR | PGDiff | DifFace | DiffBIR | OSEDiff | OSDFace | **FluxSR** |
|------------------|-----------------|--------|------------|--------|--------|---------|---------|---------|---------|------------|
| **MUSIQ** | 71.332 | 71.417 | 73.406 | 74.143 | 68.135 | 64.907 | 75.321 | 66.538 | 74.601 | **76.198** |
| **MANIQA** | 0.4767 | 0.5044 | 0.4958 | 0.5205 | 0.4531 | 0.4299 | 0.6625 | 0.4616 | 0.5229 | **0.6665** |
| **ClipIQA** | 0.7159 | 0.7069 | 0.6986 | 0.6975 | 0.5824 | 0.5924 | **0.8084** | 0.6235 | 0.7284 | 0.7847 |
*Wider-Test* | Summary: The paper introduces FluxSR, a novel one-step diffusion model for Real-ISR (Real-World Image Super-Resolution). The primary goal is to reduce the high computational cost associated with multi-step diffusion models while preserving high-quality image generation. The key innovation is Flow Trajectory Distillation (FTD), which transfers the generative capabilities of a large-scale T2I diffusion model (FLUX.1-dev) into a single-step framework. Additionally, TV-LPIPS loss is introduced to suppress high-frequency artifacts, and Attention Diversification Loss (ADL) is used to prevent repetitive patterns.
Claims And Evidence: 1. The claims about FluxSR's performance improvements are well-supported by quantitative and visual results in Tables 1, 2, and Figure 5.
2. The claim that FTD prevents distribution shift is plausible but not directly validated by additional distribution analysis.
3. The effectiveness of TV-LPIPS is quantitatively supported but lacks a visual comparison.
Methods And Evaluation Criteria: 1. The proposed FTD method is conceptually sound and effectively translates flow matching principles to super-resolution.
2. The evaluation criteria (PSNR, SSIM, LPIPS, DISTS, MUSIQ, MANIQA, TOPIQ, Q-Align) are appropriate for Real-ISR tasks.
3. However, adding inference speed and computational cost metrics would strengthen the evaluation.
Theoretical Claims: The Flow Trajectory Distillation formulation is mathematically consistent with flow matching theory. No formal proofs are included, but the derivations appear correct.
Experimental Designs Or Analyses: 1. The experimental setup is reasonable, using pre-generated noise-image pairs instead of real datasets.
2. The ablation study on loss functions is well-structured, but missing a visual comparison of TV-LPIPS.
3. A missing comparison of inference efficiency limits conclusions about computational benefits.
Supplementary Material: No Supplementary Material is provided.
Relation To Broader Scientific Literature: 1. The paper builds on prior diffusion-based Real-ISR methods (e.g., OSEDiff, SinSR, TSD-SR) and flow matching models (e.g., ReFlow, InstaFlow).
2. The discussion of one-step vs. multi-step models is well-grounded in prior research.
3. The connection to large-scale T2I diffusion models (e.g., FLUX, SDXL) is relevant.
Essential References Not Discussed: A comparison to alternative single-step SR methods (e.g., GAN-based SR models like ESRGAN, Real-ESRGAN) would help contextualize the approach.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and clearly structured.
2. The mathematical formulation of FTD is well-integrated with flow-matching theory.
3. Introducing ADL for SR is a new contribution.
Weaknesses:
1. Some claims about the differences between T2I and SR mapping require stronger justification. The paper argues that the T2I noise-to-image mapping differs significantly from the LR-to-SR degradation process, necessitating FTD. However, in recent large-scale T2I models, the final stages of denoising already address degradations similar to LR-to-SR, making the proposed motivation less compelling.
A more thorough analysis (e.g., comparing distributions of features extracted from T2I and SR models) would strengthen this claim.
2. No quantitative evaluation of computational efficiency. The paper claims FluxSR achieves efficient inference, but there is no quantitative comparison of inference speed, MACs (Multiply-Accumulate Operations), or parameter count. A table comparing these metrics against one-step and multi-step baselines would clarify the trade-offs between computational efficiency and performance.
3. Lack of a visual ablation for TV-LPIPS. The ablation study in Table 4 demonstrates that TV-LPIPS improves perceptual quality, but a visual comparison (before and after applying TV-LPIPS) would provide a more intuitive understanding. Adding side-by-side images showing the effect of TV-LPIPS vs. LPIPS alone would strengthen the justification.
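For context on weakness 3: the TV component of TV-LPIPS penalizes differences between adjacent values in order to suppress high-frequency artifacts. Below is a minimal image-space sketch of a TV term, not the paper's implementation (which applies the idea on top of LPIPS feature maps, so the exact formulation likely differs):

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Anisotropic total variation: sum of absolute differences between
    vertically and horizontally adjacent pixels."""
    return float(np.abs(np.diff(img, axis=0)).sum()
                 + np.abs(np.diff(img, axis=1)).sum())

flat = np.full((4, 4), 0.5)                   # smooth patch -> TV = 0
checker = np.indices((4, 4)).sum(axis=0) % 2  # high-frequency 0/1 pattern
```

A constant patch has zero TV while a checkerboard has the maximal TV for its value range, so penalizing TV discourages exactly the periodic high-frequency artifacts discussed in this weakness.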
Other Comments Or Suggestions: 1. Can you provide a quantitative comparison of inference time, MACs, and parameter count for FluxSR vs. other one-step and multi-step methods? This would help support the efficiency claims.
2. Can you provide a visual comparison of TV-LPIPS vs. LPIPS alone? This would help illustrate the effectiveness of the proposed perceptual loss.
3. How does the noise-to-image mapping in T2I models fundamentally differ from LR-to-SR degradations in practice? Could you provide a more detailed analysis to support the claim in Figure 2?
4. How sensitive is FluxSR to different training datasets? Would training on a different set of noise-image pairs alter its performance?
5. Can the proposed FTD method be extended to video super-resolution? Would additional temporal constraints be needed?
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: ## Response to Reviewer jQYF (denoted as R4)
**Q4-1:** How does the noise-to-image mapping in T2I models fundamentally differ from LR-to-SR degradations in practice? Could you provide a more detailed analysis to support the claim in Figure 2?
**A4-1:** Although $x_t$ in the diffusion process *appears* similar to an LR image, the two are fundamentally different. First, the distributions of degradations along these mappings differ. The value of $x_t$ is given by
$$
x_t = (1 - t) x_0 + t \epsilon,
$$
which is equivalent to adding Gaussian noise to the image. In contrast, an LR input is a real-world low-resolution image that undergoes complex and unknown degradation. In other words, LR images may not lie on the T2I trajectory, which is also a condition for the validity of Figure 2. Second, the degradations/noises are added in different domains: T2I adds noise in the latent space, whereas LR-to-SR can be regarded as adding degradations/noise to the HR images. We highlight that these differences directly motivate us to reduce the distribution shift and propose our FTD method.
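The interpolant above can be sketched in a few lines. This toy example (not the authors' code) makes the distinction concrete: $x_t$ is a pointwise Gaussian-noise mixture of the clean image, whereas a real LR input has passed through unknown degradations such as blur, downsampling, and compression:

```python
import numpy as np

def flow_interp(x0: np.ndarray, eps: np.ndarray, t: float) -> np.ndarray:
    """Flow-matching interpolant: x_t = (1 - t) * x0 + t * eps."""
    return (1.0 - t) * x0 + t * eps

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 1.0, size=(8, 8))  # stand-in for a clean image x_0
eps = rng.standard_normal((8, 8))        # Gaussian noise

# A point on the T2I trajectory: a Gaussian-noise mixture of x_0,
# not a blurred/downsampled/compressed LR observation.
x_mid = flow_interp(x0, eps, 0.5)
```

At t = 0 the trajectory starts at the clean image and at t = 1 it reaches pure noise, so no intermediate point matches the distribution of real-world LR images.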
**Q4-2:** Can you provide a quantitative comparison of inference time, MACs, and parameter count for FluxSR vs. other one-step and multi-step methods? This would help support the efficiency claims.
**A4-2:** Yes, we have compared FluxSR's computational complexity with other multi-step and single-step methods, including inference speed, MACs, and parameter count. Despite using a 12B-parameter model, the inference time of our method is less than twice that of the fastest one-step diffusion ISR method.
| Methods | StableSR | DiffBIR | SeeSR | SUPIR | ResShift | SinSR | OSEDiff | TSD-SR | FluxSR |
|--|--|--|--|--|--|--|--|---|--|
| Inference step | 200 | 50 | 50 | 50 | 15 | 1 | 1 | 1 | 1 |
| Inference time/s | 11.503 | 7.798 | 5.926 | 18.359 | 0.806 | 0.131 | 0.167 | 0.138 | 0.228 |
| MACs/T | 75.81 | 24.52 | 32.34 | 120.41 | 4.90 | 2.09 | 2.27 | 2.91 | 11.71 |
| # Params/B | 1.39 | 1.62 | 1.99 | 4.49 | 0.17 | 0.17 | 1.40 | 2.21 | 11.99 |
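The per-image latencies above would typically come from a wall-clock harness like the following sketch (the authors' exact measurement protocol, hardware, and batch size are not stated here):

```python
import time

def mean_latency(fn, warmup: int = 3, runs: int = 10) -> float:
    """Average wall-clock latency of fn() in seconds, after warm-up calls
    (warm-up avoids timing one-off costs such as lazy initialization)."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs
```

Usage would be e.g. `mean_latency(lambda: model(lr_image))` for a hypothetical `model`; GPU pipelines additionally need a device synchronization inside `fn` so that asynchronous kernels are actually counted.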
**Q4-3:** Can you provide a visual comparison of TV-LPIPS vs. LPIPS alone? This would help illustrate the effectiveness of the proposed perceptual loss.
**A4-3:** Thank you for your suggestion. We would like to show a qualitative comparison for the ablation studies; however, the rebuttal rules do not allow uploading images. In fact, our TV-LPIPS loss achieves better visual results than the LPIPS loss. As shown in Table 4 of the paper (the first two rows), this is verified by the improved metrics that measure visual quality, including MUSIQ, ManIQA, and Q-Align. We will include more qualitative comparisons in the revised paper.
**Q4-4:** How sensitive is FluxSR to different training datasets? Would training on a different set of noise-image pairs alter its performance?
**A4-4:** We generate 10k noise-image pairs from the 220k-GPT4Vision-captions-from-LIVIS dataset as new training data. After training with this larger dataset, there are no significant changes in the quantitative results. However, we observe that the model trained on the larger dataset generates fewer high-frequency artifacts in its outputs.
**Q4-5:** Can the proposed FTD method be extended to video super-resolution? Would additional temporal constraints be needed?
**A4-5:** Our FTD is indeed a promising approach for image SR, and we believe it could be extended to video SR with some adjustments. Incorporating temporal constraints (e.g., optical flow, temporal smoothness, feature alignment and propagation) between frames is crucial to avoid introducing flickering or artifacts across consecutive frames, and helps preserve motion consistency and smoothness. Due to the time limit, we leave it for future work.
**Q4-6:** No Supplementary Material is provided.
**A4-6:** In fact, we have provided supplementary material.
**Q4-7:** A comparison to alternative single-step SR methods (e.g., GAN-based SR models like ESRGAN, Real-ESRGAN) would help contextualize the approach.
**A4-7:** We have provided a comparison with non-diffusion methods in the supplementary material. Here, we present a comparison with BSRGAN, Real-ESRGAN, SwinIR, LDL, and FeMASR, as shown in the table below.
| Method | RealSR-JPEG | BSRGAN | ESRGAN | Real-ESRGAN | SwinIR | LDL | FeMaSR | **FluxSR** |
|--------------|-------------|--------|--------|-------------|--------|-------|--------|------------|
| **MUSIQ** | 50.54 | 65.58 | 42.37 | 63.22 | 63.82 | 63.22 | 64.88 | **70.75** |
| **MANIQA** | 0.2927 | 0.3887 | 0.3100 | 0.3892 | 0.3818 | 0.3897| 0.4017 | **0.5495** |
| **TOPIQ** | 0.4118 | 0.5812 | 0.3864 | 0.5368 | 0.5306 | 0.5358| 0.5736 | **0.6670** |
| **QAlign** | 3.4416 | 3.9730 | 2.9680 | 4.0442 | 3.9661 | 4.0038| 4.0162 | **4.2134** |
*RealSet65*

Summary: This paper improves on one-step diffusion-based super-resolution methods targeting the real-world image super-resolution (Real-ISR) task by distilling from a larger and more advanced base image generation model (FLUX), compared to existing works that use Stable Diffusion as a backbone. It introduces Flow Trajectory Distillation (FTD) to address the distribution shift issue of existing methods. It also proposes using total variation as a perceptual loss and the ADL proposed by Guo et al. to emphasize restoring high-frequency details and improving generation quality.
## update after rebuttal
I appreciate the authors' additional experiments and justifications, which have adequately addressed my concerns. From my perspective, it is reasonable that the proposed method improves only on a subset of metrics, as different approaches naturally focus on different aspects of the problem. This is why, despite noting limitations such as lower performance on metrics like PSNR and the presence of over-smooth qualitative results, I initially recommended a rating of weak accept. Overall, I believe the paper's contributions outweigh its limitations, and I am inclined to maintain my recommendation of weak acceptance. Thanks
Claims And Evidence: The TV-LPIPS component is claimed to emphasize the restoration of high-frequency components; however, in Figure 1, the proposed FluxSR method seems to generate over-smooth results that do not align with the original low-res image. For example, the helmet in the bottom row ignores all the high-frequency details that can be observed from the LR image.
Also, the ablation studies in Tables 3 and 4 only compare four of the eight metrics, so they may be too incomplete to demonstrate the effectiveness of each component. Besides, the reported PSNR is always inferior after incorporating this paper's designs, which weakens the justification of the claims.
Methods And Evaluation Criteria: Overall, the evaluation criteria used in this paper, such as the metrics and benchmark datasets, make sense to me. This paper includes a range of metrics for evaluation. It adopts the standard Real-ESRGAN degradation pipeline for creating the training data, and DIV2K-val, RealSR, and RealSet65 as the test sets; however, these do not include DRealSR, which is widely used in various baselines.
Theoretical Claims: The theoretical claims seem to make sense to me; however, I have not thoroughly checked or reproduced the derivations myself.
Experimental Designs Or Analyses: Overall, the experimental designs in this paper, such as the compared baselines, make sense to me. I acknowledge that there are many more one-step diffusion-based Real-ISR methods at the moment; however, I believe it is sufficient for the authors to only compare with the reported baselines.
Supplementary Material: I have fully reviewed the supplementary material.
Relation To Broader Scientific Literature: This paper mainly contributes to proposing the first one-step diffusion for Real-ISR based on a large model with over 12B parameters (FLUX.1-dev), highlighting its practical value.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Despite the practical contribution this paper makes, I am a bit concerned about the over-sharp qualitative results shown in the Appendix, as well as the inferior PSNR results in both the main table and the ablation tables: these suggest that the super-resolved images can hallucinate image details, which raises no-reference metrics at the cost of full-reference metrics like PSNR.
Other Comments Or Suggestions: In line 035 of the appendix, there is a question mark in a reference that needs to be fixed.
Questions For Authors: Please check out the previous sections regarding my questions and concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: ## Response to Reviewer ozHQ (denoted as R3)
**Q3-1:** Does not include the DrealSR that is widely used in various baselines.
**A3-1:** Thank you for pointing out this issue. The table below shows the quantitative comparison on the DRealSR dataset, where our FluxSR obtains significantly better results on the perceptual metrics.
| Method | DiffBIR | SeeSR | ResShift | SinSR | OSEDiff | TSD-SR | **FluxSR** |
|-|-|-|-|-|-|-|-|
| **PSNR** | 25.91 | 28.35 | 26.42 | 27.33 | 24.20 | 25.93 | 25.92 |
| **SSIM** | 0.6190 | 0.8052| 0.7310 | 0.7237| 0.7355 | 0.7423 | 0.7592 |
| **LPIPS** | 0.5347 | 0.3031| 0.4582 | 0.4444| 0.3429 | 0.3383 | 0.3418 |
| **DISTS** | 0.2387 | 0.1665| 0.2382 | 0.2262| 0.1763 | 0.1708 | **0.1628** |
| **MUSIQ** | 36.18 | 34.51 | 30.52 | 32.79 | 37.22 | 36.18 | **37.82** |
| **MANIQA**| 0.5059 | 0.4736| 0.3018 | 0.3907| 0.4793 | 0.4272 | **0.5310** |
| **QAlign**| 4.2402 | 4.2050| 4.2770 | 4.2704| 4.2503 | 4.2751 | **4.3356** |
*DRealSR*
**Q3-2:** Concern about the over-sharp qualitative results shown in the Appendix. Also, the cost of the aforementioned full-reference metric like PSNR.
**A3-2:** We agree that our FTD is not very effective on PSNR, but the hallucination issue is not severe, since most images still retain high content consistency with the LR input. We highlight that PSNR does not always align well with human visual perception, as has been widely shown in existing works. Moreover, when the input images are severely degraded, the "hallucinated" details generated by our method are reasonable and realistic, and no-reference metrics better reflect the perceptual quality of the images.
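For reference on this trade-off: PSNR is a pure MSE-based full-reference metric, which is why sharp but hallucinated textures lower it even when perceived quality improves. A minimal sketch of the definition:

```python
import numpy as np

def psnr(ref: np.ndarray, out: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = float(np.mean((ref - out) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
shifted = ref + 0.1  # a uniform error of 0.1 gives MSE 0.01, i.e. 20 dB
```

Because the metric only sees per-pixel squared error, a texture that is realistic but slightly misaligned with the ground truth is penalized exactly like noise.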
**Q3-3:** In line 035 of the appendix, there is a question mask during referencing that needs to be fixed.
**A3-3:** Thank you for pointing out this mistake. We will make the correction.
**Q3-4:** In Figure 1, the proposed FluxSR method seems to generate over-smooth results that do not align with the original low-res image. For example, the helmet in the bottom row ignores all the high-frequency details that can be observed from the LR image.
**A3-4:** We agree that the generated helmet is smooth, but it is still visually reasonable. We highlight that our FluxSR is able to adaptively adjust its generative ability according to the risk of generating artifacts, showing better robustness than existing methods. For example, the texture of the helmet is very blurry and carries a high risk of artifacts; existing methods consistently produce very poor results with severe distortions there. In contrast, for the nose and mouth of the face, where the textures can be easily imagined, our FluxSR works very well and produces detailed textures. Moreover, our FluxSR produces better visual details for text (see the second row of Figure 5).
**Q3-5:** Also, the ablation studies in Tables 3 and 4 only compare four of the eight metrics. Thus, it can be quite incomplete to show the effectiveness of each component. Besides, the reported PSNR is always inferior after incorporating this paper’s designs, which is detrimental to the justification of the claims.
**A3-5:** We expand the comparison to include additional metrics to provide a more complete evaluation of the performance, as shown in the table below. We highlight that our FTD consistently obtains better results on the metrics that measure visual quality. Regarding the PSNR results, it is important to note that PSNR is not always the best indicator of perceptual image quality, especially for severely degraded images. Our method focuses on enhancing perceptual fidelity and producing more visually realistic output, which is better captured by no-reference metrics. It may not always align with the traditional PSNR metric, but it contributes positively to the overall perceptual quality. We will clarify this trade-off in the revised manuscript, emphasizing that our method is optimized for visual realism rather than solely maximizing PSNR.
| Method | **PSNR** | **SSIM** | **LPIPS** | **DISTS** | **MUSIQ** | **MANIQA** | **TOPIQ** | **Q-Align** |
|--|--|--|--|--|--|--|--|--|
| w/o FTD | **26.33** | **0.7580** | 0.3801 | 0.2200 | 56.02 | 0.3775 | 0.4006 | 3.5170 |
| FTD (ours) | 24.67 | 0.7133 | **0.3324**| **0.1896**| **67.84** | **0.5203** | **0.6530**| **4.1473** |
*Ablation study on FTD*
| $\mathcal{L}_{\text{LPIPS}}$ | $\mathcal{L}_{\text{TV-LPIPS}}$ | $\mathcal{L}_{\text{EA-DISTS}}$ | $\mathcal{L}_{\text{ADL}}$ | **SSIM** | **LPIPS** | **DISTS** | **TOPIQ** |
|--|--|--|--|--|--|--|--|
| ✓ | | | | 0.6893 | 0.3459 | 0.2096 | 0.6242 |
| | ✓ | | | 0.6999 | 0.3369 | 0.1933 | 0.6387 |
| | | ✓ | | 0.7283 | 0.3423 | 0.1970 | 0.6400 |
| | | ✓ | ✓ | **0.7339**| 0.3332 | 0.1915 | 0.6427 |
| ✓ | | ✓ | ✓ | 0.7133 | **0.3324**| **0.1896**| **0.6530**|
*Ablation study on different loss functions*

Summary: The authors claim that most existing one-step diffusion methods are constrained by the performance of the teacher model, where poor teacher performance results in image artifacts. To this end, the authors propose a one-step diffusion Real-ISR technique, namely FluxSR, based on FLUX.1-dev and flow matching models. The authors introduce Flow Trajectory Distillation (FTD) to distill a one-step model from the teacher model. The authors provide comparative experiments with state-of-the-art ISR methods.
## update after rebuttal
The authors have addressed the majority of my concerns. Accordingly, I have updated my score to 3: Weak Accept.
Claims And Evidence: The author's claims are clear, but the evidence to support these claims is insufficient. There are mainly the following problems:
(1) The authors analyse the possible negative outcomes of VSD or GANs in Sec. 4.1, but lack experimental support.
Methods And Evaluation Criteria: The proposed method and evaluation datasets used by the authors are reasonable.
Theoretical Claims: I have carefully checked the correctness of the proofs for theoretical claims and found no relevant problems.
Experimental Designs Or Analyses: I have carefully checked the soundness/validity of any experimental designs and analyses, and there are the following problems:
(1) The outstanding performance of the proposed method may be mainly attributed to the FLUX model. If replaced with other baselines, the performance gain brought by the proposed pipeline in the paper is questionable.
(2) The FTD proposed by the authors introduces the SR flow trajectory based on the existing T2I flow trajectory distillation, but lacks quantitative and qualitative analysis to verify the effectiveness of this improvement.
(3) The authors proposed a large model friendly training strategy. What advantages does it bring to model training and inference? The authors should report this in the experimental analysis.
(4) There is a lack of corresponding comparison and analysis of model inference speed in the experiment.
(5) The paper lacks a comparison with the latest SUPIR [1]. An analysis of their performance and efficiency is necessary in the paper.
(6) The ablation study in the paper lacks a qualitative comparison.
[1] Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild, CVPR 2024
Supplementary Material: I have carefully reviewed the implementation details, more visual results, and comparsion with GAN-based methods provided in the appendix of the paper.
Relation To Broader Scientific Literature: This paper implements the Real-ISR task based on existing methods (FLUX model, TV-LPIPS, ADL), and further proposes an improved strategy (FTD). The key idea of FTD is introducing the LR-to-HR flow in SR based on the flow matching theory.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper lacks innovation. The TV-LPIPS and ADL used in the paper are both based on existing works. The outstanding performance of the proposed method may be mainly attributed to the introduced FLUX model.
Other Comments Or Suggestions: The color markings in Tab.1 and Tab.2 of the paper are confusing. There is no specific explanation of what red bold and blue bold mean. What is the difference between them and bold?
Questions For Authors: (1) The authors analyse the possible negative outcomes of VSD or GANs in Sec. 4.1, but lack experimental support. The author should support this conclusion with corresponding analysis.
(2) The FTD proposed by the authors introduces the SR flow trajectory based on the existing T2I flow trajectory distillation, but lacks quantitative and qualitative analysis to verify the effectiveness of this improvement. Corresponding experimental analysis should be provided.
(3) Is the outstanding performance of the proposed method mainly attributed to the FLUX model? If it is replaced with other baselines, will there still be performance advantages? Further analysis should be provided.
(4) There is a lack of corresponding comparison and analysis of model inference speed in the experiment. The author should compare the inference speed of the proposed FluxSR with other methods.
(5) The authors proposed a large model friendly training strategy. What advantages does it bring to model training and inference? The authors should report this in the experimental analysis.
(6) The color markings in Tab.1 and Tab.2 of the paper are confusing. There is no specific explanation of what red bold and blue bold mean. What is the difference between them and bold?
(7) The paper lacks a comparison with the latest SUPIR [1]. An analysis of their performance and efficiency is necessary in the paper.
(8) The ablation study in the paper lacks a qualitative comparison. Corresponding comparisons should be provided.
[1] Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild, CVPR 2024
If the author could address these issues, I would be inclined to raise my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: ## Response to Reviewer uJUz (denoted as R2)
**Q2-1:** The paper lacks innovation. The TV-LPIPS and ADL proposed in the paper are both existing works.
**A2-1:** The main contribution of this paper is the introduction of FTD, and our method is the first work to distill a large-scale flow matching model like FLUX into a one-step model for image super-resolution. Existing one-step methods struggle with large diffusion models. Specifically, VSD (with optional GAN loss) requires two additional copies of the large model in GPU memory, exceeding the capacity of an 80GB A800 GPU. Our FTD is large-model-friendly, requiring only about 55GB of GPU memory per card for training, and 23.7GB for inference with 512px resolution.
**Q2-2:** The authors analyze the possible negative outcomes of VSD or GANs in Sec. 4.1, but lack experimental support.
**A2-2:** We add experiments to support our argument. Specifically, we generate images using the teacher models of OSEDiff, AddSR, and FluxSR, and compute the FID between the original T2I images and the SR images. FluxSR's FID is significantly lower than that of OSEDiff and AddSR, indicating that FTD avoids distribution shift.
| Method | AddSR | OSEDiff | **FluxSR** |
|-|-|-|-|
| **FID** | 43.9 | 43.0 | **34.9** |
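The FID values above are Fréchet distances between Gaussian fits of deep (typically Inception) features. Assuming the feature means and covariances are already extracted, the core computation is a short numpy sketch; production FID code usually uses `scipy.linalg.sqrtm`, while the eigenvalue shortcut below is valid for PSD covariances:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, S1) and N(mu2, S2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    diff = mu1 - mu2
    # For PSD covariances, the eigenvalues of S1 @ S2 are real and
    # non-negative, so Tr((S1 S2)^{1/2}) is the sum of their square roots.
    eig = np.linalg.eigvals(sigma1 @ sigma2)
    tr_covmean = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)
```

Identical feature statistics give a distance of 0, and a pure mean shift with equal covariances reduces to the squared distance between the means.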
**Q2-3:** The FTD lacks quantitative and qualitative analysis to verify the effectiveness of this improvement.
**A2-3:** We have presented quantitative results of the FTD ablation in the ablation study section, demonstrating its effectiveness. We will include qualitative comparisons in the supplementary material.
**Q2-4:** Is the outstanding performance of the proposed method mainly attributed to the FLUX model?
**A2-4:** We replace FluxSR's baseline with SD3-medium and retrain it, comparing it with TSD-SR, which is also trained on SD3-medium. The results show that FluxSR outperforms TSD-SR, indicating that the performance improvement comes from our method.
| Method | TSD-SR | FluxSR on SD3 |
|-|-|-|
| MANIQA | 0.4884 | **0.5310** |
| TOPIQ | 0.6526 | **0.6603** |
| QAlign | 4.2258 | **4.3422** |
**Q2-5:** Lack of corresponding comparison and analysis of model inference speed.
**A2-5:** We compare FluxSR’s computational complexity with other multi-step and one-step methods. Despite using a 12B parameter model, the inference time of FluxSR is no more than 0.1s longer than the current fastest method.
| Methods | StableSR | DiffBIR | SeeSR | SUPIR | ResShift | SinSR | OSEDiff | TSD-SR | FluxSR |
|-|-|-|-|-|-|-|-|-|-|
| Inference step | 200 | 50 | 50 | 50 | 15 | 1 | 1 | 1 | 1 |
| Inference time/s | 11.503 | 7.798 | 5.926 | 18.359 | 0.806 | 0.131 | 0.167 | 0.138 | 0.228 |
| MACs/T | 75.81 | 24.52 | 32.34 | 120.41 | 4.90 | 2.09 | 2.27 | 2.91 | 11.71 |
| # Params/B | 1.39 | 1.62 | 1.99 | 4.49 | 0.17 | 0.17 | 1.40 | 2.21 | 11.99 |
**Q2-6:** What advantages does Large-Model-Friendly Training bring to model training and inference?
**A2-6:** The Large-Model-Friendly Training (LMFT) strategy reduces memory usage and training time. During inference, it reduces memory usage and inference latency.
| Method | Training CUDA usage | Training time (per iteration) | Inference CUDA usage | Inference time (512px) |
|-|-|-|-|-|
| w/o LMFT | 76.2GB | 6.91s | 34.82GB | 457ms |
| w LMFT | 55.4GB | 4.43s | 23.77GB | 228ms |
**Q2-7:** The color markings in Tab.1 and Tab.2 of the paper are confusing.
**A2-7:** In Tables 1 and 2, red and blue bold represent the best and second-best methods among all approaches. The best method in one-step diffusion will also be bolded separately. Red bold indicates it is the best overall, while blue bold indicates it is the best in one-step methods but second-best overall.
**Q2-8:** Lacks a comparison with SUPIR.
**A2-8:** We add a comparison with SUPIR on the RealLQ250 dataset in the table below, and we have included an efficiency comparison with SUPIR in **A2-5**.
| Method | DiffBIR | SeeSR | SUPIR | ResShift | SinSR | AddSR | OSEDiff | TSD-SR | **FluxSR** |
|-|-|-|-|-|-|-|-|-|-|
| MUSIQ | 71.61 | 70.53 | 65.91 | 59.45 | 65.38 | 64.23 | 69.56 | 72.10 | **72.65** |
| MANIQA | 0.5472 | 0.4971 | 0.3907 | 0.3383 | 0.4264 | 0.3707 | 0.4230 | 0.4596 | **0.5490** |
| TOPIQ | 0.6835 | 0.6653 | 0.5631 | 0.4709 | 0.5790 | 0.5470 | 0.6075 | 0.6456 | **0.6848** |
| QAlign | 4.2307 | 4.1652 | 4.1442 | 3.6340 | 3.7426 | 3.8884 | 4.2484 | 4.1682 | **4.4077** |
*RealLQ250*
**Q2-9:** The ablation study in the paper lacks a qualitative comparison.
**A2-9:** We would like to show a qualitative comparison for the ablation studies; however, the rebuttal rules do not allow uploading images. Nevertheless, the metrics in our paper (MUSIQ, ManIQA, Q-Align) reflect image quality, showing that our method outperforms others in visual results, with TV-LPIPS and ADL effectively reducing high-frequency artifacts. We will include more qualitative comparisons in the revised paper.

Summary: This paper proposes FluxSR, a one-step diffusion model for real-world image super-resolution (Real-ISR). The authors introduce Flow Trajectory Distillation (FTD) to distill multi-step diffusion models into a single step. FluxSR addresses distribution shifts by aligning noise-to-image and low-to-high-resolution flow trajectories. The method also introduces TV-LPIPS and Attention Diversification Loss (ADL) to reduce artifacts.
Claims And Evidence: Most claims in the paper are clearly supported by convincing experimental results, particularly regarding the effectiveness of Flow Trajectory Distillation (FTD) for aligning flow trajectories and improving realism. However, the claim related to the reason why high-frequency artifacts emerge ("high-frequency artifacts due to token similarity in transformers") is not sufficiently explained or supported.
Methods And Evaluation Criteria: The proposed method (FluxSR) is reasonable and effective in one-step diffusion methods for Real-ISR tasks. The evaluation metrics chosen by the authors are appropriate and cover both perceptual and fidelity aspects comprehensively.
However, the experimental evaluation is somewhat limited due to the use of three datasets, of which DIV2K is synthetic and does not fully reflect real-world complexities. Evaluating on additional, diverse real-world datasets would further validate the generalizability and robustness of the proposed method.
Theoretical Claims: The paper contains relatively few theoretical claims, primarily focused on clearly defined equations for Flow Trajectory Distillation (FTD). I have checked the main theoretical derivations and formulations provided (e.g., Equations 10–18). These derivations are straightforward and clear. I did not find any significant issues.
Experimental Designs Or Analyses: I checked the soundness and validity of the experimental designs, particularly the ablation studies, which are comprehensive and effectively demonstrate the contribution of each proposed component (FTD, TV-LPIPS, and ADL). However, one important shortcoming is the lack of visualization results in these ablation studies. Specifically, visual comparisons illustrating how TV-LPIPS and ADL mitigate high-frequency artifacts would significantly strengthen and clarify the analysis.
Supplementary Material: I reviewed the entire supplementary material.
Relation To Broader Scientific Literature: The proposed method, FluxSR, builds upon recent advances in one-step diffusion models (e.g., OSEDiff, TSD-SR), addressing their common limitations related to distribution shifts between teacher and student models through Flow Trajectory Distillation (FTD). Additionally, the proposed loss functions (TV-LPIPS and ADL) closely relate to perceptual loss-based super-resolution approaches (like RealESRGAN).
Essential References Not Discussed: I did not find any critical related works missing from the paper. The authors cited and discussed the relevant prior methods and ideas necessary for understanding their key contributions.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces an effective approach (Flow Trajectory Distillation) that addresses the limitations (e.g., distribution shift) of existing one-step diffusion methods.
2. The paper is well-organized and generally clear, especially in terms of theoretical derivations and method explanations.
Weaknesses:
1. Incomplete Ablation on FTD: A key contribution of FTD is preserving the prior knowledge from the powerful teacher model (Flux). However, the authors only provide an ablation using the reconstruction loss at the single step ($T_L$). It remains unclear whether training the full trajectory ($[T_L , 1]$) using only reconstruction loss (without FTD) could achieve comparable results. Such an ablation would be crucial for validating the specific advantage provided by FTD, but is currently missing.
2. Insufficient Explanation for Artifacts: The authors do not adequately explain why artifacts occur. If the final distribution aligns closely with Flux’s HR distribution, these artifacts should theoretically not emerge. The current hypothesis attributing artifacts to token similarity lacks empirical validation. Providing visualization of attention maps or investigating whether artifacts result from insufficient training time or limited training data would clarify this issue.
3. Limited Evaluation on Real Datasets: The paper evaluates mainly on two datasets, with one (DIV2K) relying on synthetic degradations, limiting the strength of the empirical results. Evaluations on larger and more diverse real-world datasets (e.g., RealLR200 in SeeSR or RealLQ250 in DreamClear) would enhance the robustness and credibility of the experimental findings.
Other Comments Or Suggestions: 1. There are some inconsistencies and typos regarding mathematical notations. For example, the formulation presented in Algorithm 1 (line 9) is inconsistent with the corresponding equation shown in Figure 3. Clarifying and correcting these discrepancies would improve readability.
2. Although Flux is described as a DiT-based diffusion model, Figure 3 visually depicts it as a U-Net structure, potentially misleading readers. Revising the figure to accurately represent Flux’s architecture (DiT-based transformer) would avoid confusion.
Questions For Authors: See the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: ## Response to Reviewer m3JT (denoted as R1)
We sincerely thank the reviewer for the constructive comments. We provide the detailed responses to all the concerns below.
**Q1-1:** Incomplete Ablation on FTD.
**A1-1:** We add the relevant experiments; the results are shown in the table below. In practice, training the entire flow trajectory on $[T_L,1]$ yields results similar to training on a single time step. The main reason is that it is extremely hard to force a one-step model to directly mimic the entire flow trajectory without explicitly preserving the generative ability of the teacher model. Instead, our FTD obtains significantly better results on most metrics that are widely used to evaluate visual quality.
| Method | PSNR | MUSIQ | MANIQA | Q-Align |
|----------------------|-------|-------|--------|---------|
| $L_{rec}$ in $T_L$ | **26.33** | 56.02 | 0.3775 | 3.5170 |
| $L_{rec}$ in $[T_L,1]$| 25.94 | 55.63 | 0.3900 | 3.6993 |
| $L_{rec} + FTD$ | 24.67 | **67.84** | **0.5203** | **4.1473** |
*Ablation study on FTD*
**Q1-2:** Insufficient Explanation for Artifacts.
**A1-2:** To investigate the artifact issue, we visualized the attention/feature maps and found that the similarity between tokens is quite high, and there are indeed repeated features in some dimensions. Additionally, we observed that using the official FLUX model for one-step inference also results in high-frequency artifacts, which we believe is a characteristic of FLUX itself. When using only the reconstruction loss, we observed even more severe periodic artifacts, so these artifacts are not caused by FTD. We hypothesize that this issue is mainly attributable to the limited training data, which is significantly smaller than what was used to train the multi-step FLUX. To verify this, we used a larger dataset, namely 10k noise-image pairs generated from the 220k-GPT4Vision-captions-from-LIVIS dataset, and observed a clear reduction in periodic artifacts.
**Q1-3:** Limited Evaluation on Real Datasets.
**A1-3:** Following your suggestions, we added comparisons on these two datasets; the results are shown in the tables below. FluxSR achieves the best results. We will include these results in the paper.
| Method | DiffBIR | SeeSR | SUPIR | ResShift | SinSR | AddSR | OSEDiff | TSD-SR | **FluxSR** |
|-------------|---------|--------|--------|----------|--------|--------|---------|---------|------------|
| MUSIQ | 71.61 | 70.53 | 65.91 | 59.45 | 65.38 | 64.23 | 69.56 | 72.10 | **72.65** |
| MANIQA | 0.5472 | 0.4971 | 0.3907 | 0.3383 | 0.4264 | 0.3707 | 0.4230 | 0.4596 | **0.5490** |
| TOPIQ | 0.6835 | 0.6653 | 0.5631 | 0.4709 | 0.5790 | 0.5470 | 0.6075 | 0.6456 | **0.6848** |
| QAlign | 4.2307 | 4.1652 | 4.1442 | 3.6340 | 3.7426 | 3.8884 | 4.2484 | 4.1682 | **4.4077** |
*RealLQ250*
| Method | DiffBIR | SeeSR | SUPIR | ResShift | SinSR | OSEDiff | AddSR | TSD-SR | **FluxSR** |
|-------------|---------|--------|--------|----------|--------|---------|--------|---------|------------|
| MUSIQ | 69.63 | 69.75 | 64.88 | 59.87 | 65.11 | 65.42 | 69.61 | 71.02 | **71.60** |
| MANIQA | 0.5526 | 0.5045 | 0.4677 | 0.3591 | 0.4549 | 0.3945 | 0.4388 | 0.4884 | **0.5588** |
| TOPIQ | 0.6772 | 0.6635 | 0.5870 | 0.4990 | 0.5998 | 0.5634 | 0.6083 | 0.6526 | **0.6814** |
| QAlign | 4.2529 | 4.2399 | 4.1675 | 3.7959 | 3.9317 | 4.0156 | 4.2895 | 4.2258 | **4.4004** |
*RealLR200*
**Q1-4:** There are some inconsistencies and typos regarding mathematical notations. For example, the formulation presented in Algorithm 1 (line 9) is inconsistent with the corresponding equation shown in Figure 3. Clarifying and correcting these discrepancies would improve readability.
**A1-4:** Thank you for noticing and pointing out this mistake. We did make an error while editing the image. Specifically, the formula in Figure 3 corresponding to line 9 of Algorithm 1 should have a "$+$" sign instead of a "$-$". We will correct this error.
**Q1-5:** Although Flux is described as a DiT-based diffusion model, Figure 3 visually depicts it as a U-Net structure, potentially misleading readers. Revising the figure to accurately represent Flux’s architecture (DiT-based transformer) would avoid confusion.
**A1-5:** Thank you for pointing this out. We will delete the U-Net structure and replace it with the DiT architecture in Figure 3. | null | null | null | null |
Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation | Accept (poster) | Summary: This paper is the first to produce a 10m-resolution time-series forest height map of Europe covering 2019 to 2022. The 2020 results were compared with multiple tree height studies and proved to be the most accurate. The data used include GEDI, Sentinel-1, and Sentinel-2; the model is a 3D U-Net trained with the Huber loss and the Adam optimizer. The paper also presents findings on temporal height changes, such as tree felling.
Claims And Evidence: Claims (such as first 10m-time-serises map and better accuracy) in this work are well supported.
Methods And Evaluation Criteria: Yes. Make sense for the application.
Theoretical Claims: No theoretical proof in this work.
Experimental Designs Or Analyses: I have checked the soundness of the experimental design and analysis. The experiment itself is well-structured and complete. However, the analysis section needs further improvement: for example, there is a lack of temporal accuracy analysis, phenomenon discovery, and discussion. Currently, only Figure 6 and Table 5 seem to present these aspects, whereas the key highlight of the paper is the temporal dimension.
Supplementary Material: Yes. A. Data Handling Details.
Relation To Broader Scientific Literature: Forest height is critically important, and time-series forest height is a key indicator for assessing Earth's health and estimating carbon sink biomass. Currently, most studies focus on static forest height estimation, while dynamic inversion remains scarce. The high-resolution time-series inversion proposed in this paper fills this gap.
Essential References Not Discussed: No
Other Strengths And Weaknesses: As a machine learning application in geosciences, this work is well-structured and comprehensive, with insightful results, which is a strong point.
However, a key weakness is that the study primarily applies existing models and algorithms without introducing innovations in machine learning methods or techniques. For instance, why did previous static inversion methods fail in this task, while this study, using the same or similar existing algorithms, succeeds? Is it due to differences in data input, or does it stem from how the time-series data is processed and integrated? A deeper analysis of these aspects would strengthen the contribution.
I agree that innovation can be based on existing methods, but it is essential to explain why the current approach, without major modifications (such as in the loss function or data mining paradigm), is able to achieve what previous methods (e.g., single-year inversion) could not. This is the truly insightful aspect, rather than merely an engineering application.
Other Comments Or Suggestions: N/A
Questions For Authors: [1] If the improved results are merely due to the fact that previous methods aggregated the data while this study processes them separately, the authors should explicitly state this.
[2] The model structure mentions, "12 monthly Sentinel-2 images concatenated with an aggregated Sentinel-1 composite." So, what is the total number of input channels? Since a year's worth of data is extremely large and contains various uncertainties (such as clouds and other artifacts), how are these issues handled?
[3] Why is the 'Sentinel-1 data aggregated into a single median-based image over the year' ? Is it because the data lacks clear periodicity?
[4] The model's training configuration, duration, and computational resources should be provided in detail.
[5] If the reported MAE in this paper is 4.64m, what is the accuracy of GEDI? What is GEDI's temporal accuracy? Is the accuracy upper bound for time-series forest height inversion determined by GEDI's accuracy?
[6] Can you provide some failed cases and explain why the current method cannot handle them? Additionally, for low vegetation (below 5 meters), does this study face challenges in height estimation and time-series measurement?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for acknowledging our work. Let us address your remarks and questions in detail below.
> However, the analysis section needs further improvement. [...]
We agree that this is an important aspect. Our primary goal was to develop an openly available model that others can use for their own analyses. To demonstrate the model's temporal capabilities, we analyzed potential deforestation events by tracking pixels for which the corresponding height decreased from above 8m to below 5m between years. The affected area increased from 7,729.1 km² in 2020-2021 to 15,942.5 km² in 2021-2022; also see our answer to reviewer 1141. We have updated our manuscript accordingly.
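The pixel-tracking rule described above (a height drop from above 8 m to below 5 m between two years) can be sketched as follows; the arrays and values here are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical per-year canopy height maps in meters on the same 10 m grid
# (values are illustrative, not the paper's data).
height_2021 = np.array([[9.5, 4.0],
                        [8.2, 6.0]])
height_2022 = np.array([[3.0, 4.5],
                        [8.1, 2.0]])

# Flag pixels whose height drops from above 8 m to below 5 m between years.
deforested = (height_2021 > 8.0) & (height_2022 < 5.0)

# Each 10 m x 10 m pixel covers 100 m^2; convert the flagged area to km^2.
area_km2 = deforested.sum() * 100 / 1e6
```

Summing such masks over the full European grid yields area totals like the km² figures reported above.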
> However, a key weakness is that the study primarily applies existing models and algorithms [...]
>
> [1] If the improved results are merely due to [...].
We would like to emphasize that our work follows the application-driven track. While pinpointing the exact source of improvement is challenging due to significant architectural differences, the results from Table 2 demonstrate three key improvements:
1. Utilizing 12 monthly images rather than a single composite image (`-Composite-` vs `-Stack-`) reduces MAE by 3.6%
2. Processing temporal data with 3D convolutions instead of 2D convolutions (`2D-Stack-` vs `3D-Stack-`) further reduces MAE by 1.4%
3. Training on multiple years of data (2019-2022) versus a single year (2020) (`-Year` vs `-MultiYear`) reduces MAE by 6%
> [2] The model structure mentions, "12 monthly Sentinel-2 images concatenated with an aggregated Sentinel-1 composite." [...]
The model receives 12 monthly images, each containing 16 channels - the Sentinel-2 bands for that month concatenated with the yearly median of Sentinel-1 data. So technically, the model overall receives 12*16=192 channels as input, although we do not concatenate these, but rather process them in the 3D-U-Net as (12, 16, 256, 256) image input. Regarding your second question, we do not explicitly filter clouds from the input images, but the model demonstrates robust performance even with partially cloudy inputs, maintaining high prediction accuracy. We utilize the L2A product, which includes atmospheric correction and other preprocessing steps.
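A minimal sketch of the input layout described above; only the total of 16 channels per month and the (12, 16, 256, 256) shape come from the text, while the 12/4 split between Sentinel-2 and Sentinel-1 channels is an assumption for illustration:

```python
import numpy as np

# Assumed split of the 16 per-month channels (illustrative only).
n_s2, n_s1, H, W = 12, 4, 256, 256

# 12 monthly Sentinel-2 images: (months, S2 channels, height, width).
s2_monthly = np.random.rand(12, n_s2, H, W).astype(np.float32)

# Yearly median Sentinel-1 composite, repeated along the month axis
# so every month carries the same S1 channels.
s1_median = np.random.rand(n_s1, H, W).astype(np.float32)
s1_tiled = np.broadcast_to(s1_median, (12, n_s1, H, W))

# Final 3D-U-Net input: (12, 16, 256, 256), as stated in the rebuttal.
x = np.concatenate([s2_monthly, s1_tiled], axis=1)
```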
> [3] Why is the 'Sentinel-1 data aggregated into a single median-based image over the year' ? Is it because the data lacks clear periodicity?
While Sentinel-1 has similar temporal resolution to Sentinel-2, we opted to use composite radar data since it reduces noise and speckle artifacts through temporal aggregation, while keeping computational costs manageable. In contrast, we found using the full temporal sequence beneficial for optical Sentinel-2 data.
> [4] The model's training configuration, duration, and computational resources should be provided in detail.
In Section 3.3 of the paper, we provide detailed information about the experimental setup, including the considered optimizer, learning rate, batch size, and more. Regarding duration and computational resources, we trained the model for roughly four days on two A100 GPUs. We will add detailed information about the computational resources used to the revision. Thank you very much for this remark.
> [5] If the reported MAE in this paper is 4.64m, what is the accuracy of GEDI? What is GEDI's temporal accuracy? Is the accuracy upper bound for time-series forest height inversion determined by GEDI's accuracy?
GEDI has a measurement resolution of 15cm but varying accuracy by surface type - overall MAE of 0.98m, with higher error in tree-covered areas (1.67m) versus grassland (0.79m) and cropland (0.57m) [PEL]. GEDI cannot reliably measure heights 0-3m and has reduced accuracy below 5m. Regarding temporal accuracy - GEDI rarely measures the same location twice, so it lacks a temporal dimension. For accuracy bounds - as GEDI provides our training labels, this likely sets an upper limit on our model's achievable accuracy. Is this what you meant?
[PEL] Pronk, M., Eleveld, M., & Ledoux, H. (2024). Assessing vertical accuracy and spatial coverage of ICESat-2 and GEDI spaceborne lidar for creating global terrain models. Remote Sensing, 16(13), 2259.
> [6] Can you provide some failed cases and explain why the current method cannot handle them? Additionally, for low vegetation (below 5 meters), does this study face challenges in height estimation and time-series measurement?
We provide two failure cases: 1) GEDI measurements on slopes, where terrain changes are mistaken for vegetation height differences, and 2) checkerboard artifacts in harbor areas. We will expand this analysis in the appendix. Additionally, as noted earlier, both GEDI and our model struggle with accurate measurements of low vegetation.
We hope to have addressed all your concerns, please let us know if further clarification is needed. Thank you!
---
Rebuttal Comment 1.1:
Comment: The dataset and the research itself are highly meaningful, and I believe the work could have a significant impact across multiple disciplines and research areas.
Your detailed response has addressed most of my concerns, and I believe the paper can be raised to Accept.
I still have a few minor questions:
1. Inspired by Reviewer sUWg, I’m curious—within your framework, what are the key differences between building and vegetation height inversion? Since vegetation height can be supervised using GEDI tree height products (no building-height product), is the only difference between building and vegetation height inversion just the source of ground truth?
2. You mentioned accuracy seems to refer to GEDI's observation precision. However, your tree height labels are from GEDI products, which are already processed, right? Is their accuracy consistent with the raw GEDI observations? Or did I misunderstand something?
3. It seems that “Training on multiple years of data (2019–2022) versus a single year (2020) (-Year vs -MultiYear) reduces MAE by 6%” contributed the most to performance improvement. So I’d like to confirm if my understanding is correct:
a) The version trained with fewer labels (only one year) actually achieved better accuracy. In other words, for this application, mixing multi-year labels may introduce inconsistency or noise, **so fewer but more consistent (high signal-to-noise ratio) labels are more beneficial.**
b) If that’s the case, **would it be more effective to use fewer but higher-quality labels**—for example, sampling only 1/4 of high-quality labels each year (e.g., non-cloud, high-quality waveform/tree height)—**to construct a cleaner, multi-year training set?**
Given time constraints, **I don't expect new experiments**, but I’d like to confirm whether I understood this correctly, and whether this is also how the authors interpret the results. If not, please feel free to correct me—or perhaps consider discussing this point further in a revised revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and further questions!
> Inspired by Reviewer sUWg, I'm curious—within your framework, what are the key differences between building and vegetation height inversion? Since vegetation height can be supervised using GEDI tree height products (no building-height product), is the only difference between building and vegetation height inversion just the source of ground truth?
Yes, the only difference lies in the source of ground-truth data. While GEDI is designed to measure vegetation height, it still captures height measurements in urban areas, which explains the height variations we observe in cities. However, since GEDI is not optimized for building measurements, these urban height estimates are less reliable than vegetation measurements; the focus of our work lies on tree canopy height estimation.
> You mentioned accuracy seems to refer to GEDI's observation precision. However, your tree height labels are from GEDI products, which are already processed, right? Is their accuracy consistent with the raw GEDI observations? Or did I misunderstand something?
We are not entirely sure if we understand correctly. Yes, the GEDI labels we use for training are already "pre-processed" by the GEDI provider in the sense that GEDI actually returns a waveform, hence a 1-dimensional array of photon information. The label we use corresponds to the L2A product, which just returns the "rh98" value, which is the 98th percentile of the waveform, i.e. the height value such that 98% of the photons returned are below this value (cf. lines 186). In that sense, this is entirely consistent with the raw GEDI observation, it is however just a single statistic of the waveform. Does this answer your question? If not, please let us know.
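The rh98 statistic described above is simply the 98th percentile of the return heights; a minimal illustration with synthetic heights (not real GEDI waveform processing):

```python
import numpy as np

# Synthetic photon-return heights in meters (not a real GEDI waveform).
heights = np.array([0.0, 1.2, 3.5, 7.8, 12.0, 15.4, 18.9, 21.0, 23.5, 24.0])

# rh98: the height below which 98% of the returned photons lie.
rh98 = np.percentile(heights, 98)
```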
> It seems that "Training on multiple years of data (2019–2022) versus a single year (2020) (-Year vs -MultiYear) reduces MAE by 6%" contributed the most to performance improvement. So I'd like to confirm if my understanding is correct:
a) The version trained with fewer labels (only one year) actually achieved better accuracy. In other words, for this application, mixing multi-year labels may introduce inconsistency or noise, so fewer but more consistent (high signal-to-noise ratio) labels are more beneficial.
b) If that's the case, would it be more effective to use fewer but higher-quality labels—for example, sampling only 1/4 of high-quality labels each year (e.g., non-cloud, high-quality waveform/tree height)—to construct a cleaner, multi-year training set?
Given time constraints, I don't expect new experiments, but I'd like to confirm whether I understood this correctly, and whether this is also how the authors interpret the results. If not, please feel free to correct me—or perhaps consider discussing this point further in a revised revision.
We believe there might be a misunderstanding.
a) Could you elaborate what you mean by "accuracy" here? In Table 2 we report MAE, MSE, and RMSE, hence three metrics where lower is better. Training on just a single year consistently achieves higher errors compared to training on multiple years. Further note that in this table we account for the number of samples throughout training, i.e., training on a single year just uses information from a single year, but not necessarily fewer labels. We will make sure to make this more clear in the paper, if necessary.
b) If we understand you correctly, point a) is not the case, however we do agree that using higher-quality labels is beneficial. In fact, we are actively investigating this trade-off between label quantity and quality in our ongoing research.
We hope to have clarified your questions and remain at your disposal for any further questions. | Summary: The paper presents an approach for creating large-scale temporal tree canopy height maps using satellite imagery. The main contributions include a deep learning model (3D U-Net architecture) that can track forest height changes across Europe from 2019-2022 at 10m spatial resolution; a canopy height map of Europe for 2020; and the finding that using full 12-month time series of Sentinel-2 imagery improves performance by capturing seasonal patterns and leveraging geo-location shifts, compared to aggregated composites).
Claims And Evidence: * Capturing seasonal patterns: The paper claims that using full 12-month time series of Sentinel-2 imagery improves performance by "capturing seasonal patterns" (compared to using aggregated composites).
* The paper lacks direct analysis of how the model actually uses seasonal information, with examples showing different predictions in leaf-on vs leaf-off seasons. While the paper shows that using 12 months performs better than composites, they don't isolate whether this improvement is due to seasonal patterns or simply having more data points.
* To properly support this claim, the paper should include analysis of the model's attention to seasonal changes, performance comparisons across seasons, and demonstrations of different behavior for deciduous vs evergreen forests. Explicit testing is needed to show that the improvement comes from seasonal information rather than just more data points. Visualizations or examples showing how the model uses seasonal information, along with ablation studies isolating the impact of seasonal patterns, would significantly strengthen the evidence for this claim.
* leveraging geo-location shifts: the paper claims that by processing a stack of 12 monthly Sentinel-2 images concatenated with an aggregated Sentinel-1 composite, the method leverages geolocation offsets in Sentinel-2 imagery.
* The paper lacks visualization or quantitative analysis demonstrating how the model uses geolocation shifts, including examples showing improved edge detection or fine spatial details that could be attributed to leveraging these shifts. On the contrary, the spatial resolution is lower than that of Liu et al. The improved performance could be due to other factors like increased data volume or temporal information, rather than specifically leveraging geolocation shifts.
* To properly support this claim, the paper should include an ablation study isolating the impact of geolocation shifts from other factors, with metrics or measurements showing the degree of improvement that can be attributed to leveraging geolocation shifts. Providing examples showing enhanced edge detection or spatial detail due to this technique will strengthen the claims in this paper.
Methods And Evaluation Criteria: The methods and evaluation criteria employed in the paper are generally appropriate for the problem of large-scale canopy height estimation, with some notable strengths and limitations:
Strengths:
* Use of GEDI LiDAR data as ground truth aligns with standard practices in the field and comprehensive comparison with existing methods using multiple metrics (MAE, MSE, RMSE, R²)
* Inclusion of both quantitative metrics and qualitative visual comparisons with use of high-quality ALS data for additional validation of tall tree detection
Limitations:
* Temporal validation relies heavily on detecting deforestation, with limited validation of growth detection
* While multiple metrics are used, they don't specifically address the claimed benefits of seasonal patterns and geolocation shifts
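For reference, the four metrics named above can be computed as follows (a generic sketch with made-up numbers, not the authors' evaluation code):

```python
import numpy as np

y_true = np.array([10.0, 15.0, 20.0, 25.0])  # reference heights (m)
y_pred = np.array([12.0, 14.0, 18.0, 26.0])  # model predictions (m)

err = y_pred - y_true
mae = np.abs(err).mean()                      # mean absolute error
mse = (err ** 2).mean()                       # mean squared error
rmse = np.sqrt(mse)                           # RMSE = sqrt(MSE)
r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
```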
Theoretical Claims: Not Applicable
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The paper's contributions can be placed within the context of the broader scientific literature on Forest Monitoring and Remote Sensing:
* Builds upon established work using satellite data for forest monitoring
* Advances previous single-year height mapping efforts (Lang et al., Liu et al.) by incorporating temporal imagery.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper's most significant contribution is making high-resolution temporal forest monitoring more accessible through its integration with Google Earth Engine. While individual technical components might not be groundbreaking, their combination and practical implementation represents applied machine learning for environmental monitoring.
Weaknesses:
* Technical Limitations:
* Lack of detailed analysis supporting claims about seasonal patterns
* Insufficient evidence for geolocation shift benefits
* Limited validation of temporal dynamics, especially for forest growth
* Practical Implementation Details:
* Limited discussion of computational requirements
* Minimal discussion of model robustness to different environmental conditions
Other Comments Or Suggestions: Not Applicable
Questions For Authors: 1. Seasonal Pattern Analysis: How does the model specifically utilize seasonal information? Could you provide analysis showing:
* Model attention/activation patterns across different seasons
* Performance comparison between different types of forests that show different seasonal patterns
* Ablation study isolating seasonal pattern benefits from general data volume benefits
2. Geolocation Shift Benefits: Can you provide direct evidence that the model leverages Sentinel-2 geolocation shifts? Specifically:
* Comparative analysis with models that cannot use shift information
* Examples showing improved edge detection or spatial detail
* Quantification of improvement specifically attributable to shift utilization
3. Growth Detection Validation: How reliable is the model at detecting forest growth? Please provide:
* Validation using known growth areas
* Comparison with ground measurements over time
* Analysis of minimum detectable growth rate
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. In our manuscript, we proposed seasonal variation and geolocation shifts as possible factors influencing the model's performance. We agree that some of our statements were too explicit from an environmental perspective (e.g., that the model makes use of the geolocation shifts in Sentinel-2 time series data to improve performance, or that the use of time series data captures seasonal patterns). We regret these imprecise statements. We have revised the manuscript to present these as hypotheses rather than definitive claims. Below, we include further experiments and analyses that explore these factors in more detail. We hope that our comments and additional experiments address your concerns.
### Seasonal Pattern Analysis: How does the model specifically utilize seasonal information? Could you provide analysis showing:
>
> a) Model attention/activation patterns across different seasons
The figure in https://ibb.co/1f8zh2gR shows activation patterns across months for different patches using Guided Attention [1]. We observe varying activation strengths across months and patches, suggesting the model processes temporal information differently by location. However, further research would be needed to confirm these hypotheses.
[1] Striving for simplicity: The all convolutional net.
> b) Performance comparison between different types of forests that show different seasonal patterns
We evaluated our model separately on broadleaf and coniferous forests using the Copernicus Land Monitoring Service Forest Type Map (2018). These forest types show different seasonal patterns - broadleaf forests have distinct leaf-on/off periods while coniferous forests maintain constant canopy. The metrics below show our findings:
|Model|Broadleaf MAE (m)|Coniferous MAE (m)|
|-|-|-|
|Lang et al.|5.44|5.11|
|Liu et al.|7.01|6.91|
|Pauls et al.|5.30|4.85|
|Tolan et al.|10.43|11.73|
|Turubanova et al.|8.60|8.43|
|**Ours**|**4.57**|**4.11**|
> c) Ablation study isolating seasonal pattern benefits from general data volume benefits
Note that the number of training labels remains identical for all variants. To investigate the benefits of seasonal information independently from data volume, we conducted an ablation study comparing three models trained on different 4-month subsets:
- Winter (Nov-Feb)
- Summer (Jun-Sep)
- Mixed (Jan-Feb, Aug-Sep)
The results below show that using a mix of winter and summer months yields better validation performance.
|Model Variant|Huber Loss (m)|
|-|-|
|Winter (Nov-Feb)|1.169 ± 0.003|
|Summer (Jun-Sep)|1.13 ± 0.002|
|Mixed (Jan-Feb, Aug-Sep)|1.122 ± 0.002|
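For context, the Huber loss reported above behaves quadratically for small residuals and linearly for large ones; a generic sketch (the threshold `delta` is an assumption, not necessarily the paper's setting):

```python
import numpy as np

def huber(residual, delta=1.0):
    """Standard Huber loss: quadratic for |r| <= delta, linear beyond."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

# Small residuals are penalized quadratically, large ones only linearly,
# which makes the loss robust to outlier height labels.
losses = huber(np.array([0.5, 2.0]))
```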
Thank you for this comment. We will provide these additional findings in the updated version of our manuscript.
### Geolocation Shift Benefits: Can you provide direct evidence that the model leverages Sentinel-2 geolocation shifts? Specifically:
>
> a) Comparative analysis with models that cannot use shift information
> c) Quantification of improvement specifically attributable to shift utilization
Direct analysis of geolocation shifts is challenging as they are inherently embedded in raw satellite data. However, our ablation study (Table 3) shows superior performance using raw vs composite data. Note that although the amount of input data changes, we keep the number of training labels constant.
> b) Examples showing improved edge detection or spatial detail
We provide a comparative figure (https://ibb.co/WpHqStqM) demonstrating enhanced edge detection across three model variants: 2D-Composite, 2D-Stack, and 3D-Stack. Additional examples of improved edge detection can be found in Figures 1, 4, and 8, as well as in our interactive Google Earth Engine application.
### Growth Detection Validation: How reliable is the model at detecting forest growth? Please provide:
>
> a) Validation using known growth areas
While our model's ability to detect forest growth is constrained by the 4-year observation period, we observe clear growth signals in forest plantation regions like the Le Landes forest plantation in France (search of "Garein" in our GEE app).
> b) Comparison with ground measurements over time
We collected LiDAR data from the Vosges forest in France for 2020 and 2022 and analysed the measured vs. predicted growth (https://ibb.co/YTbkXfpf). Our model is able to predict tree growth; however, the model uncertainty is high, which leads to "heavier tails" than what was measured. We have added these results to our manuscript.
> c) Analysis of minimum detectable growth rate
Given our model's MAE of 4.76m and GEDI's uncertainty of 0.98m, we estimate that reliable growth detection requires changes of at least 5m over the observation period, varying by region and forest type.
We have also revised the paper about computational requirements, model performance and robustness across different forest types. Please let us know if you need any additional clarification. | Summary: The article describes a method for calculating canopy height using satellite data and reference values from GEDI LiDAR. The authors propose the use of a UNet network for regression. Using multispectral Sentinel images, they provide canopy height estimates for Europe between 2019 and 2022. Their R² is 0.819. When they only consider labels exceeding 7m, their R² is 0.591.
Claims And Evidence: The authors provide an extensive discussion of their procedure to obtain their results. This discussion includes different options for constructing the training set, the use of various error measures, comparisons with other approaches, and the distribution of errors for different tree sizes. They also conduct a qualitative evaluation by presenting several examples.
However, some observations I have include:
*** How do you know that you are looking at trees (as opposed to buildings, for instance)?
Methods And Evaluation Criteria: The authors construct their database, which consists of Sentinel-1 and Sentinel-2 images as predictors and GEDI LiDAR as the reference value (why is Sentinel-1 not mentioned in the abstract?). They then train their UNet using different error measures (*** Please explain why $\sqrt{\text{MSE}} \neq \text{RMSE}$?).
Even though the authors do not provide their weights or datasets (they state that they will do so upon acceptance of their publication), the code provided appears reasonable.
Theoretical Claims: The article does not include a relevant theoretical proof or one that needs to be tested. However, I would recommend training, validating, and testing on different, non-overlapping years.
Experimental Designs Or Analyses: The article does not provide weights or data to verify the claims, although the authors have stated that they will do so once the paper is accepted. The included code appears to be reasonable.
Supplementary Material: The authors describe how they managed their data. They also include an image of Europe with their results and results for small patches, along with a comparison of their results with other approaches.
Relation To Broader Scientific Literature: The paper
1. introduces a multi-temporal approach to canopy height estimation; other studies have used optical data (Schwartz et al., 2024)
2. utilizes Sentinel-2 time series instead of median composites (Pauls et al., 2024)
3. develops a 3D U-Net model that improves performance over prior 2D approaches
4. achieves state-of-the-art results in canopy height prediction at 10m resolution (Liu et al., 2023; Pauls et al., 2024)
5. provides publicly available code, but neither weights nor data.
Essential References Not Discussed: A search yields the following articles related to the problem that are not mentioned:
1. **Satellite Image and Tree Canopy Height Analysis Using Machine Learning on Google Earth Engine with Carbon Stock Estimation**
Other Strengths And Weaknesses: This article presents an important application to the study of forests. The authors scaled their solution to cover Europe, and their results appeared to advance the state of the art.
However, I am uncertain about how their study specifically relates to tree height rather than the height of objects in general, such as buildings.
The authors do not provide weights or data, although they state that they will make it available upon acceptance of the paper. They provide code that seems to be well structured. The details provided would allow for replicating their approach.
Other Comments Or Suggestions: ***what is the normalizing divisor in Table 1? Where does it come from?
*** provide R2 in Table 2. (no need to include both MSE and RMSE)
*** I would recommend training, validating and testing in different, non-overlapping years
*** seperate, Appendix A
Questions For Authors: 1. How does the model distinguish between trees and other tall objects (e.g., buildings)?
2. What are the sources of error when estimating tall trees?
3. Why not use stratified evaluation based on different forest types?
4. What post-processing steps were used to ensure temporal consistency?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive and thorough review. Let us address your concerns and questions in detail.
> How do you know that you are looking at trees (as opposed to buildings, for instance)?
That is a very good and true observation. Indeed, since GEDI measures the height of all objects, we cannot tell whether we are estimating a tree or a building: buildings are also measured at some height, so in cities one can see height predictions even in the absence of trees. This is, however, common practice in the remote sensing community. Canopy height maps are mainly used for forest monitoring and carbon stock estimation, and both applications apply a forest mask before further use. Such a mask could itself be generated by a segmentation model, which is a different problem we do not address here. While this is common practice, we will make this more clear in the paper to avoid confusion.
> why is Sentinel-1 not mentioned in the abstract?
That was not intentional, thank you for pointing it out. We have updated our manuscript accordingly.
> Please explain why $\sqrt{\text{MSE}} \neq \text{RMSE}$?

We individually calculate the MSE and RMSE for each of the 1,500 validation patches and average them afterwards (weighted average based on the number of labels in each patch). This is why there can be a difference. Thank you for this remark. We will provide additional explanations in our manuscript.
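To illustrate why the two averages can differ, here is a minimal sketch with synthetic per-patch errors (not the actual validation data): by Jensen's inequality, the weighted mean of per-patch RMSEs is at most the square root of the weighted mean per-patch MSE, with equality only when all patches have the same MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patch errors: each validation patch has its own set of
# labels, so per-patch MSE/RMSE are computed first and then averaged with
# weights proportional to the number of labels in the patch.
patch_errors = [rng.normal(0, 2 + i, size=50 + 10 * i) for i in range(5)]

weights = np.array([len(e) for e in patch_errors], dtype=float)
mse_per_patch = np.array([np.mean(e**2) for e in patch_errors])
rmse_per_patch = np.sqrt(mse_per_patch)

avg_mse = np.average(mse_per_patch, weights=weights)
avg_rmse = np.average(rmse_per_patch, weights=weights)

# sqrt is concave, so the weighted mean of sqrt(MSE) values is <= the
# sqrt of the weighted mean MSE; strict whenever per-patch MSEs differ.
assert avg_rmse < np.sqrt(avg_mse)
```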
> Even though the authors do not provide their weights or datasets (they state that they will do so upon acceptance of their publication), the code provided appears reasonable.
Please note that the predictions can be viewed and downloaded from the Google Earth Engine (GEE) website linked in the paper. We are happy to share all the code to reproduce our results upon acceptance. Sharing terabytes of data in an anonymous way right now is challenging.
> I would recommend training, validating, and testing in different years, in non-overlapping.
We tested this approach under the "-2020" setup, where we train only on 2020 data and evaluate on data from 2019, 2021 and 2022 (cf. e.g. Table 3 in the paper). We have made this more explicit in our manuscript, thank you.
> A search yields the following articles related to the problem that are not mentioned: Satellite Image and Tree Canopy Height Analysis Using Machine Learning on Google Earth Engine with Carbon Stock Estimation
Thank you very much. We have added the reference to the background section of our manuscript.
> what is the normalizing divisor in Table 1? Where does it come from?
We found it beneficial to rescale the data to a range that is more suitable for the model (i.e., we divide the input data by this divisor). To that end, we manually inspected the data to identify value ranges that contain valuable data, in particular because standard normalization did not work due to problems with atmospheric distortions and cloud cover.
> provide R2 in Table 2. (no need to include both MSE and RMSE)
Thank you for the suggestion, we have added R2 to Table 2.
> seperate, Appendix A
Fixed, thank you!
> What are the sources of error when estimating tall trees?
Although we do not know the exact reason, we suspect that it has to do with the following two reasons:
1. Due to the natural distribution of trees in Europe, tall trees are less common, creating a skew in the label distribution.
2. Tall trees naturally have a higher canopy density (e.g. more leaves, branches, etc.), which leads to a higher fraction of LIDAR photons/measurements not penetrating the canopy. In that case, the photons do not reach the ground and we do not have a usable measure of the tree height (this filtering is already applied by GEDI, not by us).
> Why not use stratified evaluation based on different forest types?
Thank you for this suggestion. While our paper focused on overall performance metrics, we have now included a detailed analysis comparing broadleaf and coniferous forests (see our response to reviewer qZ94). We welcome your input on additional forest categories that would be valuable to evaluate.
> What post-processing steps were used to ensure temporal consistency?
We use a quadratic-spline approach to smooth the predictions over time, but only for visualization purposes.
We hope to have addressed all your remarks. Thank you again. If you have any further questions or concerns, please let us know. | Summary: This paper introduces a novel deep learning approach for generating high-resolution, large-scale temporal canopy height maps across Europe using satellite imagery, specifically leveraging Sentinel-2 time series data and GEDI LiDAR measurements as training data. The proposed method significantly improves accuracy and resolution, delivering consistent 10-meter spatial resolution canopy height predictions from 2019 to 2022, which allows for the tracking of temporal dynamics such as deforestation and forest growth. By using a 3D U-Net model architecture with monthly temporal stacks, the model effectively captures seasonal variations and geolocation shifts, demonstrating superior performance compared to existing approaches, especially in accurately estimating the height of tall trees critical for carbon stock assessment and ecological analysis. In general, this paper makes a great contribution in the training dataset and canopy height data products.
## Update after Rebuttal
The authors' response to my questions fully clarified my concerns. This is a good paper that utilizes well-established methods in an application area that resolves practical questions in sustainability. Thus, this paper is a good fit for ICML's Application-Driven Machine Learning track and deserves acceptance.
Claims And Evidence: The authors present three central claims, all of which are supported by quantitative and qualitative results:
1. **A model capable of tracking forest height changes**: The paper features the first 10 m resolution temporal canopy height map of the European continent for 2019–2022, as shown in the Earth Engine app. Thus, comparing those annual maps can be used to track changes in forest height. In addition, Figure 6 provides additional qualitative examples of change tracking.
2. **More accurate measurements and finer spatial details than previous studies**: The performance of the model against baselines is comprehensively evaluated through the experiments in 4.3.
3. **A 12-month time series is more helpful than a single composite**: Table 2 and Table 3 substantiate this claim.
Methods And Evaluation Criteria: The evaluation protocol of this paper conforms with prior works in forest canopy height estimation. The proposed method (3D Unet for mapping tree height from satellite timeseries) also makes sense conceptually.
Theoretical Claims: N/A
Experimental Designs Or Analyses: I reviewed the experiment design, especially the validation dataset. My only concern is the possible spatial autocorrelation in the validation dataset since the validation dataset is generated by randomly selecting tiles. Is it possible that two very close tiles share similar canopy heights?
Supplementary Material: I reviewed all of the supplementary materials.
Relation To Broader Scientific Literature: The paper advances the field of tree canopy height estimation by introducing a novel deep learning model capable of producing temporal canopy height maps at 10 m resolution over large spatial scales. Unlike prior studies that mainly focused on single-year canopy height predictions, this work addresses the critical gap of modeling temporal dynamics, crucial for tracking ecological changes and carbon stocks. It builds upon previous research by using Sentinel-2 monthly image stacks instead of aggregated median composites, thereby leveraging seasonal vegetation patterns and subtle geolocation shifts to enhance accuracy. Furthermore, this approach significantly surpasses existing methodologies, such as those of Liu et al. (2023) and Tolan et al. (2024), especially in accurately estimating taller trees, which are essential for precise biomass estimations.
Essential References Not Discussed: More references to remote sensing timeseries understanding literature such as [1] can be discussed.
[1] Tarasiou, M., Chavez, E. and Zafeiriou, S., 2023. Vits for sits: Vision transformers for satellite image time series. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10418-10428).
Other Strengths And Weaknesses: This paper is well written and has great contributions in its training dataset and the final data product produced. Please see other comments and questions for the weaknesses.
Other Comments Or Suggestions: Although this paper mainly focuses on methodology and dataset, I encourage the author to include scientific conclusions about forest health or deforestation of the European continent, if any, from analyzing the trend of yearly forecast canopy height maps generated by the model.
Questions For Authors: 1. Line 152: What’s the reason for including coastal aerosol (B01) and water vapour (B09) bands?
2. Line 185: Could the authors clarify the sparsity of the labels? What percentage of the pixels have a canopy height label derived from GEDI?
3. Line 197: How are 2.56 km × 2.56 km patches created? Are they created by buffering GEDI point measurements?
4. Line 199: Do “month images” refer to month composites or one image selected within a month? If the latter, could the authors clarify the reasons for not using monthly composites?
5. Line 207: Is smoothing applied before calculating validation metrics or just for the final mapping visualization?
6. Line 240: Have the authors considered possible spatial autocorrelations in the validation dataset? For example, two randomly selected points can have similar canopy heights.
7. Have the authors considered other architectures for remote sensing timeseries such as [1]?
[1] Tarasiou, M., Chavez, E. and Zafeiriou, S., 2023. Vits for sits: Vision transformers for satellite image time series. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10418-10428).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for acknowledging the contributions of our work. Let us address your concerns and questions one by one.
> More references to remote sensing timeseries understanding literature such as [1] can be discussed.
Thank you for the suggestion. We have added additional references to our manuscript.
> Although this paper mainly focuses on methodology and dataset [...]
We agree this is important. While our main focus is providing an openly available model for others to analyze, we did conduct some initial analysis: By tracking pixels where height decreased from >8m to <5m between years (indicating potential deforestation), we found affected areas of 9,747.9 km² (2019-2020), 7,729.1 km² (2020-2021), and 15,942.5 km² (2021-2022). We have added these findings to our manuscript.
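As a toy illustration of this thresholding rule (hypothetical height arrays, not the actual maps), pixels whose height drops from above 8 m to below 5 m between two years can be masked and the pixel count converted to km² at 10 m resolution:

```python
import numpy as np

# Hypothetical 10 m canopy height maps for two consecutive years.
height_y1 = np.array([[12.0, 9.5, 3.0],
                      [7.0, 15.0, 10.0]])
height_y2 = np.array([[2.0, 9.0, 3.0],
                      [6.5, 4.0, 10.5]])

# Flag pixels that went from >8 m to <5 m (potential deforestation).
deforested = (height_y1 > 8.0) & (height_y2 < 5.0)

# Each pixel covers 10 m x 10 m = 100 m^2; convert the count to km^2.
area_km2 = deforested.sum() * 100 / 1e6
print(deforested.sum(), area_km2)  # 2 flagged pixels -> 0.0002 km^2
```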
> Line 152: What’s the reason for including coastal aerosol (B01) and water vapour (B09) bands?
We decided to include all L2A (bottom-of-the-atmosphere) information, which includes all bands except for B10. We agree that B01 and B09 might not necessarily be relevant for the task at hand. To analyze their impact, we ran additional experiments. The following table reports the results on the validation part of our dataset (L1>15m referring to the L1 loss for all labels that exceed 15m):
| Configuration | L1 (m) | L1>15m (m) | L1>20m (m) | L1>25m (m) | L1>30m (m) | L2 (m) |
|-----------------------|----------------|----------------|----------------|----------------|----------------|----------------|
| Without B01 and B09 | 1.991 ± 0.002 | 4.837 ± 0.008 | 5.476 ± 0.010 | 7.384 ± 0.008 | 11.406 ± 0.004 | 22.281 ± 0.037 |
| Including B01 and B09 | 1.992 ± 0.003 | 4.830 ± 0.014 | 5.460 ± 0.015 | 7.364 ± 0.029 | 11.384 ± 0.031 | 22.277 ± 0.053 |
As can be seen, B01 and B09 have only little impact on the results and are, hence, candidates to be removed from the set of input channels. We will discuss these findings in the updated version of our manuscript.
> Line 185: Could the authors clarify the sparsity of the labels? What percentage of the pixels have a canopy height label derived from GEDI?
GEDI measures roughly 4% of Earth's surface. For our 256x256 pixel training samples, only about 100 pixels (0.15%) have usable GEDI labels from the same year, due to noise and the need to match measurements temporally with satellite imagery.
> Line 197: How are 2.56 km × 2.56 km patches created? Are they created by buffering GEDI point measurements?
We first make sure to only take image patches from our training areas so that there is no overlap with the validation patches. We then randomly select a 2.56km x 2.56km area and load all GEDI measurements within that area. Since we only have the coordinates of the GEDI measurements, we assign them to the closest Sentinel pixel. We hope that this clarifies your question. Please let us know if that is not the case.
> Line 199: Do “month images” refer to month composites or one image selected within a month? If the latter, could the authors clarify the reasons for not using monthly composites?
When creating the 12-months image stack, we select one of the images within each month, namely the one with the least amount of cloud cover. We decided not to use monthly composites, following Wolters et al., who showed that not using composites allows the model to learn finer details, possibly due to small geolocation shifts in the satellite images.
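A minimal sketch of this selection rule, using hypothetical acquisition dates and cloud fractions: for each month, the single image with the least cloud cover is kept for the stack.

```python
# Hypothetical (date, cloud_fraction) observations grouped by month;
# for each month pick the image with the least cloud cover, mirroring
# the 12-month stack construction described above.
observations = {
    1: [("2020-01-03", 0.40), ("2020-01-18", 0.05)],
    2: [("2020-02-07", 0.10), ("2020-02-21", 0.30)],
}

stack = {m: min(imgs, key=lambda x: x[1])[0] for m, imgs in observations.items()}
print(stack)  # month 1 -> "2020-01-18", month 2 -> "2020-02-07"
```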
> Line 207: Is smoothing applied before calculating validation metrics or just for the final mapping visualization?
Smoothing is only applied for the visualizations.
> My only concern is the possible spatial autocorrelation in the validation dataset [...]
Thank you for raising this point. Indeed, there are spatial correlations between the different spatial areas. Note, however, that the learning scenario can be seen as a transductive learning setting, where one already has access to the (test) input images (but not the ground-truth labels). From that perspective, it is even valid to include the geolocation of an image stack as input. Note that the overall goal is to fill the remaining "gaps" that are not covered by GEDI groundtruth labels. We hope that this answers your question. Please let us know if there are still remaining concerns from your side.
> Have the authors considered other architectures for remote sensing timeseries such as [1]?
We have considered other segmentation architectures and backbones, but we found that the 3D extension of the U-Net architecture works surprisingly well. U-Nets are efficient and, in our setting, yield good results. We will consider exploring more complex architectures in future work.
We hope to have clarified all concerns and questions, please let us know if further clarification is needed. Thanks!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. My concerns have been addressed, and I have raised my scores accordingly. I think this is an excellent application paper and deserves acceptance. | null | null | null | null | null | null |
ReferSplat: Referring Segmentation in 3D Gaussian Splatting | Accept (oral) | Summary: This paper introduces a new task—Referring 3D Gaussian Splatting Segmentation (R3DGS), which aims to segment target objects in 3D Gaussian Splatting scenes based on natural language descriptions. The authors construct the dataset specifically for this task, named Ref-LERF, and propose a framework called ReferSplat. The method primarily builds 3D Gaussian Referring Fields, introduces a Position-aware Cross-Modal Interaction module to fuse spatial and textual information, and employs Gaussian-Text Contrastive Learning to enhance the discriminative capability of cross-modal features, achieving state-of-the-art performance on both the R3DGS and 3D open-vocabulary segmentation tasks.
Claims And Evidence: The authors claim that ReferSplat achieves SOTA performance through extensive comparisons with existing methods. The effectiveness of individual modules (e.g., PCMI and GTCL) is demonstrated.
Methods And Evaluation Criteria: Methodologically, the authors cleverly incorporate natural language guidance into the conventional 3D Gaussian Splatting framework, enabling the model to capture fine-grained segmentation details of target objects in 3D scenes. The evaluation metrics, such as mean IoU, are standard, and the constructed Ref-LERF dataset provides a reasonable test platform for the task.
Theoretical Claims: The paper presents several derivations regarding rendering formulas, cross-modal attention mechanisms, and contrastive learning. The overall theoretical derivations are consistent with commonly used methods in the field.
Experimental Designs Or Analyses: The authors perform extensive comparisons with existing methods (such as LangSplat and Grounded SAM), providing both quantitative and qualitative analyses.
However,
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: This work is closely related to the latest developments in 3D neural rendering.
However, for the R3DGS task, GOI [1] stands out from other methods by incorporating a 2D referring expression segmentation model to assist in 3D localization, making it one of the most comparable works to your proposed approach. However, the current submission does not include a comparison with GOI on the Ref-LERF dataset. I recommend conducting and reporting such a comparison—or at least providing a detailed discussion regarding how your method differs from GOI—to strengthen the experimental validation and highlight the relative advantages of your proposed framework.
[1] Goi: Find 3d gaussians of interest with an optimizable open-vocabulary semantic-space hyperplane.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strengths
1. Provides comprehensive ablation studies and quantitative comparisons, with experimental results showing significant performance improvements.
### Weaknesses
1. The model is trained separately for each scene, which might affect its scalability and generalization ability in large-scale deployments;
2. The dataset is relatively small, and more scenes will be needed in the future to validate the robustness of the method;
3. Refer to "Relation To Broader Scientific Literature"
Other Comments Or Suggestions: No.
Questions For Authors: Why does incorporating positional information into the refer feature (CLIP feature) in Eq. (7) not degrade open-vocabulary segmentation performance, but instead improve it?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback on our work: clever, reasonable, effective PCMI and GTCL design, and comprehensive ablation studies.
>**Q1&Q2: Generalization ability**
**A1:** The experiments in our main paper follow a per-scene optimization setup, which naturally limits direct generalization to unseen scenes. To address this, we explore a generalized training paradigm, as shown in **Tab. 11 of the Appendix**. In this setting, the referring feature is no longer per-scene initialized; instead, it is predicted from other Gaussian attributes such as color, opacity, and position, following a feed-forward paradigm. This design enables generalization beyond per-scene optimization. To further validate the generalization capability of our method, we conduct additional experiments on the larger-scale and more diverse ScanNet dataset. We select 30 scenes from the official training split for joint training and 5 scenes from the validation split for evaluation. Language expressions are sourced from ScanRefer. As shown in the table below, our method achieves strong performance across diverse and unseen environments, highlighting its robustness and generalization. We expect that extending our framework with scalable architectures (e.g., MVSplat [a]) will lead to further performance gains and improved applicability in real-world scenarios.
|Method|scene0011|scene0015|scene0019| scene0025|scene0030|mIoU|
|-|-|-|-|-|-|-|
|ReferSplat|15.9|19.7|27.8|18.3|21.4|20.6
[a] MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images, ECCV 2024.
>**Q3: GOI result**
**A3:** Thank you for this insightful suggestion. To further validate the effectiveness of our method, we have conducted additional experiments comparing our approach with GOI [b] on the Ref-LERF dataset. The results demonstrate that although GOI outperforms LangSplat, its performance remains notably lower than that of ReferSplat. This highlights that, although both methods utilize 2D pseudo-mask supervision, the key difference lies in how effectively the connection between 3D Gaussian points and natural language expressions is established within the 3D scene. We will revise the main paper to discuss GOI [b] and update Tab. 5 with the new comparison results shown below.
|Method|ram.|fig.| tea.| kit.|avg.|
|-|-|-|-|-|-|
|Grounded SAM|14.1|16.0|16.9|16.2|15.8|
|LangSplat|12.0|17.9|7.6|17.9|13.9|
|SPIn-NeRF| 7.3|9.7|11.7|10.3|9.8|
|GS-Grouping| 27.9|8.6|14.8|6.3|14.4|
|GOI|27.1|16.5|22.9|15.7|20.5|
|**ReferSplat**|**35.2**|**25.7**|**31.3**|**24.4**|**29.2**|
[b] GOI: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane, ACM MM 2024.
>**Q4: Position information**
**A4:** Thank you for your insightful question. Incorporating positional information into the referring feature (Eq. 7) enhances the model's ability to understand **spatial relationships** described in the referring expressions and enriches the referring feature with **geometric context** from the 3D scene. Instead of degrading performance, this geometric context helps ensure that segmentation masks accurately cover the complete target object, resulting in improved open-vocabulary segmentation performance.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed most of my concerns, and the additional experiments further demonstrate the effectiveness of their approach. Therefore, I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer naU6,
We’re truly grateful for your support and decision to raise the score. It means a lot to us and motivates us to continue improving our work. We sincerely appreciate the time and effort you devoted to evaluating our work. We will carefully incorporate your suggestions in the final revision.
Best regards,
Authors of Paper #779 | Summary: This paper formulates the Referring 3D Gaussian Splatting Segmentation (R3DGS) task, which focuses on segmenting 3D entities that correspond to a given referring-expression in the form of a language-based query. The R3DGS differs by the currently employed task formulation of open-vocabulary 3D segmentation by its focus on spatial relationships between scene objects, as well as distinguishing properties of objects. The paper proposes a 3D Gaussian Splatting based method, ReferSplat, which learns a referring field enabling segmentation in a multi-modal setting. To evaluate the proposed method, the paper shows results both on the R3DGS task, and the open-vocabulary 3D segmentation task. The paper extends the existing LERF dataset with annotations for referring expressions, and constructs a dataset for the R3DGS task, namely the Ref-LERF dataset, which is then used for the R3DGS evaluations.
### Update after rebuttal
The clarifications and discussions provided in the rebuttal have sufficiently addressed my concerns. I am still leaning towards acceptance, and will be keeping my original score (4: Accept).
Claims And Evidence: Most claims made in the submission are supported by convincing evidence and thorough analysis. However, there are a few claims that I found not fully-founded. It might be that I interpreted certain aspects incorrectly, but I believe these statements might be a bit problematic:
1. _[L010-013] "We introduce Referring 3D Gaussian Splatting Segmentation (R3DGS), a new task that focuses on segmenting target objects in a 3D Gaussian scene based on natural language descriptions.":_ Open-vocabulary segmentation methods based on 3D Gaussian splatting representation already segment target objects based on natural language descriptions, as they take a free-form query text as input. I acknowledge that the referring expression-based segmentation is a different task as it focuses more on inter-object relations and object properties, but this statement is slightly misleading as the novel aspect of the proposed task is not segmenting objects based on "natural language descriptions", instead the referring segmentation aspect.
2. _[L034-037, Column 2] - "… identify newly described objects, even when occluded or not directly visible in the novel view…":_ The illustration and the explanation are a bit confusing as these objects are indeed visible in _some_ views. The narrative of the method being able to identify and segment objects not directly visible is not fully founded.
3. _[L074-078] - "Since the text query is only introduced during inference, the final predictions solely rely on 2D rendered features in a single-view reasoning framework, limiting the model’s ability to effectively localize objects in 3D space.":_ The limitation of these methods is indeed due to the fact that they rely on 2D rendered features instead of performing the localization directly in the 3D space. However, this is not due to "the text query only being introduced during inference" as the first part of the sentence implies.
Methods And Evaluation Criteria: Proposed method is reasonable and has been designed with a meaningful thought progression, successfully targeting the 3D referring expression based segmentation task. Introducing a referring field in the representation is quite meaningful, and it addresses the limitation of existing open-vocabulary 3D Gaussian splatting methods which mainly focus on learning an implicit semantic field, often falling short on identifying inter-object spatial relations.
As there are no available datasets for evaluating 3D referring expression segmentation task in the context of 3D Gaussian splatting, this paper proposes an extension of the LERF dataset. The proposed dataset, Ref-LERF, aims to provide a reasonable evaluation benchmark for the proposed task of 3D referring expression-based segmentation for 3D Gaussian splatting. In addition to designing this dataset and performing reasonable evaluations on this dataset, the proposed method is also compared against existing benchmarks for open-vocabulary 3D segmentation for completeness. Overall, I think the evaluation is overall meaningful and through, and I have the impression that a reasonable effort was made to obtain fair comparisons.
Theoretical Claims: I did not identify any theoretical claims with proofs provided in the submission.
Experimental Designs Or Analyses: As there are no available datasets for evaluating 3D referring expression segmentation task in the context of 3D Gaussian splatting, this paper proposes an extension of the LERF dataset. Experimental analysis is presented on the proposed Ref-LERF dataset. Experimental design is generally meaningful, and I found that the claims made while discussing the experimental results are reasonable.
Supplementary Material: No supplementary material was provided apart from the appendix. I reviewed the full appendix.
Relation To Broader Scientific Literature: Referring expression segmentation is a crucial task for many robotics applications. While there is a large body of work towards this goal in the 3D scene understanding domain that primarily focus on 3D point cloud representations, I am not aware of any works addressing the same for higher-fidelity representations such as 3D Gaussian splatting-based methods. There is another line of work for addressing natural-language based segmentation of 3D Gaussian splatting representations, particularly in the context of open-vocabulary segmentation. However, the design of such methods as well as the evaluation methodology is generally more fixated on identifying objects based on their semantics. Often, those methods are evaluated using a set of text queries describing the target object- however, to the best of my knowledge none of these methods systematically evaluate for how well the method can identify objects based on their relations to other scene objects. By extending the existing LERF dataset, this submission takes a step towards a relatively less explored aspect of language-guided segmentation of 3D Gaussian splatting representations.
Essential References Not Discussed: I found that this submission generally discussed essential references in the context of open-vocabulary 3D Gaussian splatting segmentation. While I understand that this method has a main focus on 3DGS and not on 3D point cloud representations, given the really large body of work for 3D referring segmentation of point clouds, I found the discussion in L134-148 a bit underdeveloped. I think at least the datasets for this task (such as ScanRefer and MultiRefer) can be discussed further, to provide a better context on the relevancy of the proposed RefLERF dataset.
Other Strengths And Weaknesses: Strengths:
- The paper addresses an important topic (3D referring segmentation) that was not explored in the context of 3DGS representations to the best of my knowledge.
- The paper is generally written with clarity.
- Qualitative examples are interesting, I appreciated that the examples in Figure 5 generally feature expressions where the target object's semantics are not explicitly stated, showing the strengths of the method for identifying objects mostly based on object-relations.
Weaknesses:
- If I understand correctly, the method requires the generation of a set of referring expression sentences first, which are then used during training. However, it is not clear to me how or when these sentences are generated. How are these sentences generated?
- I am unclear about how well the method can generalize to new referring expressions if it is trained with a set of pre-written expressions. I understand that the contrastive module is introduced for this purpose, but I do not understand how it is possible to generalize if the model never sees certain types of object relationship descriptors.
- L196-200 state that the overall similarity is measured by summing per-word similarities. But how are tokens representing negative relations accounted for? For instance if the query is "the object far away from the window", I still suspect that the model will identify an object right by the window. How are such cases handled in the model implicitly?
- I understand that based on the training referring expression sentences, pseudo-GT 2D masks are generated using Grounded-SAM. This means that the method indeed relies on the use of 2D masks. However there are some statements claiming that the proposed method circumvents using masks. This is not fully founded, and a bit confusing.
- For extracting word and sentence features, the method employs BERT embeddings (L314, right column) despite using CLIP distillation at another part of the method (L196, right column). I was unable to see any discussion or an ablation regarding the choice of BERT instead of CLIP to extract features from the text input.
Other Comments Or Suggestions: 1. _[Fig. 1 caption] "The one sanding on the..."_ should be _"standing"_
2. _[L156] "Each Gaussian g_i is parameterized its mean position..."_ should be _"by its mean position..."_
3. _[L250-251] "The proposed Position-aware Cross-Modal Interaction modules establishes"_ - should be _"module establishes"_
4. _[L040-042, Column 2] - "A straightforward baseline for R3DGS is to adapt existing open-vocabulary 3D scene understanding methods by replacing the open-vocabulary class names with natural language expressions.":_ I agree with this statement in the sense that it accurately identifies how a straightforward baseline can be formed from open-vocabulary 3D scene understanding methods. However, the phrasing "open-vocabulary class names" is not very accurate.
5. The impact statement required by ICML is missing in this submission.
Questions For Authors: Please see the questions listed in Strengths and Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback on our work: meaningful, reasonable, and thorough.
>**Q1: Claim 1**
**A1:** In established 2D/3D referring expression segmentation (RES) tasks, referring segmentation involves segmenting target objects based on free-form natural language expressions that often include spatial relationships or object attributes. In contrast, open-vocabulary segmentation typically focuses on identifying all objects of a given category, usually specified by a single **category name**. As shown in Fig. 4 of our paper, the average sentence length in the LERF-OVS dataset is approximately **1.5 words**, with the vast majority of queries being category names (e.g., “cup”) and very few containing descriptive or relational information (e.g., “tea in a glass”). This highlights a clear difference in the nature of the language inputs between the two tasks.
To avoid confusion and better reflect the novelty of our task, we have revised the sentence to:
“*a new task that focuses on segmenting target objects in a 3D Gaussian scene based on natural language descriptions that often contain spatial relationships or object attributes.*”
>**Q2: Claim 2**
**A2:** While the elephant is clearly visible in some training views, it is **not directly visible** from the **novel camera viewpoint** during inference in Fig. 1. We highlight this to emphasize a key difference from 2D RES, which relies solely on single-view information and struggles to handle such invisible scenarios. In contrast, the proposed ReferSplat leverages multi-view information to construct a holistic 3D scene representation, enabling robust reasoning even when target objects are occluded or not directly visible. For further clarification, please refer to L128–137 (second column). We will revise the corresponding description to make this point clearer.
>**Q3: Claim 3**
**A3:** We agree with your comment and have revised the sentence for better clarity. The updated version is:
*“Existing methods rely on matching the text query with 2D rendered features instead of performing localization directly in 3D space, which limits their performance in complex scenarios.”*
>**Q4: Related work**
**A4:** Thank you for the suggestion. We will expand the related work to provide more discussion on datasets like ScanRefer and MultiRefer.
>**Q5: Referring expression sentences generation**
**A5:** The referring expressions are manually annotated by human annotators prior to training and are included in our Ref-LERF dataset.
>**Q6: Generalize to new referring expressions**
**A6:** For generalization to new expressions, our method uses BERT embeddings for their strong language understanding and generalization, gained via pre-training on diverse text. By aligning 3D Gaussian features with BERT representations via Gaussian-Text modeling, the model can interpret and generalize to unseen object relationship descriptors. We will clarify this in the revision. For generalization to new scenes and new datasets, please refer to our responses to Reviewer 2Vdk Q4 and Reviewer t3nW Q5, respectively.
>**Q7: Overall similarity**
**A7:** Thank you for the insightful question. Our method aggregates per-word similarities, allowing the model to account for the influence of individual terms, including spatial relations like “far away.” Visualizations show that such words activate relevant regions, suggesting the model captures their importance in segmentation decisions. Additionally, we incorporate Gaussian-Text Contrastive Learning on global sentence-level features to enhance **holistic comprehension of the sentence context**, including complex spatial relationships. This combination of **local (word-level) and global (sentence-level) cues** enables the model to better interpret both positive and negative spatial terms. We will clarify this aspect in the revision.
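As an illustrative sketch of the local/global scoring described above (the blending weight `alpha` and the cosine-similarity formulation are our assumptions here, not the paper's exact implementation):

```python
import numpy as np

def overall_similarity(gaussian_feat, word_feats, sentence_feat, alpha=0.5):
    """Blend an averaged per-word similarity (local cue) with a
    sentence-level similarity (global cue) for one 3D Gaussian.

    gaussian_feat: (d,) feature of one Gaussian
    word_feats: (n_words, d) token embeddings
    sentence_feat: (d,) pooled sentence embedding
    alpha is a hypothetical blending weight for illustration only.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    word_score = sum(cos(gaussian_feat, w) for w in word_feats) / len(word_feats)
    sent_score = cos(gaussian_feat, sentence_feat)
    return alpha * word_score + (1 - alpha) * sent_score
```

In this sketch, a spatial term like "far away" contributes its own word-level similarity, while the sentence-level term carries the holistic context.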
>**Q8: Mask usage**
**A8:** Our method does employ pseudo masks generated by Grounded-SAM during training. The intended point is to emphasize that our method eliminates the need for manually annotated ground-truth masks, which are often costly and impractical. We will revise any ambiguous statements in the paper to more accurately reflect this and avoid confusion.
>**Q9: CLIP result**
**A9:** As suggested, we conduct experiments comparing BERT and CLIP embeddings for language features in R3DGS. Results show that BERT consistently outperforms CLIP. This is likely because CLIP focuses more on noun categories, while referring expressions often involve spatial and attribute-based descriptions. We will include more discussion and results in the revision.
|Method|ram.|fig.| tea.| kit.|avg.|
|-|-|-|-|-|-|
|BERT|35.2| 25.7 |31.3 |24.4 |29.2|
|CLIP|23.5|23.2|26.2 | 21.0|23.5|
>**Q10: Typos**
**A10:** Thank you! We will carefully proofread and correct any typos or ambiguous phrasing in the revision.
>**Q11: Impact statement**
**A11:** We will add the impact statement in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. The clarifications and discussions provided in the rebuttal have sufficiently addressed my concerns. I am still leaning towards acceptance, and will be keeping my original score (4: Accept).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Btgt,
Thank you for your thoughtful review and great support. We are glad that our rebuttal has addressed your concerns, and we sincerely appreciate the time and effort you devoted to evaluating our work. We will carefully incorporate your suggestions in the final revision.
Best regards,
Authors of Paper #779
---
Summary: The paper introduces ReferSplat, a framework for Referring 3D Gaussian Splatting Segmentation (R3DGS), aiming to segment 3D objects based on **natural language descriptions**, even when occluded or not directly visible. Key contributions include:
1. **R3DGS Task**: A new task requiring 3D multi-modal understanding and spatial reasoning.
2. **Ref-LERF Dataset**: A dataset with 295 language expressions emphasizing spatial relationships and object attributes.
3. **ReferSplat Framework**: Combines 3D Gaussian Referring Fields, Position-aware Cross-Modal Interaction (PCMI), and Gaussian-Text Contrastive Learning (GTCL) to align language and 3D spatial features.
4. **State-of-the-Art Results**: Outperforms existing methods on R3DGS and 3D open-vocabulary segmentation benchmarks.
Claims And Evidence: - ReferSplat’s superiority over baselines (LangSplat, SPIn-NeRF) is validated through quantitative results (Tables 5-7).
- Ablation studies (Tables 1-4) confirm the effectiveness of PCMI and GTCL.
- Pseudo mask generation via confidence-weighted IoU improves mask quality (Table 3).
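For concreteness, one plausible reading of confidence-weighted IoU selection among candidate pseudo masks could be sketched as follows (the scoring rule and names here are illustrative assumptions, not the paper's exact procedure):

```python
def select_pseudo_mask(candidates):
    """candidates: list of (mask, confidence), where mask is a set of pixel ids.

    Score each candidate mask by the confidence-weighted IoU it achieves
    against every other candidate, then keep the highest-scoring one.
    Illustrative sketch only.
    """
    def iou(a, b):
        union = len(a | b)
        return len(a & b) / union if union else 0.0

    best, best_score = None, -1.0
    for i, (m_i, _) in enumerate(candidates):
        score = sum(c_j * iou(m_i, m_j)
                    for j, (m_j, c_j) in enumerate(candidates) if j != i)
        if score > best_score:
            best, best_score = m_i, score
    return best
```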
Methods And Evaluation Criteria: - **Strengths**:
- The integration of language features into 3D Gaussians via referring fields is novel and well-motivated.
- PCMI and GTCL address spatial reasoning and semantic disambiguation effectively.
- This paper is well-written.
- **Weaknesses**:
- Ref-LERF’s limited scale (only LERF-OVS dataset, including 4 scenes and 59 objects) and lack of diversity comparison to existing datasets (e.g., Scannet/Scannet++) may limit generalizability.
- Evaluation focuses on mIoU but omits metrics like precision/recall for occlusion cases.
Theoretical Claims: No
Experimental Designs Or Analyses: - **Strengths**:
- Comprehensive ablation studies validate each component.
- Failure case analysis (Appendix E) highlights practical challenges.
- **Weaknesses**:
- Dataset splits (train/test) and generalization to unseen scenes are unclear.
- without training & rendering time comparison
- without storage comparison
Supplementary Material: No demo submitted.
Relation To Broader Scientific Literature: I'm not familiar with Segmentation + 3DGS, so I have no idea whether Refersplat is related to previous papers.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback on our work: novel, well-motivated, effective PCMI and GTCL design, and well-written.
>**Q1: Diversity comparison to existing datasets**
**A1:** The comparison to ScanRefer and Multi3DRefer is shown in the table below. Due to the limited availability of large-scale 3DGS datasets, our Ref-LERF is built upon the widely used LERF dataset, extended with referring sentence annotations to support the R3DGS task. While the number of scenes is limited, Tab. 5 of the main paper shows that our method achieves strong performance, demonstrating its effectiveness. Furthermore, as discussed in our response to Q4, our method also generalizes well to larger and more diverse datasets, e.g., ScanNet, under the generalized training setting.
|Dataset|Year|Pub.|#Object|#Expression|#scenes|data format|
|-|-|-|-|-|-|-|
|ScanRefer|2020|ECCV|11,046|51,583|800|3D scan|
|Multi3DRefer|2023|ICCV|11,609 |61,926|800|3D scan|
|Ref-LERF|2025|-|59|295|4|Multi-view Images|
>**Q2: Evaluation metric**
**A2:** To provide a more comprehensive evaluation, we report mAcc@0.25, which measures the percentage of predictions with IoU > 0.25 and is commonly used in 3D point cloud referring segmentation tasks. This metric reflects performance in practical scenarios, including occlusions. As shown in the table below, ReferSplat achieves a mean mAcc. of 38.4 on Ref-LERF, significantly outperforming LangSplat and demonstrating our method’s robustness. We will include this metric and the results in the revision.
|Method|ram.|ram.|fig.|fig.|tea.|tea.|kit.|kit.|avg.|avg.|
|-|-|-|-|-|-|-|-|-|-|-|
|Metric|mIoU|mAcc.|mIoU|mAcc.|mIoU|mAcc.|mIoU|mAcc.|mIoU|mAcc.|
|LangSplat |12.0|18.4|17.9|25.4| 7.6|10.2| 17.9|27.5| 13.9|20.4|
|GS-Grouping |27.9|30.3 |8.6|11.1| 14.8|16.9| 6.3|13.8| 14.4|18.0|
|**ReferSplat**|**35.2**|**50.0**|**25.7**|**31.7**| **31.3**|**33.9**| **24.4**|**37.9**|**29.2**|**38.4**|
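The two metrics reported above can be computed from per-expression IoU values; a minimal sketch (function name is ours):

```python
def miou_and_macc(ious, thresh=0.25):
    """ious: per-expression IoU values in [0, 1].

    Returns (mIoU, mAcc@thresh): the mean IoU, and the fraction of
    predictions whose IoU exceeds the threshold (0.25 by default,
    as in mAcc@0.25).
    """
    miou = sum(ious) / len(ious)
    macc = sum(1 for x in ious if x > thresh) / len(ious)
    return miou, macc
```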
>**Q3: Dataset splits**
**A3:** The dataset comprises 772 training images and 22 testing images, with 236 language descriptions used for training and 59 for testing, totaling 295 descriptions. We will add the detailed dataset split information in the revision.
>**Q4: Generalization to unseen scenes**
**A4:** The experiments in our main paper follow a per-scene optimization setup, which naturally limits direct generalization to unseen scenes. To address this, we explore a generalized training paradigm, as shown in **Tab. 11 of the Appendix**. In this setting, the referring feature is no longer per-scene initialized; instead, it is predicted from other Gaussian attributes such as color, opacity, and position, following a feed-forward paradigm. This design enables generalization beyond per-scene optimization. To further validate the generalization capability of our method, we conduct additional experiments on the larger-scale and more diverse ScanNet dataset. We select 30 scenes from the official training split for joint training and 5 scenes from the validation split for evaluation. Language expressions are sourced from ScanRefer. As shown in the table below, our method achieves strong performance across diverse and unseen environments, highlighting its robustness and generalization. We expect that extending our framework with scalable architectures (e.g., MVSplat [a]) will lead to further performance gains and improved applicability in real-world scenarios.
|Method|scene0011|scene0015|scene0019| scene0025|scene0030|mIoU|
|-|-|-|-|-|-|-|
|ReferSplat|15.9|19.7|27.8|18.3|21.4|20.6
[a] MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images, ECCV 2024.
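The feed-forward paradigm described above — predicting the referring feature from other Gaussian attributes rather than optimizing it per scene — could be sketched as a small MLP head (all dimensions and weights here are illustrative assumptions, not the actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a small feed-forward head: 3 (color) + 1 (opacity)
# + 3 (position) inputs -> hidden layer -> referring feature. In the
# generalized setting, such a head replaces per-scene feature initialization.
W1 = rng.normal(size=(7, 32))
W2 = rng.normal(size=(32, 16))

def referring_feature(color, opacity, position):
    x = np.concatenate([color, [opacity], position])  # (7,) attribute vector
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    return h @ W2                                     # (16,) referring feature

feat = referring_feature(np.array([0.5, 0.2, 0.1]), 0.9,
                         np.array([1.0, 0.0, -1.0]))
```

Because the head is shared across scenes, it can in principle produce referring features for Gaussians of scenes never seen during training.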
>**Q5&Q6: Computational costs**
**A5:** We conduct experiments on the ramen scene from the Ref-LERF dataset using the same NVIDIA A6000 GPU to compare the computational cost of our ReferSplat against SOTA methods. Results show that ReferSplat achieves significantly lower computational complexity and faster inference speed than LangSplat. While GS-Grouping excels in storage and FPS, ReferSplat outperforms all methods in segmentation performance. ReferSplat also has the shortest training time, thanks to a lightweight preprocessing pipeline that avoids costly operations like language feature compression (LangSplat) or mask association with video tracking methods (GS-Grouping). These results demonstrate that ReferSplat’s compact, efficient design is well-suited for real-world and large-scale 3D applications. We will include these comparisons in the revision.
|Method|Training↓|FPS↑|Storage↓|mIoU↑|
|-|-|-|-|-|
|LangSplat|176min|12.4|46MB|13.9|
|GS-Grouping|66min|**54.2**|**2.3MB**|14.4|
|**ReferSplat**|**58min**|26.8|3.3MB|**29.2**|
---
Rebuttal Comment 1.1:
Comment: I have reviewed all the rebuttal comments, and my concerns have been satisfactorily addressed. I have no further questions at this time.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 2Vdk,
Thank you sincerely for your time and thoughtful review. We’re glad to hear that our rebuttal has satisfactorily addressed your concerns.
If you find our clarifications reasonable, we would greatly appreciate your consideration in updating the score to Accept (4). Your support would mean a lot to us and would strongly encourage our continued work in this direction.
Best regards,
Authors of Paper #779
---
Summary: - This paper introduces Referring 3D Gaussian Splatting Segmentation (R3DGS), a task aimed at segmenting target objects in a 3D Gaussian scene based on natural language descriptions.
- The proposed method addresses key challenges, including identifying occluded objects in novel views.
- The authors present Ref-LERF, for the proposed task.
- The framework integrates 3D Gaussian referring fields, a position-aware cross-modal interaction module, and Gaussian-Text Contrastive Learning to improve spatial reasoning and enhance fine-grained understanding of natural language descriptions.
## update after rebuttal
I maintain my original rating.
Claims And Evidence: The authors extend the existing LeRF dataset by incorporating expressive annotations, providing five descriptions of varying token lengths for each object. Figure 4(b) showcases the dataset’s increased complexity.
Methods And Evaluation Criteria: Yes, the proposed baselines and benchmark datasets are logical.
Theoretical Claims: There is no proof in the manuscript.
Experimental Designs Or Analyses: Yes, ablation studies in Section 4.3 discuss different design choices.
Supplementary Material: Yes, I reviewed all the content in the Appendix.
Relation To Broader Scientific Literature: This paper advances the challenge of 3D segmentation by integrating natural language descriptions for object identification, even in cases where objects are occluded or invisible from a single view. To support this task, the authors introduce Ref-LERF, a novel dataset specifically designed for language-guided 3D segmentation. This contribution is valuable to the scientific community, enabling more robust and interpretable scene understanding.
Essential References Not Discussed: All the related references are cited in the manuscript.
Other Strengths And Weaknesses: **Strengths**
- **[S1] Dataset Contribution**: The creation of the Ref-LERF dataset provides valuable resources for future research for the proposed task.
- **[S2] Outperforms SOTA methods**: The proposed model achieves state-of-the-art performance on the newly introduced R3DGS task and existing 3D open-vocabulary segmentation benchmarks.
- **[S3]** Unlike retrieval-based matching between rendered semantic features and text embeddings, the relationship is directly modelled in the proposed method.
- **[S4] Exhaustive ablations**: The authors provide thorough details regarding design choices, including the number of input views, the dimension of referring features and selection strategies for positive referring features.
**Weaknesses**
- **[W1] Detailed Evidence**: The paper lacks detailed evidence on the model's performance when handling highly ambiguous or incomplete language queries, which may limit its practicality in real-world applications. While Gaussian-Text Contrastive Learning is introduced to address ambiguities, it could still lead to confusion in referring segmentation for different objects. Additional ablations, videos, or novel view evaluations would strengthen the paper's claims and provide clearer validation of its effectiveness.
- **[W2] Training time**: The paper does not thoroughly address the method's training and inference time. The method's complexity, particularly the position-aware cross-modal interaction module, may result in high computational costs, potentially limiting its feasibility for large-scale or real-time 3D environments.
- **[W3] Difficult with sudden viewpoint changes**: The authors state that the model struggles with significant viewpoint changes and perspective shifts. However, how do other baselines perform under these conditions? Is this limitation specific to the proposed method, or is it a broader challenge in the field?
Other Comments Or Suggestions: -
Questions For Authors: - Fig 1. The elephant is clearly visible in the scene, so why do the authors claim that it is not?
- How would the model perform on datasets like MessyRooms, where some scenes contain up to 1,000 objects? Given that manual annotation and descriptions are impractical in such cases, how can this challenge be addressed?
- The approach relies heavily on pseudo-ground truth masks. How does the proposed method ensure that errors in these masks do not negatively impact the model's performance?
- Could the authors provide more qualitative results or supporting videos showcasing the model's performance from different novel views? This would further strengthen the paper's claims and demonstrate the method's effectiveness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback on our work: dataset contribution, SOTA result, relationship modeling, and exhaustive ablations.
>**Q1&Q7: Detailed evidence like video**
**A1:** We have provided additional qualitative video results at **[ReferSplat.mp4](https://anonymous.4open.science/api/repo/ReferSplat-779/file/ReferSplat.mp4)**, which include:
* Cases involving ambiguous language queries
* Cases with incomplete language input
* Performance under different novel views
* Performance on significant viewpoint and perspective shift
These videos demonstrate our method’s ability to handle diverse and challenging referring expressions, showcasing its robustness to ambiguity and incomplete descriptions. The qualitative results further validate the effectiveness of our method and its potential for real-world applications.
>**Q2: Computational costs**
**A2:** We conduct experiments on the ramen scene from the Ref-LERF dataset using the same NVIDIA A6000 GPU to compare the computational cost of our ReferSplat against SOTA methods. Results show that ReferSplat achieves significantly lower computational complexity and faster inference speed than LangSplat. While GS-Grouping excels in storage and FPS, ReferSplat outperforms all methods in segmentation performance. ReferSplat also has the shortest training time, thanks to a lightweight preprocessing pipeline that avoids costly operations like language feature compression (LangSplat) or mask association with video tracking methods (GS-Grouping). These results demonstrate that ReferSplat’s compact, efficient design is well-suited for real-world and large-scale 3D applications. We will include these comparisons in the revision.
|Method|Training↓|FPS↑|Storage↓|mIoU↑|
|-|-|-|-|-|
|LangSplat|176min|12.4|46MB|13.9|
|GS-Grouping|66min|**54.2**|**2.3MB**|14.4|
|**ReferSplat**|**58min**|26.8|3.3MB|**29.2**|
>**Q3: Difficult with sudden viewpoint changes**
**A3:** This is a broader challenge in the field, not specific to our method. Baselines like LangSplat and GS-Grouping also show performance degradation under significant viewpoint changes and perspective shifts, as shown in the **[ReferSplat.mp4](https://anonymous.4open.science/api/repo/ReferSplat-779/file/ReferSplat.mp4)**. Future work will focus on developing robust multimodal representations and improving global scene understanding to enhance robustness under extreme viewpoint variations.
>**Q4: Elephant visible?**
**A4:** While the elephant is clearly visible in some training views, it is **not directly visible** from the **novel camera viewpoint** during inference in Fig. 1. We highlight this to emphasize a key difference from 2D referring expression segmentation (RES), which relies solely on single-view information and struggles to handle such invisible scenarios. In contrast, the proposed ReferSplat leverages multi-view information to construct a holistic 3D scene representation, enabling robust reasoning even when target objects are occluded or not directly visible. For further clarification, please refer to L128–137 (second column). We will revise the corresponding description to make this point clearer.
>**Q5: Messy Rooms dataset evaluation**
**A5:** Drawing from experience in 2D/3D RES tasks, models trained on sufficiently diverse datasets can generalize well and exhibit strong zero-shot capabilities, as exemplified by models like Grounded-SAM. As shown in Appendix Tab. 11, our model shows promising generalization results under the generalized training setting. In particular, it generalizes well to unseen scenes on the ScanNet dataset, which contains numerous objects and diverse scene layouts (see Reviewer 2Vdk Q4). Building upon this foundation, joint training on large-scale, diverse datasets such as ScanRefer, Ref-LERF, and MultiRefer presents a viable path toward adapting our model to more complex environments like Messy Rooms, where manual annotation and detailed descriptions are impractical. The above analysis highlights the scalability and cross-dataset generalization potential of our approach in real-world applications.
>**Q6: Pseudo mask error**
**A6:** To reduce the impact of errors from pseudo masks, we employ a two-stage optimization strategy (as described in L310-316), iteratively refining mask predictions during training. As shown in Tab. 1 of the main paper, this two-stage strategy outperforms the one-stage pipeline, demonstrating its effectiveness in mitigating the impact of noisy pseudo masks and ensuring robust overall performance. It is worth noting that the default results are based on the one-stage pipeline for a fair comparison with previous methods.
---
Rebuttal Comment 1.1:
Comment: I have reviewed all the rebuttal comments, and my queries have been satisfactorily addressed. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer t3nW,
Thank you for your thoughtful review and positive acknowledgment. We sincerely appreciate your constructive feedback, which has helped us improve the clarity and quality of our paper. We will carefully incorporate your suggestions in the final revision.
Best regards,
Authors of Paper #779
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Accept (poster)
---
Summary: The paper investigates modality imbalance issues in existing VLMs by designing multiple tasks with diverse settings. Comprehensive experiments reveal interesting findings, and gradient analysis further illustrates how different settings impact the final results.
Claims And Evidence: Mostly, but with some caveats. While the experimental results support the idea that modality imbalance exists in VLMs and that Image-via-Text conversion helps alleviate this, the evidence could be more comprehensive. For instance, the paper focuses on synthetic tasks, which are controlled and may not fully represent the complexity and variety of real-world visual reasoning tasks. Also, the performance on the HARD-image tasks, even with the proposed strategies, is still far from perfect. This indicates that while the strategies help, they may not provide a complete solution to the problem, and there might be other underlying issues that are not addressed in the paper.
Methods And Evaluation Criteria: Yes, but limited by synthetic tasks. The methods and evaluation criteria (s2h generalization) are reasonable for controlled experiments. However, relying on synthetic tasks like Table Readout and Grid Navigation might limit the generalization of the findings. These tasks are constructed to highlight specific issues (such as reasoning across modalities), but they do not reflect the real-world diversity of tasks VLMs are typically applied to. More diverse, real-world tasks would provide a better test of the proposed methods' effectiveness and generalizability.
Theoretical Claims: No, I did not identify any formal proofs within the text. However, the paper discusses theoretical aspects like gradient alignment and the internalization of image-to-text conversion, which are supported by experimental data and loss function analysis. These are more empirical observations than formal mathematical proofs.
Experimental Designs Or Analyses: Sound but with limitations. The experimental designs are reasonable, but there are several points of concern:
- Task limitations: As mentioned earlier, the tasks are quite specific and synthetic. The paper could benefit from including more varied tasks or even applying the methods to existing, large-scale benchmarks like VQA (Visual Question Answering) or other real-world multimodal reasoning benchmarks.
- Modality imbalance: While the paper focuses on mitigating modality imbalance, the results show that the gap between text and image reasoning remains significant, even with the proposed methods. This suggests that the current approaches (Image-via-Text and Mix supervision) may not fully solve the problem. In fact, the model's heavy reliance on image-to-text conversion at inference time indicates that while the training can alleviate the gap, it doesn't eliminate it, which may limit the practical applicability of the approach.
- Inference time cost: One major issue that the paper glosses over is the increased inference time cost due to the image-to-text conversion. While the authors acknowledge this, they do not explore alternative strategies to mitigate this overhead, which could be a significant limitation in real-world applications.
- Gradient alignment study: The paper’s analysis of gradient alignment and its impact on S2H generalization is interesting but lacks depth. While the alignment scores provide some insight into how well the model is learning to generalize across modalities, the paper doesn't go into much detail on how these gradients are computed or how reliable they are as a measure of generalization. Moreover, this analysis could benefit from more ablations or sensitivity analyses to assess the robustness of the observed improvements.
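One common way to quantify the gradient alignment discussed above is the cosine similarity between flattened gradient vectors from two tasks or modalities; a minimal sketch (the function name and the per-parameter-list interface are our assumptions):

```python
import numpy as np

def gradient_alignment(grads_a, grads_b):
    """Cosine similarity between two gradient collections.

    grads_a, grads_b: lists of per-parameter gradient arrays (e.g. from a
    text-input batch and an image-input batch). Values near 1 indicate the
    two objectives push parameters in similar directions; near 0, they are
    roughly orthogonal.
    """
    a = np.concatenate([g.ravel() for g in grads_a])
    b = np.concatenate([g.ravel() for g in grads_b])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```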
Supplementary Material: I primarily focus on reviewing the details of training and task introduction.
Relation To Broader Scientific Literature: no
Essential References Not Discussed: no
Other Strengths And Weaknesses: **Strengths:**
1. **Originality and Innovation**:
- The paper introduces a novel perspective on mitigating modality imbalance in Vision-Language Models (VLMs), which is a relatively underexplored area. The idea of using simple-to-hard (S2H) generalization and specifically the concept of training with Image-via-Text supervision for improving VLMs’ reasoning capabilities across modalities is an innovative approach. This contribution is significant as it goes beyond just testing VLMs on a standard benchmark; it provides a systematic methodology to address a key issue in VLM performance.
- The idea of "internalizing" the image-to-text conversion is also quite creative, offering an elegant solution to reduce inference time, even if the initial training strategy is more costly. This speaks to a potential real-world application of reducing the computational overhead in production systems without sacrificing performance.
2. **Experimental Rigor**:
- The paper presents a comprehensive set of experiments with well-defined tasks, which allow the authors to study the issue of modality imbalance in great detail. The ablation studies, gradient alignment analysis, and task-specific results provide useful insights into how various training strategies impact performance. These experiments help demonstrate the practical effectiveness of the proposed methods.
3. **Clarity**:
- Overall, the paper is well-structured and presents its methods, experimental design, and results in a clear and logical manner. The methodology is sufficiently detailed for readers to follow the steps and replicate experiments, which is essential for scientific transparency. Figures and diagrams complement the text and are helpful for understanding complex ideas like the training strategies and results.
**Weaknesses:**
1. **Limited Applicability of Synthetic Tasks**:
- While the synthetic tasks used in the experiments are designed to probe specific aspects of VLM reasoning, they do not fully capture the complexity of real-world multimodal tasks. This limits the external validity of the results. Real-world applications often involve much more varied, noisy, and dynamic data, which may not be sufficiently represented by controlled tasks like Table Readout or Grid Navigation. As a result, the paper could have explored how these methods perform on real-world benchmarks such as VQA.
2. **Inference Time Cost**:
- The increased inference time due to image-to-text conversion remains a significant concern. While the paper addresses this cost, it doesn't fully explore how this trade-off might impact practical deployments, especially in environments where latency or real-time responses are crucial. A more detailed analysis of how to mitigate this overhead would make the paper more practical.
3. **Lack of Theoretical Depth**:
- The paper focuses heavily on empirical results, which are certainly valuable. However, there is a lack of deep theoretical exploration behind the proposed methods. For example, while the paper discusses gradient alignment, it doesn’t offer a formal mathematical formulation or sufficient explanation of why this approach works. The relationship between gradient alignment and generalization across modalities could have been made more rigorous.
4. **Modality Imbalance Still Present**:
- Despite the promising results, the paper acknowledges that a gap remains in generalization performance between text and image inputs, even with the proposed strategies. This indicates that the solution is not fully comprehensive, and the challenge of modality imbalance is far from being solved. A deeper exploration of why some gaps remain and further refinement of the approach would strengthen the paper's claim.
5. **Experimental Results on Large-Scale Tasks**:
- The experiments rely on relatively smaller synthetic datasets and do not provide results on large-scale, real-world benchmarks. While this helps control for specific variables, the results may not generalize well to larger, more complex datasets. It would be beneficial to see how the proposed strategies scale when applied to more realistic data, where other complexities such as noise, data sparsity, and multimodal interactions come into play.
6. **Insufficient Discussion on Limitations**:
- While the paper briefly mentions limitations such as inference cost and the restricted scope of the tasks, it doesn't delve deeply into potential shortcomings of the proposed approach. A more balanced discussion of when and where these methods might fail would provide readers with a clearer understanding of the boundaries of the work.
Other Comments Or Suggestions: no
Questions For Authors: 1. The paper relies on synthetic tasks like Table Readout and Grid Navigation to study modality imbalance. How do you justify using these tasks, and how well do you think the findings apply to real-world VLM applications? Can we generalize the results beyond these controlled tasks?
2. Even with the strategies you’ve proposed, there’s still a noticeable S2H generalization gap between text and image inputs. What do you see as the remaining challenges in addressing this gap, and are there any components or techniques missing that could help achieve full generalization?
3. You mention that the image-to-text conversion method helps but comes with increased inference time. What are the real-world implications of this, and do you have any suggestions or strategies for minimizing the extra cost during inference, especially in production environments?
4. The gradient alignment study is an interesting part of the paper, but I’m curious about how reliable those alignment scores are as a measure of S2H generalization. Have you tested their stability across different datasets or training conditions, and do you think they are robust enough to be a key indicator of generalization?
5. Lastly, how do you generate the Chain-of-Thought (CoT) data for the tasks you’ve presented? Could you walk us through the process a bit more?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Authors' Response to Reviewer NF8o
We thank the reviewer for their time and effort to review our paper. Please find responses to your comments and questions below.
# Questions
> Experiments are restricted to Synthetic Tasks
We use synthetic tasks to enable controlled studies of modality imbalance and simple-to-hard generalization in VLMs. These tasks are carefully designed to require diverse skills such as OCR, spatial navigation, and visual reasoning, which are essential to real-world VLM applications. As shown in **Table 7 (Appendix I.1)** (referenced in lines 048–049), frontier VLMs struggle with these tasks, suggesting that even simplified settings can reveal gaps in current models. Importantly, incorporating these synthetic tasks into pre-training improves performance on a broad range of real-world benchmarks, as shown in **Table 6 (Appendix G.6)** (referenced in lines 427–428). Thus, the synthetic tasks we use are relevant to real-world applications and can be used to motivate training choices that boost a VLM’s general abilities.
> What are the challenges for the remaining modality gap?
Thank you for the interesting question. Our work highlights a modality gap in simple-to-hard (S2H) generalization between text and image inputs, which we attribute in part to current VLM training paradigms. Current VLMs rely on adapter-based architectures that loosely integrate small visual encoders with large pre-trained LLMs, which have been pre-trained separately on image and text pre-training data.
Our approach uses image-to-text conversion to better align image inputs with the model’s strong capabilities in language, partially mitigating this gap. However, fully closing it likely requires rethinking the model design itself. Moving beyond adapter-based setups towards early-fusion architectures or joint multi-modal pretraining could help models learn more unified representations and improve generalization across modalities.
> How do the authors propose to mitigate additional inference cost with image-to-text conversion
We would like to clarify that our proposed approaches, the Mix, Mix+, and Align-Mix+ supervision types, use image-to-text conversion only while training the model (please see lines 151-152, 245-247, 256-257). During inference, models trained with these supervision types **do not explicitly perform image-to-text conversion**. Please see **Figures 3 and 6** in the main paper and lines 205-209, 257-264 for discussions of the lengths of inference-time generations for Mix, Mix+, and Align-Mix+ supervision.
> Further analysis of gradient alignment for more models and datasets
Thanks for the question! We evaluated gradient alignment on the Consecutive Table Readout and Table Readout datasets. Due to the high cost of saving frequent checkpoints, we did not extend these studies to other settings. Instead, we focused on fine-grained alignment measures (see Figures 17–20) and provided theoretical support in Theorems H.1 and H.2. Together, these offer strong initial evidence for the robustness of our alignment metric as a generalization indicator.
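The alignment metric itself is not restated in this thread; a common choice, consistent with the discussion here, is the cosine similarity between the gradients induced by two groups of examples at a shared parameter point. Below is a minimal, self-contained sketch on a toy least-squares model — the task setup and all names are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grad_mse(w, X, y):
    """Gradient of (1/2n) * sum_i (x_i . w - y_i)^2 with respect to w."""
    n, d = len(X), len(w)
    g = [0.0] * d
    for xi, yi in zip(X, y):
        err = dot(xi, w) - yi
        for j in range(d):
            g[j] += err * xi[j] / n
    return g

def alignment(g1, g2):
    """Cosine similarity between two flattened gradient vectors."""
    return dot(g1, g2) / math.sqrt(dot(g1, g1) * dot(g2, g2))

random.seed(0)
d, n = 5, 200
w_true = [random.gauss(0, 1) for _ in range(d)]
w = [0.0] * d  # measure alignment at a shared (here: zero) parameter point

# Two batches of the "same" underlying task at different hardness (inputs
# scaled up), and one batch with unrelated labels as a control.
X_simple = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
X_hard = [[3 * random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y_simple = [dot(x, w_true) for x in X_simple]
y_hard = [dot(x, w_true) for x in X_hard]
y_unrelated = [random.gauss(0, 1) for _ in range(n)]

a_related = alignment(grad_mse(w, X_simple, y_simple),
                      grad_mse(w, X_hard, y_hard))
a_unrelated = alignment(grad_mse(w, X_simple, y_simple),
                        grad_mse(w, X_hard, y_unrelated))
```

In this toy setting, gradients from the two batches of the shared task align strongly, while the control batch gives near-arbitrary alignment — the qualitative behavior that makes such a score usable as a generalization indicator.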
> More details on CoT Generation
All of our data (image, its text description, the CoT trace, and the solution) is generated with a Python script. When we generate an image (e.g., using matplotlib), we store relevant metadata (e.g., sequence of (row index, col index) of highlighted cells in Table Readout) and insert it in a fixed chat template. We give examples of our CoT data in **Figures 32-37** in the appendix. See Appendix D.6.2 (mentioned in footnote 1 of page 1) for alternate versions of CoT templates. | Summary: This paper studies visual reasoning using vision-language models (VLMs). The authors focus on three tasks: Table Readout, Grid Navigation, and Visual Analogy. They run experiments under simple to hard settings to test each task's generalization.
They propose distilling knowledge from large language models (LLMs) by converting images into text and extracting chain-of-thought reasoning. Their findings suggest that (1) converting images to text can improve visual reasoning and (2) applying this form of knowledge distillation enables the VLM to learn the transferred chain of thought directly.
Claims And Evidence: No, the paper is hard to read. It lacks clearly defined sections for the $\textit{method, dataset, and experiments}$. Instead, these sections are mixed together, making it difficult to identify the evidence supporting their claims.
Methods And Evaluation Criteria: The datasets in this paper are self-created and not publicly available, and they lack clear descriptions. As a result, it is hard to assess the quality and difficulty of the datasets for Table Readout, Grid Navigation, and Visual Analogy. It would be better to propose a new dataset if none exists or to use existing benchmarks.
Theoretical Claims: The paper does not present theoretical claims or proofs; it relies on experimental findings.
Experimental Designs Or Analyses: The paper mixes experimental designs and methods, which makes it hard to follow the claims. For example, Section 3 should focus on the methods, but it gets interrupted by experiments (see Figure 2), so it's not clear how the methods lead to the conclusions.
Supplementary Material: Yes, I reviewed the K.3 Visual Analogy section to understand the data format.
Relation To Broader Scientific Literature: The paper studies how vision-language models (VLMs) reason with images, similar to the way large language models (LLMs) reason. It is relevant to chain-of-thought, prompt engineering, and knowledge distillation.
Essential References Not Discussed: Multimodal Chain-of-Thought Reasoning in Language Models
Other Strengths And Weaknesses: Strengths:
1. The abstract is well written and easy to read.
Weaknesses:
1. The main sections are not clearly defined, and the English needs improvement.
2. The paper deals with knowledge distillation and multimodal chain-of-thought but does not compare or discuss existing methods.
3. The datasets are self-created without detailed descriptions, and public benchmarks are not used.
Other Comments Or Suggestions: 1. The paper does not separate methods, datasets, and experiments into clear sections. Instead, they are mixed together, which makes the manuscript hard to follow.
2. The fonts in the figures are too small and difficult to read.
3. The English needs further improvement and a thorough grammar check. For instance, the sentence in Line 106 (page 2, left column) is very confusing and its meaning is unclear: "for tasks where the S2H generalization failed in both modalities, the same idea as (i) led to S2H generalization in the image modality after text-only supervision was used to inject reasoning capability on HARD task in text modality". It is hard to understand what this sentence means.
Questions For Authors: 1. The paper transfers the reasoning process (Chain-of-Thought) from LLM to VLM through tuning. However, it is unclear how this approach differs from [1].
2. Since the work relates to multimodal Chain-of-Thought and knowledge distillation, it should discuss related work in these areas, such as the findings in "Multimodal Chain-of-Thought Reasoning in Language Models" [1].
[1] Multimodal Chain-of-Thought Reasoning in Language Models
Ethical Review Concerns: The paper does not have ethical concerns
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: # Authors' Response to Reviewer d6eT
We thank the reviewer for their comments and suggestions. Please find our responses to your comments below.
> Is the paper related to knowledge distillation or prompt engineering?
We would like to clarify that **we do not employ any knowledge distillation**. All of our data (image, its text description, the CoT trace, and the solution) is generated with a Python script. When we say the reasoning transfers from the LLM to a VLM, we mean that the innate (or learned) reasoning capability of the LLM backbone (i.e., Llama-3-8B-Instruct) of an adapter-based VLM helps the entire VLM learn to perform the same task in the image modality. There is no external LLM/VLM here.
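The programmatic generation the authors describe (render an image, keep its metadata, and derive the text description, CoT trace, and answer from that same metadata) can be sketched roughly as follows. This is an illustrative reconstruction for a Table Readout-style example — all names are hypothetical, and the matplotlib rendering step is omitted:

```python
import random

def make_table_readout_example(rows, cols, path_len, seed=0):
    """Build a table of random numbers, highlight a path of adjacent cells,
    and derive the text description, a templated CoT trace, and the answer
    from the same stored metadata (the list of (row, col) cells)."""
    rng = random.Random(seed)
    table = [[rng.randint(10, 99) for _ in range(cols)] for _ in range(rows)]
    # Random self-avoiding walk over 4-adjacent cells (the real generator
    # presumably constructs paths more carefully).
    r, c = rng.randrange(rows), rng.randrange(cols)
    path = [(r, c)]
    while len(path) < path_len:
        moves = [(r + dr, c + dc)
                 for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
                 if 0 <= r + dr < rows and 0 <= c + dc < cols
                 and (r + dr, c + dc) not in path]
        if not moves:
            break  # walk got stuck; a real generator would retry
        r, c = rng.choice(moves)
        path.append((r, c))
    text_desc = "\n".join(" ".join(str(v) for v in row) for row in table)
    cot = [f"cell ({i},{j}) -> {table[i][j]}" for i, j in path]
    answer = " ".join(str(table[i][j]) for i, j in path)
    return {"table_text": text_desc, "path": path, "cot": cot, "answer": answer}

example = make_table_readout_example(4, 4, 5)
```

Because everything is derived from the stored metadata, the image, its equivalent text, and the CoT are guaranteed to be mutually consistent — no model is in the loop, which is why no distillation is involved.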
> The datasets are self-created without detailed descriptions, and public benchmarks are not used
We propose to measure the modality imbalance in VLMs by looking at different simple-to-hard generalization behaviors (e.g., length generalization, compositional generalization). For this, we need a dataset where 1) there is a clear level of difficulty and 2) each image has an equivalent text description. No public dataset fits this description. We also mention in the Related Works paragraph that “current VLM benchmarks are often solvable without the visual input,” which makes our analysis impossible. Additionally, we provide details on generating the datasets in Appendix D and example data points in Figures 32–37. We were planning to release the entire codebase with the final version of the paper, but here is an anonymous link to the code to generate the data: [https://github.com/asjdifpjadsi/VLM_S2H](https://github.com/asjdifpjadsi/VLM_S2H). Also see our response to Q1 of Reviewer NF8o.
> Please provide Comparisons with “Multimodal chain-of-thought in Language Models”
We thank the reviewer for the suggestion. We would like to clarify that our paper focuses on **modality imbalance** and its measurement through **simple-to-hard generalization**, aiming to improve a VLM’s reasoning ability on images to a comparable reasoning performance on **equivalent** text data. We would like to reiterate that **we do not employ any knowledge distillation**.
- Firstly, although both Multimodal-CoT and our paper use CoT to boost the model’s reasoning capability, we would like to point out that it is a very common technique in practice.
- Multimodal-CoT would be mostly comparable to our Image-via-Text Supervision as both attempt to leverage extra text generation to assist reasoning. However, a significant difference is that Multimodal-CoT optimizes for CoT while ours only relies on CoT with a fixed template to assist long-chain reasoning.
- Furthermore, our method converts an image to an **equivalent** text – a step that is later internalized – which is not equivalent to (either human-annotated or AI-generated) CoT. This conversion introduces no new information, so the help of text conversion in the model’s reasoning, if there is any, is entirely from a difference in modality, as opposed to a more optimal reasoning trajectory. Our work is more aligned with the idea of vision-depicting-prompting in Zhang et al. (2023) and many more we cited in the Related Works section (Appendix B, mentioned in page 8). Instead of prompting, we perform SFT and **propose a testbed to quantify** the benefit of an extra step of text conversion in mitigating **modality imbalance** in terms of **S2H generalization**.
# Comments on Presentation / Figures
> English needs further improvement and a thorough grammar check
We thank the reviewer for the comment. We have taken a lot of care when writing the paper. We will correct any remaining grammatical mistakes in our final version.
[1] Zhang et al., Lost in Translation: When GPT-4V(ision) Can’t See Eye to Eye with Text A Vision-Language-Consistency Analysis of VLLMs and Beyond, 2023.
[2] Zhang et al., Multimodal Chain-of-Thought Reasoning in Language Models, 2023. | Summary: This work investigates the modality imbalance in simple-to-hard generalization of VLMs. The main findings are: Explicit image-to-text conversion is important in improving S2H generalization on images, and the conversion can be internalized at test time.
## update after rebuttal
The rebuttal partly addresses my concerns. I will maintain my evaluation.
Claims And Evidence: Most claims are well supported by experimental results, while it can be improved by showing consistent results with other base models of different sizes and backbones.
Methods And Evaluation Criteria: The methodology is well designed and the evaluation metrics are appropriately selected.
Theoretical Claims: I tried to check the proof in the Appendix F, I did not find obvious issues about it. But it is possible that I overlook some details.
Experimental Designs Or Analyses: This work can benefit from more experiments with different settings of the threshold to split simple and hard examples, as well as insights from the “hardness” (degree of how hard it is) of the examples.
Supplementary Material: I had a quick look at most of the contents, especially the interpretability experiments, the discussion about explicit vs implicit text conversion/CoT, and proof F.1.
Relation To Broader Scientific Literature: The finding that image-to-text conversion can be internalized at test time is particularly insightful, which could potentially benefit further research in Mechanistic Interpretability
Essential References Not Discussed: - There are missing related works that directly relate to modality imbalance and VLM evaluation, including but not limited to:
- https://arxiv.org/abs/2404.01266
Other Strengths And Weaknesses: - The choice of threshold to split SIMPLE and HARD needs to be clarified.
- From the examples in Figure 32-33, the simple and hard table readout tasks are more like scaling up the recognition and counting complexity, rather than requiring significantly better reasoning capabilities, e.g., chess puzzles.
- This paper is generally well written and easy to follow. However, it is hard for readers to keep track of the many interchangeable uses of “image” and “text”. For example, I find it difficult to tell whether “text/image” denotes the S2H setting or the training strategy in Figure 2. It is probably better to rename the types of supervision, especially (a) and (b).
- The study on loss dynamics and gradient alignment is particularly interesting and insightful
Other Comments Or Suggestions: - Although the authors pointed out using only one base model is a limitation, I would like to mention it here again for visibility. This work can be significantly strengthened if the authors can show the main results and the conclusion with other base models with different backbones and sizes. Especially when models scale up, findings might not be 100% consistent.
- I understand that the authors want to compress as many crucial results and discussions as possible into the main content. However, my impression from reading this paper is that too many concepts/discussions/paragraphs/subtitles appear equally important when they are in fact not. I am not opposed to any style of writing, but the authors could do a better job of helping the audience focus on the main storyline.
- Note that not everyone will read the appendix, so the main content must be self-contained. Without a proper related work SECTION, it’s hard to position this work among many other works in the literature. For example, how did previous works approach S2H generalization in general? Any of them worth being used as a baseline (only for the text supervision)? Such questions need to be answered in the main content to make it self-contained.
Questions For Authors: How would the “hardness” impact the performance? How well can it generalize from simple to relatively hard vs extremely hard? Can relatively hard generalize to extremely hard?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Authors' Response to Reviewer pko5
We thank the reviewer for their thoughtful comments and suggestions regarding the paper. Please find our responses to your comments below.
# Weaknesses
> Experiments are limited to one base model
Please see our reply to Reviewer fXux. We observe the same conclusion from Qwen 2.5 VL 3B and 7B.
> The simple and hard table readout tasks are more like scaling up the recognition and counting complexity, rather than requiring significantly better reasoning capabilities, e.g., chess puzzles.
We agree that the reasoning in Table Readout is simpler compared to e.g., solving chess puzzles. However, such simple synthetic tasks allow us to clearly define SIMPLE and HARD examples and enable a more controlled analysis of model behavior.
That being said, Table Readout does not only involve recognition and counting, but is analogous to Grid Navigation. At any cell, the model needs to know where to read the next number from. E.g., to realize the next number is on the left, the model needs to check that up/right/down is not highlighted but left is. This decision process mirrors the one in Grid Navigation, where all non-highlighted cells are “obstacles” to avoid.
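The decision rule described above — at each cell, check which 4-neighbor is highlighted and not yet visited — can be written down in a few lines. This is an illustrative reconstruction of that rule, not the paper's code:

```python
def follow_path(highlighted, start):
    """Recover the reading order on a highlighted, non-branching path:
    at each cell, the next cell is the unique highlighted, not-yet-visited
    4-neighbor (check up / right / down / left)."""
    order, seen, cur = [start], {start}, start
    while True:
        r, c = cur
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((-1, 0), (0, 1), (1, 0), (0, -1))
                      if (r + dr, c + dc) in highlighted
                      and (r + dr, c + dc) not in seen]
        if not candidates:
            return order
        cur = candidates[0]  # a simple path leaves at most one option
        seen.add(cur)
        order.append(cur)

# An L-shaped highlighted path; reading it back requires the same
# neighbor-checking decision at every step.
path = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
order = follow_path(set(path), (0, 0))
```

Reading out the numbers in `order` then reduces Table Readout to Grid Navigation plus a lookup, which is the analogy the response draws.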
> Please discuss IsoBench
Thank you for suggesting the reference. We will include the citation in lines 020-021 (page 1, right).
# Questions
> How would ‘Hardness’ impact the performance? The choice of threshold to split SIMPLE and HARD needs to be clarified.
As S2H generalization measures OOD performance, “hardness” is relative to the difference between training and test distributions and is model-specific. On Consecutive Table Readout, we consider **two hardness levels**: HARD (15–20) and (25–30), reading 15–20 or 25–30 consecutive numbers respectively (line 150), and find that the **modality gap widens as the task becomes more challenging** (lines 210–212, Fig. 2). For the other three tasks, which also require compositional generalization, we decided the hardness by either increasing the number of components to be composed (Table Readout and Grid Navigation) or introducing held-out compositions (Visual Analogy).
We agree that ultimately the SIMPLE-HARD split is a matter of judgment. We tried to choose the most intuitive splits between SIMPLE and HARD while being aligned with existing studies (e.g., Hill et al., 2019 and Barrett et al., 2018). Further justifications for these decisions are in Appendix D and the specific thresholds in Table 1.
> How did previous works approach S2H generalization in general? Any of them worth being used as a baseline (only for the text supervision)?
In Appendix B, we discuss several prior works on S2H generalization that explore ICL with scratchpads (Anil et al., 2022), positional encodings (Kazemnejad et al., 2024), train set priming (Jelassi et al., 2023), curriculum learning (Abbe et al., 2024), and looped transformers (Fan et al., 2024). However, as we focus on modality imbalance, we do not compare with these methods. The aforementioned works (1) focus only on LLMs and are not directly adaptable to a multimodal setup and (2) study only length generalization, but not compositional generalization.
More recent efforts such as self-improvement (Lee et al., 2025) and generalizable verifiers (Sun et al., 2024) examine S2H generalization in different setups that rely on curriculum learning with progressively harder tasks or require reward models. In contrast, our study is restricted to supervised fine-tuning (SFT) approaches.
While we acknowledge that Mix or Mix+ may not be the optimal strategies for improving image generalization on our proposed tasks (line 429), it is still notable that reasoning transfer from the text to image modality—enabling S2H generalization on images—emerges naturally through autoregressive SFT. This, in our view, is both non-trivial and interesting.
We will include an explicit Related Works section in the main paper to clearly differentiate our contributions from prior work.
# Comments on Presentation / Figures
We appreciate the reviewer’s detailed feedback on this matter. We hope to address these issues in the final version of the paper.
> The authors interchangeably use “image” and “text”, which might create confusion
To reduce the number of new terms introduced, we used “Text/Image” (capitalized) for the supervision that trains on the corresponding modality. However, we understand the potential for confusion.
> The authors can do a better job to help the audience focus on the main storyline
In Section 1.1, we presented a compressed overview of the paper to direct the readers to relevant sections. In the final version, we plan to trim some of the ablations but instead expand on the discussions that better highlight the main storyline.
[1] Lee, et al., Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges, 2025
all other citations in Related Works / References of the paper | Summary: This paper investigates the "modality imbalance" problem in Vision Language Models (VLMs), where models perform worse on visual reasoning tasks compared to equivalent text-based tasks. The authors introduce a framework for studying simple-to-hard (S2H) generalization in VLMs using three synthetic tasks: Table Readout, Grid Navigation, and Visual Analogy. Each task has SIMPLE and HARD versions with equivalent representations in both text and image modalities.
The paper's main contribution is the discovery that VLMs exhibit modality imbalance in S2H generalization, with models able to generalize from SIMPLE to HARD examples in text but failing to do so in vision. Through experiments and gradient alignment analysis, the authors reveal that explicit image-to-text conversion is crucial for transferring reasoning capabilities from text to image modalities. They find that this conversion process can be internalized at test time, and identify gradient alignment measures that predict the effectiveness of different training strategies. Their work provides insights into the mechanisms of cross-modal learning and how the modality gap can be bridged through different training approaches.
Claims And Evidence: The claims made in this paper are well-supported by comprehensive empirical evidence. The authors:
1. Demonstrate the modality imbalance problem through controlled experiments on their synthetic tasks, showing significant performance gaps between text and image modalities
2. Show that their proposed Mix and Mix+ strategies improve S2H generalization in images by effectively transferring reasoning capabilities from text to vision.
3. Provide gradient alignment analysis that convincingly explains why these training strategies work.
4. Validate their approaches across multiple tasks with varying complexity, showing consistent improvements over baselines.
5. Conduct thorough ablation studies on key components like chain-of-thought reasoning, data composition, and text warm-up pretraining.
The authors are careful to test their approaches in both standard and challenging scenarios (missing modalities). The evidence presented is comprehensive and the conclusions are well-supported by the experimental results.
Methods And Evaluation Criteria: The methods proposed in this paper are novel and well-designed. The authors:
1. Create three synthetic tasks that allow for controlled study of S2H generalization in both text and image modalities, with difficulty levels that can be systematically tuned to test generalization capabilities.
2. Develop multiple training strategies (Mix, Mix+, Align-Mix+) that progressively address more challenging S2H generalization scenarios.
3. Introduce gradient alignment measures that provide theoretical insights into the effectiveness of their approaches.
The evaluation criteria are appropriate and comprehensive:
- Performance is measured on both SIMPLE and HARD examples across modalities
- Comparisons include baselines and ablations to isolate the impact of each component
- Gradient analysis provides mechanistic understanding of the approaches
The authors carefully control for factors like training data size to ensure fair comparisons. They also test on both complete and missing modality scenarios to demonstrate the robustness of their approach. Additionally, they provide extensive ablation studies and supplementary experiments in the Appendix, including text warm-up pretraining, explicit and implicit CoT ablation, and multi-task training effects, which further substantiate their claims and provide deeper insights into the mechanisms behind their approaches.
Theoretical Claims: I reviewed the theoretical claims related to gradient alignment in Section 5. The proofs for Theorem H.1 appear sound. The theorem establishes that (given a small enough learning rate) the gradient alignment score predicts how much loss reduction on HARD Image examples will be achieved by gradient updates from SIMPLE Image examples relative to updates from HARD Image examples directly.
However, as the main novelty is on the transfer between HARD text and HARD image examples (without training these HARD image examples), I am not sure whether the analysis is thorough enough. Cross-modality alignment should be examined.
Experimental Designs Or Analyses: The experimental designs are sound and well-executed. The authors use only one model architecture (Eagle-X2-LLAMA3-8B), which is a limitation but understandable given computational constraints.
Their synthetic tasks appear simple, but this simplicity actually strengthens the study by enabling precise isolation of reasoning capabilities and clear comparisons across modalities. The authors carefully control for data quantity and quality throughout, ensuring valid comparisons and reliable conclusions.
Supplementary Material: I thoroughly reviewed the supplementary material, which contains:
1. Detailed proofs for the theoretical claims about gradient alignment.
2. Ablation studies on key components of the proposed approaches.
3. Additional experimental results on multi-task training and text warm-up pretraining.
4. Further analysis of gradient measures and training dynamics.
The supplementary material substantiates the claims made in the main paper and provides additional insights that strengthen the overall contribution. The appendices are well-organized (with Table of Contents) and clearly written, making it easy to find specific details about the methods and experiments.
Relation To Broader Scientific Literature: This paper bridges several key research areas in multimodal learning. It introduces controlled algorithmic “visual” reasoning tasks that enable precise multi-step reasoning evaluation and allow variable difficulty, addressing a significant gap in VLM assessment. It extends work on modality imbalance by uniquely addressing this issue in adapter-based VLMs through knowledge transfer from text to image modalities, moving beyond traditional approaches such as modulating learning rates (Peng et al., 2022). Finally, it extends simple-to-hard generalization research from language-only settings (Abbe et al., 2024; Zhou et al., 2024) to cross-modal contexts.
Peng et al., 2022 : Peng, Xiaokang, et al. "Balanced multimodal learning via on-the-fly gradient modulation." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2022.
Abbe et al., 2024 : Abbe, Emmanuel, et al. "Generalization on the unseen, logic reasoning and degree curriculum." *Journal of Machine Learning Research* 25.331 (2024): 1-58.
Zhou et al., 2024 : Zhou, Hattie, et al. "What algorithms can transformers learn? a study in length generalization." *arXiv preprint arXiv:2310.16028* (2023).
Essential References Not Discussed: The literature review seems comprehensive and contains all the relevant references. The authors (in the main text and Appendix B.) have appropriately cited related works in modality imbalance, generalization transfer between input modes, and S2H generalization.
Other Strengths And Weaknesses: **Strengths:**
1. The paper introduces a novel and practical approach to address an important problem in multimodal learning.
2. The synthetic tasks are well-designed to isolate and study S2H generalization.
3. The gradient alignment analysis provides valuable mechanistic insights.
4. The approaches are effective across a range of tasks and scenarios.
5. The paper includes thorough ablation studies and analyses.
6. The proposed methods (Mix, Mix+, Align-Mix+) are simple yet effective, making them likely to be adopted in practice.
**Weaknesses:**
1. The experiments are limited to one model architecture.
2. While the synthetic tasks are useful for controlled study, more real-world examples would strengthen the paper's impact.
3. The paper focuses mainly on vision-text modalities; it would be interesting to see if the insights generalize to other modality pairs.
4. The approach requires task-specific chain-of-thought templates, which might limit its generality.
Overall, the strengths significantly outweigh the weaknesses, and the paper makes a valuable contribution to the field.
Other Comments Or Suggestions: I wonder if computing gradient alignment scores between modalities (rather than just between SIMPLE and HARD examples within a modality) might offer valuable insights. This cross-modal gradient alignment could potentially reveal how information transfers between vision and language representations for identical inputs presented in different formats.
In section 4.2 discussing the benefits of two-phase training, it would be helpful to cite the specific figure or table showing the 76%, 96%, and 56% performance metrics mentioned. This would make it easier for readers to connect your analysis with the supporting evidence.
In Figure 6, you only showed results for Table Readout and Visual Analogy. Could you also show the remaining task (Grid Navigation) to provide a complete picture of how Image-via-Text+ performs across all tasks?
I am a bit confused about the naming here: is “Consecutive Table Readout” the same as “Table Readout”? Appendix D suggests that they are similar but distinct tasks. However, in the Abstract you state that you test on three tasks, which makes this even more confusing (is Consecutive Table Readout a “fourth” task?). Clarifying their difference in the main text would help with clarity.
I'm particularly intrigued by your findings on Chain of Thought reasoning. Given your ablation studies in Appendix I.7 showing that attempts to internalize CoT consistently fail to achieve image S2H generalization, could this suggest something fundamental about how reasoning occurs in these adapter-based VLMs? Perhaps the architectural design, with vision adapters attached to a text-based backbone, necessitates text-structured reasoning paths that CoT provides explicitly?
Questions For Authors: I may have missed it, but it seems that Figures 5 and 7 appear in the main text yet are never referenced anywhere in it, making the paper a bit confusing to read. (For example, the last parts of Page 6 seem to refer to Figure 7, but this is never stated in the main text.)
Regarding Figure 2, I noticed the grey dotted line representing text supervision on the right panels lacks a corresponding entry in the legend, though it's explained in the caption. Adding this to the legend (with a name such as “Text Supervision on Text”) would improve readability at a glance.
Similarly, I believe that the description of “Text” (grey dash) in Figure 5 could be stated more clearly. The caption says it is S2H generalization on text from Text supervision, but the legend seems to imply that “Text” is S2H generalization on Image from Text supervision. As with Figure 2, I believe giving the “Text” legend entry a clearer name (e.g., “Text supervision on Text”) would be better.
In Figure 8 in the legend, there is “Mix → Mix+”, but there is no explanation of what this means.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Authors' Response to Reviewer fXux
We thank the reviewer for their careful review of our paper. Please find responses to your concerns and questions below.
# Weaknesses
> Experiments are limited to one architecture
We used EAGLE because it was the best performing open-source model which also released all details on **its training data and the set of hyperparameters**.
To test the generalizability of our results, we ran additional experiments by continually finetuning the 3B and 7B models from the Qwen2.5-VL family. See the accuracy on HARD-image below. CTR, TR, GN, VA are respectively short for Consecutive Table Readout, Table Readout, Grid Navigation, and Visual Analogy. On the text version of CTR, we notice that the 3B model doesn’t show S2H generalization (which is expected, since task difficulty can be model-specific). However, across TR, GN, and VA, we generally observe the same conclusion - **Mix+ achieves S2H generalization on the image modality and Align-Mix+ improves the generalization**.
### Qwen2.5-VL-3B-Instruct
|Supervision|CTR(30k)|TR(30k)|TR(60k)|GN(30k)|GN(60k)|VA(30k)|VA(60k)|
|:-|-:|-:|-:|-:|-:|-:|-:|
|Image|0|11|10|22|22|0|0|
|Text+Image|1|7|6|0|14|0|0|
|Image-via-Text|1|12|8|13|14|0|0|
|Mix|1|11|10|14|16|0|1|
|Image-via-Text+|0|**81**|90|67|58|**48**|**48**|
|Mix+|**4**|78|86|77|91|20|27|
|Align-Mix+|-|66|**91**|**80**|**91**|38|42|
### Qwen2.5-VL-7B-Instruct
|Supervision|CTR(30k)|TR(30k)|TR(60k)|GN(30k)|GN(60k)|VA(30k)|VA(60k)|
|:-|-:|-:|-:|-:|-:|-:|-:|
|Image|0|18|17|14|29|0|0|
|Text+Image|4|8|5|6|11|0|0|
|Image-via-Text|36|9|13|13|18|0|0|
|Mix|52|8|17|15|12|0|0|
|Image-via-Text+|**73**|82|88|**75**|67|**41**|**44**|
|Mix+|72|13|66|69|**85**|12|17|
|Align-Mix+|-|**93**|**92**|36|58|25|34|
> Experiments are limited to synthetic settings
Please see our reply to the first question from Reviewer NF8o.
> Experiments are only on vision-text modalities
Our motivation came from works that identify the modality gap in VLMs. Additionally, we require an open-weights model that has strong enough reasoning capability in each modality, which is available in a VLM. We would love to explore more modalities in the future when open models can also incorporate multiple modalities.
> Task-specific chain-of-thought templates might limit generality of observations
By keeping the chain-of-thought templates fixed throughout the task, we were able to accurately probe the existing modality imbalance. In the future, we would like to explore more on how the results would generalize to a more flexibly generated CoT trace.
# Suggested Questions
> Can gradient alignment be used to measure cross-modal alignment?
We found that different supervision strategies could be distinguished only when analyzing gradient alignment between SIMPLE and HARD examples within each modality independently. This may be attributed to our use of a local definition of gradient alignment—examining gradients at each step in isolation. Capturing cross-modal alignment likely requires a more global perspective, as gradients at earlier time steps (e.g., on text) can influence gradients in later steps (e.g., on image). We consider the development of such alignment measures an important direction for future work.
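The local, per-step notion of gradient alignment described above can be sketched as a cosine similarity between flattened gradient vectors at a single training step. A minimal numpy illustration (the function name is ours, not the paper's):

```python
import numpy as np

def gradient_alignment(grad_simple, grad_hard):
    """Cosine similarity between two flattened gradient vectors.

    A local measure: it compares gradients at one training step in
    isolation, ignoring how earlier steps influence later ones.
    """
    g1 = np.ravel(grad_simple)
    g2 = np.ravel(grad_hard)
    denom = np.linalg.norm(g1) * np.linalg.norm(g2)
    if denom == 0.0:
        return 0.0
    return float(np.dot(g1, g2) / denom)

g = np.array([1.0, -2.0, 0.5])
print(gradient_alignment(g, g))   # close to 1.0: identical gradients align
print(gradient_alignment(g, -g))  # close to -1.0: opposite gradients anti-align
```

Capturing cross-modal alignment, as discussed, would require going beyond this local definition.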
> Consecutive Table Readout vs. Table Readout
Both tasks involve sequentially reading highlighted cells in a table. In Consecutive Table Readout, the path is always consecutive row-wise, and the model knows where the next cell should be. In Table Readout, the path can take turns in arbitrary directions at arbitrary locations, so the model needs to additionally check **which adjacent cell is highlighted** at every location.
In the final version, we will list all 4 tasks in the abstract or rename Table Readout to e.g., Path Table Readout and use Table Readout as an umbrella term for both Consecutive and Path Table Readout.
> Why does internalizing CoT fail
We observed that CoT is crucial for the model to transfer its reasoning from text to image input by repeating the same reasoning steps. Our hypothesis is that during SFT, the model establishes an implicit equivalence between reasoning on text and image inputs via the common CoT tokens.
There may exist multiple ways, e.g., a more fine-grained curriculum, to internalize CoT. For the scope of this paper, we limit our exploration to simple settings with randomly shuffled data and progressive internalization, and leave a complete study for future work.
> Can you include results on Grid Navigation in Figure 6
The model already performs nearly perfectly on Grid Navigation with 60k examples of Mix+, so we did not compare with Image-via-Text+.
> What is the meaning of Mix$\to$Mix+ in Figure 8
Mix$\to$Mix+ means we took an intermediate checkpoint of Mix and resumed training with Mix+. This setup allows us to assess the effect of introducing Hard text examples on the loss curves of Mix training, enabling a fine-grained analysis of the differences between Mix and Mix+ (lines 334-341).
---
Rebuttal Comment 1.1:
Comment: I am generally satisfied with the author’s response and have raised my score. Despite some limitations regarding the use of synthetic tasks, I believe the paper provides meaningful contributions to the field.
That being said, I hope the authors enhance their presentation. In particular, although (as the authors responded) Mix $\rightarrow{}$ Mix+ is explained in lines 334-341, the text in those lines does not explicitly call it “Mix $\rightarrow{}$ Mix+”. Also, although the authors didn’t respond, there is still the issue of figures in the main text (Figures 5 and 7) not being mentioned in the main text.
Raising the score to 4 (Accept).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for carefully evaluating our manuscript and our response to the reviewer's concerns.
We especially thank the reviewer for commenting on how we can improve the presentation of our work, and we will try our best to incorporate the comments into the final version of the paper. Due to the character limit, we were initially not able to respond to the reviewer's feedback on Figures 5, 7 not being mentioned in the main text. We will resolve this matter as well.
If possible, could the reviewer edit the original review to reflect the updated score? Thanks! | null | null | null | null | null | null |
MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles | Accept (poster) | Summary: This work presents a novel framework to fine-tune LLMs to solve agent-specific tasks in a parameter-efficient manner. Specifically, the capabilities of an agent are first decomposed into three roles. A corresponding fine-tuning framework, MoR, and a multi-role data generation pipeline are subsequently proposed to ensure that LLMs can correctly learn the capabilities of the different roles.
Claims And Evidence: Yes, the author conducts several experiments with models of different sizes on different agent-specific tasks, and the results are convincing.
Methods And Evaluation Criteria: Yes. The author selects both agent-specific and mathematical benchmarks (which could also be solved in an agentic manner) to evaluate the proposed method.
Theoretical Claims: This work does not involve theoretical claims and proofs.
Experimental Designs Or Analyses: Yes. The three main tables appear valid, and the ablation studies also support the chosen hyper-parameters and the design of the loss function.
Supplementary Material: Yes, I have reviewed the supplementary material in the part of Appendix, which include the prompts used in completion and training process, and an example of the execution trajectory.
Relation To Broader Scientific Literature: The scientific literature related to the main contributions of the paper is already cited and studied. For example: the idea of role decomposition may relate to alpha-UMi [1], and the novel architecture may build on the work of OCTAVIUS [2] and MoLA [3].
[1] Small llms are weak tool learners: A multi-llm agent.
[2] Octavius: Mitigating task interference in mllms via moe
[3] Higher layers need more lora experts.
Essential References Not Discussed: No, important related works are already cited and discussed in this work.
Other Strengths And Weaknesses: Strengths:
1) Other than directly training multi-models to learn different roles capabilities, the author introduces a novel architecture: Mixture-of-Roles, which comprises three specialized LoRA groups.
2) The author has conducted comprehensive experiments on different benchmarks with different base models, the results show a significant improvement compared with the base models.
3) The contributions of different parts in the loss function are studied through extensive experiments in the part of Ablation Studies.
Weakness:
1) To prove the effectiveness of the multi-role strategy, experiments with a single role should be conducted.
2) When adapting to each new task, the work of data preparation and unification is essential, but seems time-consuming and labor-intensive.
Other Comments Or Suggestions: Typos:
Caption of Figure 8, ‘mathmatical’ -> ‘mathematical’
Suggestions:
The code of the architecture and training process are suggested to be released.
Questions For Authors: 1. Have you conducted additional experiments on larger models like 7B or 14B?
2. In Table 3, why does Qwen perform better after training when its base results are worse than Llama's on the MATH benchmarks?
3. Have the authors tried other role-decomposition strategies? For example, does using more or fewer roles have a greater impact on the results?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Dear Reviewer Mktg:
We are grateful for your support and helpful review. All concerns and questions are carefully addressed below.
**1. Experiments with a single role is supposed to be conducted.**
Thanks for the helpful suggestion. We supplement an experiment with a single LoRA without roles on the BFCL leaderboard with Llama3.2-1B-Instruct.
|Method|Trainable Params|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-:| :-: | :-: | :-: | :-: |:-: |:-: |
|Base|-|21.9 | 19.2 | 29.8 | 38.9 |27.5|
|LoRA | 0.16B|60.5 |68.2 | 59.4|83.7|68.0 (+40.5)|
|MoR|0.16B| 75.2 |80.0 | 60.7|94.4|77.6 (+50.1)|
With the same number of trainable parameters, LoRA, lacking role decomposition, achieves a much lower accuracy improvement than MoR.
**2. The work of data preparation and unification is essential, but seems time-consuming and labor-intensive.**
Thanks for the kind concern. Actually, a large proportion of the raw training data lacks multi-role content, such as the thought of the reasoner, so it cannot be directly used for MoR training. The data processing pipeline proposed in the paper addresses this by generating high-quality multi-role data from the raw dataset. Moreover, as reasoner-style paradigms such as DeepSeek-R1 and OpenAI o3 have recently become mainstream, more agent-related data with thoughts will become available.
**3. Additional experiments on larger models.**
Thanks for the helpful suggestion. Due to the limited rebuttal time, we only supplement an experiment with Llama3.1-8B-Instruct on BFCL.
|Method|Trainable Params|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-:| :-: | :-: | :-: | :-: |:-: |:-: |
|Base|-| 84.2| 86.3 | 61.0 | 77.8 |77.3|
|MoR|0.59B| 88.6 |89.2 | 80.5|95.1|88.4 (+11.1)|
The experimental settings are consistent with section4.2 in our paper. From the results, with the introduction of 0.59B trainable parameters, we improve the average accuracy by 11.1%.
**4. In table 3, why qwen performs better after training when the base results are worse than llama on MATH benchmarks.**
A possible reason is that Llama is a general model while Qwen-Coder is a code-specific model. Since we solve math problems by importing packages here, this style is more in line with Qwen-Coder, so its accuracy improves more in the post-training stage.
**5. Using more or fewer roles have a greater impact on the results?**
Thanks for the constructive suggestion; we supplement experiments with more or fewer roles. The results are as follows.
|Archs| Trainable Params|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-: | :-: | :-: | :-: | :-: |:-: |:-: |
|Base| -|21.9 | 19.2 | 29.8 | 38.9 |27.5|
|2-roles| 0.11B | 60.6 |66.3 | 48.3|79.5|63.7 (+36.2)|
|3-roles|0.16B| 75.2 |80.0 | 60.7|94.4|77.6 (+50.1)|
|4-roles|0.21B| 70.8 |74.2 | 57.6|90.3|73.2 (+45.7)|
Our method integrates three core roles: Reasoner, Executor, and Summarizer. By removing the Summarizer or introducing a Planner, we obtain 2-role and 4-role variants. Experimental results demonstrate accuracy improvements across all variants compared to the baseline model. Notably, the 3-role architecture achieves peak performance (a 50.1% accuracy improvement over the baseline), whereas the 4-role variant, despite containing more trainable parameters, shows diminishing returns (a 45.7% gain). This empirical evidence highlights that rational role definition outweighs mere parameter expansion in achieving optimal model performance.
**6. Typos.**
Sincerely thanks for your detailed reviews. We will go through the entire paper again and fix potential typos in the revised version.
**7. The code of the architecture and training process are suggested to be released.**
Thanks, all our data and code will be open-sourced to contribute to the community. | Summary: The paper introduces a novel parameter-efficient fine-tuning method to enhance LLMs for agent tasks, such as function-calling and mathematical reasoning. The authors propose three main strategies: (1) decomposing agent capabilities into three roles—reasoner, executor, and summarizer—based on the Reason+Action paradigm; (2) developing the MoR framework, which assigns specialized LoRA groups to each role, incorporating a rule-based role-aware gate and token-aware routers to manage role interactions; and (3) creating a multi-role data generation pipeline that enhances publicly available datasets with role-specific content and reliability verification. The method is evaluated on benchmarks like StableToolBench, BFCL, GSM8k, and MATH. The paper claims that MoRAgent achieves competitive performance with fewer trainable parameters than traditional methods.
## update after rebuttal
Thank you for the response. I will keep my score.
Claims And Evidence: - The paper claims that decomposing agent capabilities into three roles improves PEFT for agent tasks. The improvement from MoR is clear and comprehensive analysis is conducted to demonstrate that.
- While we do see performance improvement on downstream tasks there is a lack of a direct ablation comparing MoRAgent with and without role decomposition (e.g., a single LoRA without roles), making it hard to isolate the decomposition’s specific contribution versus the multi-LoRA setup.
- Another claim is that the MoR framework with a router works. But the rule-based role-aware gate’s implementation is vague. The paper states that “the next role to be activated is determined based on the output of the reasoner”, but lacks specifics on how this is operationalized during training, weakening the claim’s clarity.
- The effect of the multi-role data generation pipeline is unclear. No direct evidence compares MoRAgent’s performance with and without this pipeline versus raw data. Also, it is unclear whether the performance gain comes from MoR or the data pipeline.
Methods And Evaluation Criteria: The MoR framework decomposes agent tasks into reasoner, executor, and summarizer roles, aligning with the Reason+Action paradigm cited in prior work. The proposed benchmarks are also reasonable and cover a wide variety of downstream tasks.
Theoretical Claims: The paper does not present theoretical claims or proofs. It is empirically driven.
Experimental Designs Or Analyses: - The experimental designs are robust, I just have two concerns detailed above: the ablation between standard LoRA and MoR and the ablations for data pipeline. Both are to isolate the contribution of the MoR framework to make a stronger argument.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Builds on full-parameter fine-tuning works like ToolLLM (Qin et al., 2023) and AgentTuning (Zeng et al., 2023), which enhance LLMs for agent tasks, and is inspired by Reason+Action (Yao et al., 2022) and multi-agent systems like α-UMi (Shen et al., 2024a). It is a novel extension of those works, making LLMs good tool users (and therefore agents) while being resource-efficient.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Contribution is clear and simple: it addresses resource barriers in agent fine-tuning, relevant for deploying LLMs in resource-constrained settings.
- However, the effectiveness of the method is unclear without the ablations mentioned earlier.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How is the rule-aware gate implemented during training? Are there labels in the training data indicating which role should be active for each token? Clarification could resolve ambiguity in Section 3.2, strengthening the method’s reproducibility.
2. How's the performance for standard LoRA? Comparing this method with standard LoRA can further consolidate the argument.
3. I got a bit confused by section 3.2 and equation (6): "It should be noted that for the token at the same location, there is only one role that is non-zero". What exactly does this mean? I thought u is the hidden state, so each element in u should just be a value of the hidden state, not a token. I couldn't connect this with Figure 2. Is each value in the hidden state activated by one role, or each token?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer rpMC:
We sincerely thank you for your support and meticulous review. The concerns and questions are answered as follows.
**1. How is the rule-aware gate implemented during training? Are there labels in the training data indicating which role should be active for each token?**
Yes! Your deduction is correct. Our training data includes a "role" tag to indicate which role activates the subsequent tokens. Take the Math data in Figure 3 of our paper as an example: the tokens marked as "content" are activated by the role marked as "role". Note that tokens from user and observation turns are also activated by the reasoner. Therefore, "system_prompt", "user_query", "thought and next role", and "code execution result" are activated by the reasoner; "python code" is activated by the executor; and "summarize and feed back to user" is activated by the summarizer. We will further clarify this in the revised version.
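As an illustration of how such role tags could drive the rule-based gate during training, a minimal sketch that maps a tagged conversation to per-token role labels (the helper names and the whitespace tokenizer are hypothetical simplifications, not the paper's implementation):

```python
# Map each turn's "role" tag to the role whose LoRA group activates its
# tokens. Per the rebuttal, "user" and "observation" turns are also
# handled by the reasoner.
ROLE_OF_TURN = {
    "user": "reasoner",
    "observation": "reasoner",
    "reasoner": "reasoner",
    "executor": "executor",
    "summarizer": "summarizer",
}

def role_labels(messages, tokenize=str.split):
    """Return one role label per token of the concatenated conversation."""
    labels = []
    for msg in messages:
        role = ROLE_OF_TURN[msg["role"]]
        labels.extend([role] * len(tokenize(msg["content"])))
    return labels

msgs = [
    {"role": "user", "content": "suggest a movie"},
    {"role": "executor", "content": "search_movies()"},
]
print(role_labels(msgs))  # ['reasoner', 'reasoner', 'reasoner', 'executor']
```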
**2. The effect of multi-role data generation pipeline is unclear. It is unclear whether the performance is coming from MoR or the data pipeline. How's the performance for standard LoRA?**
Thanks for the helpful suggestion. A large proportion of the raw training data lacks multi-role content, such as the thought of reasoner. Therefore, it cannot be directly used for MoR training. The data processing pipeline proposed in the paper addresses this by generating high-quality multi-role data based on the raw dataset. A comparative example is illustrated below.
Raw data:
```
{
  "role": "user",
  "content": "I want to watch a movie tonight. Can you suggest me some movies starring Leonardo DiCaprio?"
},
{
  "role": "assistant",
  "content": "[unused11]Action: search_movies\n Arguments: {\"actor\": \"Leonardo DiCaprio\"}[unused12]"
},
{
  "role": "observation",
  "content": "{\"movies\": [\"Inception\", \"The Wolf of Wall Street\", \"Titanic\", \"The Revenant\", \"Shutter Island\"]}"
},
{
  "role": "assistant",
  "content": "\"Inception\", \"The Wolf of Wall Street\", \"Titanic\", \"The Revenant\", \"Shutter Island\"."
}
```
Multi-role data after generation pipeline:
```
{
  "role": "user",
  "content": "I want to watch a movie tonight. Can you suggest me some movies starring Leonardo DiCaprio?"
},
{
  "role": "reasoner",
  "content": "To suggest movies starring Leonardo DiCaprio, we can use the \"search_movies\" tool with the input specifying the actor's name.\n Next: executor."
},
{
  "role": "executor",
  "content": "[unused11]Action: search_movies\n Arguments: {\"actor\": \"Leonardo DiCaprio\"}[unused12]"
},
{
  "role": "observation",
  "content": "{\"movies\": [\"Inception\", \"The Wolf of Wall Street\", \"Titanic\", \"The Revenant\", \"Shutter Island\"]}"
},
{
  "role": "reasoner",
  "content": "The function has been successfully executed and five movies starring Leonardo DiCaprio are returned, which can solve the user's query.\n Next: summarizer."
},
{
  "role": "summarizer",
  "content": "Sure, here are some movies starring Leonardo DiCaprio: \"Inception\", \"The Wolf of Wall Street\", \"Titanic\", \"The Revenant\", \"Shutter Island\". Enjoy your movies!"
}
```
All data and code will be open-sourced to contribute to the community. We supplement the experiments of LoRA in two settings: a single LoRA without roles (standard, trained on raw data) and a single LoRA with roles (multi-roles, trained on multi-role data).
|Method|Trainable Params|AST (Non-live)|Exec (Non-live)|AST (Live)|Relevance (Live)| AVG|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Base|-|21.9|19.2|29.8|38.9|27.5|
|LoRA (standard)|0.16B|60.5 |68.2|59.4|83.7|68.0 (+40.5)|
|LoRA (multi-roles)|0.16B|59.7|64.2|56.3|81.8|65.5 (+38.0)|
|MoR|0.16B|75.2|80.0|60.7|94.4|77.6 (+50.1)|
Interestingly, the accuracy of LoRA (multi-roles) is lower than that of standard LoRA. A possible reason is that, with limited trainable parameters, a single LoRA suffers from interference when learning the multi-role dataset. In contrast, the mixture-of-roles architecture of MoR avoids this interference, making full use of the limited trainable parameters and achieving higher accuracy.
**3. Confused by equation(6).**
Sorry for the confusion. The size of u here is [sequence_length, hidden_dim] (for simplicity, we omit the batch dimension). Without loss of generality, take u in one layer with sequence_length = 4096 as an example: u[:1024, :] is processed by the reasoner, u[1024:1536, :] by the executor, u[1536:2560, :] by the reasoner, u[2560:3072, :] by the executor, u[3072:3584, :] by the reasoner, and u[3584:, :] by the summarizer. We will further clarify this in the revised version.
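The segment-wise activation described above can be sketched in numpy, with one LoRA pair (down- and up-projection) per role applied only to its contiguous spans of u. The shapes, random weights, and span boundaries are purely illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden_dim, rank = 4096, 64, 8

# One LoRA (A: down-projection, B: up-projection) per role.
roles = ["reasoner", "executor", "summarizer"]
lora = {r: (rng.standard_normal((hidden_dim, rank)),
            rng.standard_normal((rank, hidden_dim))) for r in roles}

# Contiguous spans as in the rebuttal's example: each token position is
# routed to exactly one role.
spans = [(0, 1024, "reasoner"), (1024, 1536, "executor"),
         (1536, 2560, "reasoner"), (2560, 3072, "executor"),
         (3072, 3584, "reasoner"), (3584, 4096, "summarizer")]

u = rng.standard_normal((seq_len, hidden_dim))
out = np.zeros_like(u)
for start, end, role in spans:
    A, B = lora[role]
    out[start:end] = u[start:end] @ A @ B  # only this role's LoRA fires here

# The spans tile the whole sequence: every position gets exactly one role.
assert sum(end - start for start, end, _ in spans) == seq_len
```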
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, the proposed method is evaluated on StableToolBench、BFCL and Math problems.
Theoretical Claims: No proofs and theoretical claims.
Experimental Designs Or Analyses: Yes, I have checkd the soundness/validity of the experiments, including StableToolBench, BFCL leaderboard, GSM8K and MATH.
Supplementary Material: Yes, I have reviewed the supplementary material in the appendix (Page11-Page15).
Relation To Broader Scientific Literature: Three key contributions in this paper:1) Decomposing the agent ability into three roles; 2) Each abilities are fulfilled by Mixture-of LoRAs; 3) A lot of work of preparing the CoT agent training data. Any relevant previous work has been discussed in the paper.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
* The idea of decomposing the ability of agent into three roles is interesting.
* In my knowledge, this is the first work that apply Mixture-of-LoRAs to the agent tasks.
* The composition of rule-based gate and role-aware gate is novel.
* The experiments are conducted on StableToolBench, BFCL and Math, which are sufficient.
Weaknesses:
* Some other work like [1] also introduces the idea of decomposition of agent ability, what is the difference between you?
* The auxiliary balance loss is not novel.
* Ablation experiments on the auxiliary balance loss and orthogonal loss weight coefficients are missing.
* The preparation of experimental data seems to be a lot of work.
[1] Small llms are weak tool learners: A multi-llm agent
Other Comments Or Suggestions: Typos: "available datasets To effectively" in Line435.
Questions For Authors: 1. Will the two gates lead to an increase in FLOPs or latency?
2. What would be the result of having three separate LLMs to represent different roles?
3. With the same parameters, what is the comparison between a large LoRA and multiple small LoRAs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer MDgP:
We deeply appreciate your support and insightful feedback; detailed responses to all queries are provided below.
**1. The difference between ours and α-UMi.**
α-UMi decomposes agent ability into planner, executor, and summarizer. However, each role is implemented by a separate LLM, which results in a significant increase in computing resources. In contrast, we integrate the multiple agent capabilities into a novel parameter-efficient Mixture-of-Roles framework.
**2. The auxiliary balance loss is not novel.**
Thanks for the kind concern. The auxiliary balance loss is widely adopted in training MoE architectures. We borrow the idea of the balance loss and are the first to apply it to agent tuning. The experimental results on various benchmarks verify the effectiveness of the loss.
**3. Ablation on the auxiliary balance loss and orthogonal loss weight coefficients.**
Thanks for the helpful advice. We supplement the ablation studies on the BFCL leaderboard with Llama3.2-1B-Instruct.
|balance loss|orthogonal loss|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-: | :-: | :-: | :-: |:-: |:-: |:-: |
|1e-3|1e-4 | 75.2 |80.0 | 60.7|94.4|77.6|
|1e-4|1e-4 | 74.6 |78.8 | 58.9|93.6|76.5|
|1e-3|1e-3 | 75.7 |79.3 | 59.1|94.0|77.0|
|1e-4|1e-3 | 74.1 |78.4 | 59.6|93.1|76.3|
With appropriate coefficients for the auxiliary balance loss and orthogonal loss, the accuracy may be further improved.
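For reference, an orthogonal loss between LoRAs is commonly implemented as a Frobenius-norm penalty on pairwise products of the LoRA down-projection matrices. A hedged sketch of one standard formulation (not necessarily the paper's exact definition):

```python
import numpy as np

def orthogonal_loss(lora_As):
    """Frobenius penalty encouraging pairwise-orthogonal LoRA subspaces.

    A common formulation: sum of ||A_i^T A_j||_F^2 over distinct pairs of
    role LoRAs. This is an assumed definition, not the paper's.
    """
    loss = 0.0
    for i in range(len(lora_As)):
        for j in range(i + 1, len(lora_As)):
            loss += float(np.linalg.norm(lora_As[i].T @ lora_As[j], "fro") ** 2)
    return loss

# Orthogonal down-projections incur zero penalty; identical ones do not.
A1 = np.array([[1.0], [0.0]])
A2 = np.array([[0.0], [1.0]])
print(orthogonal_loss([A1, A2]))  # 0.0
```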
**4. The preparation of experimental data seems to be a lot of work.**
Actually, a large proportion of the raw training data lacks multi-role content, such as the thought of the reasoner, so it cannot be directly used for MoR training. The data processing pipeline proposed in the paper addresses this by generating high-quality multi-role data from the raw dataset. Moreover, as the reasoner paradigm has recently become mainstream, more agent-related data with thoughts will become available. All data and code will be open-sourced to contribute to the community.
**5. Will the two gates lead to an increase in FLOPs or latency?**
Thanks. Our gates include the rule-based role-aware gate and the learnable token-aware gate. For the rule-based gate, a "role" tag is designed to indicate which role activates the subsequent tokens, so no extra FLOPs or latency are introduced. For each learnable token-aware gate, we implement it with only a single Linear layer, resulting in negligible FLOPs and latency overhead.
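A minimal numpy sketch of such a single-Linear-layer token-aware gate, producing per-token routing weights over experts (dimensions are illustrative; in the real model each LoRA group would have its own gate):

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
hidden_dim, n_experts, seq_len = 64, 4, 16

# The gate is a single weight matrix: hidden state -> expert logits.
W_gate = rng.standard_normal((hidden_dim, n_experts))

u = rng.standard_normal((seq_len, hidden_dim))
weights = softmax(u @ W_gate)  # per-token routing weights over experts

# One (hidden_dim x n_experts) matmul per token: negligible next to the
# transformer's (hidden_dim x hidden_dim) projections.
assert np.allclose(weights.sum(axis=-1), 1.0)
```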
**6. The result of having three separate LLMs to represent different roles.**
This is an interesting question, and actually, that is precisely how α-UMi operates. We supplement this experiment on BFCL leaderboard with Llama3.2-1B-Instruct.
|Method| Trainable Params|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-: | :-: | :-: | :-: | :-: |:-: |:-: |
|Base| -|21.9 | 19.2 | 29.8 | 38.9 |27.5|
|MoR|0.16B| 75.2 |80.0 | 60.7|94.4|77.6 (+50.1)|
|3 LLMs|3.72B| 79.3|83.2 | 69.5|94.8|81.7 (+54.2)|
With more trainable parameters, three separate LLMs achieve higher accuracy, but the computational resources required for training and inference are much higher than ours.
**7. The comparison between a large LoRA and multiple small LoRAs?**
Sorry, we did not fully understand what "multiple small LoRAs" means; we assume it refers to our proposed MoR. We supplement an experiment with one large LoRA with Llama3.2-1B-Instruct on BFCL.
|Method|Trainable Params|AST (Non-live) | Exec (Non-live) | AST (Live) | Relevance (Live) | AVG|
| :-:| :-: | :-: | :-: | :-: |:-: |:-: |
|Base|-|21.9 | 19.2 | 29.8 | 38.9 |27.5|
|LoRA| 0.16B| 59.7 |64.2 | 56.3|81.8|65.5 (+38.0)|
|MoR|0.16B| 75.2 |80.0 | 60.7|94.4|77.6 (+50.1)|
The LoRA is trained on the same 90k multi-roles data. From the results, the proposed MoR can fully utilize the limited training parameters and achieve significantly better accuracy.
**8. Typos.**
Sincerely thanks for your detailed reviews. We will go through the entire paper again and fix potential typos in the revised version. | Summary: This paper explores parameter-efficient fine-tuning (PEFT) methodologies for large language model (LLM)-based agent tasks, an area that remains largely unexplored. The authors propose three key strategies:
1. Role Decomposition: Inspired by the Reason+Action paradigm, the authors decompose agent capabilities into three distinct roles—reasoner, executor, and summarizer. The reasoner interprets user queries and determines the next step based on execution trajectory. The executor identifies and invokes appropriate functions with the correct parameters. The summarizer distills and conveys information back to the user.
2. Mixture-of-Roles (MoR) Framework: The authors introduce a framework with three specialized Low-Rank Adaptation (LoRA) modules, each dedicated to a specific role. These modules collaboratively perform the agent task while maintaining parameter efficiency.
3. Multi-Role Data Generation Pipeline: A novel data generation pipeline is designed using publicly available datasets. It incorporates role-specific content completion and reliability verification to support fine-tuning the MoR framework.
The paper presents extensive experiments and ablation studies on various LLMs and agent benchmarks, demonstrating the effectiveness of the proposed approach.
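To make the role decomposition concrete, here is a minimal toy sketch of ours (not the authors' code; names such as `forward` and `adapters` are assumptions) of one frozen base weight shared by three role-specific low-rank adapters:

```python
# Toy sketch (not the paper's implementation): one frozen base weight shared
# by three role-specific low-rank (LoRA-style) adapters, one per MoR role.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # hidden size of the shared base
W_base = rng.normal(size=(d, d))          # frozen base weight

# One low-rank adapter (A, B) per role: reasoner, executor, summarizer.
rank = 2
adapters = {
    role: (rng.normal(size=(d, rank)), np.zeros((rank, d)))
    for role in ("reasoner", "executor", "summarizer")
}

def forward(x, role):
    """Base forward pass plus the active role's low-rank update A @ B."""
    A, B = adapters[role]
    return x @ (W_base + A @ B)

x = rng.normal(size=(d,))
# With B initialised to zero, every role starts identical to the base model.
assert all(np.allclose(forward(x, r), x @ W_base) for r in adapters)
```

With the `B` matrices initialised to zero, all three roles start out identical to the frozen base model and only diverge as their adapters are trained, which mirrors how LoRA-style modules are typically initialised.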
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: good
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths:
1. Novel PEFT Approach for Agents: The paper addresses a relatively unexplored area—parameter-efficient fine-tuning (PEFT) for agent tasks—by introducing a structured role-based approach.
2. Clear Role Decomposition: The division of agent capabilities into reasoner, executor, and summarizer aligns well with the Reason+Action paradigm, making the framework interpretable and modular.
3. Efficient Fine-Tuning via LoRA: The use of Mixture-of-Roles (MoR) with specialized LoRA modules enables efficient adaptation of LLMs without full model fine-tuning, reducing computational overhead.
Weaknesses:
1. Limited Comparison with Other PEFT Methods: While the paper focuses on its novel approach, it lacks a direct comparison with other existing PEFT techniques that might be adapted for agent tasks.
2. Scalability and Generalization: The approach is tailored to a specific role-based agent structure, and its effectiveness for more complex or diverse agent architectures remains unclear.
3. Data Dependence: The proposed multi-role data generation pipeline relies on publicly available datasets, but its adaptability to real-world or unseen tasks is not extensively analyzed.
4. Limited ablation experiments. It is unclear how much of the gain is contributed by SFT on such diverse datasets.
Other Comments Or Suggestions: see Question
Questions For Authors: 1. What's the gain by SFT on the same datasets, w/o lora? and other PEFT methods?
2. Could you validate your approach on more agent structures? such as Reflection, AutoGen?
3. Could you validate your approach on more agent-related datasets? The training data is tool-use and the validation set is also tool-using (except for math), how about generalization ability on other agent tasks, maybe refer to Agentgym.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer QZb4:
We sincerely thank you for your constructive review; the concerns and questions are answered in detail below.
**1. What's the gain by SFT on the same datasets, w/o lora? and other PEFT methods?**
Thanks for the helpful suggestion. Based on the same multi-role dataset, we supplement the experimental results of Llama3.2-1B-Instruct on the BFCL leaderboard. The results are as follows.
|Method|Trainable Params|AST (Non-live)|Exec (Non-live)|AST (Live)|Relevance (Live)|AVG|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Base|-|21.9|19.2|29.8|38.9|27.5|
|SFT|1.24B|72.3|77.6|61.5|92.6|76.0 (+48.5)|
|LoRA|0.16B|59.7|64.2|56.3|81.8|65.5 (+38.0)|
|DoRA|0.16B|61.2|65.7|58.4|82.0|66.8 (+39.3)|
|Ours| 0.16B|75.2|80.0|60.7|94.4|77.6 (+50.1)|
From the results, SFT exhibits superior accuracy compared to the PEFT methods (LoRA and DoRA), which can be attributed to its larger number of trainable parameters, achieving an average accuracy 10.5% higher than LoRA. Notably, DoRA introduces an advanced scheme by decomposing pretrained weight matrices into magnitude vectors (m) and directional matrices (V), where LoRA is applied specifically to V while m is trained separately. This architectural innovation allows DoRA to slightly surpass LoRA in accuracy. Crucially, our proposed method achieves significant performance improvements through two key innovations: 1) a more rational capacity decomposition strategy, and 2) a novel Mixture-of-Roles framework enabling dynamic interaction between the decomposed modules. These enhancements collectively contribute to our method's marked accuracy superiority over both SFT and the PEFT methods.
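The parameter-count gap between the SFT and PEFT rows above can be illustrated with simple arithmetic (the weight dimensions below are our own assumption for illustration, not taken from the paper):

```python
# Back-of-the-envelope illustration (our own, not from the paper): why a
# LoRA-style adapter trains far fewer parameters than full fine-tuning.
def full_params(d_in, d_out):
    # Full SFT updates the entire d_out x d_in weight matrix.
    return d_in * d_out

def lora_params(d_in, d_out, rank):
    # LoRA replaces the trainable update dW with B @ A,
    # where A: (rank, d_in) and B: (d_out, rank).
    return rank * (d_in + d_out)

d_in = d_out = 4096                      # assumed hidden size, for illustration
assert full_params(d_in, d_out) == 16_777_216
assert lora_params(d_in, d_out, rank=16) == 131_072   # under 1% of full
```

This ratio, compounded across every adapted layer, is what lets the three role adapters together stay near 0.16B trainable parameters while full SFT touches all 1.24B.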
**2. Validating on more agent structures, such as Reflection, AutoGen.**
Thanks for the valuable suggestion. Our method is not constrained by agent architectures. For example, the framework of Reflection[1] comprises three modules: Actor, Self-Reflection, and Evaluator. Similarly, in AutoGen's[2] application scenarios (Figure 3), multi-agent coding involves modules like Commander, Writer, and Safeguard. While each module in these frameworks operates as an independent LLM, our approach integrates multiple independent LLMs into a parameter-efficient Mixture-of-Roles architecture.
Due to the limited time during rebuttal, directly extending our method to these application scenarios proved challenging. Therefore, we modified the agent architecture by adding or removing specific roles based on the existing BFCL multi-roles training dataset. The results are as follows.
|Archs|Trainable Params|AST (Non-live)|Exec (Non-live)|AST (Live)|Relevance (Live)|AVG|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Base|-|21.9|19.2|29.8|38.9|27.5|
|2-roles|0.11B|60.6|66.3|48.3|79.5|63.7 (+36.2)|
|3-roles|0.16B|75.2|80.0|60.7|94.4|77.6 (+50.1)|
|4-roles|0.21B|70.8|74.2|57.6|90.3|73.2 (+45.7)|
Our method integrates three core roles: Reasoner, Executor, and Summarizer. Through architectural modifications (specifically, removing the Summarizer to obtain a 2-role variant and introducing a Planner to obtain a 4-role variant), we extend the framework accordingly. Experimental results demonstrate accuracy improvements across all variants compared to the baseline model. Notably, the 3-role architecture achieves peak performance (an accuracy improvement of 50.1% over the baseline), whereas the 4-role variant, despite containing more trainable parameters, shows diminishing returns (a 45.7% accuracy gain). This empirical evidence highlights that rational role definition and architecture design outweigh mere parameter expansion in achieving optimal model performance.
[1] Reflexion: Language Agents with Verbal Reinforcement Learning
[2] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
**3. Validating on more agent-related datasets. How about generalization ability on other agent tasks, maybe refer to AgentGym.**
Thanks. In our experiments, we conducted evaluations across multiple datasets including StableToolBench, BFCL, GSM8K, and MATH. In fact, the application scenarios of these datasets involve API calls, Java programming, Python programming, JavaScript programming, mathematical problem-solving, etc., not just tool use. AgentGym is an impressive work; however, limited by the rebuttal period, it is difficult for us to reproduce AgentGym and migrate our method to it in a short time. Instead, we supplement the experiments of Llama3.2-1B-Instruct on BFCL with varying amounts of training data.
|Train Data|AST (Non-live)|Exec (Non-live)|AST (Live)|Relevance (Live)|AVG|
|:-:|:-:|:-:|:-:|:-:|:-:|
|0|21.9|19.2|29.8|38.9|27.5|
|1k |49.5|44.6|46.3|79.1|54.9 (+27.4)|
|5k|55.3|51.8|50.6|82.1|60.0 (+32.5)|
|10k|58.8|57.4|52.7|85.9|63.7 (+36.2)|
|50k|70.4|74.9|56.5|91.7|73.4 (+45.9)|
|90k|75.2|80.0|60.7|94.4|77.6 (+50.1)|
From the results, even with only 1k training samples, our method still achieves a 27.4% improvement in average accuracy, demonstrating its strong generalization capability. As the training data volume increases, the accuracy improves accordingly. |
Tight and Fast Bounds for Multi-Label Learning | Accept (poster) | Summary: This paper provides general theoretical guarantees for the generalization of multi-label learning. By developing and leveraging novel vector-contraction inequalities for smooth base losses, the authors derive tight generalization bounds for multi-label learning that have no dependency on the number of labels. In addition, the authors develop novel local vector-contraction inequalities for smooth base losses, which yield bounds with a faster convergence rate. Tight generalization bounds with no dependency on the number of labels are also derived for Macro-Averaged AUC by considering both Lipschitz continuity and smoothness of base loss functions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Incorrect statements are not observed in the key claims (e.g., tighter bounds for smooth base losses, faster bounds, tighter bounds for macro-averaged AUC).
Experimental Designs Or Analyses: N/A
Supplementary Material: Roughly went through
Relation To Broader Scientific Literature: This paper introduces vector-contraction inequalities for smooth base loss functions, rather than Lipschitz ones. It shows tight generalization bounds for multi-label learning that have no dependency on the number of labels.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- This paper extends previous work and introduces novel vector-contraction inequalities for smooth base losses.
- This paper achieves SoTA generalization bounds and also reveals the relationship between Macro-Averaged AUC and class imbalance.
- Compared with prior methods, this paper removes the dependency on label count and achieves faster bounds.
Weaknesses
- This paper is highly technical and not reader-friendly; it would be great to provide some intuitive explanations or demos to help the reader understand the key ideas.
Other Comments Or Suggestions: N/A
Questions For Authors: - Any limitations when the results are applied to the real-world multi-label learning scenarios?
- Since many related papers are mentioned and compared with the derived results, a clearer and more explicit comparison, such as a summary table of the results and their limitations, would make the contribution clearer.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper.
The following are our responses to the Questions:
**1. Response to Weakness.**
We will add concrete examples of multi-label methods after the definition of the function class to improve readability and practical interpretation. For example, the DNN-based multi-label method named CLIF [1], which proposes to learn label semantics and representations with specific discriminative properties for each class label in a collaborative way, can be expressed in our function class as:
$$
\phi_j(\boldsymbol{x})= \sigma_{ReLU} \\{ W_5 \cdot \left[ \sigma_{ReLU} (W_4 \boldsymbol{x}) \odot \sigma_{sig} (W_3 \psi(Y)_j) \right] \\} .
$$
The label embeddings $\psi(Y)$ can be denoted by $\sigma_{ReLU}(\tilde{A} \sigma_{ReLU} (\tilde{A} Y W_1) W_2)$, where $\tilde{A}$ denotes the normalized adjacency matrix with self-connections, $Y$ is the node feature matrix of the label graph, $\sigma_{ReLU}$ is the ReLU activation, $\sigma_{sig}$ is the sigmoid activation, $\odot$ is the Hadamard product, and $W_i$ are the parameter matrices, $i \in [5]$.
In addition, a class of multi-label methods based on the strategy of label-specific representation, which facilitates the discrimination of each class label by tailoring its own representations, can be formalized in our function class. For example, the wrapped label-specific representation method [2], which presents a kernelized Lasso-based framework with the constraint of pairwise label correlations for each class label, can be expressed in our function class, where $f_j$ is the kernelized linear model and the constraint $\alpha(\boldsymbol{w}) $ is $\|\boldsymbol{w}_j\|_1 \leq \Lambda$ for any $j \in [c]$, and each label also has the property of sharing which is reflected by the additionally introduced constraint
$\sum_i^c (1- s_{ji}) {\boldsymbol{w}_j}^\top \boldsymbol{w}_i \leq \tau$,
where $s_{ji}$ is the cosine similarity between labels $y_j$ and $y_i$.
Besides, the function class here is applicable to the typical Binary Relevance methods for multi-label learning, where different methods correspond to different nonlinear mappings $\phi_j$.
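As a minimal sketch of ours (with assumed shapes and random stand-in weights, not the CLIF implementation), the label-specific scoring function $\phi_j$ written above can be expressed as:

```python
# Illustrative sketch (ours): the CLIF-style label-specific score
# phi_j(x) = ReLU{ W5 [ ReLU(W4 x) ⊙ sigmoid(W3 psi(Y)_j) ] }.
# All shapes and weights below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
d, h, c = 6, 4, 3                     # input dim, hidden dim, number of labels

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W3 = rng.normal(size=(h, h))
W4 = rng.normal(size=(h, d))
W5 = rng.normal(size=(1, h))
psi_Y = rng.normal(size=(c, h))       # stand-in for the GCN label embeddings

def phi(x, j):
    # Feature branch ReLU(W4 x), gated by the j-th label embedding.
    return relu(W5 @ (relu(W4 @ x) * sigmoid(W3 @ psi_Y[j])))

x = rng.normal(size=(d,))
scores = np.array([phi(x, j).item() for j in range(c)])
```

Here the Hadamard product gates the shared feature representation with a per-label semantic mask, which is how each $\phi_j$ becomes label-specific while reusing the same input features.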
[1] Collaborative Learning of Label Semantics and Deep Label-Specific Features for Multi-Label Classification, TPAMI 2022.
[2] Multi-Label Classification with Label-Specific Feature Generation: A Wrapped Approach, TPAMI 2022.
In addition, we will also provide additional explanations on the key ideas in the theoretical proof to help readers better understand the ideas. Please refer to our **Response 3 to Reviewer XSTz**.
**2. Response to Q1.**
Our theoretical results require that the base loss is bounded, and therefore cannot cover the case where the base loss is cross-entropy loss. In the future, we will further study the bounds with a faster convergence rate for unbounded and Lipschitz base losses. In addition, for some methods involving specific label correlations, when using our theoretical results to analyze these specific methods, it is necessary to introduce additional assumptions induced by these specific label correlations in the generalization analysis, so as to better reveal the impact of these setting-related factors on the bound. However, how to explicitly introduce label correlations in generalization analysis is still a crucial open problem. We will further explore related work in the future. We will incorporate these discussions into the paper in an appropriate manner.
**3. Response to Q2.**
As the reviewer suggested, a better presentation of related work and comparisons between them through a summary table can improve the readability of the paper and make the contribution of our results clearer. We will add a summary table in the revised version from the perspectives of loss function, additional assumption, order of bounds, and reference.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! Most of my concerns are well addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your support. | Summary: The paper focuses on the theoretical analysis of multi-label learning, particularly in the context of smooth base loss functions. The authors introduce novel vector-contraction inequalities and derive tighter generalization bounds for multi-label learning with smooth base loss functions. These bounds exhibit improved dependency on the number of labels, reducing it to logarithmic terms, and demonstrate faster convergence rates with respect to the number of examples. The paper also discusses the application of these bounds to various multi-label learning methods, highlighting how these results provide general theoretical guarantees for the generalization performance of multi-label learning, especially for methods with smooth base loss functions.
Claims And Evidence: Yes. Specifically, the authors derive theoretical support for tight generalisation bounds and faster convergence rates for smooth base loss functions in multi-label learning by means of mathematical proofs and local Rademacher complexity analyses. The authors also give a proof of broad applicability.
Methods And Evaluation Criteria: Yes. The theoretical analysis presented in the article, especially the tight generalisation bounds based on common smooth base loss functions, directly addresses the generalisation problem in multi-label learning. In addition, the generalisation bounds proposed in the article not only consider the effect of the number of labels on the model, but also improve the traditional Rademacher complexity analysis by localising the Rademacher complexity, providing faster convergence for large-scale datasets.
Theoretical Claims: Yes. 1. A new vector-contraction inequality. This proof builds on the existing Rademacher complexity theory and an extension of local Rademacher complexity that rationally simplifies the otherwise complex multi-label learning problem via the smooth base loss function. 2. Application of local Rademacher complexity. This proof describes the complexity of samples in multi-label learning more finely by introducing local complexity.
Experimental Designs Or Analyses: No, the paper is purely a theoretical derivation without any numerical experimental analysis.
Supplementary Material: Yes. The main elements of the supplementary material include:
1. Detailed proofs, including: proofs of the vector-contraction inequalities, applications of local Rademacher complexity, and proofs of the macro-averaged AUC generalisation bounds.
2. Related lemmas and auxiliary theorems, e.g., Khintchine-Kahane inequality, Dudley's integral inequality.
Relation To Broader Scientific Literature: Earlier generalisation analyses of multi-label learning, e.g., Wu and Zhu (2020), derive generalisation bounds based on Lipschitz-continuous loss functions, but the dependence of those bounds on the number of labels is linear. This paper successfully reduces the effect of the number of labels on the generalisation bounds, in particular so that the bounds no longer depend on the number of labels except through logarithmic terms, demonstrating a significant theoretical advance. Bartlett et al. (2005) proposed local Rademacher complexity and showed that it can provide faster convergence than the traditional Rademacher complexity. This paper improves the convergence speed and generalisation bounds in multi-label learning by introducing local Rademacher complexity, which provides faster convergence than traditional methods, and the effectiveness of this approach is verified theoretically. Wu et al. (2023) proposed a generalisation analysis of macro-averaged AUC and explored the effect of label imbalance on the results, without delving into how to reduce the dependence of the generalisation bounds on the number of labels. By combining Lipschitz and smooth base loss functions, this paper gives tight generalisation bounds for macro-averaged AUC with no dependence on the number of labels, providing a new theoretical guarantee for the label-imbalance problem.
Essential References Not Discussed: No. I keep up with the literature in this area.
Other Strengths And Weaknesses: 1. A new vector contraction inequality is introduced in the article to derive tight generalisation bounds for multi-label learning. the proof method is very rigorous, but in practice, if the label distribution of the datasets does not exactly match the assumptions, it may affect the applicability of the method.
2. Although the introduction of local Rademacher complexity improves the convergence speed, the estimation of local complexity may incur high computational costs, especially when the number of labels and the sample size are very large. The correctness and practical feasibility of the proof rely on being able to efficiently estimate the local complexity, and this may be challenging on large-scale datasets.
Other Comments Or Suggestions: 1. Is the subscript on the left side of the inequality in line 313 incorrect?
2. The inequalities from line 1026 to line 1030 are the same. Please check again.
Questions For Authors: 1. Although the universality of the generalisation bounds is theoretically proven, might there be limitations for some special types of multi-label problems, such as high-dimensional sparse data or highly unbalanced label scenarios?
2. The article provides a comparison with existing methods and demonstrates clear improvements. However, inter-label correlation may affect the theory in specific cases. For example, is the traditional Lipschitz approach likely to be more advantageous in the case of strong label correlation? How can more experiments be designed to verify the validity of these theories in practical applications?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper.
The following are our responses to the Questions:
**1. Response to Weakness 1.**
As the reviewer pointed out, when there is some type of label correlation between the labels of the dataset, the label distribution may satisfy some potential constraints, which may correspond to some additional assumptions such as sparsity assumptions or norm regularization constraints. Therefore, when dealing with these specific problems, we need to introduce some additional assumptions to adjust our analysis and explicitly introduce these potential label correlations into the generalization analysis. This is still an open problem and we will further explore related work in the future.
**2. Response to Weakness 2.**
When dealing with large-scale datasets, in practice one often considers introducing specific strategies in the label space to handle an extremely large number of labels. In generalization analysis, introducing effective and general versions of such strategies is not only an important open problem in theory but also extremely challenging in practice. Therefore, it is indeed necessary to introduce effective assumptions to better and explicitly analyze the role of the key factors in generalization that can effectively deal with the challenging setting of a large number of labels and large-scale datasets.
**3. Response to Suggestions.**
In line 313, the subscript $S$ in the inequality should be changed to $R$.
In line 1026 to 1030, this is a repeated typo and we will remove one of them.
**4. Response to Q1.**
As we replied above, effective analysis for more specific problems requires further explicit introduction of valid assumptions to reveal the impact of these setting-dependent factors on the bound. For example, for high-dimensional sparse data, one may need to introduce sparsity assumptions into the analysis, thereby inducing bounds that are weakly dependent on the sparsity rate and the number of key labels.
**5. Response to Q2.**
Different types of label correlations have an important impact on generalization analysis. How to explicitly introduce them in generalization analysis is a crucial open problem. Traditional Lipschitz methods do have advantages in some cases since their conditions are easier to satisfy. For example, for deep models, the smoothness of base losses also involves the boundedness of the second-order derivative of the function, which is often difficult to guarantee in deep models, while Lipschitz continuity often holds. Experimental verification can be considered from two aspects. On the one hand, one can verify whether the functions selected by the algorithm have a small error and whether the generalization performance of the small-error functions is better. On the other hand, one can verify whether the smoothness of the model function can be guaranteed by some regularization, and explore which regularization-induced inductive biases are more effective for generalization in practice, thereby promoting further theoretical research.
We will incorporate these discussions into the paper in an appropriate manner. | Summary: This paper investigates the generalization bound of multi-label loss functions. Specifically, for smooth base loss functions, the authors improve the generalization bounds by removing the dependency on the number of labels $c$. By exploiting local Rademacher complexity, the authors further improve the bound from $\tilde{O}(1/\sqrt{n})$ to $\tilde{O}(1/n)$. In addition, they. also derive tight bounds for Macro-Averaged AUC.
Claims And Evidence: The theoretical claims are supported by proofs and comparisons with previous works.
Methods And Evaluation Criteria: NA.
Theoretical Claims: I only read the proof sketches, which seem to make sense.
Experimental Designs Or Analyses: NA.
Supplementary Material: No I did not read the supplementary.
Relation To Broader Scientific Literature: NA.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: **Strengths.**
1. This paper is overall well written, with sufficient discussions on the differences from previous works.
2. The derived bounds are tighter, removing the reliance on the number of classes while introducing no additional strong assumptions.
**Weakness.**
1. The theoretical results do not cover unbounded base loss functions, e.g. cross entropy loss.
2. Lack of experimental validation of the theoretical results, e.g., does the generalization gap really not depend on $c$? (Perhaps it is not possible to compute the population risk?)
Other Comments Or Suggestions: NA.
Questions For Authors: 1. Definition 3.3. The fat-shattering dimension seems to depend on the witnesses $s_1,\ldots,s_p$. What are the meanings of these witnesses?
2. Theorem 5.5. The population risk $R(f)$ is bounded by $2\hat{R}_D(f)$ instead of $\hat{R}_D(f)$. Does it mean that this bound is somewhat loose? As $n\to\infty$, the bound becomes $R(f)\leq 2R(f)$, which is not tight.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper.
The following are our responses to the Questions:
**1. Response to Weakness 1.**
As the reviewer commented, theoretical results for unbounded base losses need to be explored further; new theoretical techniques will be required to cover this situation. Although the smoothness of the cross-entropy loss may be difficult to guarantee for models with large capacity, the Lipschitz continuity of the cross-entropy often holds, i.e., tight bounds with no dependency on $c$ can be obtained for the cross-entropy loss. However, improving the convergence rate of this bound with respect to $n$ is still an urgent open problem. In the future, we will further study bounds with a faster convergence rate for unbounded, Lipschitz base losses.
**2. Response to Weakness 2.**
Since the $\widetilde{O}(\frac{1}{\sqrt{n}} )$ and $\widetilde{O}(\frac{1}{n} )$ bounds in our theoretical results are independent of the number of labels **up to logarithmic terms**, the dependency of the bounds on $c$ is logarithmic. We use $\widetilde{O}$ to omit the logarithmic terms since the logarithmic dependency is very weak. Moreover, there is no contradiction between the theoretical results and empirical intuition: an increase in $c$ affects the difficulty of learning, but the empirical success of multi-label methods suggests that increasing $c$ has limited impact on the difficulty of learning, which means that the ideal bound should not depend strongly on $c$.
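To make the weakness of the logarithmic dependency concrete, here is a small numeric illustration of our own (purely about the notation, not a result from the paper):

```python
# Our own numeric illustration (not from the paper, and only about the
# notation): a log(c) dependency grows far more slowly than the linear
# dependency on c that earlier bounds exhibit.
import math

c = 10_000                      # number of labels
linear_factor = c               # dependency of older, linear-in-c bounds
log_factor = math.log(c)        # dependency hidden inside the tilde-O here

# Even for ten thousand labels, the logarithmic factor stays below 10.
assert log_factor < 10 < linear_factor
```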
**3. Response to Question 1.**
The witnesses are real numbers $s_1, \ldots, s_p$ such that, for each choice of signs $\delta_1, \ldots, \delta_p \in \\{-1, +1\\}$, there exists $f \in \mathcal{F}$ with
$$
\delta_i\left(f(\boldsymbol{x}_{i})-s_i\right) \geq \epsilon, \quad \forall i = 1, \ldots, p.
$$
We say that $s_1, \ldots, s_p$ witness the shattering.
**4. Response to Question 2.**
The multiplicative factor of 2 comes from the use of Lemma A.8 and can also be understood through its proof. The result can often be shown, through some derivation, as follows:
$$
R(f) \leq \widehat{R}_{D}(f) + \widetilde{O}(1/n + \sqrt{R^* / n} ) ,
$$
where $R^* = \inf_{f\in \mathcal{F}} R(f)$. This means that in the separable case ($R^* =0$), the bound can be improved to a $\widetilde{O}(1/n)$ rate. The multiplicative factors often appear in bounds based on local Rademacher complexity, as also shown in literature [1-3], etc.
[1] Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning, JMLR 2018.
[2] Towards Sharper Generalization Bounds for Structured Prediction, NeurIPS 2021.
[3] Generalization Analysis for Ranking Using Integral Operator, AAAI 2017.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I have no further questions and decide to keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your support. | Summary: By incorporating smoothness assumption, author provides generalization guarantee achieving a tighter bound - independent of c, the number of labels up to log factors, a faster bound - 1/n, and a similar tighter bound for Macro-averaged AUC.
Claims And Evidence: Mostly seems sound, but I have a question.
Zhang & Zhang 2024, states in the introduction, remark 3.8, and remark 3.18 that their bound is also independent of c.
But in this paper, you only mention sqrt(c) factor bound in the related work section for Zhang & Zhang.
Am I missing something? It would be good to state the improvement.
Methods And Evaluation Criteria: No experiments.
Theoretical Claims: I did not go over the proofs.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: Only skimmed through the proofs.
Relation To Broader Scientific Literature: I think this is well discussed in the introduction and related works.
Essential References Not Discussed: I recommend discussing the paper
Busa-Fekete, Róbert, et al. "Regret bounds for multilabel classification in sparse label regimes." Advances in Neural Information Processing Systems 35 (2022), since it discusses obtaining fast rates (or ultra-fast rates) in the multi-label setting, which is an important contribution of the present paper.
Other Strengths And Weaknesses: I find the paper well grounded.
Other Comments Or Suggestions: I think some part of Definition 3.2 is cut off around line 166.
Questions For Authors: I mentioned my question in "Claims And Evidence" above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper.
The following are our responses to the Questions:
**1. Response to the Question in Claims And Evidence.**
Here, the bounds with a square-root dependency on $c$ in literature [1] mainly refer to the results for $\ell_2$ Lipschitz loss in [1], i.e., Lemma 3.6 and Theorem 3.7; our improvement is mainly relative to Theorem 3.7. In the proof of Lemma 3.6, the $n$ in equation (9) should be changed to $nc$; this is a typo. In fact, the results in Lemma 3.6 and Theorem 3.7 require the introduction of an additional $\sqrt{c}$ factor, because they ignore the $\sqrt{c}$ factor in the radius of the empirical $\ell_2$ cover of $\mathcal{P}(\mathcal{F})$. Therefore, a $\sqrt{c}$ factor is missing in Lemma 3.6 and Theorem 3.7, and literature [1] improved the dependency of the bounds on $c$ from linear to square-root in the decoupling case for $\ell_2$ norm Lipschitz losses. Although for $\ell_2$ Lipschitz loss the bounds in [1] are only improved by a factor of $\sqrt{c}$, they are still the tightest results in multi-label learning with $\ell_2$ Lipschitz loss. In addition, for Hamming loss, its Lipschitz constant can induce tight bounds with no dependency on $c$. We found that the square-root dependency of the bound in [1] on $c$ is inevitable for $\ell_2$ Lipschitz loss, which essentially comes from the $\sqrt{c}$ factor in the radius of the empirical $\ell_2$ cover of the projection function class. We also found that the smoothness of the base loss function can eliminate this $\sqrt{c}$ factor, so that tight bounds with no dependency on $c$, up to logarithmic terms, can be derived.
[1] Yi-Fan Zhang, Min-Ling Zhang. "Generalization Analysis for Multi-Label Learning", ICML 2024.
**2. Response to the Essential Reference.**
Literature [2] derived tight bounds with a logarithmic dependency on $c$ for Hamming loss with KNN, under a smoothness assumption on the regression function together with multi-label margin and sparsity assumptions, and also derived tight bounds with a logarithmic dependency on $c$ for Precision@$\kappa$ under the margin condition and the smoothness assumption. The margin condition ensures that the obtained bounds have a faster convergence rate. In our work, the local loss function space is the key to obtaining bounds with a faster convergence rate. The smoothness condition with respect to the $\ell_\infty$ norm in literature [2] is a variant of Holder continuity. We also find that the $\ell_\infty$ norm has a positive effect on obtaining tight bounds with a weaker dependency on $c$, i.e., tight bounds with a logarithmic dependency on $c$ can be derived for $\ell_\infty$ Lipschitz losses. However, how to improve the convergence rate of the bounds for Lipschitz losses is still an open problem, which we will further explore in future work. We will incorporate these discussions into the paper in an appropriate manner.
[2] Busa-Fekete et al. "Regret Bounds for Multilabel Classification in Sparse Label Regimes", NeurIPS 2022.
**3. Response to the Suggestion.**
To avoid any possible confusion, we will adjust Definition 3.2 to state the $\ell_p$ norm covering number, instead of listing the $\ell_2$ norm and $\ell_\infty$ norm covering numbers separately in the definition.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. After reading the rebuttal, I feel that comparison to [1] needs to be discussed in detail and verified carefully to convey the main important point of the paper, which should have been stated in the paper. I lower my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your efforts in making our work clearer for readers. We compare with [1] in more detail and convey our key points as follows, and we hope our further response will address your concerns.
Regarding the reviewer's comment "[1] states in the introduction, Remark 3.8, and Remark 3.18 that their bound is also independent of $c$": Remark 3.8 refers to the bound for $\ell_2$ Lipschitz loss, and Remark 3.18 refers to the bound for $\ell_\infty$ Lipschitz loss. The bound for $\ell_\infty$ Lipschitz loss is indeed independent of $c$; in fact, we have also pointed out in related work that "[1] derived a $\widetilde{O}(1/\sqrt{n})$ bound for $\ell_\infty$ Lipschitz loss...". When discussing bounds for Macro-AUC, we show in Remark 6.4 that our bound is tighter than the bound for Macro-AUC in [1], since the bound in [1] uses a looser vector-contraction inequality, while we develop a tight vector-contraction inequality for the case where the base loss is smooth, which improves the bound by a factor of $\sqrt{c}$. These discussions are clearly explained in our paper.
Below we explain more clearly the part that may cause confusion, i.e., the relevant results in Remark 3.8 of [1], which is "the bound with a square-root dependency on $c$ in [1]" described in the related work **for $\ell_2$ Lipschitz loss** in our paper. Please note that when we mention the bound with a square-root dependency on $c$, we emphasize the expression "**for $\ell_2$ Lipschitz loss**". In Remark 3.8 of [1], it is shown that bound for $\ell_2$ Lipschitz loss is independent of $c$. The conflict between these two descriptions is that Step 3 of the proof of Lemma 3.6 in [1] ignores the $\sqrt{c}$ factor in the radius of empirical $\ell_2$ cover of $\mathcal{P}(\mathcal{F})$. Hence, the third inequality below Eqn (10) in Step 3 of [1] should be modified as:
$$
\inf_{\alpha>0} \left( 4 \alpha + 48 \sqrt{c} \mu \sqrt{c} \widetilde{\Re}_{nc}(\mathcal{P}(\mathcal{F})) \log^{\frac{1}{2}} (nc) \int_{\alpha}^{M} \epsilon^{-1} \, d\epsilon \right),
$$
which will introduce an additional $\sqrt{c}$ factor and cause bounds in Lemma 3.6 and Theorem 3.7 to be square-root dependent on $c$. Hence, [1] improved the dependency of bounds on $c$ from linear to square-root in the decoupling case **for $\ell_2$ Lipschitz loss**.
In fact, we have previously confirmed and reached agreement with the authors of [1] on this issue. This issue does not affect the conclusion of [1] in general, since for Hamming loss, the inverse of the $\sqrt{c}$ factor in its Lipschitz constant can induce tight bounds with no dependency on $c$.
Our improvement mainly stems from the observation that for $\ell_2$ Lipschitz losses, the square-root dependency on $c$ of the bound in [1] is inevitable, as it essentially comes from the $\sqrt{c}$ factor in the radius of the empirical $\ell_2$ cover of the projection function class $\mathcal{P}(\mathcal{F})$, i.e., $\frac{\epsilon}{\mu \sqrt{c}}$. After careful analysis, we found that the smoothness of the base loss can eliminate this $\sqrt{c}$ factor, i.e., the radius becomes $\frac{\epsilon}{ \sqrt{12 \gamma M} }$. In addition, the method based on Sudakov's minoration used in [1] to upper bound the $\ell_2$ norm covering number of the projection function class is no longer applicable here. In our paper, using the smoothness of the base losses, we first derive the relationship between the empirical $\ell_{2}$ norm covering number of the loss space and the empirical $\ell_\infty$ norm covering number of the projection function class. Then, we show that the empirical $\ell_\infty$ norm covering number $\mathcal{N}_{\infty}(\epsilon, \mathcal{P}(\mathcal{F}), [c] \times D)$ can be bounded by the worst-case Rademacher complexity of the projection function class, using the fat-shattering dimension as a bridge.
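To make the origin of this $\sqrt{c}$ factor concrete, here is a schematic one-step computation (an informal sketch in our notation, for a loss $\ell$ that is $\mu$-Lipschitz w.r.t. the $\ell_2$ norm):
$$
\frac{1}{n}\sum_{i=1}^{n}\left(\ell(\boldsymbol{f}(\boldsymbol{x}_i))-\ell(\boldsymbol{g}(\boldsymbol{x}_i))\right)^{2} \leq \frac{\mu^{2}}{n}\sum_{i=1}^{n}\left\|\boldsymbol{f}(\boldsymbol{x}_i)-\boldsymbol{g}(\boldsymbol{x}_i)\right\|_{2}^{2} = \mu^{2} c \cdot \frac{1}{nc}\sum_{i=1}^{n}\sum_{j=1}^{c}\left(f_j(\boldsymbol{x}_i)-g_j(\boldsymbol{x}_i)\right)^{2},
$$
so an empirical $\ell_2$ cover of $\mathcal{P}(\mathcal{F})$ over $[c] \times D$ at radius $\frac{\epsilon}{\mu \sqrt{c}}$ is needed to obtain an $\epsilon$-cover of the loss class, and this $\sqrt{c}$ in the radius is what propagates into the bound for $\ell_2$ Lipschitz losses.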
The above key points and proof ideas can induce a bound independent of $c$. Hence, for the bound with a square-root dependency on $c$ for $\ell_2$ Lipschitz loss in [1], we consider the smoothness of base loss and improve the bound by a factor of $\sqrt{c}$. In addition, the smoothness of base loss combined with the local loss function space allows the development of novel local vector-contraction inequalities, which can induce bounds that not only have a faster convergence rate but also have a weaker dependency on $c$.
We will incorporate the above discussion into the paper in a suitable way to better convey the main important point of the paper, and we will not ignore the contribution of [1] to the multi-label community, especially the bounds with no dependency on $c$ for $\ell_\infty$ Lipschitz loss, which we objectively point out in the paper. We will objectively describe the relevant results without any negative impact.
We hope that our response will help further improve your opinion of our contributions. We are eager to hear back from you if you have any feedback or further questions, and we would love to know your updated reviews. | Summary: This paper focuses on the problem of multi-label classification, where each instance can be associated with multiple labels simultaneously. The authors derive several generalization bounds for this setting, assuming smooth loss functions. Their analysis relies on standard techniques for characterizing the complexity of function classes, such as local Rademacher complexity.
The core novelty of the paper lies in the development of specific vector contraction inequalities tailored to a particular class of multi-label classifiers. These inequalities are then used to establish generalization bounds. This approach leads to the derivation of faster convergence rates with respect to the sample size. Notably, the authors obtain rates of $\sqrt{c}$ and $c^{3/2}$ for several well-known multi-label losses, including Hamming loss, subset loss, and macro-averaged AUC (Area Under the Curve).
Claims And Evidence: In summary, while the paper presents valuable generalization bounds for multi-label classification, there are several points that need clarification and improvement. Providing concrete examples of the model class, addressing the applicability to cross-entropy loss, clearly explaining the novelty of the vector contraction inequalities, and correcting the error in Theorem 5.5 would significantly strengthen the paper and enhance its accessibility and impact.
Methods And Evaluation Criteria: No empirical evidence is given.
Theoretical Claims: *Specific Points and Concerns:*
1) *Model Class Examples:*
* It would be beneficial if the authors could provide concrete examples of state-of-the-art (SOTA) multi-label classification methods that fall within the specific class of classifiers considered in this paper. This would help readers understand the scope and applicability of the theoretical results. Clarifying which existing models align with their defined model class is crucial for practical interpretation.
2) *Smoothness and Cross-Entropy Loss:*
* In practice, a common approach for multi-label classification involves using cross-entropy loss for each label independently as a surrogate loss function. It is essential to clarify whether the authors' definition of "smoothness" encompasses this widely used cross-entropy loss. If it does, this should be explicitly stated. If not, the limitations of the analysis concerning this practical loss function should be discussed.
3) *Novelty of Vector Contraction Inequalities:*
* The paper's primary contribution seems to be the new vector contraction inequalities. However, the explanation of their novelty is lacking. It would greatly improve the paper if the authors highlighted the key insights or "core idea" behind their improved inequalities. What specific techniques or arguments allowed them to derive better bounds compared to existing vector contraction inequalities? Clearly articulating this contribution is essential for the paper's impact.
4) *Error in Theorem 5.5:*
* Theorem 5.5 appears to contain an error. The empirical error term seems to have a multiplicative factor of 2, which is likely incorrect. This should be carefully checked and corrected. Such errors can significantly undermine the credibility of the theoretical results.
Experimental Designs Or Analyses: No experiments are provided.
Supplementary Material: Not checked.
Relation To Broader Scientific Literature: Not included a paper which gives actually much better dependence on the number of labels under a more mild assumption on the function class than that of considered in the submission. Please see: Busa-Fekete et al.: Regret Bounds for Multilabel Classification in Sparse Label Regimes. NeurIPS 2022
Essential References Not Discussed: see comments above
Other Strengths And Weaknesses: The notation is quite hard to follow. And there are many inconsistency in the notation. Like $\ell_b$ has one argument, and some cases two arguments. And R_nc is not defined, or I have not found it.
Other Comments Or Suggestions: None.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper.
**1. Response to C1**
We add concrete examples of multi-label learning (MLL) methods after the function class definition to improve readability and practical interpretation, please refer to **Response 1 to Reviewer Wn57** due to character limitation.
**2. Response to C2**
Our theoretical results do not cover the case where the base loss is cross-entropy, mainly because cross-entropy loss is not bounded. We further discuss the smoothness of cross-entropy below. When the model is a linear classifier, smoothness of cross-entropy can be obtained from the boundedness of the input, but from the perspective of model capacity such a result is not general. The function class here involves general functions (i.e., nonlinear mappings $\phi_j$), so for nonlinear models, smoothness of cross-entropy involves not only the boundedness of the gradient of the model function but also the boundedness of the second-order derivative. For deep networks, changes in parameters may cause drastic changes in the second-order derivative, so its norm may be unbounded. Hence, smoothness of cross-entropy often fails to hold. However, Lipschitz continuity of cross-entropy usually does hold, and the boundedness of the gradient of the model function is ensured by various strategies in practice, e.g., input normalization, weight initialization, gradient clipping, and regularization. This implies the need to develop new theories and analytical methods for unbounded base losses. Under such a new analysis, tight bounds with no dependency on $c$ for cross-entropy loss can be obtained using its Lipschitz continuity, but improving the convergence rate of its bound w.r.t. $n$ remains an urgent open problem. In the future, we will further study bounds with a faster convergence rate for unbounded Lipschitz base losses.
**3. Response to C3**
Since the output of MLL is a vector-valued function, we need to convert Rademacher complexity of the vector-valued class into complexity of a tractable scalar-valued class. For $\ell_2$ Lipschitz losses, the analysis of MLL can be traced back to a basic bound with a linear dependency on $c$ that comes from a typical inequality:
$$\mathbb{E}\left[\sup_{\boldsymbol{f} \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c \epsilon_{ij} f_j\left(\boldsymbol{x}_{i}\right)\right]$$
$$\leq c \max_j \mathbb{E}\left[\sup_{f_j} \frac{1}{n} \sum_{i=1}^n \epsilon_{ij} f_j\left(\boldsymbol{x}_{i}\right) \right].$$
The dependency of bounds on $c$ can be improved to square-root. Such improvements essentially come from preserving the coupling among different components reflected by constraint $\\|\boldsymbol{w}\\| \leq \Lambda$.
As a comparison, when $\\|\boldsymbol{w}_j\\|_2 \leq \Lambda$ for any $j \in [c]$, if we consider the group norm $\\|\cdot \\|_{2, 2}$, we have $\\|\boldsymbol{w}\\|_{2, 2} \leq \sqrt{c}\Lambda$, which means that these improved bounds still suffer from a linear dependency on $c$. [1] improved the dependency of bounds on $c$ from linear to square-root in the decoupling case for $\ell_2$ Lipschitz losses. We found that the square-root dependency on $c$ of the bound in [1] is inevitable for $\ell_2$ Lipschitz losses, which essentially comes from a $\sqrt{c}$ factor in the radius of the empirical $\ell_2$ cover of the projection function class. We also found that smoothness of the base loss can eliminate the $\sqrt{c}$ factor, so that tight bounds with no dependency on $c$, up to logarithmic terms, can be derived. In addition, following the above core ideas, we combine smoothness of the base loss with the local loss space to develop novel local vector-contraction inequalities, thereby obtaining sharper bounds with a weaker dependency on $c$ and a faster convergence rate w.r.t. $n$. We have also explained the proof processes, ideas, and specific theoretical techniques in the proof sketches, which mainly include conversions between complexities of different classes and the lemmas required to achieve these conversions.
[1] Generalization Analysis for Multi-Label Learning, ICML 2024.
**4. Response to C4**
We explain the multiplicative factor of 2, please refer to **Response 4 to Reviewer ZadH**.
**5. Response to Literature**
We give a detailed discussion of the paper pointed out, please refer to **Response 2 to Reviewer dgGN**.
We will incorporate these discussions into the paper in an appropriate manner.
**6. Response to Weakness**
We will carefully check and revise the notation to ensure consistency, e.g., the definition of the base losses $\ell_b$. $\widetilde{\Re}_{nc}(\mathcal{P}(\mathcal{F}))$ is the worst-case Rademacher complexity of the projection function class; we define worst-case Rademacher complexity in Definition 3.1, and $\widetilde{\Re}_{nc}$ is the analog of that definition for the class $\mathcal{P}(\mathcal{F})$.
Highly Compressed Tokenizer Can Generate Without Training | Accept (poster) | Summary: This paper proposes an optimization-based method to tweak the latent space of tokenizer for image editing tasks.
Claims And Evidence: - I think this paper highlights the generation ability of the 1D tokenizer, specifically TiTok. However, I find the terms "image editing" or "image variation" more fitting, because unlike a standard generative model such as a diffusion model, which can generate from very cheap random noise, this method needs to start from a seed image
- And on a high level, I feel this paper is telling two stories, i.e., Section 3 and Sections 4-5. It first shows that token positions in the 1D tokenizer have semantic meaning, and later presents how the 1D tokenizer can be used to search the latent space. However, I don't see a clear connection between these two stories; they feel somewhat disconnected. I think the reason the 1D tokenizer works better than a 2D tokenizer for the gradient-based method is that its latent space is smaller and easier to optimize over. It looks to me that even without Section 3, Sections 4-5 can already explain themselves.
- I hope to see more failure cases and limitations discussed in the paper
Methods And Evaluation Criteria: .
Theoretical Claims: .
Experimental Designs Or Analyses: - Regarding Section 5, I wonder whether the 50k text prompts "a photo of a class" contain duplicated prompts. I ask because the optimization process is deterministic if the loss is deterministic and the number of iterations and the optimizer are fixed; identical text prompts should then result in identical image variations, and I believe the FID in this case won't be good. So I think that to keep the FID low, you either need non-duplicated prompts or need to make the optimization process somewhat random.
Supplementary Material: No
Relation To Broader Scientific Literature: .
Essential References Not Discussed: Some other open-sourced 2D discrete image tokenizers:
- Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation
- Cosmos World Foundation Model Platform for Physical AI
Other Strengths And Weaknesses: I think the design in Section 3 is interesting, but I find it a bit impractical to use because you always need to first solve a classification problem to find the token that controls a given concept.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review\! We hope to address some of your concerns in this response.
* **Image Editing vs. Generation**
We decided to use the "image generation" nomenclature in line with previous work on so-called retrieval-augmented generative models, such as RDM \[1\].
**Furthermore, as requested by Reviewer mQYF (see "Initializing with Random Tokens" for more details and quantitative results), we find that it is indeed possible to generate images "from scratch", by starting with random tokens.**
Link to uncurated generations starting from random tokens: https://i.ibb.co/20L8xk1q/rand-tok.png
* **Additional Failure Cases**
The "from scratch" generations referenced above are uncurated, and we plan to include additional visualizations in the appendix of our revised submission.
You may also be interested in our response to Reviewer apSr on additional out-of-domain examples.
Finally, you can find a discussion of some limitations of the token optimization-based editing approach in our response to Reviewer N1ti's question regarding limited control over the editing process. In particular, there are no guarantees that text-guided editing will not result in unintended modifications to the input images, although we believe that this issue could be alleviated with further engineering effort (e.g., a more advanced objective function combining reconstruction and CLIP components).
* **Determinism and Duplicated Seed/Prompt Inputs**
All experiments in Section 5 do in fact include some small amount of randomness in the loss due to nondeterminism in some of the CUDA kernels used in our implementation \[2\].
Interestingly, even the small amount of nondeterminism caused by implementation details of the CUDA kernels used in PyTorch is sufficient to lead to diverse generations by the test-time optimization procedure.
We have experimentally verified that using deterministic implementations of these algorithms leads to deterministic results, in turn leading to degraded FID in the case of a smaller number of seed images or deterministic seed selection (e.g., 500 seed images, or 1000 seed images with top-1 selection). In the case of at least 1000 seed images with top-1% selection, there is enough diversity in the seed-to-prompt association to yield about 4000 unique inputs, such that the FID-5k results would be relatively unaffected by the use of a fully deterministic implementation.
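For reference, the PyTorch switch referenced in \[2\] can be enabled as follows (a minimal sketch; some CUDA ops may additionally require environment configuration such as `CUBLAS_WORKSPACE_CONFIG`):

```python
import torch

# Opt in to deterministic kernel implementations globally; operations
# that lack a deterministic variant will raise an error instead of
# silently running nondeterministically.
torch.use_deterministic_algorithms(True)
```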
Thank you for pointing out this issue. We will address this point explicitly in the main text of the revised submission.
### References
\[1\] Blattmann, A., Rombach, R., Oktay, K., Müller, J., and Ommer, B. Retrieval-augmented diffusion models. NeurIPS 2022\.
\[2\] https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | Summary: The paper finds that tokens in the latent space of 1D tokenizers are strongly correlated with specific attributes of images (e.g. captured subject, lighting, background).
The authors build on this finding and propose gradient-based text-guided image editing and inpainting algorithms that optimize these 1D tokens.
This approach enables the use of pre-trained 1D image tokenizers for editing and inpainting without requiring further training.
Claims And Evidence: The proposition of this paper is built on two main claims, as discussed below.
## Correlation between token positions and semantics
The authors claim that particular token positions in 1D tokenizers correlate with certain high-level attributes of images.
This is confirmed (for the pre-trained TiTok 1D tokenizer) through experiments in section 3, which analyze the encodings of various partitions of ImageNet. Specifically, the authors divide ImageNet into different classes (by utilizing CLIP similarity with a given prompt), and compute the variance (across classes) of the mean of tokens within classes (grouped by index).
Their results show that some token positions have a particularly high variance within a partition.
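The described analysis can be sketched in a few lines of numpy (synthetic stand-in data with a planted signal, not the paper's actual TiTok encodings; all names here are ours):

```python
import numpy as np

# For each token position, compute the variance across classes of the
# per-class mean token; high-variance positions correlate with the partition.
rng = np.random.default_rng(0)
num_images, num_tokens, num_classes = 600, 32, 3
labels = rng.integers(num_classes, size=num_images)
tokens = rng.normal(size=(num_images, num_tokens))
# Plant class information at position 7 to mimic the paper's finding.
tokens[:, 7] += 5.0 * labels

class_means = np.stack(
    [tokens[labels == k].mean(axis=0) for k in range(num_classes)]
)  # shape (num_classes, num_tokens)
var_across_classes = class_means.var(axis=0)  # shape (num_tokens,)

# The position with the highest cross-class variance "controls" the concept.
top_position = int(var_across_classes.argmax())
```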
## Editing through manipulation of tokenizers
The authors show that meaningful image edits can be achieved through the manipulation of tokens at certain positions.
They experiment with perturbing tokens at certain positions (by replacing them with the tokens leading to the highest visual difference), which qualitatively lead to edits that "make sense" given the prompt with which the token index was chosen.
They also show that certain image attributes can be *transferred* from a reference to a target image by replacing a token of the encoded target image with that of the reference image at the desired token index.
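The attribute-transfer operation amounts to a single token swap; a minimal sketch with stand-in codebook indices (names and values are ours, not the paper's code):

```python
import numpy as np

# Stand-in tokenized images: arrays of codebook indices, as a discrete
# 1D tokenizer would produce. A real pipeline would call encode/decode.
rng = np.random.default_rng(1)
num_tokens = 32
target_tokens = rng.integers(0, 4096, size=num_tokens)
reference_tokens = rng.integers(0, 4096, size=num_tokens)

k = 7  # token position found (as in Section 3) to control the attribute
edited_tokens = target_tokens.copy()
edited_tokens[k] = reference_tokens[k]
# Decoding edited_tokens would render the target image with the
# reference image's attribute at position k.
```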
Methods And Evaluation Criteria: ## Methods
Building on the findings regarding the TiTok 1D tokenizer, the authors propose an image editing procedure that optimizes the encoded tokens of an image.
Specifically, they optimize encoded tokens to maximize a CLIP objective given a prompt.
However, key parts of the seed image (that are not related to the editing prompt) seem to change after the optimization (e.g. in Figure 5, the lighthouse itself seems to change between edits, although the prompt suggest changing its surrounding context).
This is expected as the suggested CLIP objective does not guarantee the conservation of elements unrelated to the editing.
Note that the authors claim in lines 271-274 that "the optimization preserves key aspects of the subject while aligning the generated image with the prompt". I would advise reformulating this claim, as key aspects of the subject **do change** between edits (e.g. the dog face in Figure 5, last row).
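For concreteness, the CLIP-guided token optimization described in this section might be sketched as follows. Everything here is a tiny stand-in (random codebook, linear "decoder", random "text embedding"), and the straight-through estimator is one plausible way to backpropagate through vector quantization, not necessarily the authors' exact implementation:

```python
import torch

torch.manual_seed(0)
codebook = torch.randn(16, 8)             # (codebook_size, token_dim)
decoder = torch.nn.Linear(4 * 8, 32)      # maps 4 tokens to a toy "image"
text_embed = torch.randn(32)              # stand-in CLIP text embedding

z = torch.randn(4, 8, requires_grad=True)  # continuous tokens being optimized
opt = torch.optim.Adam([z], lr=0.1)

def clip_score(tokens):
    # Quantize each token to its nearest codebook entry; the
    # straight-through trick routes gradients to the continuous tokens.
    q = codebook[torch.cdist(tokens, codebook).argmin(dim=1)]
    zq = tokens + (q - tokens).detach()
    image = decoder(zq.flatten())
    return torch.cosine_similarity(image, text_embed, dim=0)

sim_before = clip_score(z).item()
for _ in range(200):
    loss = -clip_score(z)   # ascend the similarity to the prompt embedding
    opt.zero_grad()
    loss.backward()
    opt.step()
sim_after = clip_score(z).item()
```

In this toy setup the quantization makes the forward pass piecewise constant in `z`, which mirrors the regularizing role the rebuttal ascribes to VQ.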
The authors also propose an inpainting procedure that utilizes a modified algorithm which periodically replaces the known parts of the image with their original counterparts, and encodes the resulting image back into the latent space.
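The periodic re-injection step might look like this (identity encode/decode stand-ins for illustration only; in the paper these would be the tokenizer's encoder and decoder):

```python
import numpy as np

def encode(img):    # stand-in tokenizer encoder
    return img.copy()

def decode(tokens): # stand-in tokenizer decoder
    return tokens.copy()

original = np.linspace(0.0, 1.0, 16)   # known image
mask = np.zeros(16)
mask[4:8] = 1.0                        # 1 = region being inpainted
tokens = encode(np.full(16, 0.5))      # current optimization state

# Every K optimizer iterations: decode, paste the known pixels back
# over the current estimate, and re-encode into the latent space.
img = decode(tokens)
img = mask * img + (1.0 - mask) * original
tokens = encode(img)
```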
- **mec.q.1.** How is the number of optimizer iterations chosen? In Figure 4, it seems that the higher the number of iterations, the further the resulting image deviates from the seed (and fewer elements of the seed image are conserved). Does running the optimization with a higher number of iterations result in the final image being completely different from the seed image? Could the authors provide the results of such an experiment?
- **mec.q.2.** In section 4.1, the authors say that optimizing $\mathbf{z}^{(k)}$ directly leads to poor results. Could the authors provide experiments that support this?
## Evaluation Criteria
The authors construct a "seed image" dataset that is subsampled from ImageNet.
They associate a target prompt to a seed image in various ways (random, top image in CLIP similarity, or randomly picked among top-k most similar images with CLIP).
They utilize various TiTok checkpoints, which allows them to compare results among different autoencoder types (with discrete or continuous tokens), different sizes, and different number of tokens.
They also compare with a 2D tokenizer (MaskGIT VQGAN).
To evaluate the diversity and quality of generation, the authors utilize the FID and IS metrics.
To evaluate alignment with target prompts, the authors utilize CLIP and SigLIP similarity.
Theoretical Claims: The authors do not make any theoretical claims worth discussing in this section.
Experimental Designs Or Analyses: The authors present a series of experiments to compare seed sizes and seed association strategies.
They build on these results and claim that around 1000 seed images are enough to produce diverse generations. They also conclude that adding some stochasticity in the seed-to-prompt associations leads to better FID (as it increases diversity) but worse IS (compared to taking the associations having the best similarity).
- **eda.q.1** In line 370, "achieving an FID of 8.6 for 50k samples". Could the authors also provide the other metrics (IS, CLIP, SigLIP) for this experiment (and ideally add it to Table 1)?
The authors also present experiments that compare performance across different latent space dimensions, discrete and continuous tokens, as well as 1D and 2D tokenizers.
The authors conclude from Table 2 that a decreasing number of tokens leads to "significant improvements in generation quality". Upon further investigation, the authors compare VQ-LL-32, a **large size** variant of the tokenizer with 32 tokens, and VQ-BB-64, a **base size** variant of the tokenizer with 64 tokens (while keeping a constant codebook size).
- **eda.q.2** Could the authors provide additional evidence to confirm that the improvements indeed stem from the smaller number of tokens and not the larger model?
The authors conclude from Table 3 (first two rows) that a discrete latent space is essential in achieving good generative performance.
The authors also conclude from Table 3 (first and last row) that the large number of tokens in MaskGIT's VQGAN "prevent the successful application of the test-time optimization for generation" and that "the 2D tokenizers' spatially arranged tokens lead to optimization results that are spatially incoherent".
- **eda.q.3a** Is the degraded generative quality caused by the large number of tokens of the 2D tokenizer, the spatially arranged 2D grid, or both? It seems that further experiments are needed to support this claim, especially that eda.q.2 is not a given.
- **eda.q.3b** Regarding the spatially incoherent results, could the authors provide visuals that illustrate these observations?
The authors also provide some tweaks of their algorithm along with an ablation study, which justifies their design choices.
Supplementary Material: In the paper appendix, the authors provide some background on the TiTok tokenizer, some further proof of the token position correlation with semantics, details on their proposed editing algorithm, and additional visualizations.
- **sm.q.1** Do the authors plan to make the code publicly available?
Relation To Broader Scientific Literature: This paper explores the latent space of one-dimensional image tokenizers which are relatively recent.
The authors highlight an interesting correlation between token positions and image semantics, and utilize it to perform editing and inpainting.
As such, this paper relates to other works involving 1D image tokenizers, and works tackling text-guided image editing and inpainting, specifically in a test-time optimization (training-free) context.
Essential References Not Discussed: The authors utilize test-time optimization on recent 1D image tokenizers for image editing and inpainting.
While the paper discusses other test-time optimization approaches for image editing, I recommend including a discussion on text-guided image editing approaches in a broader context, which would help clarify how the work relates to existing techniques.
Other Strengths And Weaknesses: ## Strengths
1. The paper is well written and follows a clear and logical structure, making it an enjoyable and insightful read.
2. The concepts presented regarding 1D tokenizers in the paper are original, interesting, and insightful.
3. Additionally, the concepts presented hold potential for future research in this area.
## Weaknesses
1. My main concern is that some claims and conclusions that are important for the context of this paper require additional experimental support (see related questions).
2. Moreover, the test-time optimization text-guided editing algorithm seems to affect aspects of the image that are not related to the prompt (or that should be conserved). Could the authors elaborate on this issue?
3. Providing additional explanation and interpretation of the results, particularly in Section 5.3, would strengthen the discussion and findings of the paper (e.g. visualizations and insights on why a 2D latent space leads to worse results).
4. For the sake of reproducibility, it would be valuable if the authors provided the code used to conduct the experiments in this paper as part of the supplementary material.
Other Comments Or Suggestions: ## Writing suggestions
1. It would be helpful to provide equation numbers for future referencing.
2. In the equation of section 4.1, the text prompt is not represented and not given to the loss function (which might lead to confusion).
3. Typo in line 362 (second column).
4. Typo in line 375 (prevent**s**)
Questions For Authors: Questions for the authors are listed in bullet points in their relevant context within the review.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your thorough and detailed comments and suggestions, as well as the thoughtful questions. We hope the following answers can address your concerns.
* **mec.q.1.** Optimizer Iterations
Your assessment that increasing the number of optimizer iterations causes the result to deviate further and further from the input image matches our observations. In fact, the FID and IS scores are relatively sensitive to the number of iterations, and the choice of iterations can provide control over the FID/IS score tradeoff.
We have therefore computed FID and IS for different number of optimizer iterations (between 50 and 500 in 50 iteration increments). We report FID and IS at the number of iterations corresponding to the best FID, and separately, to the best IS. *Note that these differ from the results in Table 2 because Table 2 reports results using the same number of optimizer iterations for all models.*
|@ best FID|iter|FID|IS|
|:-| :- | :- | :- |
|VQ-LL-32|300|15.1|160|
|VQ-BB-64|150|16.4|134|
|VQ-BL-64|150|18.6|105|
|VQ-BL-128|150|22.4|92|
|@ best IS|iter|FID|IS|
|:-|:-|:-|:-|
|VQ-LL-32|500|15.5|175|
|VQ-BB-64|250|17.1|144|
|VQ-BL-64|250|19.1|118|
|VQ-BL-128|250|24.5|95|
For best results, the number of iterations should be picked on an example-by-example basis: "smaller" edits may require fewer iterations than more significant ones. One could also design adaptive stopping criteria based on monitoring the objective function's value and its change across iterations.
* **mec.q.2.** Optimizing without VQ
We find that the vector quantization step serves as regularization that prevents the test-time optimization procedure from behaving akin to an adversarial attack on the CLIP objective. In particular, the CLIP objective, when provided with the output of the 1D tokenizer with VQ, appears to be very adversarially robust (see also [1]). We observe weaker robustness in the case without VQ.
The best FID-5k achieved by the no-VQ optimization is 15.8 (at 100 optimization iterations, with an IS of 118) and the best IS is around 130 (with 250 optimization iterations, at an FID of 17.1). Even with L2 regularization applied on the tokens, we are not able to improve FID and see a very small improvement in IS. In all cases, the optimization with VQ significantly outperforms the no-VQ one with a top FID of 15.1 and IS of >160.
We have generated examples of how this manifests itself qualitatively, which we will include in the revised submission. We observe that the no-VQ optimization sometimes leads to more artifacts and more exaggerated and larger areas of repetitive texture.
**Link to visualization: https://i.ibb.co/3yYd9vF6/no-vq-vis.png**
* **eda.q.1.** FID 8.6 Experiment
We report this metric (FID-50k of 8.6) in Table 4, alongside the FID-5k and IS, CLIP and SigLIP scores. As you point out, IS, CLIP and SigLIP over 50k samples are not reported. We find that the metrics other than FID are very similar between the 5k and 50k sample evaluations, which is why we omit them.
* **eda.q.2.** Decoder Size vs. Number of Tokens
We agree that this is a shortcoming in our evaluation, so we have run additional experiments comparing the VQ-BL-64 and VQ-BL-128 tokenizers, which share the same model and codebook sizes, differing only in the number of 1D tokens. The results are included in the tables from our response to question **mec.q.1**, and show that the VQ-BL-64 tokenizer always outperforms the VQ-BL-128 tokenizer.
* **eda.q.3a.** Role of 2D Tokens
Our main reason for including the 2D VQGAN in this table was to demonstrate the poor performance of simple test-time optimization with widely used “traditional” tokenizers. Since we are not aware of any 2D tokenizers with VQ that achieve a similarly high compression ratio as TiTok, the conclusion from this experiment may be better reworded as *the high degree of compression enabled by 1D tokenization is key to generative performance*, rather than 1D tokenization itself being key.
* **eda.q.3b.** 2D Tokens Qualitative Results
Please see our response to Reviewer mQYF’s **Question 1** for some qualitative examples of 2D tokenizer generations.
* **sm.q.1.** Code
We will make the code publicly available.
* **Preserving Input Image Features**
There is indeed no guarantee on preservation of aspects of the input image.
This could be mitigated with further engineering effort, e.g., the difference in tokens compared to the tokenized version of the input image could be slightly penalized. One could also combine the text-guided objective with the inpainting one to explicitly preserve user-defined regions.
Certain differences wrt. the input are introduced by the tokenizer itself, due to imperfect reconstruction (VQ-LL-32 example: https://i.ibb.co/qMnp23fq/recons.png).
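A minimal numeric sketch of the penalized objective described above, with a toy stand-in for the CLIP term (all names and dimensions here are illustrative):

```python
import numpy as np

def edit_loss(tokens, tokens_input, prompt_dir, lam=0.1):
    # Toy stand-in for the CLIP guidance term (distance to a "prompt direction");
    # in the actual setup this would be the negative CLIP similarity of the
    # decoded image with the text prompt.
    clip_term = np.sum((tokens - prompt_dir) ** 2)
    # Preservation penalty: stay close to the tokenized input image.
    preserve = np.sum((tokens - tokens_input) ** 2)
    return clip_term + lam * preserve

rng = np.random.default_rng(0)
tokens_input = rng.normal(size=8)   # tokens of the (toy) input image
prompt_dir = rng.normal(size=8)     # toy target implied by the text prompt

# For this quadratic toy the minimizer is a weighted average of both targets:
# it trades off prompt alignment against preserving the input's tokens.
lam = 0.1
tokens_opt = (prompt_dir + lam * tokens_input) / (1 + lam)
```

The weight `lam` controls how strongly aspects of the input image are preserved; the inpainting variant would instead enforce exact agreement on user-defined regions.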
### References
[1] S. Santurkar et al. Image synthesis with a single (robust) classifier. NeurIPS 2019. https://arxiv.org/abs/1906.09453
---
Rebuttal Comment 1.1:
Comment: The authors have provided all the necessary clarifications and addressed my main concerns.
Additionally, during the discussion period, the method appears to be **even more interesting** than initially thought, as it can generate (or "*edit*") images even starting from a random seed (and not just from an original seed image).
I look forward to seeing the rebuttals' results reflected in the paper update.
I still have some minor concerns about the non-preservation of key aspects of the input image and the number of iterations being variable and manually tuned depending on the example.
Nevertheless, and in light of all of this, I have increased my Overall Recommendation. I would like to thank the authors for their diligent work during the rebuttal process! | Summary: This paper introduces a generative pipeline leveraging a 1D image tokenizer (e.g., TiTok) to enable image editing and generation without training a dedicated generative model. By compressing images into highly compact 1D token sequences (e.g., 32 tokens), the authors demonstrate that simple token manipulations (e.g., copy-paste, gradient-based optimization) can achieve text-guided editing, inpainting, and unconditional generation. The approach relies on test-time optimization of tokens using objectives like CLIP similarity or reconstruction loss, bypassing the need for iterative generative models like diffusion. Experiments on ImageNet show competitive FID and IS scores compared to state-of-the-art methods, though qualitative results reveal limitations in handling complex scenes or novel concepts.
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
1. Novel Compression Approach: The use of 1D tokenization with extreme compression (32 tokens) is innovative, enabling efficient latent space editing and generation.
2. Training-Free Generation: The framework avoids training generative models, reducing computational overhead and enabling rapid adaptation to new tasks.
3. Empirical Validation: Comprehensive experiments (e.g., text-guided editing, inpainting) provide evidence of the method’s effectiveness, with competitive FID/IS scores on ImageNet.
4. Practical Applications: The approach’s flexibility could facilitate real-world use cases like content moderation, image editing, or low-resource generation.
**Weaknesses**
1. While the 1D tokenizer is novel, the core idea of latent space optimization (e.g., VQGAN-CLIP) is not groundbreaking. The contribution lies more in engineering than foundational advance.
2. Results are confined to ImageNet, leaving applicability to diverse domains (e.g., medical imaging, abstract art) unproven. Qualitative failures (e.g., Figure A6) highlight limitations with novel concepts or complex scenes.
3. FID/IS scores lag behind modern generative models (e.g., ADM, RCG-G), and qualitative outputs exhibit artifacts (e.g., blurriness in inpainting).
4. The paper lacks a rigorous analysis of why 1D tokenization enables generative capabilities, limiting its contribution to empirical observations.
5. Test-time optimization (300+ iterations) is computationally intensive, undermining claims of efficiency compared to trained generative models.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank you for your review! As you point out, the idea of latent space optimization is indeed not new. However, we do believe that its application in the case of highly compressed latent spaces is noteworthy for a few reasons:
1. **Previous attempts to use test time latent space optimization for image editing, i.e. VQGAN-CLIP, have not demonstrated high quality generation of photorealistic scenes, and rely on "tricks"**, such as the usage of a large number of different augmentations (various crops of the image being optimized, color-space corruptions, flips, etc.), in order to be successful. While these additional tricks can also be used to improve image generation in the case of highly compressed latent spaces, we demonstrate that they are not necessary. Remarkably, even an extremely straightforward application of test time optimization can produce reasonable results when operating in the highly compressed latent space (*baseline* in Table 4).
2. Since our baseline test time optimization algorithm is very simple (7 lines of code, see Algorithm A1 in the appendix), **we do not claim any significant engineering contribution**. Instead, we view our findings as strong motivation to view tokenizers with increasingly high compression ratios as generative models. In particular, we find it surprising that tokenizers trained with a standard VQGAN-like objective can be used to perform a variety of generative tasks such as text-guided image editing or inpainting. We would also like to point to our response to Reviewer mQYF, in which we provide experimental evidence that *the VQ-LL-32 tokenizer can even generate “from scratch”, starting from pure noise*.
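The shape of such a test-time optimization loop can be sketched with toy stand-ins: a linear map in place of the decoder, a squared-distance objective in place of the CLIP score, and per-dimension rounding in place of codebook quantization. This is an illustration of the loop structure only, not the paper's Algorithm A1:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8)) / 4       # stand-in "decoder" (toy linear map)
target = rng.normal(size=16)           # stand-in prompt embedding
tokens = rng.normal(size=8)            # seed tokens (or a random init)

def quantize(t, step=0.25):
    """Crude per-dimension VQ stand-in: snap each token to a grid."""
    return np.round(t / step) * step

init_loss = np.sum((W @ quantize(tokens) - target) ** 2)

for _ in range(200):
    q = quantize(tokens)                       # quantized forward pass
    grad = 2 * W.T @ (W @ q - target)          # gradient of the toy objective
    tokens -= 0.05 * grad                      # straight-through-style update

final_loss = np.sum((W @ quantize(tokens) - target) ** 2)
```

In the real setting the gradient flows through the frozen TiTok decoder and a CLIP image encoder instead of `W`, with the quantizer handled via a straight-through estimator as above.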
---
* **Comments on Out-of-Domain Generalization**
Since TiTok tokenizers are trained only on ImageNet, we expect limited ability to generate complex out-of-distribution scenes, such as those involving classes that are not part of the ImageNet-1k dataset or requiring composition of multiple subjects (since ImageNet images often feature a single prominent subject). Unfortunately, no highly compressed 1D tokenizer trained on larger scale datasets was available at the time of submission, so our experiments are confined to ImageNet.
However, as tokenizers with even higher compression ratios or trained on larger datasets become available, we expect application to more diverse domains to become possible. Further, we hope that the view presented in our paper – that **the lossy autoencoding task performed by tokenizers with very high compression ratios can be thought of as a generative modeling problem** – can provide an insightful perspective in scaling such tokenizers to these larger datasets.
* **Additional Out-of-Domain Examples**
As this was also requested by Reviewer mQYF, we generated additional qualitative results in the out-of-domain setting of text-guided style transfer. In particular, we use CLIP prompts like "a watercolor/pixel art/abstract/cartoon painting of a \<subject\>", and find that the model can produce qualitatively very compelling looking results for text-guided style transfer, even for styles which we expect to be mostly absent from the ImageNet-1k dataset. We will include the generations in the revised submission.
**Link to visualization: https://i.ibb.co/Bd1HkmX/style.png** | Summary: This work shows that a highly compressed 1D token set can learn different attributes in tokens, and perform generaion tasks such as inpainting and text-guided image editing with only a tokenizer, without any extra generative model training.
Claims And Evidence: The claims are supported by experiments. The authors show disentangled attributes in tokens as well as the image editing and generation applications.
Methods And Evaluation Criteria: The method is simple and the findings are interesting. Evaluation is with standard metrics (FID / IS).
Theoretical Claims: Theoretical proofs are not the focus in this work.
Experimental Designs Or Analyses: In experiments, the authors show several interesting applications and visualizations for analysis.
Supplementary Material: I reviewed the Supplementary Material.
Relation To Broader Scientific Literature: This work is related to the recent work TiTok which converts images to a 1D set of tokens, and explores several interesting properties and applications with this idea of compact 1D tokenizer.
Essential References Not Discussed: I did not find any missing key related works.
Other Strengths And Weaknesses: The attributes decomposition findings and training-free generation or editing are interesting.
However, the proposed method seems to be not directly scalable for the proposed applications. It relies on the properties of the compact tokens, which are more like emergent properties and not directly controllable.
Other Comments Or Suggestions: L290: top-1 (%) ?
Questions For Authors: Is the gradient-based latent editing stable and can get high quality results with high probability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and helpful comments!
* **Per-Token Attributes as Emergent Properties**
The direct token editing examples do indeed rely on emergent properties that are not controllable using the standard autoencoder-style/VQGAN training scheme used by TiTok.
As such, we agree that practical applications of token copy-paste for image editing is restricted to a few tasks (such as brightness change, background blur, etc.), and these tasks have to be manually discovered using methods such as those described in Sections 3.1 and 3.2.
However, we find the fact that such editing is possible at all very surprising. In particular, perturbation of individual tokens does not lead to semantically meaningful and globally coherent edits in the case of 2D tokenizers due to the spatial relationship of tokens and regions of the input image (see, for example, Figure 1 in [1]). We therefore intended to present this finding to draw attention to underexplored emergent properties of highly compressed latent spaces, as well as to motivate our more scalable gradient-based image editing approach from Section 4, which can be directly applied for practical applications such as text-guided editing or generation, or inpainting.
* **Notation: Top-1/Top-1%**
Regarding L290 (top-1 vs top-1%): Thank you for pointing out the confusing notation. "Top-1" is used to denote a seed image association procedure by which only the single most aligned seed image is chosen. In contrast, top-1% associates a random seed image from the subset of the top 1% most similar seed images (for example, with 1000 seed images, top-1% would correspond to random association to one of the top-10 images from the seed image pool). We will try to clarify this in the final version.
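The two association rules can be sketched as follows, assuming a precomputed vector of prompt-to-seed similarity scores (e.g., CLIP scores); the function and variable names are illustrative:

```python
import numpy as np

def associate(scores, mode="top-1", frac=0.01, rng=None):
    """Pick a seed-image index given prompt-to-seed similarity scores."""
    if mode == "top-1":                       # single most aligned seed
        return int(np.argmax(scores))
    k = max(1, int(len(scores) * frac))       # e.g., 1000 seeds -> top-10 pool
    top_idx = np.argsort(scores)[-k:]         # indices of the k best seeds
    rng = rng or np.random.default_rng()
    return int(rng.choice(top_idx))           # random pick among the top 1%

scores = np.random.default_rng(1).normal(size=1000)   # toy scores for 1000 seeds
best = associate(scores, "top-1")
sampled = associate(scores, "top-1%", rng=np.random.default_rng(2))
```

With 1000 seed images this matches the example above: top-1% samples uniformly from the 10 most prompt-aligned seeds, trading some alignment for diversity.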
* **Stability of Optimization Process**
We do observe that the optimization process is sensitive to noise, such that even small perturbations to the objective and its gradients can lead to quite different generations. However, this does not mean that the generations are generally low quality – instead, we observe that this leads to generations that are diverse (even when adding only small amounts of noise), while still being reasonably high quality (as evidenced by our FID and IS scores).
### References
\[1\] Cao, S., Yin, Y., Huang, L., Liu, Y., Zhao, X., Zhao, D., and Huang, K. Efficient-VQGAN: Towards high-resolution image generation with efficient vision transformers. ICCV 2023, https://arxiv.org/abs/2310.05400 | Summary: The submission explores training-free image generation on TiTok's 1D tokenizer. It builds upon the observation that a heavily compressed tokenizer, like TiTok-L-32, is somewhat amenable to interpretable manipulation and editing of latents. The authors first demonstrate that by varying individual tokens, and by copy pasting latent manipulations from a reference example to a target image. They then build upon those insights and tune the tokens for an objective like CLIP score (between an image and prompt) with gradient descent through the TiTok decoder; i.e. without having to train a dedicated text-to-image model. Similarly, they are able to perform in-painting without training any model for that task.
## update after rebuttal
I thank the authors for their rebuttal. The conditional generation results from random initializations are quite interesting and I expect these results will strengthen the final paper. I will maintain my vote to accept this paper.
Claims And Evidence: The claims in the paper are sufficiently supported by evidence. There are, however, some areas of uncertainty, which I list in the "Methods and Evaluation Criteria" section.
Methods And Evaluation Criteria: I would say that most of the proposed evaluations make sense, and the visuals help motivate that the method can in fact generate images in a "training-free" manner. That said, there is a range of ablations that would be important to see:
1. How well does the proposed optimization procedure work when initializing with random tokens (instead of other images)? In other words, is this method limited to generating images from some existing seed image, or is it possible to perform the generation purely using the tokenizer's inductive biases and the optimization objective?
2. The paper shows results for 500-2k seed images, but how about choosing only a single (best) one? This goes into the direction of the question above, regarding the limitations of the optimization procedure.
3. Showing FID, IS, CLIP, and SigLIP scores is helpful, but I would be interested to see an analysis of how well a specific class can be generated, through the lens of a pre-trained classification model. CLIP and SigLIP scores show alignment, but the resulting scores are very close to each other after optimization, while there are much larger differences in FID and IS.
4. In Table 3, during optimization of the VAE model, did the authors apply the same KL-divergence, or KL-divergence between the optimized latents and real image latents? With discrete tokens, no token can individually be OOD, but in the VAE case it seems crucial to keep the soft constraints in mind. That then also goes into the analysis B.2.
5. How well does the proposed method compare to VQGAN-CLIP? This would be especially interesting as a comparison in Table 3.
Theoretical Claims: The paper does not make any theoretical claims.
Experimental Designs Or Analyses: The paper is not that detailed in terms of implementation details, but the methods are somewhat straight-forward and based on the descriptions I would feel comfortable reimplementing them to a good degree. Overall, the experimental design and analyses appear sound.
Supplementary Material: I reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: The proposed method mostly builds upon the recent 1D tokenizer model TiTok, and first analyzes its latents and then proposes a training-free way to generate images using that tokenizer. For now it seems that the method is somewhat limited to such highly compressed latent spaces and does not work with 2D tokenizers, VAE variants, nor less compressed 1D tokenizers.
Essential References Not Discussed: There are a few works that could be discussed in the larger context of "training-free" generation. One area is the "textual inversion" [1,2,3] line of research that optimizes one or multiple tokens to capture a concept, which can then be used to generate said concept. There is also a range of works that use CLIP as an optimization objective that could be discussed, e.g. see the ones listed in surveys like [4].
[1] An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion, Gal et al., 2022
[2] Visual Lexicon: Rich Image Features in Language Space, Wang et al., 2024
[3] Training-Free Consistent Text-to-Image Generation, Tewel et al., 2024
[4] A Survey on CLIP-Guided Vision-Language Tasks, Yu et al., 2022
Other Strengths And Weaknesses: Overall, this is a very creative paper and a fun read! The paper is a callback to "early" CLIP-optimization-based image generation attempts. The fact that image generation can be performed in such a controlled and high-quality manner without any generative model training on top of the tokens is quite interesting, and the ablations show that such extreme compression ratios are necessary to achieve that (with the given optimization algorithm here). I also appreciate the attention of using SigLIP as an evaluation criteria and running the "adversarial" baseline.
Other Comments Or Suggestions: The paper is easy to read and well motivated. I would suggest the authors to add more supporting visuals in the appendix.
Questions For Authors: 1. Visually speaking, how do the generations using the 2D tokenizer in Table 3 look?
2. For the ablation of continuous and 2D tokenizers, were they optimized using the vanilla objective, or did the authors try to regularize the latent space as well? (I.e. apply techniques such as presented in Table 4).
3. The additional qualitative results in Appendix D are interesting, and I would be glad to see more such OOD examples. Even though TiTok was trained on ImageNet-1k, it seems to generalize slightly for out-of-distribution images, which begs the question of how well this works on text-conditional generation in a larger domain.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are happy to hear you enjoyed our paper, and would like to thank you for the great questions, which we hope to answer below.
* **Q1.** 2D Tokenizer Results
We have produced visualizations of the optimization process using MaskGIT’s VQGAN, alongside the 32-token TiTok tokenizer, which we will include in the revised submission.
**Examples from Figure A5: https://i.ibb.co/YB72k9Fh/2d-tok-vis-A5.png**
**Additional example: https://i.ibb.co/WZnHmLx/2d-tok-vis.png**
In this visualization, we observe that the highly compressed tokenizer allows the optimization procedure to perform globally consistent edits to the image. On the other hand, the VQGAN optimization is easily able to change local texture and color to align with the prompt, but fails to perform "global" edits (for example, when changing the species of an animal, the shape of the head and position of the ears are relatively unchanged in the VQGAN case, but attributes such as color can be adapted more successfully).
**Regarding VQGAN-CLIP:** To the best of our knowledge, there is no existing evaluation of VQGAN-CLIP for ImageNet generation. We hope that our comparison of MaskGIT's VQGAN which is already included in Table 3 can provide such a comparison, as it should be very similar to VQGAN-CLIP while being fair in the sense that the same seed image selection and cost function are used as for the 1D tokenizer experiments.
* **Q2.** Token Regularization
**VAE Tokens.** The experiment in the paper did not include regularization on the VAE tokens. We agree this is crucial, and have repeated this experiment. Results are provided in the table below.
| |FID-5k|IS|CLIP|SigLIP|
|:-|:-|:-|:-|:-|
|**without L2 reg.**|39.6|66|0.48|2.66|
|**with L2 reg. (weight=0.2)\*** |33.2|93|0.46|3.05|
\*weight chosen for best results from sweep including {0.02, 0.1, 0.2, 0.5}
While the results for VAE tokens with regularization are improved over the numbers reported in the original submission, FID and IS are still poor compared to the VQ model, so we do not change our conclusion regarding the importance of VQ.
**2D Tokens.** For MaskGIT’s VQGAN, we do not use regularization in the experiment from Table 3. We do not find that it makes a substantial difference.
* **Q3.** Additional OOD Examples
We have run an additional out-of-domain text guided style-transfer task, which we describe in our response to Reviewer apSr.
* **Initializing with Random Tokens**
We decided to run some additional experiments and find that it is indeed possible to generate images "from scratch", starting from randomly sampled tokens!
While results are worse than in the case of CLIP-based prompt-to-seed association, FID and IS scores are still reasonable. Qualitatively, we find that it is still possible to generate relatively high quality samples, especially with more detailed CLIP prompts and longer optimization times (400-500 iterations). We plan to include uncurated generations starting from randomly sampled tokens (with **no** seed image) in the revised submission.
**Link to uncurated generations starting from random tokens: https://i.ibb.co/20L8xk1q/rand-tok.png**
Quantitative results (using the same settings as the line marked with (\*) from Table 4 in the paper) can be found in the table below:
| |iters|Seed Assoc|FID-5k|IS|CLIP|SigLIP|
|:-|:-|:-|:-|:-|:-|:-|
| **start from seed image** | 300 | CLIP-top-1% | 15.1 | 160 | 0.39 | 2.53 |
| **start from random tokens\*** | 300 | n/a | 17.0 | 114 | 0.38 | 1.68 |
**With** additional tweaks from Table 4 (token L2 regularization, extra token noise in the case of starting from a seed image, extra 100 iterations in the case of starting from random tokens), the gap in performance between the seed-image-initialized and random-token-initialized optimizations diminishes further:
| | iters | Seed Assoc | FID-5k | IS | CLIP | SigLIP |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| **start from seed image** | 300 | CLIP-top-1% | 14.6 | 182 | 0.38 | 2.07 |
| **start from random tokens\*** | 400 | n/a | 15.5 | 152 | 0.40 | 2.54 |
*\*For both tables, tokens are randomly initialized from normal distribution with std=0.3, chosen as the best performing value from a sweep including {0.05, 0.3, 1.0}.*
**Note:** Starting from random tokens leads to **better performance** than starting from random seed images (i.e., without CLIP-based seed-to-prompt association) – cf. Table 1 row 3 (line 284).
* **Initializing from Single Best Image**
In Table 1 (L290), we show the results of an experiment where only the best seed image from the seed image pool is picked for each prompt. This allows achieving high IS at the expense of degraded FID.
If you are referring to picking only one overall best image, and using that same image for every prompt, we suspect performance may not be great, since our previous experiment initializing with random tokens can achieve better results than initializing with random images. | null | null | null | null |
Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs | Accept (poster) | Summary: In this work the authors study quadratic optimization problems with linear constraints and answer the question of whether the objective value, feasibility, and optimal solution of this problem class can be approximated by a class of graph neural networks. More precisely, they study the standard graph representation of LCQPs and show that if all decision variables are continuous there exist graph neural networks which (i) can approximate the objective value up to an arbitrary accuracy with high probability, (ii) can decide if the corresponding problem is feasible with high probability, and (iii) can predict the optimal solution with minimal l2 norm up to an arbitrary accuracy with high probability. By contrast, the authors show that, if some of the variables may be restricted to be integer, the latter results do not hold in general. However, they can derive subclasses of MI-LCQPs for which (i)-(iii) hold. Finally, the authors showcase the approximation capabilities of GNNs in some numerical experiments.
"## update after rebuttal
No update.
Claims And Evidence: All the claims made in the submission are thoroughly explained and proved. The authors even provide experimental results which are in my opinion not necessary for such a theoretical work, since finding the GNN which approximates the metrics of the problem class depends a lot on the learning algorithm and hyperparameter setting and is a task for a completely different study.
Methods And Evaluation Criteria: The work is mainly theoretical and all results are properly explained and proved. The experimental confirmation of the expressiveness of the GNNs is a nice addition to the theoretical results, showing that even standard training methods already lead to GNNs which nearly perfectly approximate the desired property of the LCQPs. However, in practical settings the more important question is whether the GNNs generalize well on unseen data, i.e., whether the learning algorithm indeed found a GNN which has the theoretically desired expressiveness on the whole problem class. The latter question is not answered in detail and is in my opinion not needed for this work.
Theoretical Claims: I briefly checked the proofs in the Appendix without checking every detail.
Experimental Designs Or Analyses: Soundness of the experimental design is very satisfying, see comments above.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The work closely builds on the work of Chen et al (2023) where the same methodology is used for linear problems. However, extending the work to quadratic problems is valuable and relevant for the research field.
Essential References Not Discussed: No references.
Other Strengths And Weaknesses: Strengths:
- the paper is very well written and all concepts and proofs are mathematically thoroughly presented. I could not find any mathematical flaws in the main paper.
Other Comments Or Suggestions: - Line 47: "with with"
- Line 64: exploits -> exploit
Questions For Authors: - What was the main difficulty to extend the work of Chen et al. (2023) to the quadratic setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: __Reply to "Methods And Evaluation Criteria":__ Thank you for the encouraging comments, and we really appreciate it. We agree that generalization analysis is an important direction, especially for structured problems like LCQPs and MI-LCQPs, and we will highlight it as a key avenue for future work in the revised paper.
While we do not provide a theoretical generalization analysis, we included an empirical study in Section F.4 and Figure 6 (Appendix F), to investigate the generalization performance of GNNs on QP tasks. The results show that as the number of training instances increases, the generalization gap decreases and performance improves. This suggests that GNNs have the potential to generalize to unseen QPs drawn from the same distribution, provided sufficient and appropriately sampled training data.
__Reply to "Other Comments Or Suggestions":__ Thanks for the detailed comments, and we will fix these typos following your suggestions.
__Reply to "Questions For Authors":__ We acknowledge that our proofs use a similar high-level framework to those in Chen et al. (2023a;b). However, we would like to respectfully emphasize that there are several important and non-trivial differences in the technical details. These include:
* We have richer counterexamples for MI-LCQPs (Chen et al. (2023b) only have counterexamples for the feasibility).
* We establish the connection between the expressive power of the WL test and the specific properties of LCQPs and MI-LCQPs. In particular, we have some new analysis handling the quadratic terms in Appendix A that is not directly from Chen et al. (2023a;b), and we show that the GNN approximation theory for feasibility/optimal objective can hold with a weaker assumption than the optimal solution (GNN-analyzability vs GNN-solvability), which is also not covered in Chen et al. (2023b).
* We design the graph representation (sometimes with hyperedges, see Appendix E) to fully encode all elements of these quadratic programs; and we develop theoretical analysis involving hyperedges for QCQPs. Technically, the representation and analysis of hyperedges in Appendix E are significantly different. | Summary: This work proves that message-passing GNNs can universally represent fundamental properties of quadratic programs, including feasibility, optimal objective values, and optimal solutions. They also prove that GNNs are not universal for mixed-integer problems.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: This work also provides NN/GNN universality and complexity for solving convex quadratic programs [1].
[1] Yang, L., Li, B., Ding, T., Wu, J., Wang, A., Wang, Y., ... & Luo, X. (2024). An Efficient Unsupervised Framework for Convex Quadratic Programs via Deep Unrolling. arXiv preprint arXiv:2412.01051.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written.
2. The theoretical results on GNN universality for QP are solid and important for its applications.
Weakness:
1. The authors mention a concurrent work by Wu et al. (2024) that explores a tripartite graph representation and its associated GNN for QCQP, investigating its expressive power (line 105). Since convex QP is a special case of QCQP, the authors should compare and discuss their results and methodology with those of Wu et al. (2024).
2. Beyond establishing universality, the parameter complexity of the GNN is not addressed. Understanding this complexity is crucial for evaluating its superiority compared to standard neural networks, which are also universal approximators for continuous mappings.
3. Although the framework is demonstrated for LP and QP, it is unclear whether this graph representation approach can be generalized to more complex problems, such as polynomial optimization. Clarification or extension to these cases would strengthen the paper's contribution.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Are there any real-world MI-LCQPs that are GNN-friendly?
2. Is there any theoretical advantage to using GNNs instead of regular neural networks when solving QPs?
3. Could the authors clarify the specific methodological contributions of this work compared with the established methods in [1,2]?
[1] Chen, Z., Liu, J., Wang, X., Lu, J., and Yin, W. On representing linear programs by graph neural networks. In The Eleventh International Conference on Learning Representations, 2023a.
[2] Chen, Z., Liu, J., Wang, X., Lu, J., and Yin, W. On representing mixed-integer linear programs by graph neural networks. In The Eleventh International Conference on Learning Representations, 2023b.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: __To reviewer:__ Thank you for your insightful comments. Due to the 5000-character limit, our responses must be brief, but we’d be happy to elaborate on any specific points in the next stage of rebuttal.
__Reply to "Essential References Not Discussed":__ Thanks for highlighting this relevant work. We’ll cite and discuss it in the revision. This concurrent study [1] proposes an unrolling-based framework for convex LCQPs with explicit complexity bounds, while our work analyzes GNNs’ expressive power for broader problem classes (including mixed-integer LCQPs) with universal approximation results, though without complexity bounds. The two are complementary.
__Reply to Weakness 1:__ We will include a more detailed comparison with Wu et al. (2024) in the revision.
* Methodologies differ: Wu et al. use a tripartite graph for general QCQPs, while we use a variant of bipartite graph tailored to (MI-)LCQPs.
* Results differ with partial overlap: both cover convex LCQPs, but Wu et al. include quadratic constraints, whereas we handle mixed-integer cases and offer an alternative quadratic constraint approach.
Notably, while Appendix E discusses QCQPs, our method differs structurally: Wu et al. add nodes for quadratic terms, while we use hyperedges, leading to a hypergraph GNN.
__Reply to Weakness 2:__ We agree that analyzing GNN parameter complexity is important. While our current results are non-parametric, we will add a discussion: The algorithm-unrolling paper by Yang et al. [1] provides a way to estimate parameter complexity, as each layer's parameters in [1] are explicitly defined. Moreover, prior work (e.g., [2]) shows that unrolling can be viewed as a structured GNN. These links suggest that one may derive GNN complexity bounds for (MI-)LCQPs from unrolling results.
[2] Li et al. "On the Power of Small-size Graph Neural Networks for Linear Programming." NeurIPS 2024.
__Reply to Weakness 3:__ We'll clarify this in the revision: The graph representation can indeed be extended to polynomial optimization. We suggest modeling terms like $F_{j_1,...,j_k}x_{j_1}\cdots x_{j_k}$ as hyperedges:
- $(w_{j_1},...,w_{j_k})$ for objectives
- $(v_i,w_{j_1},...,w_{j_k})$ for constraints
Our QCQP analysis (Appendix E) shows a concrete example of this idea: hypergraph GNNs can approximate such properties, suggesting potential for broader extension to convex polynomial optimization.
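For intuition, here is a minimal NumPy sketch of collecting such hyperedges from an order-$k$ objective coefficient tensor (illustrative only; the dense-tensor encoding and the function name are our own choices for this example, not part of the paper's construction):

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_objective_hyperedges(F, tol=1e-12):
    """Collect hyperedges for the nonzero degree-k terms of an objective.

    F is a dense order-k coefficient tensor; each nonzero entry
    F[j1, ..., jk] (with j1 <= ... <= jk) yields a hyperedge over the
    variable nodes w_{j1}, ..., w_{jk}, weighted by the coefficient.
    """
    k, n = F.ndim, F.shape[0]
    return {idx: float(F[idx])
            for idx in combinations_with_replacement(range(n), k)
            if abs(F[idx]) > tol}

# degree-3 toy objective containing the terms 2*x0*x1*x2 and 5*x1^3
F = np.zeros((3, 3, 3))
F[0, 1, 2] = 2.0
F[1, 1, 1] = 5.0
print(poly_objective_hyperedges(F))  # {(0, 1, 2): 2.0, (1, 1, 1): 5.0}
```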
__Reply to Question 1:__ Yes, there are. In QPLIB, 77 out of 96 binary-variable linear constraint QPs are GNN-friendly. Detailed results can be found in the paragraph “Frequency of GNN-friendly instances in practice” and in Table 2 (Page 22). We'll add this statistic to Section 4.3 (Page 7) in the revision.
__Reply to Question 2:__ Compared to standard NNs, GNNs provide some unique theoretical advantages:
* __Permutation Invariance/Equivariance__
- In QPs, swapping variable/constraint order should permute solutions accordingly.
- __Standard neural networks lack built-in permutation invariance/equivariance__, requiring explicit training on all $O(n!)$ input permutations. __In contrast, GNNs inherently maintain this property__ through their graph-based architecture, automatically handling variable/constraint reordering without additional training.
* __Scalability to Varying Sizes__
- In GNNs, the learnable functions ($g$'s and $r$'s) and their parameters are shared across nodes and do not depend on the specific index $i,j$, enabling **direct application to new problem sizes** without retraining.
- Standard NNs need fixed input/output dimensions, requiring architectural changes for size variations.
These points were briefly mentioned at Lines 64–67 (left column), and we will expand them in the revision.
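For intuition, here is a minimal NumPy sketch of one message-passing step on the bipartite constraint/variable graph of $\min \frac{1}{2}x^\top Qx + c^\top x$ s.t. $Ax \leq b$ (illustrative only, not the GNN architecture analyzed in the paper; the tanh updates stand in for arbitrary shared learnable maps, and $Q$ is ignored for brevity). Because the update functions are shared across nodes, permuting the variables simply permutes the outputs:

```python
import numpy as np

def mp_step(A, b, c):
    # constraint embeddings: a shared map of (b_i, aggregated row of A)
    h_cons = np.tanh(b[:, None] + A.sum(axis=1, keepdims=True))
    # variable embeddings: a shared map of (c_j, messages from constraints)
    h_var = np.tanh(c[:, None] + A.T @ h_cons)
    return h_var

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
b = rng.normal(size=3)
c = rng.normal(size=4)

perm = np.array([2, 0, 3, 1])  # a permutation of the 4 variables
out = mp_step(A, b, c)
out_perm = mp_step(A[:, perm], b, c[perm])

# permuting the variables permutes the outputs the same way, with no retraining
assert np.allclose(out[perm], out_perm)
```

The same shared functions also apply unchanged to an `A` of a different size, which is the scalability point above.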
__Reply to Question 3:__ While our proofs share a high-level framework with Chen et al. (2023a; b), we highlight several non-trivial technical advances:
1. **Enhanced Counter-Examples**
- Chen et al. (2023b) only show feasibility counterexamples
- We provide richer counterexamples (feasibility, optimal objective, and optimal solution) for MI-LCQPs
2. **New Theoretical Connections**
- Connect the expressive power of the WL test to key properties of (MI-)LCQPs.
- Novel analysis of quadratic terms (Appendix A)
- Prove GNN approximation under weaker assumptions (GNN-analyzability vs GNN-solvability). Chen et al. (2023b) always assume GNN-solvability.
3. **Hypergraph Innovations**
- Develop novel hyperedge representations and analysis for QCQPs (Appendix E)
- These are significant technical differences from Chen et al. (2023a; b), whose analyses do not involve hypergraphs.
While our work builds on prior studies, we believe it makes valuable contributions. Given the importance of QPs, a theoretical foundation for GNNs in this setting is both timely and impactful. Our proposed criteria—GNN-analyzable and GNN-solvable—serve as practical tools for assessing datasets and diagnosing issues in MI-LCQP training. | Summary: This paper provides a theoretical analysis to investigate the expressive power of standard Message-Passing GNNs (MPGNNs) in solving the Linearly Constrained Quadratic Program (LCQP) and Mixed-Integer (MI) LCQP tasks. Specifically, the paper focuses on three mappings
with MPGNNs: feasibility mapping, optimal objective mapping, and optimal solution mapping. The theoretical findings are well-validated with a comprehensive experiment.
Claims And Evidence: Yes. The claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I have checked the proofs listed in section 3 and section 4.
Experimental Designs Or Analyses: Yes. The paper conducts numerical experiments to validate the GNN's expressive power to fit the optimal objective mapping and optimal solution mapping on LCQP and MI-LCQP. The authors utilize both real-world datasets and a synthetic dataset. For the experiment results on real-world datasets (Appendix F.5), in general, the training errors are high. This may indicate that standard MPGNNs may struggle with real-world QPs. It would be better to provide a discussion to analyze why.
Supplementary Material: Yes, I have reviewed the appendix.
Relation To Broader Scientific Literature: The paper extends the study of MPGNNs' expressive power from LP and MILP to LCQP tasks. Specifically, the paper utilizes the Weisfeiler-Lehman (WL) test to prove the separation power (Section 3, Appendix A) for non-linear QP. In addition, the differences
between LCQP and MI-LCQP universality (Theorems 3.2–3.4 vs. Propositions 4.1–4.3) are well-explained, particularly the counterexamples that demonstrate MPGNNs' limitations in mixed-integer settings.
However, the theorems and their proofs in Sections 3 and 4 for QP and MIQP are almost direct extensions of (Chen et al. 2023a;b), which focused on LP and MILP, making this work incremental relative to previous works; it may thus lack novelty and non-trivial contributions.
In addition, high expressiveness of the hypothesis space does not necessarily lead to better generalization performance in theory (universal neural models are in fact easy to acquire). It would be a great plus to complete the theoretical analysis by providing generalization bounds.
Essential References Not Discussed: None witnessed
Other Strengths And Weaknesses: Strengths:
1) Though this work mainly focuses on the theoretical part, the identification of "GNN-friendly" subclasses (Definition 4.4) and the criteria for verifying them (Section 4.2) provide insights into real-world tasks;
2) The numerical validation that was conducted on both synthetic and benchmark datasets (Maros-Meszaros) strengthens the practical effect.
Weakness: In the preliminaries section, Q is defined to be symmetric; however, it is defined to be positive semidefinite in Theorems 3.3 and 3.4. It is confusing whether “(Q is positive semidefinite almost surely)” in Theorem 3.3 refers to a real-world fact or the previous condition.
If it’s the latter, then the theorems do not guarantee the affirmative answer when Q is symmetric but not positive semidefinite. If it’s the former, the fact should be stated clearly out of the theorem.
Other Comments Or Suggestions: The writing could be improved, e.g., fixing typos (e.g., Line 668 “Fixe”) and adding legends for figures.
Questions For Authors: The paper proves the expressive power limitations of standard MPGNNs for LCQP problems through WL equivalence (Theorem 3.2–3.4). Recent works have shown that high-order or hypergraph GNNs offer greater expressive power than standard MPGNNs[1]. Is it possible to
apply the approach used in the paper to these GNNs to address such limitations? In addition, for Quadratically Constrained Quadratic Programs (QCQP) discussed in Appendix E, constraints may involve interactions between multiple variables (e.g., $x_i x_j \leq b$), and hyperedges in a hypergraph can naturally represent such relationships. It would be interesting to investigate the possibility of providing a universal analysis for LCQP and QCQP.
[1] Feng, Jiarui, et al. "How powerful are k-hop message passing graph neural networks." Advances in Neural Information Processing Systems 35 (2022): 4776-4790
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: __To reviewer:__ Thank you for your valuable comments. Due to the 5000-character limit, our responses must be brief, but we’d be happy to elaborate on any specific points in the next stage of rebuttal.
__Reply to "Experimental Designs Or Analyses":__ We agree that the training error on the Maros-Mészáros test set is relatively higher compared to synthetic datasets. This is primarily due to its inherent diversity. The 138 quadratic programs are sourced from multiple domains (CUTE library, Brunel Optimization Group, and seven other institutions), making the dataset highly **heterogeneous** with few instances per application scenario.
This contrasts with prior successful empirical work on learning to solve QPs (Nowak et al., 2017; Wang et al., 2020b, 2021; Qu et al., 2021; Gao et al., 2021; Tan et al., 2024), which typically focus on single domains with more consistent problem structures.
While we recommend practitioners train GNNs on domain-specific instances, our theoretical goal requires validating GNNs' **general** ability to represent (MI-)LCQPs. Despite the dataset's challenges, we observe consistent improvement in GNN expressivity with increased model capacity (e.g., larger embeddings), mirroring trends in synthetic experiments.
Finally, we note the issue of **numerical instability**. Unlike synthetic datasets, Maros-Mészáros problems involve coefficients with wide-ranging magnitudes (e.g., from -780600000 to 1, after being converted to one-sided), making training numerically challenging. Despite this, GNNs demonstrate the ability to fit optimal objective values and solutions.
__Reply to "Relation To Broader Scientific Literature":__ While our proofs share a high-level framework with Chen et al. (2023a; b), we highlight several non-trivial technical advances:
1. **Enhanced Counter-Examples**
- Chen et al. (2023b) only show feasibility counterexamples
- We provide richer counterexamples (feasibility, optimal objective, and optimal solution) for MI-LCQPs
2. **New Theoretical Connections**
- Connect the expressive power of the WL test to key properties of (MI-)LCQPs
- Novel analysis of quadratic terms (Appendix A)
- Prove GNN approximation under weaker assumptions (GNN-analyzability vs GNN-solvability). Chen et al. (2023b) always assume GNN-solvability.
3. **Hypergraph Innovations**
- Develop novel hyperedge representations and analysis for QCQPs (Appendix E)
- These are significant technical differences from Chen et al. (2023a; b), whose analyses do not involve hypergraphs.
While our work builds on prior studies, we believe it makes valuable contributions. Given the importance of QPs, a theoretical foundation for GNNs in this setting is both timely and impactful. Our proposed criteria—GNN-analyzable/solvable—serve as practical tools for assessing datasets and diagnosing issues in MI-LCQP training.
Additionally, we agree that generalization analysis is crucial, and will highlight this as key future work in the revision.
__Reply to Weakness:__ It is the latter. We will further clarify the convex assumption in the revised manuscript:
1. **Page 2 (Contributions):** Clarify convexity assumption and nonconvex counterexample
2. **Page 4:** Rename Section 3 → "Universal approximation for **convex** LCQPs"
3. **Page 5:** Add nonconvex LCQP counterexample showing GNN indistinguishability despite different optima.
Consider a convex LCQP
$$\min~~ \frac{1}{2} \begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix}1 & 0 \\\\ 0 & 1\end{bmatrix} \begin{bmatrix}x_1 \\\\ x_2\end{bmatrix}, \quad\text{s.t.}~~ -1\leq x_1,x_2\leq 1,$$
and a nonconvex LCQP
$$\min~~ \frac{1}{2} \begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix}0 & 1 \\\\ 1 & 0\end{bmatrix} \begin{bmatrix}x_1 \\\\ x_2\end{bmatrix}, \quad\text{s.t.}~~ -1\leq x_1,x_2\leq 1.$$
These two LCQPs are indistinguishable by GNNs, and they have different optimal objective/solution.
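This counterexample can be checked numerically with a brute-force grid search (an illustrative sketch standing in for a QP solver, not code from the paper):

```python
import numpy as np

# The two LCQPs above differ only in Q, share the box [-1, 1]^2, and have
# different optimal objective values, even though message-passing GNNs
# cannot tell them apart.

def qp_min_on_box(Q, n_grid=201):
    """Grid-search minimum of (1/2) x^T Q x over the box [-1, 1]^2."""
    g = np.linspace(-1.0, 1.0, n_grid)
    x1, x2 = np.meshgrid(g, g)
    pts = np.stack([x1.ravel(), x2.ravel()], axis=1)
    vals = 0.5 * np.einsum('ni,ij,nj->n', pts, Q, pts)
    return vals.min()

Q_convex = np.array([[1.0, 0.0], [0.0, 1.0]])     # objective (x1^2 + x2^2)/2
Q_nonconvex = np.array([[0.0, 1.0], [1.0, 0.0]])  # objective x1 * x2

assert abs(qp_min_on_box(Q_convex)) < 1e-9         # minimum ~ 0 at the origin
assert abs(qp_min_on_box(Q_nonconvex) + 1) < 1e-9  # minimum -1, e.g. at (1, -1)
```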
__Reply to "Other Comments Or Suggestions":__ We'll fix these typos in the revision.
__Reply to "Questions For Authors:"__ Thank you for this insightful comment. We fully agree that $k$-hop GNNs exhibit stronger separation power than MP-GNNs. In the proof of Prop. 4.2 (Appendix B), we construct MI-LCQP instances with distinct optimal objectives that MP-GNNs cannot distinguish. But **3-hop GNNs can** distinguish them. Crucially, when we scale these examples to 10 variables/10 constraints (one graph with 20 connected nodes vs. two 10-node components), **3-hop GNNs fail** while **5-hop GNNs succeed**, confirming that larger $k$ reduces unsolvable cases. We will include this analysis in the revision.
For QCQPs, Appendix E.1 presents a hypergraph GNN framework, with universal approximation analysis (Theorems E.2–E.4). While (MI-)LCQPs anchor our main narrative, we agree that unifying LCQP/QCQP/higher-order extensions via hypergraph GNNs is a compelling direction for future work, which we will emphasize. | Summary: The paper establishes that message-passing GNNs can express the feasibility, optimal value and optimal solution of convex linearly constrained quadratic programs as well as of mixed-integer linearly constrained quadratic programs, if they adhere to certain conditions often true in practice. Negative results are provided for mixed-integer linearly constrained quadratic programs in general. Numerical experiments validate the approximation results.
Claims And Evidence: In the introduction, it is claimed that GNNs can accurately predict the feasibility, optimal value and solution of a linearly constraint QP. However, the last two claims require the additional assumption of a convex quadratic program (i.e., positive semidefinite $Q$). This should be reflected in the claim. Other than this, all claims are clear and provided with convincing evidence.
Methods And Evaluation Criteria: Yes, for details see Experimental Designs Or Analyses.
Theoretical Claims: I checked the argumentations in the main draft, I did go over some arguments in the appendix, but did not check the proofs in the appendix in detail.
Two minor issues:
**(I)** Lines 357-360 (left column) reads as if in the GNN-solvable case, GNNs can approximate the optimal solution, but not feasibility. Given Prop. D.1, this should not be true. I would make this more clear.
**(II)** I think the proof leading to the statement that GNN-solvable and GNN-analyzable instances make up the majority of the MI-LCQP set could be considered rather synthetic. Coefficients in the objective would never be set to irrational numbers; it is rather reasonable to even assume that in practice $c$ may be the same for many $j$. The empirical investigation on QPLIB is interesting and supports that the assumption of randomly sampling $c$ from $\mathbb{R}^n$ and using properties of $\mathbb{R}$ seems too strong to explain the real-world occurrences of GNN-solvable/analyzable instances.
Experimental Designs Or Analyses: Yes, some issues:
**(I)** I'm not convinced of omitting the experiments where GNNs are used to fit $\Phi_{feas}$ based on the argument that feasibility falls to the case of LP and MILP (Chen et al. 2023). This is because the input graph representing an LCQP / MI-LCQP is still different from the one for an LP / MILP, and thus the empirical behavior of a GNN on the changed input graph is still interesting.
**(II)** Figs. 2–4 and Table 1 are missing standard deviations.
**(III)** The experimental details mention that four GNNs are trained and averaged over all instances during training. I would have assumed that the same experiments are also repeated for several seeds, which seems not to be the case.
**(IV)** While for showing approximation results, it is not necessary for the GNNs to be applied to unseen instances, I would be curious about how well the GNNs actually generalize to unseen instances. This is slightly touched upon in F.4 where results on the validation set are shown, but no results on a held-out test set are provided. It would be easy to generate a few more random instances to test this behavior.
**(V)** GNNs expressiveness results are not tested on practical MI-LCQP instances. Have you considered GNN-solvable/analyzable instances from QPLib?
Supplementary Material: None provided (except appendix).
Relation To Broader Scientific Literature: As universal approximation of message-passing GNNs has been investigated for LPs and MILPs, extending the study to QPs seems natural and important, especially as there are several works, also cited in this work, on empirically learning to solve QPs using GNNs.
Essential References Not Discussed: All essential related work has been discussed.
Other Strengths And Weaknesses: The paper is written in a very clear fashion and good to follow. As the results on LCQPs and MI-LCQPs have not been known, the results are significant. The idea behind the research question may not be the most original, as after this problem was approached for LPs and MILPs by Chen et al. (2023a, b), it is natural to extend the question to other optimization problems. However, it required new ideas to prove and is still an important and significant result, providing important theoretical grounding and guidance for the empirical developments in the field.
Other Comments Or Suggestions: * The definition of $\Phi_{sol}$ for an MI-LCQP is quite hidden in the midst of App. C; it would be helpful to make it more prominent.
* In Section 2, the mappings $g$ and $r$ are used before they are introduced.
* Define the $\succeq$ operator in Line 154 (second column). Here it is used for a positive semi-definite matrix. However, in the ML literature, it is sometimes used for matrix inequalities.
* Line 047 "with with"
* Section 3 header misses a "i" in "Universal"
* Line 061 and 062, use \citet for Wang & Yu's works
* Line 169/170 "there always be" -> "there always is"
Questions For Authors: **(I)** It seems that predicting the optimal objective value / solution is more difficult than predicting if an LCQP is feasible (for the first, the requirement of convexity is needed, for the second not). Can you comment on how the three problems relate to one another in some sense of difficulty hierarchy?
**(II)** Do you have a negative result for non-convex LCQPs?
**(III)** In Lines 188-192 (second column) the authors state that any two LCQP-graphs that are indistinguishable by the WL test, or equivalently by all GNNs, must have identical optimal value and solution. As Xu et al. (2019) showed that message-passing GNNs are upper bounded by the 1-WL but not necessarily reach its expressivity, does the proposed family of GNN architectures reach 1-WL expressivity?
**(IV)** Section 2, node-level output: There is some issue with the definition of $y$. It is indexed by $j$ even though defined for each $i \in V$. Shouldn't it be defined for each $j \in W$?
**(V)** Can you give some intuition on what "GNN-analyzable" means?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: __To reviewer:__ Thank you for your detailed comments! Due to the 5000-character limit, our responses must be brief, but we’d be happy to elaborate further in the next rebuttal stage.
__Reply to "Claims And Evidence":__ We will clearly state the convex assumption in the revision:
1. **Page 2 (Contributions):** Clarify convexity assumption and nonconvex counterexample
2. **Page 4:** Rename Section 3 → "Universal approximation for *convex* LCQPs"
3. **Page 5:** Add nonconvex LCQP counterexample showing GNN indistinguishability despite different optima.
Consider a convex LCQP
$$\min~~ \frac{1}{2} \begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix}1 & 0 \\\\ 0 & 1\end{bmatrix} \begin{bmatrix}x_1 \\\\ x_2\end{bmatrix}, \quad\text{s.t.}~~ -1\leq x_1,x_2\leq 1,$$
and a nonconvex LCQP
$$\min~~ \frac{1}{2} \begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix}0 & 1 \\\\ 1 & 0\end{bmatrix} \begin{bmatrix}x_1 \\\\ x_2\end{bmatrix}, \quad\text{s.t.}~~ -1\leq x_1,x_2\leq 1.$$
These two LCQPs are indistinguishable by GNNs, and they have different optimal objective/solution.
__Reply to "Theoretical Claims":__
__(I)__ We will clarify in our revision that in the GNN-solvable case, GNNs can approximate *not only* the optimal solution but also the feasibility and the optimal objective, as GNN solvability implies GNN-analyzability (Prop. D.1).
__(II)__ We acknowledge this conclusion is more synthetic, hence its appendix placement. In Section 4.3, we'll clarify:
1) GNN-solvable/analyzable cases dominate *under specific distributional assumptions*
2) *Real-world* datasets (e.g., QPlib) may contain GNN-insolvable cases (Appendix D)
__Reply to "Experimental Designs Or Analyses":__
__(I)__ While (MI)LP and (MI-)LCQP input graphs differ due to $Q$, our GNN architecture can emulate LP/MILP cases by setting $g_l^Q = 0$, making the architectures equivalent when $Q$ is ignored.
__(II)__ We re-measure the solving times in Table 1 and present the average solving times and standard deviations below. While the solving times are different due to changes in hardware environment and system load, the advantage of GNN with large batch sizes is consistent. We will update Table 1 and Fig. 2-4.
| GNN | BS=1 | BS=10 | BS=100 | BS=1000 | OSQP |
|---|---|---|---|---|---|
| Time | 53.62±16.72 | 5.37±1.87 | 0.504±0.142 | 0.089±0.002 | 4.48±3.62 |
__(III)__ We run two more sets of experiments on different random seeds, which decide the problem generation, GNN initialization, and stochastic optimization. The results are consistent. For example, over three experiments of training a GNN with an embedding size of 256 to fit the optimal solutions of 500 LCQP problems, the average relative errors we can achieve are $2.83\times 10^{-3}$, $2.69\times 10^{-3}$ and $2.84\times 10^{-3}$. We will add the full results in the revision.
__(IV)__ We apologize for using the confusing term "validation set" in F.4. The set was never seen during training. Hence, the results in Fig. 4 are the generalization results that you asked for. We will change the terms to avoid confusion.
__(V)__ We performed GNN training on 73 GNN-solvable instances from QPLib that provide the optimal solutions and objectives. We train GNNs of embedding sizes 128, 256, and 512 to fit the objectives and solutions. The training errors we achieved are shown below. GNNs can fit the objective values well and demonstrate the ability to fit solutions. The results show the model capacity improves as the model size increases. We will include the training curves in the revision.
| emb_size | 128 | 256 | 512 |
|---|---|---|---|
| objective | 3.28E-04 | 9.05E-07 | 4.00E-07 |
| solution | 6.57E-01 | 6.72E-01 | 6.06E-01 |
__Reply to "Other Comments Or Suggestions":__ We will revise accordingly.
__Reply to "Questions For Authors":__
__(I)__ The difficulty hierarchy differs by problem type:
- **LCQP:** feas (Assump 3.1) < obj (+convexity) < sol (+feasible/bounded)
- **MI-LCQP:** feas/obj (Assump 4.5 + GNN-analyzable) < sol (+GNN-solvable)
Convexity isn't required for MI-LCQP due to inherent integer non-convexity.
__(II)__ Yes, it is presented at the beginning of our rebuttal.
__(III)__ *Theoretically*, our GNN architecture (Page 3) achieves 1-WL expressivity when mappings ($g$'s, $r$'s) are injective (Xu et al., 2019), satisfied by our continuous function assumption.
*Practically*, MLP implementations limit expressivity, but with sufficiently large MLPs, the expressivity can closely approximate that of 1-WL on specific datasets, as Section 5 shows.
__(IV)__ We will correct it.
__(V)__ *GNN-solvability* requires all variable nodes to be distinguishable (no symmetry), while *GNN-analyzability* permits some symmetry but requires that edges with identical node-color pairs share weights (for example, if two edges both connect a blue node to a red node, they must have the same weight). This requirement implies WL-equivalent nodes share edge-level properties (see Fig. 5).
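For intuition about the underlying symmetry notion, here is a toy 1-WL color refinement sketch (illustrative only, not the formal definitions from the paper): nodes that end up with the same color are indistinguishable to message-passing GNNs, and full distinguishability of the variable nodes is the spirit of GNN-solvability.

```python
# Iteratively refine node colors by hashing (own color, multiset of
# neighbor colors); stable colors partition nodes into WL-equivalence classes.
def wl_colors(adj, n_iters=4):
    n = len(adj)
    colors = [0] * n
    for _ in range(n_iters):
        sigs = [(colors[i], tuple(sorted(colors[j] for j in adj[i])))
                for i in range(n)]
        palette = {s: c for c, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
    return colors

# a 4-cycle is fully symmetric: every node gets the same color
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert len(set(wl_colors(cycle4))) == 1

# a 4-path breaks some symmetry: endpoints differ from interior nodes
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert wl_colors(path4) == [0, 1, 1, 0]
```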
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal that addressed my questions and concerns. Given the answers will be reflected in the camera-ready version, I'm happy to increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your positive feedback and for increasing the score. We’re glad our responses addressed your concerns and will ensure all changes are reflected in the final version. We appreciate your time and support.
Best,
Authors | null | null | null | null | null | null |
Symmetry-Robust 3D Orientation Estimation | Accept (poster) | Summary: This work presents a full orientation estimation method for generic shapes. Concretely, a two-stage framework is proposed for this task. The method first uses a quotient orienter to recover the shape's orientation up to octahedral symmetries by continuous regression. Then a flipper is employed to predict one of 24 octahedral flips that returns the first-stage output to canonical orientation via standard classification. Additionally, a conformal prediction stage is used to enable the flipper to output adaptive prediction sets, resolving ambiguities in the results through human-in-the-loop interaction. Experimental results show that the proposed method achieves SOTA performance in up-axis prediction and full-orientation recovery.
## update after rebuttal
The reviewer would be happy to accept this paper if the title is changed to "3D Orientation Estimation for Symmetric Shapes" as the reviewer `gQb9` suggested.
Claims And Evidence: Mostly yes. But the claim of "anything" is somewhat problematic.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, but I am not able to check the correctness of all the proofs.
Experimental Designs Or Analyses: Yes. There could be some more experimental analyses to support the claim of "anything".
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: This paper is related to orientation estimation, shape canonicalization and geometric deep learning. The main contribution is to make the full orientation estimation possible for a wide range of generic shapes.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- As noted by the authors, this is the first attempt to solve the task of full-orientation estimation for generic shapes without class information.
- The paper analyzed why a naive regression approach would fail for full orientation estimation for generic shapes.
- The code is available.
Weaknesses
- In the experiments, the ShapeNet dataset is divided into a 90% training split and a 10% testing split. Then, where does the validation set come from?
- How robust is the method to noise? In practice, reconstructed shapes frequently include noise points. Could the authors evaluate the method’s resilience to noisy shapes, especially nearly symmetric shapes?
- The term "anything" should be used very carefully. To show that the method can achieve the "anything" capability, more experimental results might be needed. For instance, the method can be tested on human/face/hand/animal shapes and protein molecule shapes.
- Could the authors provide runtime analyses for both the training and inference stages of the method?
Other Comments Or Suggestions: - The initial two sentences of both the Abstract and the Introduction are essentially identical, rendering them somewhat redundant.
- The titles of certain subsections should not end with full stops.
- How were the standard deviations in Table 2 calculated?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and your helpful comments. In this rebuttal, we will address many of the questions you have asked in your review.
*In the experiments, the ShapeNet dataset is divided into a 90% training split and a 10% testing split. Then, where does the validation set come from?*
Because our two models are relatively costly to train (see below), we did not tune our DGCNN’s hyperparameters and instead used the defaults from the original DGCNN implementation. We chose to subsample 2k points per point cloud in each training iteration to roughly match Upright-Net’s 2048 points, and chose the largest batch sizes possible given our GPU’s memory constraints. We did monitor validation metrics during training, but our decision to end our training runs was driven primarily by computational constraints, and our validation metrics were continuing to improve (albeit very slowly) when we ended our final training runs.
We consequently do not have a separate test set. We have referred to our metrics as “validation” metrics rather than “test” metrics elsewhere in the manuscript, and would be happy to refer to the 90-10 split as a training-validation split. Our results on ModelNet40 and on Objaverse provide a comprehensive picture of our method’s performance on fully unseen data, and our ModelNet40 results show that our method also strongly outperforms Upright-Net on fully unseen data.
*How robust is the method to noise? In practice, reconstructed shapes frequently include noise points. Could the authors evaluate the method’s resilience to noisy shapes, especially nearly symmetric shapes?*
Thank you for this suggestion. We have repeated our experiments testing our pipeline’s up-axis estimation accuracy on Shapenet, where we now add Gaussian noise with a standard deviation of 0.05 and 0.1 to the point clouds (which have been normalized to lie in the unit ball) and normals (we re-normalize the normals after adding noise). We depict the result of this experiment [here](https://imgur.com/a/K1rTayu). Our method is fairly robust to small amounts of noise, with a noise std of 0.05 resulting in our pipeline’s accuracy (\% of shapes with angular error of up-axis prediction $<10^\circ$) dropping from 89.2% to 86.2%. Increasing the noise std results in a larger accuracy penalty, with our pipeline’s accuracy dropping further to 77.6%.
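A minimal NumPy sketch of this kind of perturbation (illustrative only, not necessarily the exact experiment code; point clouds and normals are assumed to be $(N, 3)$ arrays):

```python
import numpy as np

# Add Gaussian noise to a unit-ball point cloud and its normals,
# then re-normalize the normals, as described above.
def add_noise(points, normals, std, seed=0):
    rng = np.random.default_rng(seed)
    noisy_pts = points + rng.normal(scale=std, size=points.shape)
    noisy_nrm = normals + rng.normal(scale=std, size=normals.shape)
    noisy_nrm /= np.linalg.norm(noisy_nrm, axis=1, keepdims=True)
    return noisy_pts, noisy_nrm

rng = np.random.default_rng(1)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # points on the unit sphere
nrm = pts.copy()                                   # outward unit normals

noisy_pts, noisy_nrm = add_noise(pts, nrm, std=0.05)
assert np.allclose(np.linalg.norm(noisy_nrm, axis=1), 1.0)
```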
We also highlight that Figure 11 in the appendices depicts our pipeline’s outputs on meshes from Objaverse. These meshes are both highly ood for our pipeline, which was trained on Shapenet, and generally lower-quality than those found in Shapenet.
*Could the authors provide runtime analyses for both the training and inference stages of the method?*
Please see our response to Reviewer 3giS for our method’s inference times. Our orienter takes 420 seconds per epoch to train on a single V100 GPU – so 1719 epochs of training is 9.33 days for the orienter. Our flipper takes 410 seconds per epoch on the same V100 GPU – so 3719 epochs is 17.6 days for the flipper. We found that both models’ validation metrics continued to improve after many epochs of training. Improving our problem’s conditioning so that both models require fewer epochs to converge would be a valuable direction for future work.
*How were the standard deviations in Table 2 calculated?*
The mean and standard deviation of the chamfer distances were computed over the Shapenet validation set.
*The term "anything" should be used very carefully. To show that the method can achieve the "anything" capability, more experimental results might be needed. For instance, the method can be tested on human/face/hand/animal shapes and protein molecule shapes.*
We would be pleased to change our paper's title if the program chairs allow it, especially since we have become aware of another recent paper titled "Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models." For example, we could call our paper "Robust 3D Orientation Estimation for Symmetric Shapes".
---
Rebuttal Comment 1.1:
Comment: The reviewer appreciates the authors' rebuttal and acknowledges that the newly proposed title is a significant improvement over the original, which sounded overclaimed. However, to demonstrate that the method is really "robust", the reviewer suggests evaluating the method on real-world reconstructed 3D shapes, such as those obtained from 3DGS, SfM, or SLAM. Relying solely on ShapeNet models with synthetic noise and the relatively noise-free Objaverse shapes is insufficient to demonstrate true robustness. Additionally, the reviewer is unsure whether ICML allows title changes after submission, though this has been permitted in similar conferences like ICLR.

---

Summary: This paper explores the challenges faced by current baselines that aim at predicting 3D-shape orientation. It shows that minimizing an L2 distance cannot recover the ground-truth orientation in the presence of intrinsic symmetries, as the solution to the L2 distance will not be unique. To address this challenge, the paper proposes a two-stage pipeline that (1) predicts a set of approximations to the ground-truth orientation (which recovers a shape's orientation up to octahedral symmetries), and (2) classifies these approximations to return the best orientation.
Claims And Evidence: Look okay. The paper proposes to separately predict approximations to the ground-truth orientation, which they do using their proposed "quotient orienter", and then to classify these approximations to return the best one using their proposed "flipper". The main idea is that directly solving the regression problem to predict the orientation in the presence of intrinsic symmetries is not possible, which they show using their Propositions 3.1-3.3.
Methods And Evaluation Criteria: Looks okay. But we have two questions:
- We are wondering about lines 296-306 of the experimental setup. It seems the proposed method is trained on 10K points per shape, with 2K points sampled out of the 10K per shape for each epoch, while UprightNet is only trained on 2K points in total per shape?
- We would also like to ask the authors how the proposed pipeline performs when trained on a similar dataset to that proposed in Upright-Net?
Theoretical Claims: Propositions 3.1-3.3 all look okay in the paper and appendix.
Experimental Designs Or Analyses: Looks okay. However, please refer to the "Methods And Evaluation Criteria" Section.
Supplementary Material: Looks okay (Looked through all parts of the supplementary material)
Relation To Broader Scientific Literature: Looks okay.
Essential References Not Discussed: No recommendations.
Other Strengths And Weaknesses: No recommendations.
Other Comments Or Suggestions: No recommendations.
Questions For Authors: Please refer to the "Methods And Evaluation Criteria" Section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and your helpful comments. In this rebuttal, we will answer the two questions you have posed to us in your review.
*We are wondering about lines 296-306 of the experimental setup. It seems the proposed method is trained on 10K points per shape, with 2K points sampled out of the 10K per shape for each epoch, while UprightNet is only trained on 2K points in total per shape?*
This is correct. We followed the original implementation of Upright-Net as closely as possible, which calls for drawing a fixed reservoir of 2048 points to use in each iteration. Because one can draw arbitrarily large point clouds from the surface of a mesh, we saw no reason to restrict ourselves to a fixed reservoir of 2048 samples when training our pipeline.
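The difference between the two sampling schemes can be made concrete. In this sketch (sizes taken from the discussion above; the pool of points is a stand-in for actual surface samples), Upright-Net's scheme draws one fixed reservoir that is reused every iteration, while resampling draws a fresh subset each epoch:

```python
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(size=(10_000, 3))  # stand-in for 10K surface samples of a mesh

# Upright-Net-style: a single fixed reservoir of 2048 points reused every epoch.
reservoir = pool[rng.choice(len(pool), 2048, replace=False)]

def epoch_batch(rng):
    """Resampling-style: draw a fresh 2K-point subset for each epoch."""
    return pool[rng.choice(len(pool), 2048, replace=False)]

b1, b2 = epoch_batch(rng), epoch_batch(rng)
```

Successive epoch batches differ, exposing the network to more of the shape's surface over training.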
*We would also like to ask the authors how the proposed pipeline performs when trained on a similar dataset to that proposed in Upright-Net?*
Because our models are relatively costly to train, we have not had the opportunity to run this experiment during the rebuttal period. However, we believe that training both methods on all of Shapenet provides a more complete and realistic picture of how each method will perform in practice than training both methods on a small subset of classes from ModelNet40. | Summary: The paper presents a two-stage deep learning method to orient shapes, dubbed Orient Anything. It proves why naive L2 regression of the orientation matrix fails for shapes with symmetries, and presents a theoretical framework to overcome the problem. The method consists in selecting a finite group $\hat{R}$ that contains the rotational symmetries of the object. In practice, the authors choose the octahedral group that contains the 24 rotational symmetries of a cube and captures most of the rotational symmetries of real-world shapes. Then, the problem of orientation estimation is divided into two stages, corresponding to two neural networks: the first regresses the orientation up to a rotation in $\hat{R}$ (so called "quotient regressor"), the second one selects one of the rotations in $\hat{R}$ (so called "flipper"). The composition of the two transformations gives the estimated orientation. Finally, since the orientation of some shapes is inherently ambiguous, the authors propose to use conformal prediction when a human is in the loop, to let the network output adaptive prediction sets whose size varies with the flipper's uncertainty. The method is trained on ShapeNet and quantitatively tested on ShapeNet and ModelNet40, both standard and challenging benchmarks for up-axis estimation, i.e. estimating only the vertical axis of the object, and estimation of the full orientation. It outperforms an existing baseline, UpRightNet, for up-axis estimation and it outperforms some made-up baselines, based again on UpRightNet, for the full orientation case. 
Some qualitative results of generalization capabilities to ObjaVerse are reported in the appendix.
## Update after rebuttal
I carefully read all reviews and responses. The authors clarified how TTA is implemented and performed an ablation study on its contribution, which was found to be not critical for the overall performance of the method and its superior performance with respect to the baseline. The explanation of the first step of Prop 3.1 is also clear now. I'd suggest adding this explanation in the appendix. The other reviews do not uncover critical weaknesses, and the responses to them seem convincing. For all these reasons, I confirm my overall recommendation. However, I agree that the title should be toned down, if possible. In particular, I'm in favor of the proposal by the authors, "3D Orientation Estimation for Symmetric Shapes" (dropping "Robust" for the reasons discussed by reviewer ffTb).
Claims And Evidence: The claims are fully supported by clear and convincing theoretical results and experimental evidence.
Methods And Evaluation Criteria: Evaluation criteria make sense and, for the up-axis estimation, are the ones used in previous work. The main weakness is the lack of proper baselines for the full orientation case. The baselines created by the authors, being based on UpRightNet, were clearly at a disadvantage, so, in some sense, the second experiment does not provide new evidence in terms of comparisons. Yet the authors tried their best, and the experiment provides the absolute performance of the method on the harder problem of full orientation estimation. Hence, I think it is valuable and should stay in the final version.
Theoretical Claims: I believe the theoretical claims are correct. My only issue is with the first step of the proof of Proposition 3.1 in A.2. It is not clear why the equation at line 576 is equivalent to Equation 2. In Equation 2, the expected value is over $R \sim U(SO(3))$; it is not clear why here it becomes the expected value over $R'$ such that $RS = R'S$, even after reading the introduction to the section. I'd suggest clarifying this step.
Experimental Designs Or Analyses: Experimental designs and analyses are in general sound and valid.
One concern I have is on the effect of test-time augmentation: the authors did not ablate its contribution, so I think OrientAnything without TTA should be added to Table 1 and 2, to assess its importance. Moreover, for the flipper, it is not clear how a single output is selected when not using conformal prediction. The paper just reads "(4) output the plurality prediction", but there are multiple predictions from the multiple augmentations. Moreover, it is not clear how this works when using adaptive sets: in which order are logits from the different shapes ranked? A full rank across different input orientations or is some criterion deployed to choose one of them? I'd suggest to provide more details on TTA.
Supplementary Material: I reviewed all of the supplementary material.
I find the final sentence of B.1 "(4) output the prediction... with the smallest average quotient distance to the remaining predictions." unclear. Since the same sentence appears verbatim in the main paper, where it puzzled me as well, I'd suggest providing an extended / more formal description of the exact procedure in the supplementary. In particular, it is not clear to me what the "remaining predictions" are. Is there some undescribed filtering happening?
Typo at line 705 in the math: ^23 -> ^{23}.
Relation To Broader Scientific Literature: The paper cites the relevant literature and provides a clear original improvement with respect to it, both at the theoretical and at the performance level.
Essential References Not Discussed: None
Other Strengths And Weaknesses: S1. The paper is clearly written and presented.
S2. The idea of dividing orientation estimation into two steps based on the quotient regression is original, sound, non-trivial and effective.
S3. The method is shown to outperform the considered baseline.
W1. Test-time augmentation is not explained clearly, its hyperparameters are not reported and its importance is not ablated.
Other Comments Or Suggestions: Fonts in Figure 2 are too small.
Figure 4 does not really illustrate that Equation 2 has many solutions. This became clear to me only after reading A.3. I'd suggest to remove Figure 4 and move A.3 in the main text or to create a more informative figure.
line 353 "validation" -> is it "test"?
Questions For Authors: Q1. Please clarify how TTA is realized and its impact on the model performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and your helpful comments. We will gladly fix the typos you have pointed out and make the changes to the figures that you have requested in the camera-ready. **Please see our response to Reviewer 3giS for links to new versions of Tables 1 and 2 and Figure 6, where we have now included our method’s performance without TTA.** Below, we provide further details on how we implement TTA, and we clarify the first step of the proof of Prop. 3.1.
**TTA implementation details.**
**Quotient orienter.**
Our use of TTA for the quotient orienter is motivated by the observation that the quotient orienter succeeds on most rotations and fails on small subsets of rotations; see our response to Reviewer 3giS for a discussion of how this may originate from discontinuities in its outputs. Given an arbitrarily-oriented input shape $RS$, we would like to mitigate the possibility that the shape is in an orientation for which the quotient orienter fails. To do so, we apply $K$ random rotations $R_k$ to the input shape to obtain randomly re-rotated shapes $R_kRS$. Because the quotient orienter only fails on small subsets of rotations, we expect it to succeed on most of the $R_kRS$ and recover the orientations of $R_kRS$ up to an octahedral symmetry.
Because we are interested in the orientation of $RS$ rather than $R_kRS$, we then need to apply the inverse of $R_k$ to each predicted orientation $f_\theta(R_kRS)$ to obtain a set of $K$ predicted orientations $R_k^\top f_\theta(R_kRS)$. These will be correct orientations of $RS$ up to octahedral symmetries for each $k$ where the quotient orienter succeeds.
Because we expect this to be the case for most $k$, we use a voting scheme to choose one of the candidate orientations $R_k^\top f_\theta(R_kRS)$. (This is step (4) in B.1, which you have inquired about.) In this step, we compute each candidate’s average quotient $L_2$ loss w.r.t. every other candidate (i.e. the loss in Problem (3)), and choose the candidate which minimizes this measure. Because we expect the quotient orienter to have succeeded for most of the $R_kRS$, this average quotient loss will be small for most candidates and large for the few outlier candidates on which the orienter failed. Choosing the candidate with the minimum average quotient loss wrt the other candidates filters out these outliers and makes it likelier that we output the correct orientation of $RS$ up to an octahedral symmetry.
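This voting step can be illustrated with a self-contained NumPy sketch (the 5-candidate setup and the toy "predictions" are our own illustrative assumptions, not the trained orienter's outputs): it enumerates the octahedral group, defines the quotient $L_2$ distance, and selects the candidate with the smallest average quotient distance to the others, filtering out a single outlier.

```python
import itertools
import numpy as np

# The 24 rotations of the octahedral group: signed permutation
# matrices with determinant +1.
OCT = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1.0, -1.0], repeat=3):
        M = np.eye(3)[list(perm)] * np.array(signs)[:, None]
        if np.isclose(np.linalg.det(M), 1.0):
            OCT.append(M)

def quotient_dist(A, B):
    """L2 distance between rotations, quotiented by the octahedral group."""
    return min(np.linalg.norm(A - F @ B) for F in OCT)

def vote(candidates):
    """Step (4): pick the candidate with the smallest average quotient
    distance to the remaining candidates, filtering outlier predictions."""
    return min(candidates,
               key=lambda C: sum(quotient_dist(C, D) for D in candidates))

# Toy scenario: four candidates agree up to an octahedral symmetry
# (orienter succeeded); one is an unrelated random rotation (orienter failed).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
outlier = Q if np.linalg.det(Q) > 0 else -Q
good = [F @ np.eye(3) for F in OCT[:4]]
chosen = vote(good + [outlier])
```

The chosen candidate agrees with the correct orientation up to an octahedral symmetry, while the outlier is rejected.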
**Flipper.**
The TTA procedure is similar here. We are given an input shape $FS$, which is correctly-oriented up to an octahedral symmetry (a “flip”) $F$ if the quotient orienter succeeded. We apply $K$ random flips $F_k$ to the input shape to obtain randomly re-rotated shapes $F_kFS$ and apply the flipper to each of these shapes. Similarly to the previous case, if the flipper succeeds on some $F_kFS$, it predicts $F_kF$ rather than the flip $F$ we are actually interested in. We consequently left-multiply each prediction $g(F_kFS)$ by $F_k^\top$, which maps each successful prediction $g(F_kFS) = F_kF$ to the true flip $F$. Because we expect the flipper to have succeeded on most inputs $F_kFS$ and failed on a minority of inputs, we again use a voting scheme to pick out the plurality prediction. In this case, we simply return the most common flip among the set of $F_k^\top g(F_kFS)$.
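The flipper's plurality vote is even simpler to sketch (the specific flips below are illustrative): de-augmented predictions are bucketed by their rounded matrix entries and the most common flip wins.

```python
from collections import Counter
import numpy as np

def majority_flip(preds, decimals=3):
    """Return the most common flip among de-augmented predictions
    F_k^T g(F_k F S), keying each matrix by its rounded entries."""
    keys = [tuple(np.round(P, decimals).ravel()) for P in preds]
    best_key, _ = Counter(keys).most_common(1)[0]
    return np.array(best_key).reshape(3, 3)

true_flip = np.diag([1.0, -1.0, -1.0])   # 180-degree rotation about the x-axis
wrong_flip = np.diag([-1.0, 1.0, -1.0])  # flipper failure on one augmentation
preds = [true_flip] * 4 + [wrong_flip]
recovered = majority_flip(preds)
```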
We do not use TTA when outputting adaptive prediction sets. We would be pleased to add these details to the relevant appendices in the camera-ready.
**Proof of Prop. 3.1.**
We now clarify the first step of the proof of Prop. 3.1. Because we do not place any restrictions on the functions $f : \mathcal{S} \rightarrow SO(3)$ other than being functions (i.e. not one-to-many), Problem (2) decouples over the inputs to $f$. That is, we can independently solve for the optimal $f^*(RS)$ for any input shape of the form $RS$ with $R \in SO(3)$. If $S$ has rotational symmetries, there will be several $R’ \in SO(3)$ for which $RS = R’S$. Because $f^*$ cannot be one-to-many, we must have $f^*(RS) = f^*(R’S)$ for all such $R’$. Consequently, the terms in Problem (2) that are relevant to determining $f^*(RS)$ are precisely those involving the $R’$ s.t. $RS = R’S$.
This means that the $f^*$ which solves Problem (2) is defined at any input shape $RS$ as the rotation $f^*(RS) := R^* \in SO(3)$ which solves the problem on line 576.
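In symbols (with $\ell$ standing for the per-sample loss of Problem (2), whose exact form is given in the paper), the decoupling step can be restated as:

```latex
f^*(RS) \;=\; \operatorname*{arg\,min}_{R^* \in SO(3)} \;
\mathbb{E}_{\,R' \in SO(3) \,:\, R'S = RS}\big[\, \ell(R^*, R') \,\big],
```

which is valid because $f^*$ is single-valued and must therefore take one value on the entire equivalence class $\{R' : R'S = RS\}$.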
---
Rebuttal Comment 1.1:
Comment: I carefully read all reviews and responses. The authors clarified how TTA is implemented and performed an ablation study on its contribution, which was found to be not critical for the overall performance of the method and its superior performance with respect to the baseline. The explanation of the first step of Prop 3.1 is also clear now. I'd suggest adding this explanation in the appendix. The other reviews do not uncover critical weaknesses, and the responses to them seem convincing. For all these reasons, I confirm my overall recommendation. However, I agree that the title should be toned down, if possible. In particular, I'm in favor of the proposal by the authors, "3D Orientation Estimation for Symmetric Shapes" (dropping "Robust" for the reasons discussed by reviewer ffTb).

---

Summary: The paper introduces a two-stage method for estimating the pose of an object (3D point cloud). In the first stage, the pose is regressed modulo octahedral symmetries, which prevents prediction collapse for symmetric objects. In the second stage, the remaining octahedral ambiguity is resolved through classification. Experiments on ShapeNet/ModelNet show that the approach works well.
## update after rebuttal
The authors adequately responded to my comments. A refined title, refined propositions, and including the shown discontinuity plot in the paper/appendix improve the work. So, I have raised my score to 4.
Claims And Evidence: Broadly, the claims are well supported and the paper is well written.
1. The conclusion states "*Whereas previous approaches can only infer upright orientations for limited classes of shapes, our method successfully recovers entire orientations for general shapes.*" But, as mentioned in the introduction, "*Our work may be viewed as an efficient canonicalization method for the specific case of 3D shapes with rotational symmetries.*" In particular, earlier energy-based canonicalization approaches would not have an issue with symmetric objects.
2. This is slightly subjective, but the title could be more specific than "Orient Anything". In this paper, 3D point clouds of man-made objects are oriented.
Methods And Evaluation Criteria: The evaluation makes sense.
Theoretical Claims: I skimmed most of the proofs. I checked Prop 3.2 in more detail.
In general, there is an issue where the minimization problems specified minimize over functions $f$, but it is not specified what function class $f$ belongs to. This can lead to problems, e.g., if the proof requires $f$ to be discontinuous, not aligning well with neural network approximators.
### Proposition 3.2
1. If I'm not mistaken, the proof shows that there exists an $f^*$ of the specified form, not that all minimizers are of the specified form (as claimed in the prop.).
2. Here is an example, which I think should be explained.
Let $S$ be a cube. Let $\hat{R}$ be the octahedral group. Consider rotating the cube around one of its symmetry axes continuously from 0 to 90 degrees. At 0 and 90 degrees, the output must be the same since the input is the same. Is there a discontinuity somewhere in between? What happens if one inputs a cube into the trained network and varies its rotation?
3. What happens when $S$ has more symmetries than present in $\hat{R}$? For instance, in the experiments, there are several objects that have SO(2) symmetry (e.g. vases). Does the regression objective degenerate for these, similarly to in Prop 3.1?
Experimental Designs Or Analyses: 1. Why is the network trained for "3719 epochs"? Is it early stopping with a larger number of epochs?
2. The inference time should be reported.
3. The proposed method uses TTA, does the baseline Upright-Net?
Supplementary Material: I skimmed the proofs.
Relation To Broader Scientific Literature: It is a promising idea to factor out symmetries in canonicalization methods. It requires the existence of a subgroup that covers the symmetries of standard inputs.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: - The authors state that the octahedral group is among the largest subgroups of SO(3), without mentioning the icosahedral group. It would be interesting to see if using the icosahedral group leads to worse performance due to not being aligned with the symmetries of common man-made objects.
Other Comments Or Suggestions: N/A
Questions For Authors: Let me copy my questions to this section for the convenience of the authors.
### Proposition 3.2
1. If I'm not mistaken, the proof shows that there exists an $f^*$ of the specified form, not that all minimizers are of the specified form (as claimed in the prop.).
2. Here is an example, which I think should be explained.
Let $S$ be a cube. Let $\hat{R}$ be the octahedral group. Consider rotating the cube around one of its symmetry axes continuously from 0 to 90 degrees. At 0 and 90 degrees, the output must be the same since the input is the same. Is there a discontinuity somewhere in between? What happens if one inputs a cube into the trained network and varies its rotation?
3. What happens when $S$ has more symmetries than present in $\hat{R}$? For instance, in the experiments, there are several objects that have SO(2) symmetry (e.g. vases). Does the regression objective degenerate for these, similarly to in Prop 3.1?
### Experiments
1. Why is the network trained for "3719 epochs"? Is it early stopping with a larger number of epochs?
2. What is the runtime of the proposed method and Upright-Net respectively?
3. The proposed method uses TTA, does the baseline Upright-Net?
I will raise my score to accept if these are reasonably addressed and no other important issues are discovered during the reviewing.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and your helpful comments. We address your questions below.
**Proposition 3.2:**
*If I'm not mistaken, the proof shows that there exists an $f^\*$ of the specified form, not that all minimizers are of the specified form (as claimed in the prop.).*
This is correct; thank you for pointing this out. We will gladly amend Prop. 3.2 in the camera-ready to state that there exists a solution of the specified form.
*Let $S$ be a cube. Let $\hat{R}$ be the octahedral group. Consider rotating the cube around one of its symmetry axes continuously from 0 to 90 degrees. [...] Is there a discontinuity somewhere in between? What happens if one inputs a cube into the trained network and varies its rotation?*
In practice, our trained orienter does exhibit discontinuities at certain rotations. To demonstrate this, we perform an experiment on a bench shape, which is symmetric under a 180 degree rotation about the y-axis. We rotate the bench around this axis in increments of 1 degree and pass the resulting shape into our orienter for each rotation. We track two metrics throughout this process:
- The quotient loss of our orienter’s prediction (i.e. the loss function from problem (3), where we quotient the $L_2$ loss by the octahedral group). This measures the accuracy of our orienter’s predictions up to an octahedral symmetry.
- The chamfer distance between the shapes obtained by applying the orienter’s output to the rotated bench at subsequent angles of rotation. This detects discontinuities in the orienter’s output.
We have plotted both of these metrics across the rotation angle [here](https://imgur.com/a/hjvfOM1). Our orienter exhibits several sharp discontinuities, manifested as spikes in the chamfer distance plot. These discontinuities are associated with spikes in the quotient loss, which are occasionally large. However, these spikes in the quotient loss are highly localized, and our orienter performs well for the vast majority of rotations. These localized spikes motivate our use of TTA to improve our pipeline’s performance. We provide further details on TTA in our response to Reviewer gQb9.
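The spirit of this probe is easy to reproduce. Below is a self-contained toy version (the "orienter" here is a hand-made stand-in that snaps rotations to the nearest 90-degree symmetry, not the trained network): sweeping the rotation angle in 1-degree steps and measuring the chamfer distance between successive oriented outputs flags discontinuities as spikes.

```python
import numpy as np

def rot_y(deg):
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def chamfer(A, B):
    """Symmetric chamfer distance between two point clouds."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def toy_orienter(angle_deg):
    """Stand-in for a trained orienter on a 4-fold symmetric shape: it
    undoes the rotation up to the nearest 90-degree symmetry, so its
    output is discontinuous near 45, 135, ... degrees."""
    return rot_y(-90.0 * np.round(angle_deg / 90.0))

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 3))  # toy point cloud

spikes, prev = [], None
for a in range(0, 181):
    rotated = base @ rot_y(a).T             # input shape R S
    oriented = rotated @ toy_orienter(a).T  # orienter output applied to R S
    if prev is not None and chamfer(oriented, prev) > 0.1:
        spikes.append(a)                    # discontinuity between a-1 and a
    prev = oriented
```

The probe localizes the two jumps of the stand-in orienter near 45 and 135 degrees, mirroring how the chamfer-distance track in the linked plot detects our orienter's discontinuities.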
*What happens when $S$ has more symmetries than present in $\hat{R}$? For instance, in the experiments, there are several objects that have SO(2) symmetry (e.g. vases). Does the regression objective degenerate for these, similarly to in Prop 3.1?*
If S has more symmetries than $\hat{R}$, then we would expect the regression objective to degenerate. As you note, this typically occurs for shapes with continuous symmetries such as vases, because the octahedral group covers many of the symmetries of shapes with finitely-many rotational symmetries. However, this degeneracy is in fact benign for such shapes. For example, the solution to (2) for a vase which is symmetric under any rotation about its up-axis may be any rotation about the up-axis (the shape’s continuous axis of symmetry). But unlike the bench shape that we discuss in lines 192-205, any such rotation is a valid orientation of the vase up to one of its symmetries, so our pipeline will output a rotation that returns the vase to its correct pose (up to a symmetry).
**Experiments:**
*Why is the network trained for "3719 epochs"? Is it early stopping with a larger number of epochs?*
We were primarily bottlenecked by computational constraints, and the validation accuracy was continuing to increase (albeit very slowly) when we decided to end our final training run. We believe that our pipeline’s accuracy could be slightly improved with further training, particularly for the flipper.
*What is the runtime of the proposed method and Upright-Net respectively?*
Our method’s inference time with TTA is 0.4517 seconds/shape (std=0.0252). Its inference time without TTA is 0.0126 seconds/shape (std=0.0466). Upright-Net’s inference time is 0.0672 seconds/shape (std=0.0252). We will add these details to the camera-ready.
Our method’s inference time without TTA is faster than Upright-Net’s, but employing TTA makes our method’s inference time notably longer than Upright-Net’s. However, the new metrics that we have computed for our pipeline without TTA (links below) show that our method continues to strongly outperform Upright-Net if we omit TTA.
*The proposed method uses TTA, does the baseline Upright-Net?*
Upright-Net does not use TTA. For the sake of comparison, we have re-run the experiments in our paper without TTA. The angular error plots in Figure 6, now including our method without TTA, are [here (6a, Shapenet)](https://imgur.com/a/YpsbHrw) and here [(6b, ModelNet40)](https://imgur.com/a/FwyEydw). A screenshot of a new version of [Table 1 is here](https://imgur.com/a/y5KYtOR), and [Table 2 is here](https://imgur.com/a/4jBy2ko).
Omitting TTA causes our method’s up-axis estimation accuracy to fall by roughly 4-5% depending on the dataset, but our method continues to strongly outperform Upright-Net without TTA.
---

DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs
Paper Decision: Accept (oral)

Summary: The paper introduces DistiLLM-2, a novel approach to distilling knowledge from large language models (LLMs) into smaller, more efficient student models. DistiLLM-2 employs a contrastive approach that leverages the synergy between loss formulations and data types, simultaneously increasing the likelihood of teacher responses and decreasing that of student responses. The authors claim that this method achieves state-of-the-art performance for student LLMs across various tasks, including instruction-following, mathematical reasoning, and code generation. Additionally, DistiLLM-2 supports diverse applications such as preference alignment and vision-language extensions. The key innovation lies in the contrastive loss function, which applies distinct loss functions to different types of training samples, effectively incorporating the synergy between loss formulations and data perspectives. The paper also introduces optimized dataset curation strategies and curriculum-based adaptive loss mechanisms to further enhance the distillation process.
Claims And Evidence: Supported Claims:
- Improved performance across tasks
- Contrastive approach effectiveness is analytically supported
- Data curation strategy for teacher and student generations
- Showed impact of dataset size
- Showed generalization to new tasks including vision
Methods And Evaluation Criteria: Yes. Methods:
Contrastive Loss Function: The core idea of using a contrastive loss function that leverages the synergy between loss formulations and data types is well-motivated and addresses what the authors see as the limitations of previous distillation methods. The paper also introduces a data curation strategy and curriculum-based adaptive learning.
Evaluation Criteria:
Benchmark Evals: AlpacaEval, Evol-Instruct, UltraFeedback, GSM8K, MATH, HumanEval, and MBPP are widely recognized and represent relevant tasks for evaluating LLM capabilities.
LLM-as-a-Judge: The use of LLM-as-a-Judge for evaluating instruction-following tasks provides a robust and comprehensive assessment of the generated responses.
Pass@k for Code Generation: The use of Pass@k as an evaluation metric for code generation tasks is standard practice and accurately reflects the best case ability of the student model to generate correct and executable code.
Speculative Decoding: This is a great proxy for how well a student matches a teacher.
In summary, the proposed methods and evaluation criteria are well-aligned with the problem of LLM distillation.
Theoretical Claims: Appendix B has all the proofs; all very mathy... I'm not 100% sure I'm qualified to look through all of this, but here we go...
- Behavior of KL and RKL: The paper provides a mathematical explanation for the behavior of KL and RKL, including the "pulling-up" effect of KL and the "pushing-down" effect of RKL. The explanation made sense and didn't seem to have issues.
- Derivation for Remark 1: The paper presents a derivation for Remark 1, which establishes a mathematical connection between the proposed CALD loss function and DPKD/DPO (Appendix B.2). I looked through the derivation; it didn't seem to have issues.
- First-order Approximation for Mercator Series: I don't know if I'm qualified to check series expansions, but it looked fine.
Experimental Designs Or Analyses: As mentioned in the methods, it looks good.
Supplementary Material: skimmed most of it; looked at the details of some of it.
In the speculative decoding section, can the authors also show the acceptance rates?
Relation To Broader Scientific Literature: distillation losses are extremely important to a large set of problems including RL, specdec, teaching student models.
Essential References Not Discussed: none that i can think of
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your constructive comments. We have rephrased the question for ease of reference and provide our corresponding responses below. We look forward to any further discussion.
***Q1. Inclusion of acceptance rates in speculative decoding analysis for completeness***
**A1.** Thank you for the suggestion. To provide a more concrete answer, we evaluated the acceptance rates using the vLLM [1] framework, which supports detailed speculative decoding metrics. The results of acceptance rates are presented as follows:
| | SFT | GKD | DistiLLM | DistiLLM-2 |
|----------|-------|-------|----------|------------|
| Phi-3-medium | 0.412 | 0.464 | 0.469 | 0.487 |
| Phi-3.5-mini | 0.397 | 0.443 | 0.452 | 0.522 |
We can see that DistiLLM-2 significantly boosts the acceptance rate, leading to a higher speed-up via speculative decoding.
[1] Kwon et al., “Efficient Memory Management for Large Language Model Serving with PagedAttention.” SOSP. 2023
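For readers unfamiliar with the metric behind the table above: under the standard speculative-sampling rule, a draft token $x \sim q$ is accepted with probability $\min(1, p(x)/q(x))$, so the expected per-token acceptance rate is $\sum_x \min(p(x), q(x))$, which grows as the student distribution $q$ approaches the teacher distribution $p$. A toy sketch (the distributions are made up; this is not the vLLM metric implementation):

```python
import numpy as np

def expected_acceptance(p, q):
    """Expected acceptance rate of speculative decoding: a draft token
    x ~ q is accepted with probability min(1, p(x)/q(x)), so the
    expectation over x is sum_x q(x) * min(1, p(x)/q(x))
    = sum_x min(p(x), q(x))."""
    return float(np.minimum(p, q).sum())

p = np.array([0.6, 0.3, 0.1])       # teacher's next-token distribution
q_far = np.array([0.1, 0.3, 0.6])   # poorly matched student
q_near = np.array([0.5, 0.3, 0.2])  # better-distilled student
```

A better-distilled student (higher overlap with the teacher) yields a higher acceptance rate, which is why distillation quality translates into speculative-decoding speed-up.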
---
Rebuttal Comment 1.1:
Comment: thank you for updating. | Summary: This paper addresses the critical challenge of compressing large language models (LLMs) for practical deployment by focusing on knowledge distillation (KD). The authors highlight the limitations of existing KD approaches, which primarily focus on either optimizing loss functions (like Kullback-Leibler divergence or skew KL) or curating training data (teacher-generated vs. student-generated outputs).
They also point out that current methods often overlook the synergistic relationship between loss formulations and data types, limiting the potential performance gains in student models. Additionally, the authors note the rise of contrastive learning methods (like DPO) but observe that direct application of DPO to KD suffers from reward hacking issues.
To address these shortcomings, the paper introduces DISTILLM-2, a novel contrastive approach for KD of LLMs that builds upon DistilLLM. Its main contributions are as follows:
(1) The introduction of a contrastive approach with asymmetric loss dynamics (CALD): This involves analyzing and leveraging the behavior of forward and reverse KL (and SKL) during training, and applying distinct loss functions to different training samples.
(2) Optimized dataset curation and curriculum-based adaptive loss mechanisms: These enhancements provide practical guidelines for their contrastive approach.
(3) Demonstrated performance: The paper claims state-of-the-art performance for small language models (sLMs) across various tasks, including instruction-following, reasoning, and code generation, and also demonstrates the method's versatility by applying it to preference alignment and vision-language models.
Claims And Evidence: I feel the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria do make sense for the problem at hand.
Theoretical Claims: The paper does not have any major theoretical contributions — only a few theoretical remarks / observations. (I have not checked the calculations behind these remarks, but they seem straightforward and they're not a major part of this paper anyway.)
Experimental Designs Or Analyses: I went through the experimental design and analysis of the paper and it looks sound and valid to me.
Supplementary Material: I reviewed the Appendices.
Relation To Broader Scientific Literature: This work builds upon DistiLLM (Ko et al., 2024), which is a state-of-the-art approach for distilling LLMs. The paper proposes a new approach for distilling LLMs, both by combining and refining several known techniques and by introducing new ideas. It seems to achieve significant improvements over the state of the art across various benchmarks.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength: The authors have conducted extensive experimentation both to demonstrate the effectiveness of their techniques but also to motivate their ideas.
Weakness: The paper does not have any major weaknesses.
Other Comments Or Suggestions: This is minor, but I was a bit confused when reading Lines 152-156 — maybe the authors can rephrase their statement there.
Questions For Authors: I have no questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. Below we address each concern in detail. We look forward to any further discussion.
***Q1. Lines 152–156 might be confusing – rephrasing suggested***
A1. Thank you for the suggestion. We will revise Lines 152–156 to improve clarity. This paragraph aims to illustrate the relation between contrastive learning in preference alignment and our approach to knowledge distillation: increasing the student model's likelihood of generating outputs aligned with teacher responses, and decreasing it for those resembling weaker student responses, using distinct loss functions.
To make this point clear, we plan to revise the paragraph as follows:
“Similarly, we incorporate this contrastive concept from preference optimization into KD by assigning different loss functions to different types of responses: encouraging the student model to assign higher likelihood to high-quality responses generated by the teacher ($y_t$) -- $q_\theta(y_t|x)$ -- while reducing the likelihood of lower-quality student responses ($y_s$) -- $q_\theta(y_s|x)$ -- that deviate from the teacher.” | Summary: This paper introduces DistiLLM-2, a contrastive approach for LLM distillation, optimizing student models by increasing teacher response likelihood (SKL loss) and decreasing student response likelihood (SRKL loss). It improves data curation, adaptive learning, and curriculum-based loss weighting, outperforming baselines in instruction-following, math reasoning, and code generation.
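The SKL/SRKL pairing just summarized can be illustrated numerically. The sketch below is our own simplification under one common skew convention (SKL(p||q) = KL(p || αp + (1-α)q), and symmetrically for the reverse direction); it is not the authors' implementation:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions given as prob lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def skl(p, q, alpha=0.1):
    """Skew forward KL: KL(p || alpha*p + (1 - alpha)*q)."""
    return kl(p, [alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, q)])

def srkl(p, q, alpha=0.1):
    """Skew reverse KL: KL(q || alpha*q + (1 - alpha)*p)."""
    return kl(q, [alpha * qi + (1 - alpha) * pi for pi, qi in zip(p, q)])

teacher = [0.7, 0.2, 0.1]   # p: teacher next-token distribution
student = [0.4, 0.4, 0.2]   # q_theta: student next-token distribution

# Contrastive idea: SKL term on teacher responses pulls the student toward
# the teacher; SRKL term on weaker student responses pushes it away from them.
loss = skl(teacher, student) + srkl(teacher, student)
print(round(loss, 4))
```

The skew mixture keeps the reference distribution inside the support of the mixture, which is what makes both directions finite even when the two distributions disagree sharply.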
Claims And Evidence: The claims are supported by clear and convincing evidences.
Methods And Evaluation Criteria: The evaluation methods make sense.
Theoretical Claims: The approximation in Equation 7 seems problematic. In Appendix B.3., the approximation relies on the assumption that $p(y|x) \approx 1$ and $\alpha p(y|x) + (1 − \alpha) q_{\theta}(y|x) \approx 1$. However, when y is a sequence, p(y|x) can be quite small due to the accumulation of per-token probabilities, making the approximation unreasonable. Could the authors provide empirical results on how the values of $p(y|x)$ are distributed on real datasets?
Experimental Designs Or Analyses: The experiments on general instruction tuning, math, and code domains are sound and valid.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper provides modifications to the previous on-policy KD methods proposed in MiniLLM[1] and GKD[2] and shows better empirical approaches.
[1] MiniLLM: Knowledge Distillation of Large Language Models.
[2] On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weakness on clarity:
The presentation of the method can be clearer. For example, what does the term CALD stand for? It comes out in lines 61-62 without any explanation.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. Below we address each concern in detail. We look forward to any further discussion.
***Q1. Empirical validation of the approximation in Equation (7) for sequence-level probabilities***
**A1.** Thank you for pointing out the potential mismatch between our first-order Mercator approximation in Equation (7) and actual sequence-level probabilities $p(y|x)$. Indeed, this approximation assumes $p(y|x) \simeq 1$, which can be problematic for long sequences where $p(y|x)$ might vanish.
In our implementation, we compute probabilities at the token level, clip extreme values to avoid outliers, and average them over the sequence, yielding a reasonably scaled alpha that is shared across all tokens.
The corresponding code (in `src/distillm_trainer.py`, lines 1161–1164) is:
```python
anchor = (1 - base_alpha_1) * logp_logq
logps_logqs = (
(tea_per_token_logps * loss_mask).sum(-1) / loss_mask.sum(-1)
).exp() - (
(per_token_logps * loss_mask).sum(-1) / loss_mask.sum(-1)
).exp() # sentence-level
alpha_1 = torch.clip(
1 - anchor / (logps_logqs + 1e-5),
min=1e-2, max=base_alpha_1
).unsqueeze(-1).unsqueeze(-1)
```
This design makes the approximation **invariant to sequence length**, effectively resolving the concern about very small $p(y|x)$ values in long sequences. Notably, this first-order approximation has the advantage of admitting closed-form $\alpha$ updates for efficient, per-sample adaptation.
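The length-invariance argument can be checked numerically. In this illustration (ours, not from the rebuttal), a constant per-token probability t makes the raw sequence probability t**n vanish with length n, while the exponentiated mean log-probability, the quantity computed in the snippet above, stays fixed at t:

```python
import math

t = 0.9  # constant per-token probability, chosen for illustration
for n in (10, 100, 1000):
    seq_prob = t ** n                       # raw p(y|x): vanishes with n
    mean_logp = sum(math.log(t) for _ in range(n)) / n
    avg_prob = math.exp(mean_logp)          # length-invariant: always t
    print(f"n={n}: seq_prob={seq_prob:.3e}, avg_prob={avg_prob:.3f}")
```

This is why averaging per-token log-probabilities before exponentiating yields a reasonably scaled quantity regardless of response length.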
Additionally, we provide empirical results of token-level probabilities on teacher ($y_t$) and student ($y_s$) responses to quantify the first-order approximation using the teacher (Mistral-7B) and student (Danube2-1.8B) models. To better characterize the distribution of token-level probabilities, we report the first (Q1), second (Q2, median), and third (Q3) quartiles for both models:
| | $y_t$ Q1 | $y_t$ Q2 | $y_t$ Q3 | $y_s$ Q1 | $y_s$ Q2 | $y_s$ Q3 |
|---------|------|------|------|------|------|------|
| teacher | 0.82 | 0.88 | 0.94 | 0.79 | 0.85 | 0.92 |
| student | 0.80 | 0.85 | 0.94 | 0.81 | 0.89 | 0.96 |
The results show that most of the token-level probability values are close to 1, which ensures small approximation error of our first-order approximation. We appreciate the reviewer’s insightful comment and will update the paper accordingly to better explain this implementation and its practical implications.
***Q2. CALD term appears without explanation – unclear terminology***
**A2.** Thanks for the question. CALD stands for **C**ontrastive **A**pproach for **L**LM **D**istillation, which is first introduced in lines 60-61 in our main manuscript. To improve the readability and clarity, we will also define the term at the beginning of Section 3.1.2. We will additionally refine Section 3 and add more descriptive languages right after the definition (Equation (5)). | Summary: The paper introduces DISTILLM-2, a novel approach for LLM knowledge distillation. Unlike prior work that applies identical loss functions to both teacher- and student-generated data, DISTILLM-2 leverages a contrastive loss function to explicitly increase the likelihood of teacher responses while decreasing that of student responses. Extensive experiments demonstrate that DISTILLM-2 achieves superior performance across multiple tasks, including instruction-following, mathematical reasoning, and code generation.
Claims And Evidence: The claims in the paper are well-supported by clear and convincing evidence.
Sections 4.1 to 4.3 present experiments across three tasks—general instruction-following, mathematical reasoning, and code generation—using three different LLM teacher-student pairs. The results consistently demonstrate that DISTILLM-2 outperforms baseline KD methods, providing enough empirical validation for its effectiveness.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analyses, which appear to be sound and well-structured. However, the largest LLM used in the study is only 9B, which raises concerns about the generalizability of the conclusions to larger-scale models. Additional experiments with larger models would strengthen the validity of the findings.
Supplementary Material: Yes, the code and Appendix
Relation To Broader Scientific Literature: Yes.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Please refer to the raised concern in `Experimental Designs Or Analyses`
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. We have rephrased your comments for simpler reference and have included our respective responses. We look forward to any further discussion.
***Q1. Use of models up to 9B; interest in evaluating scalability to larger models***
Thank you for the insightful comment. In the evaluation, we already demonstrated the effectiveness of DistiLLM-2 over varying sizes of teacher and student models. For teacher models, we have included larger backbones with over 9B parameters, such as **Qwen1.5-14B** in **Appendix D.2** (Table 10) and **Phi-3-Medium-14B** in **Appendix D.4** (Table 12).
For larger student models, we are working on conducting more such experiments. Although we might not be able to report the results in a timely manner during rebuttal due to the resource cost, we will report them in the future.
Mathematically, as explained in Remark 1, DistiLLM-2 proposes a dedicated CALD objective function that could exhibit behavior similar to DPO in preference alignment. Given DPO’s wide validation across diverse architectures and the supporting results from our experiments, we are confident in the generality of the proposed approach. | null | null | null | null | null | null |
Focus On This, Not That! Steering LLMs with Adaptive Feature Specification | Accept (poster) | Summary: The paper proposes a modification to the typical instruction-tuning process by including a "focus prompt" in the context, to guide the model to focus on certain aspects of the user input. Experiments on two synthetic settings demonstrated the effectiveness of the proposed method over vanilla SFT and few-shot baselines.
Claims And Evidence: It is evident that including a focus prompt in the instruction will help with focusing on certain parts of the user input. It is unclear why training (instruction-tuning) on such data is necessary to achieve said effect. See more elaboration in the "Questions For Authors" section below.
Methods And Evaluation Criteria: The toy experiment on SS with synthetic keyword labels of {Bayesian, Pineapple} is a bit too artificial. The choice of keywords is so specific that it almost seems as if they were cherry-picked. Since the experiment shouldn't be expensive, consider expanding the set of datasets used to validate the method, together with the additional baselines mentioned in the "Experimental Designs Or Analyses" section below.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, but there are important baselines missing, specifically vanilla SFT combined with focus-instruction prompting at test time, and directly rewriting the user input with the spurious features removed. Another missing baseline is typical conditional supervised fine-tuning, which appends control tokens in place of the focusing prompt.
Supplementary Material: I read through the entire Appendix.
Relation To Broader Scientific Literature: Typically, people address the issue of spurious features when querying LLMs with prompting approaches. Fine-tuning on focus prompts has not been explicitly investigated. However, FIT can be viewed as a specialized version of conditional supervised fine-tuning (cSFT) in which control tokens are appended to steer the model to behave in certain ways.
Essential References Not Discussed: Not to the extent of my knowledge.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: My main concern with this work is the relevance of the technique. A major assumption that FIT makes is knowing which features are spurious beforehand (even before the training phase). If we do have the information about which features are spurious, there are a number of easier techniques we could apply to prevent the spurious prediction. One is prompting the model during testing with focus instructions. A well-trained instruction-tuned model should be able to properly ignore certain parts of the input to make the prediction. Another, even more fundamental, solution is to first have the language model rewrite the question while leaving out the spurious piece of information, then respond to the augmented question. These are all fairly straightforward techniques that don't involve complicating the training process. So unless the authors can justify why modifying the instruction-tuning process to include the focus prompt is absolutely critical, it seems to be an unnecessary complication of the problem.
One other concern is the scalability of FIT with respect to multiple potentially spurious features. In the experiments, the setting involves one very specific set of spurious features (e.g. {Bayesian, Pineapple}). In non-synthetic scenarios, there are multiple potential spurious features we would hope to control independently during inference time. It would be devastating if FIT requires retraining the model every time a new spurious feature is introduced. It would also be infeasible to maintain multiple copies of the model for different spurious relation protections. Thus, it is important that the method scales efficiently with the number of spurious features and can effectively switch between different combinations of the features. This part is lacking in the paper.
Another concern is the sensitivity to the focus instruction at test time. During test time, sometimes we don't wish the model to focus on certain features. In this case, would the FITed model be over-reliant on the focus prompt during inference time to make the correct prediction? If the focus instruction is removed, would the model still perform similarly to the vanilla SFTed version? Are there tradeoffs in performance from the model being over-sensitive to the focus instruction?
My final concern is regarding the assumption of oracle access to knowing which features are spurious beforehand. This is a very strong assumption and consequently a lot of simpler techniques could be applied if we do have such information (see additional baselines in the first paragraph). Most cases we don't have the information of the spurious features. One could argue that this is out of scope of this paper and that might be true, but some heuristics to identify spurious features should also be provided to make FIT generally useful.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## P1: Spurious Feature Knowledge and Baselines
**Simpler Baselines suggested by the reviewer:**
- **Test-Time Prompting using focus instructions without FIT training:** Already tested in our paper via:
- **Zero-Shot** (lines 318–321, Figures 6–8)
- **SFT(y)** (line 1053, Figures 6–8)
*Result:* These methods fail to reliably steer model outputs without FIT training, indicating that models do indeed struggle to ignore certain parts of the input, showing that the inclusion of focus instruction for FIT is necessary.
- **Input Rewriting:** Impractical for complex datasets (e.g., SMNLI), as spurious features (like genre) are usually latent and deeply embedded; they cannot be easily rewritten without altering the underlying example, which could lead to label mismatch in the process. FIT is therefore far more practical and simpler in these more complex and realistic settings.
**Prior Knowledge of Spurious Features:**
Identifying spurious features is not a practical limitation of FIT, as it aligns with standard industry practices for transparent, reliable modeling; moreover, existing methods such as automated spurious-correlation detectors can be used alongside FIT (see our detailed discussion in Appendix B, which also includes heuristics for identifying spurious features, as requested).
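To make the focus-instruction mechanism concrete for readers, here is a hypothetical sketch of how such a prompt might be assembled. The template wording and the `build_focus_prompt` helper are our invention, not the paper's actual prompts (which appear in its Appendix D):

```python
# Hypothetical FIT-style prompt construction; templates are illustrative only.
def build_focus_prompt(task_input, focus=None, ignore=None):
    """Prepend optional focus/ignore instructions to a task input."""
    parts = []
    if focus:
        parts.append(f"Focus on the following feature: {focus}.")
    if ignore:
        parts.append(f"Ignore the following feature: {ignore}.")
    parts.append(task_input)
    return "\n".join(parts)

example = build_focus_prompt(
    "Review: The pineapple scene was unforgettable. Sentiment?",
    focus="the overall sentiment of the review",
    ignore="the keyword 'pineapple'",
)
print(example)
```

During training, such instruction-annotated inputs are paired with labels consistent with the specified focus, so the model learns to condition its prediction on the instruction rather than on any fixed feature.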
## P2: Scalability and Retraining
**Scalability:** FIT does not require retraining for every new spurious feature. Experiments on BBQ (Section 4.2) and in NLG settings (see response P1 to reviewer ivne) confirm FIT’s ability to adapt at test time to unseen features without retraining or providing additional knowledge.
**Multiple Features:** Our current experiments provide initial evidence that FIT handles combined instructions (i.e., the presence of both 1 focus and 1 ignore type in the same specification), and we will expand this in future work. We agree that focusing on and/or ignoring arbitrarily many features simultaneously is an important direction for future work, and will further highlight this in the future work section.
## P3: Sensitivity to Focus Instructions
Our paper already addresses this concern:
- **Default Prompt Performance:** FIT maintains comparable accuracy to SFT models even without explicit instructions (Figures 6–8; compare the default/empty focus types) and does not show sensitivity when dropping focus instructions (compare default/empty to focus(C) focus types, corresponding to task causal accuracy). Moreover, the ablation in line [391, right] shows that FIT does not harm pre-existing instruction-following capabilities.
- **Robustness to Wording Variations:** Ablation studies (Section 5 line 430, left) show variations in instruction wording between train- and test-time prompts have minimal impact.
## P4: Comparison with Conditional SFT (cSFT) and Control Token Variants
We appreciate the reviewer suggesting additional related work, though specific references were not provided. We identified two potentially relevant methods: conditional SFT (cSFT) ([Zhang et al., 2024](https://arxiv.org/abs/2406.01976)) and the control token variant SteerLM ([Dong et al., 2023](https://arxiv.org/abs/2310.05344)). FIT significantly differs from these approaches as follows:
**Conditional SFT (cSFT):**
- **Objective:** cSFT prevents corpus-level spurious correlations but lacks dynamic adaptability at test-time.
- **FIT Difference:** FIT explicitly trains for flexible test-time adaptability via natural language instructions, dynamically generalising to unseen features without retraining (see Section 4.3 BBQ experiments).
**Control Token Methods (e.g., SteerLM):**
- **Data Annotation:** SteerLM uses fixed, human-annotated stylistic attributes (e.g., humor), focusing on output style. FIT directly annotates inputs with dynamic instructions to prioritize input features.
- **Feature Specification:** SteerLM uses predefined attribute values, limiting flexibility. FIT dynamically detects and prioritizes input features through natural language prompts, without explicit attribute values.
- **Training:** SteerLM relies on iterative bootstrapping. FIT employs straightforward supervised fine-tuning without bootstrapping.
- **Prompting Mechanism:** SteerLM's fixed attribute tokens limit its adaptability. FIT uses flexible natural language prompts enabling real-time steering and generalisation to unseen features at test-time.
- **Goals:** SteerLM aims for stylistic output adjustments, whereas FIT directly enhances model robustness, fairness, and alignment by controlling input-feature relevance.
**Summary:** FIT fundamentally differs from cSFT and control-token variants. These distinctions will be clearly detailed in the final paper. Please let us know if there is a specific cSFT paper you had in mind, and we are happy to further detail its relationship to FIT.
## Conclusion and Final Comments
We appreciate the reviewer’s valuable feedback. We hope our clarifications fully address the concerns raised.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for the clarification. Many points raised in the review were due to my misreading of the paper, and I take full responsibility. The radar diagrams are slightly harder to parse than, say, tables, but ultimately they do convey the supporting evidence for the claims made in the paper.
One crucial future work direction is extending to tasks beyond text classification. For example, text-conditioned image generation should benefit significantly from similar techniques for control that is not restricted by the data distribution.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your updated feedback.
- We agree the radar plots alone are not the easiest to parse, so we will include full tables in the appendix of the final version of the paper and point to these from the main paper for clarity.
- Finally, we agree the extension of FIT to text-conditioned image generation is definitely an interesting future direction. We will include a mention of this within our future work section.
Thank you again for your helpful comments and responsiveness during the review process.
Claims And Evidence: There are three main claims in the paper:
1) A new method FIT for flexible and dynamic adjustment of feature focus during inference time.
2) Experiments across several NLP tasks such as sentiment analysis, QA, and NLI.
3) Generalization to unseen features and distribution shifts over feature values.
At a surface level, all of the above claims are supported. The proposed approach allows for adjustment of features at inference time and is evaluated on three datasets focusing on sentiment analysis (SS), NLI (SMNLI), and QA (BBQ), as well as an additional dataset called SHANS for a further NLI comparison in Appendix L.
However, I have concerns about the setup and evaluations and whether they support the claims entirely.
(1) I am unconvinced that the method is flexible or dynamic, as it requires explicit knowledge of the task and spurious features. The user needs to know a priori whether something is spurious or not and be able to express it in “ignore X” format. This would require already knowing the answer in order to know whether the model should ignore anything outside the context, as in the given example.
(2) The evaluated tasks are easy tasks for the size of model and do not accurately reflect the current evaluation landscape for such sized models.
(3) Spurious features that are added may not be highly relevant and challenging to the task. How often is pineapple in the SS dataset?
(4) Evaluations are limited to the same size of model (7B). It would be good to see models on either end for trends. Do smaller models have more trouble following these instructions, for example?
Methods And Evaluation Criteria: For evaluation criteria, there are two limitations to the proposed criteria: (1) limited datasets and setup, and (2) metrics.
1) The proposed evaluation datasets are commonly used to evaluate NLP models; however, these are old evaluations (pre-LLM era), and many newer benchmarks are used to evaluate LLMs. Given that the paper mostly investigates larger language models and focuses on instruction tuning of LLMs, I would expect evaluations closer to the more conventional QA evaluations such as ARC, MMLU, HellaSwag, etc. These tasks pose a harder challenge for LLMs of this scale, which may induce different performance for the given approach.
2) The authors propose to use focus accuracy to measure performance of the model. While I understand why it’s important to evaluate focus accuracy, conventional evaluation of spurious correlations (e.g. in the vision literature such as evaluations over Waterbirds classification, CelebA spurious features, etc.) evaluate both group performances as well as overall accuracy. I would like to see overall accuracy as an added metric for comparisons.
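The overall and group-wise accuracies the reviewer asks for can be computed in a few lines. This sketch (ours) mirrors the worst-group evaluation convention from the spurious-correlation literature (e.g., Waterbirds evaluations):

```python
def group_metrics(preds, labels, groups):
    """Overall accuracy, per-group accuracy, and worst-group accuracy."""
    stats = {}
    for p, y, g in zip(preds, labels, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (p == y), total + 1)
    per_group = {g: c / t for g, (c, t) in stats.items()}
    overall = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return overall, per_group, min(per_group.values())

# Toy example: group labels are hypothetical annotations of which
# feature (causal vs. spurious) each example's prediction relies on.
preds  = [1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 0, 1, 1]
groups = ["spurious", "spurious", "causal", "causal", "causal", "causal"]
overall, per_group, worst = group_metrics(preds, labels, groups)
print(overall, per_group, worst)
```

Reporting overall accuracy alongside worst-group accuracy reveals whether gains on the average case come at the expense of the hardest group.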
Theoretical Claims: I have looked at the proofs in Appendix I. The propositions make some strong assumptions about the problem, especially a balanced label distribution and independence conditions. I understand this may only be needed for the theory, and the theory is not a focus of the paper as it is not stated in the main text; however, the authors should add some explanation or justification for why these assumptions are needed.
There may also be some minor typos in this section such as Theorem I.1 vs. Proposition I.1 (line 1367).
Experimental Designs Or Analyses: Yes. Some concerns that arise are
1) How are the spurious features like Pineapple and Bayesian chosen?
2) Whether the tasks are challenging enough for the model size. See questions above for suggested other tasks.
3) There is no demonstration that forgetting does not occur. It would be good to see that fine-tuning does not decrease performance on other tasks. This can be verified on other tasks, as is standard in studying the alignment tax.
Supplementary Material: I have reviewed all supplementary materials for the paper. In particular, theory from Section I, extra experiments in Appendix L, additional baseline results, and prompts for focus tuning (D).
Relation To Broader Scientific Literature: Instruction tuning is one of the premier methods used to specialize LLMs, and LLMs are currently a main focus of the machine learning community. This paper targets bias and fairness of IT for LLMs, which is an important topic. The paper has highlighted many works in the space of instruction tuning, aligning LLMs, latent steering, though does not mention bias mitigation and reliance on spurious features in the related work section, though this seems a major focus based on the intro (lines 31+).
Essential References Not Discussed: I believe the authors have done due diligence to cite related work in the paper. However, they do not run comparisons to many of these works, some of which seem to be highly relevant - those in lines 161+. They mention these being white-box methods; however, I think it is still important to make comparisons in order to know whether there is any decrease in performance compared to such methods. Another fair comparison from related work is the RLHF strategies - e.g. positive and negative sentences with focus. Given the easy data generation process, this also seems a relatively straightforward approach that should be discussed. The paper also does not say much about CoT-style approaches, which could similarly be used to potentially mitigate spurious features as discussed.
Other Strengths And Weaknesses: One particular strength I want to highlight is the use of example figures (Figures 1, 2). These help make the paper clear in terms of the problem and examples.
Other Comments Or Suggestions: N/A
Questions For Authors: How are the spurious features chosen to add to the dataset (for example bayesian, pineapple)? What about other features that might be more closely tied to the task?
The exact numbers from the Figures 3-5 are hard to read. Is it possible to include tables with the full results in the Appendix?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## P1: SS Dataset, Keyword Feature Choices, and Theoretical Assumptions
The SS dataset provides a controlled setting to verify FIT’s effectiveness by comparing focus accuracies against theoretical predictions without confounding artifacts present in more complex datasets (e.g., BBQ, SMNLI). These simple-to-complex checks with new methods align with common practice in the ML literature (e.g., [SNGP](https://arxiv.org/pdf/2205.00403), or see [this survey on causal ML](https://arxiv.org/pdf/2206.15475)). Our theoretical assumptions, such as balanced labels, are practical and commonly used; although they could be relaxed, doing so would unnecessarily complicate the analysis, given our goal of ultimately testing FIT on realistic scenarios (BBQ, SMNLI).
In adding “Bayesian” and “pineapple” to SST to create SS, we intentionally chose words that are arbitrary and semantically neutral to avoid sentiment alteration or label mismatches during synthesis. [Prior work](https://aclanthology.org/2024.findings-eacl.68.pdf) has shown that even task-irrelevant words (e.g., “Performances”) can act as spurious features in sentiment datasets, motivating our design. FIT’s effectiveness is not dependent on specific keywords; any neutral words would suffice.
## P2: Dataset Choices and Model Sizes
BBQ was originally tested with UnifiedQA’s 11B model, comparable to or larger than our evaluated models. Although extending FIT to more challenging tasks is a valuable future direction, our current results show even large models struggle (e.g., SMNLI causal accuracies remain below 90% after fine-tuning), highlighting that these tasks are still non-trivial even for current LMs. Moreover, this controlled setting, in which decent base accuracies are achievable, allows clear evaluation of FIT’s effectiveness in dynamically switching focus without confounding factors from extremely low accuracy. To further address the difficulty comment, we demonstrate FIT’s effectiveness in a more complex NLG setting; see response P1 to reviewer ivne.
## P3: On Demonstrating that FIT Does Not Lead to Forgetting
We refer the reviewer to Section 5 (starting at line [391]), where our ablation study on the Alpaca-GPT dataset demonstrates that fine-tuning with FIT causes no over-specialisation or forgetting.
## P4: On Model Size Trends for FIT
We agree with the reviewer that investigating model size trends is important. In our work, we have already examined this upward trend, demonstrating strong and consistent performance from 7B to 13B models (note that the Vicuna-13B-v1.5 model used in our experiments is indeed 13B, not 7B). To further illustrate FIT’s adaptability, we have run an additional experiment to evaluate its performance on the Qwen-2.5-3B-Instruct model. For instance, for focus on causal (focus(C)) prompts, we observed a trained accuracy of approximately 96.3% and an unseen accuracy of around 95.7%. For focus on spurious (focus(S)) prompts, the trained accuracy was about 73.9% with an unseen accuracy near 66.0%. These results demonstrate FIT’s robustness to model size. Full results can be found at [this anonymised document](https://osf.io/5kncm/?view_only=ca4c684c7ee642d7ad7cdcc84c87ea17).
## P5: Dynamic Adaptability and Prior Knowledge
Our experiments on BBQ and SMNLI (Sections 4.2 and 4.3) clearly demonstrate FIT’s dynamic adaptability, enabling models to generalise under distribution shifts and to unseen or shifted features using only provided focus instructions, without pre-identifying spurious features or their spurious labels during inference. While FIT requires initial identification of spurious features for training only, this aligns well with common industry practices for transparency and reliability, can be used with automated methods of spurious feature identification, and doesn’t limit FIT practically (see our existing detailed discussion in Appendix B).
## P6: On Comparing to White-Box Methods and CoT Baselines
White-box methods discussed in related work aren't directly comparable to FIT, as they require model access (white-box) during both training and testing. Moreover, latent steering methods (LSMs) typically require training per feature, whereas FIT teaches a generalizable capability to adapt to unseen features (demonstrated by our BBQ generalization results). Furthermore, note that recent work ([Wu et al., 2025](https://arxiv.org/abs/2501.17148)) shows LSMs significantly underperform compared to SFT, the stronger baseline that we use throughout our paper.
CoT baseline: Using the base Llama-3.1-8B-Instruct model on BBQ, zero-shot CoT focus accuracy for each prompt type was near random (e.g., ~35% for causal, ~33% for spurious), indicating FIT's superiority.
## Conclusion and Final Comments
We thank the reviewer for their constructive comments, which have helped strengthen our paper. We hope our responses adequately address their concerns and remain available for any further clarifications. | Summary: This work presents a training method to improve model steerability w.r.t specific features. The main idea is to add instructions during training about what to focus on and what to ignore. The trained model is evaluated on a modified SST dataset, a modified MNLI dataset, and BBQ, and it demonstrates significantly better steerability in terms of following the instruction to avoid spurious features (even unseen ones in the case of BBQ).
Claims And Evidence: The experiments successfully demonstrate the effectiveness of this method. The high accuracies indicate that different instructions can steer the model to different predictions. Additionally, it is really great to see that on the BBQ dataset, even for untrained features, the steerability seems to be improved quite significantly. That said, I still have slight concerns about how generalizable this approach would be (see below).
Methods And Evaluation Criteria: My main concern about this work is that it works in a very clean setting where the testing set and the training set are in similar distributions, and all these features are also defined in a relatively clean setting. Ideally, the goal of steerability is to be able to steer the model in out-of-distribution data where retraining is hard or expensive to do. If the authors can show how the model's general steerability improves on significantly different tasks or even tasks beyond classification, this work will be substantially more impactful.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: No major issues.
Supplementary Material: No major issues.
Relation To Broader Scientific Literature: This work proposes a method to improve steerability w.r.t specific features. Debiasing the model through contrastive examples and balancing the training set are well-known ideas. This work is a natural step from those works and leverages instruction following the ability of LLMs and showing good performance in the evaluation.
Essential References Not Discussed: No missing essential references.
Other Strengths And Weaknesses: This is in general a well-written paper with clear takeaways. Despite my concern on the further generalizability of this method, I think it already has value in improving steerability in relatively clean and in-distribution settings.
Other Comments Or Suggestions: I'd like to encourage the authors to bring some of the content on how the training set is constructed from the Appendix to the main paper. I find that part to be quite interesting, and I also feel that some early sections can be compressed.
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments regarding our introduction, specifically noting the “significantly better steerability” and overall “effectiveness” of our method. We hope that this rebuttal response addresses your additional comments regarding our paper.
## P1: Showing Further Generalisability of Our Method
We agree that extending beyond classification and MC-QA tasks is highly beneficial for demonstrating that our method can generalise to more complex settings, thereby enhancing the overall impact of our work. To build on our previous results, we have extended FIT to operate in an NLG setting, an environment that poses additional challenges compared to the classification scenarios considered thus far.
**Extension to NLG Setting:** We adapt our BBQ experiments to formulate a new NLG experiment. In this adaptation, we remove the predefined answer options from each prompt, leaving only a context and a question. Consequently, the model must generate open-ended text responses rather than select from a limited set of choices (e.g., a, b, or c). This change forces the model to independently identify and determine the answer within the context based on the question, making the task harder than its MC-QA counterpart.
**Evaluation Methodology:** Assessing correctness in NLG tasks introduces challenges beyond those in classification tasks, due to the need to account for semantically equivalent expressions of the correct response. To evaluate model correctness in a computationally efficient manner, we use the Llama-3.1-8b-Instruct model as a judge [1], which determines if the model’s response is semantically equivalent to the reference answer. We manually verified that this approach provides a good measure of semantic equivalence, aligning well with human judgments.
**Focus Accuracy Results:** We present the focus accuracy results for the same dataset setup as in the original BBQ experiments. Our results include evaluations on both in-distribution (seen) features and out-of-distribution (unseen) features. The trends observed for our Llama model are consistent with those found for the Mistral and Vicuna models.
### LLaMA Focus Accuracy for Seen / Unseen Features
| | $\emptyset$ | focus(C) | focus(C) $\wedge$ ignore(S) | ignore(S) | focus(S) | focus(S) $\wedge$ ignore(C) |
|-----------|---------------------------|---------------------------|----------------------------|----------------------------|----------------------------|-----------------------------|
| Few-shot | 76.07% / 81.33% | 73.78% / 79.00% | 77.67% / 78.67% | 78.20% / 82.67% | 44.44% / 39.67% | 41.69% / 34.00% |
| SFT | **99.31%** / **98.33%** | 98.93% / **97.67%** | 98.32% / 97.00% | 98.32% / **96.67%** | 23.86% / 23.67% | 23.55% / 23.33% |
| FIT | 99.24% / 96.67% | **99.54% / 97.67%** | **99.62% / 97.33%** | **99.54% / 96.33%** | **97.10% / 77.67%** | **97.41% / 79.00%** |
*Summary of P1:* These results indicate that FIT successfully generalises to the more complex NLG setting, exhibiting significant steerability for both in-distribution and out-of-distribution test data. The improvements over the zero-shot, few-shot, and SFT baselines underscore the robustness of our method. Furthermore, we will include the full experimental results in the final version of the paper to further support our generalisability claims. In addition, we plan to modify and extend the future work section of the paper to reflect these updates and to propose new extensions along the lines suggested by the reviewer.
## P2: On Moving Some of the Content of the Training Set Construction to the Main Paper from the Appendix.
We also recognise the value of moving some of the detailed information regarding the dataset construction to the main section of the paper. Where possible, we plan to integrate additional details into the main paper. The final extent of this inclusion will depend on the available space in the final version; however, we will at least provide further context on the training set construction to enhance clarity.
## Conclusion and Final Comments
In summary, our extended NLG experiment demonstrates that FIT can generalise to more complex tasks and perform effectively on out-of-distribution data, which, in line with the reviewer’s suggestion, enhances the potential impact of the paper.
We thank the reviewer for their constructive feedback, which has been useful in further refining our work. We remain available to respond to any further comments or questions.
# References
[1] Gu, Jiawei, et al. "A survey on llm-as-a-judge." arXiv preprint arXiv:2411.15594 (2024). | null | null | null | null | null | null | null | null |
Breaking the Quadratic Barrier: Robust Cardinality Sketches for Adaptive Queries | Accept (poster) | Summary: This work revisits the problem of robust cardinality sketches for adaptive queries and provides improved results. In the classic cardinality sketch, the queries are independent of the sampled sketch, and the sketches can answer an exponential number of queries (in the sketch size $k$). In the adaptive queries setting, a query can be chosen based on the output of previous queries. Existing work shows that these sketches can fail after $\tilde{O}(k^2)$ queries.
The paper breaks this quadratic barrier. It presents an adaptive data analysis framework that builds on the generalization property of differential privacy. It shows that by limiting each element's participation to $\tilde{O}(k^2)$ times, the framework can answer an exponential number of queries. The paper further fits the Bottom-k sketch into this framework, resulting in a cardinality sketch that can answer $\Omega(k^2)$ adaptive queries.
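The bottom-k primitive fitted into this framework can be made concrete with a minimal Python implementation (an illustrative textbook construction only, not the paper's robust estimator; the hash construction, function names, and the $(k-1)/v_k$ estimator are standard choices of ours, not taken from the paper):

```python
import hashlib

def h(key):
    # Deterministic hash of a key to a float in (0, 1).
    d = hashlib.sha256(str(key).encode()).digest()
    return (int.from_bytes(d[:8], "big") + 1) / 2**64

def sketch(keys, k):
    # Bottom-k sketch: the k smallest hash values of the set.
    return sorted({h(x) for x in keys})[:k]

def merge_union(s1, s2, k):
    # The sketch of A ∪ B is computable from the two sketches alone.
    return sorted(set(s1) | set(s2))[:k]

def estimate(s, k):
    # Classic non-adaptive estimator: (k - 1) / v_k, where v_k is the
    # k-th smallest hash; exact when the whole set fit in the sketch.
    if len(s) < k:
        return len(s)
    return (k - 1) / s[k - 1]

k = 256
A, B = range(10_000), range(5_000, 20_000)
est = estimate(merge_union(sketch(A, k), sketch(B, k), k), k)
print(est)  # typically within a few percent of |A ∪ B| = 20,000
```

In the non-adaptive setting this estimator stays accurate for exponentially many independent queries; the adversarial setting studied here concerns queries chosen as a function of earlier outputs.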
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Experiments validate the theoretical findings.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The paper broadens the toolkit for designing robust algorithms.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper is well presented. It introduces the problem and relevant background in a succinct and accurate manner and presents the algorithms in an organized way. The contribution is substantial and offers a clear improvement over previous work in this area.
Other Comments Or Suggestions: N/A
Questions For Authors: I don't have any specific questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your time and comments. | Summary: Cardinality sketches support computing a small sketch/summary of a set of keys S from a universe U. The sketch supports estimating |S|. This problem is trivial without further requirements as one can merely store |S| in log(|U|) bits. A cardinality sketch thus further requires that given sketches of two sets A and B, one can compute the sketch of their union and intersection from the sketches themselves. When analysing the performance of a sketch, the authors consider how many adaptively chosen queries (cardinality estimates) a sketch of size k can process without making a mistake. The paper designs sketches that can handle more than O(k^2) queries with sketch size k. Compared to previous work, the guarantee they give depends on the number of queries that share an element in U, instead of just the number of queries.
Specifically, they claim to design a sketch and an estimator that can handle an exponential number of adaptive queries if each key participates in at most $\tilde{O}(k^2)$ queries. They also claim to extend this result to the case where this condition fails on a small fraction of keys.
They also state that one of their contributions is arguing that one can reduce constructing robust sketches to "adaptive data analysis (ADA)" instead of reducing to differential privacy.
Claims And Evidence: The claims are supported by theoretical proofs. The ones that I read seem correct. Additionally, there is a small simulation study. It is not clear to me how the simulations support their claims.
Furthermore, they claim that it is among their contributions to reduce the problem to adaptive data analysis instead of reducing it to the usual differential privacy. However, they then solve the adaptive data analysis problem using differential privacy, so it's not clear to me that their approach is as novel as claimed.
Methods And Evaluation Criteria: The comparisons to theoretical results in the literature are appropriate.
The empirical evaluation using simulated synthetic data, however, is not sufficiently explained. Hence, it is difficult to determine if it's appropriate or not.
Theoretical Claims: I checked all the proofs on pages 1-8 except for the proofs of the following which I merely skimmed:
lemma 3.1, claim 3.2, corollary 3.3.
All the proofs seem correct.
Experimental Designs Or Analyses: As mentioned above, it is unclear to me what the "empirical evaluation" actually says about their results, so I cannot determine if they are valid or not.
Supplementary Material: None
Relation To Broader Scientific Literature: Their algorithm and analysis are based on the privacy of their Algorithm 1 (Cohen & Lyu 2023), and they claim to break the "quadratic barrier" given by Cohen et al. (2024).
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The paper is relatively easy to follow and so are the proofs. I think the results are nice and seem like they could be useful.
The biggest weakness is the section with the simulated data and the plots, which I think would benefit from a better explanation.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your time and comments.
**> The biggest weakness is the section with the simulated data and the plots, which I think would benefit from a better explanation.**
We will make sure to add a more clear explanation of the empirical evaluation for the full version of the paper. To explain briefly here, the plots show the gain of our "tracking" robust estimator (Algorithm 4, which keeps track of the number of times each key is used) over our "basic" robust estimator (Algorithm 3), and over the baseline from prior work. The plots show how long the theoretical guarantee of each algorithm lasts, in the case where the query sets are drawn from some selected natural distributions. The plots demonstrate that, indeed, for these distributions, Algorithm 4 is likely to maintain its guarantee for much longer than Algorithm 3, which in turn is likely to outperform the baseline from prior work.
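The per-key bookkeeping behind this tracking idea can be illustrated schematically (a minimal sketch of our own, not the paper's Algorithm 4; the class name and the hard retirement rule are invented for illustration):

```python
class KeyBudget:
    """Schematic per-key participation budget: each key may take part
    in at most r answered queries before it is retired."""

    def __init__(self, r):
        self.r = r
        self.uses = {}

    def charge(self, keys):
        # Keep only keys still within budget, and charge one use to each.
        live = [x for x in keys if self.uses.get(x, 0) < self.r]
        for x in live:
            self.uses[x] = self.uses.get(x, 0) + 1
        return live

budget = KeyBudget(r=2)
print(budget.charge(["a", "b"]))  # ['a', 'b'] — both keys still live
print(budget.charge(["a", "b"]))  # ['a', 'b'] — both keys still live
print(budget.charge(["a", "b"]))  # [] — budget of 2 exhausted
```

Once a key exhausts its budget it stops contributing, so how long the guarantee lasts is governed by per-key participation rather than by a global query count — which is what the plots compare.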
**> Reduce the problem to ADA instead of the usual DP... then solve using DP. So it's not clear to me that their approach is as novel as claimed.**
By now we know of several approaches to solve the ADA problem. DP is definitely one of the prominent approaches, but it is not the only approach. In particular, as Blanc (2023) showed, the ADA problem can be solved using "subsampling" (without adding noise to the responses directly). Even though in our paper we do use DP at the end of the day to solve the resulting ADA problem, our new reduction shows that if tomorrow someone will come up with yet another solution to the ADA problem (maybe with some additional properties / providing more fine-tuned guarantees) then this new solution could potentially be applied, via our reduction, to design new robust sketches. | Summary: The paper addresses the problem of sketching under the additional constraint that each key appears in at most $r$ queries. This setting generalizes prior work by introducing a finer-grained robustness notion based on per-key participation rather than the total number of queries. The main result establishes that, under this assumption, the required sketch size should be proportional to $\sqrt{r}$, rather than the sqrt of the number of queries that can be answered accurately. The authors provide theoretical justification for this claim and propose a robust estimation procedure based on per-key tracking.
## update after rebuttal
I believe the paper still requires substantial revision, including the overall structure of the paper, clarity of exposition, problem formulation, and experimental depth. I will maintain my original score.
Claims And Evidence: The primary claim made by the paper is that the sketch size should be order of $\sqrt{r}$ rather than a sqrt of the total number of queries. This result is formalized in Theorem 4.1. The theorem appears well-supported, and the authors provide proofs and an empirical evaluation. However, the paper lacks a rigorous, self-contained problem formulation, making it difficult for those unfamiliar with sketching literature to follow the argument effectively. The definition of "query" is ambiguous in the introduction and should be clarified.
Methods And Evaluation Criteria: The proposed methods involve a modification of bottom-$k$ sketches by introducing per-key participation tracking and a robust estimation procedure. While the techniques seem reasonable, the exposition is convoluted, making it difficult to assess the practical implications. The evaluation is based on synthetic data and demonstrates the improvement over prior methods in terms of the number of adaptive queries the sketch can support. However, a more extensive comparison with standard adaptive sketching techniques would strengthen the argument.
Theoretical Claims: Theorem 4.1 states that robustness in the per-key participation model requires a sketch size of $\sqrt{r}$. While the proof structure appears correct at a high level, I have not verified the details rigorously. Given the importance of this result, providing a clear high-level overview would be beneficial while having proof details in the appendicies.
Experimental Designs Or Analyses: The experiments are conducted on toy datasets, which provide some intuition for the effectiveness of the approach but lack depth.
Supplementary Material: Due to my role as an emergency reviewer, I am unable to review the supplementary materials at this time.
Relation To Broader Scientific Literature: The paper situates itself within the literature on cardinality estimation and sketching but does not provide sufficient background for readers unfamiliar with the area.
Essential References Not Discussed: I am not familiar with sketching literatures.
Other Strengths And Weaknesses: A major weakness of the paper is its organization and clarity. The structure is unfriendly to readers unfamiliar with sketching. A more rigorous and formal problem definition should be included early on. The term "query" is not well-defined in its initial usage, making the motivation unclear.
Other Comments Or Suggestions: see the Strengths and Weaknesses section.
Questions For Authors: see the Strengths and Weaknesses section.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your time and comments.
**> The paper lacks a rigorous, self-contained problem formulation, making it difficult for those unfamiliar with sketching literature to follow the argument effectively. The definition of "query" is ambiguous in the introduction and should be clarified.**
We will make sure to define the model more clearly in the introduction. To clarify here, the model is that there is an adversary selecting a set V (the "query"), and the algorithm receives the sketch S(V) and must compute an estimate of the cardinality |V| and return this to the adversary. This process is repeated over multiple rounds.
**> a more extensive comparison with standard adaptive sketching techniques would strengthen the argument.**
We will survey additional related results. Prior to our work, all existing approaches to adaptive cardinality sketching obtained worst-case bounds, supporting at most a quadratic number of queries. The blue line in our plots ("baseline") captures the best prior result, showing our improvements.
**> The experiments are conducted on toy datasets, which provide some intuition for the effectiveness of the approach but lack depth.**
We agree that we used synthetic datasets for our evaluations. The overall goal of this paper is to present a theoretical framework and provide proven guarantees, which we consider to be the main contribution of our work.
**> The exposition is convoluted... does not provide sufficient background... Providing a clear high-level overview would be beneficial...**
Thank you for the feedback. For the revised version we will make sure to make the explanations, especially in the introduction, more clear and accessible, and properly define what a query is. | null | null | null | null | null | null | null | null |
Towards the Efficient Inference by Incorporating Automated Computational Phenotypes under Covariate Shift | Accept (poster) | Summary: This paper explores the integration of automated computational phenotypes (ACPs) in semi-supervised learning settings. ACPs are used to derive phenotype data from electronic health records (EHRs) using machine learning models, reducing the labor-intensive nature of manual phenotype extraction. However, direct replacement of gold-standard phenotype data with ACPs can introduce bias.
Claims And Evidence: The paper provides rigorous mathematical justifications and asymptotic efficiency analyses to show the advantages of using ACPs.
The empirical results from simulations and a real-world application to diabetes prediction support the claim that ACPs improve inference efficiency. The paper assumes that the ACP prediction errors do not introduce systemic biases beyond covariate shift, which might not always hold in real-world scenarios. A sensitivity analysis could be conducted to investigate this.
However, the practical effectiveness of these methods in more complex, real-world medical settings (beyond a single dataset) could be better validated.
Methods And Evaluation Criteria: The paper defines clear evaluation metrics (asymptotic efficiency bounds, mean squared error comparisons).
They compare scenarios with and without ACPs to show how the additional data impacts estimation.
The use of cross-fitting for nuisance parameter estimation ensures robustness when applying machine learning methods.
It does not extensively discuss computational trade-offs (e.g., increased complexity due to doubly robust estimation).
Theoretical Claims: I didn't see any issue.
Some technical assumptions (e.g., regularity conditions on nuisance estimators, independence assumptions for ACP generation) are not explicitly tested.
Experimental Designs Or Analyses: No issue, would be better to have more experiments.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper is well-grounded in existing literature on: Semi-supervised learning (e.g., Zhu 2005, Rigollet 2006, Wang et al. 2022).
Inference with predicted data (prediction-powered inference, PPI) (Angelopoulos et al. 2023).
Covariate shift and domain adaptation (Sugiyama et al. 2008, Gretton et al. 2009). Surrogacy in biostatistics and causal inference (Athey et al. 2019, Imbens et al. 2024).
The paper’s main contribution is combining semi-supervised learning, covariate shift adjustments, and ACPs in a unified framework. The paper does not extensively compare its approach to existing robust semi-supervised estimation methods beyond theoretical efficiency arguments.
Essential References Not Discussed: The authors can perhaps also discuss the relationship with the paper Doubly Robust Calibration of Prediction Sets under Covariate shift (Yang, Kuchibhotla, Tchetgen Tchetgen 2024), which is highly relevant to their work.
Other Strengths And Weaknesses: The efficiency analysis is mathematically well-founded and there is a experiment on both real world and synthetic datasets.
Some of the assumptions are too strong.
Other Comments Or Suggestions: None
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *Q1: [Claims and Evidence]*
**Response:**
- Thank you for this insightful comment! To understand the benefits of incorporating ACP $\hat Y$, we introduce an assumption with a format similar to covariate shift, as presented in Lines 181-192. In real-world scenarios, this assumption can be tested when $Y$ is available in the unlabeled data (e.g., in the income, politeness, wine datasets) but not when $Y$ is unavailable (e.g., in the diabetes dataset). In cases where the assumption cannot be tested, we agree that conducting a sensitivity analysis could be valuable.
- We also agree that evaluations on more datasets are worthwhile. We provided a more comprehensive response in our answer to Q2 of reviewer mNmv and to Q4 of reviewer 6g7c. Please kindly refer to that section for details. Briefly we newly analyze three datasets. For each of them, we present the comparison results of confidence interval length for regression coefficients in Table 1 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing). The proposed method is generally a lot more efficient than PPI, PPI++, and RePPI, with the largest reduction reaching approximately 60%.
*Q2: [Methods and Evaluation Criteria]*
**Response:**
- This is definitely an excellent point that warrants further discussion in our revised version. In the presence of covariate shift, one must account for two sources of nuisance functions: the density ratio model and the regression model. Estimating both sources of nuisance functions is computationally more complex than estimating just one, as in some alternative methods. However, the inclusion of both sources forms the basis of double robustness, which is essential for achieving efficient estimation. This represents a tradeoff between estimation efficiency and computational complexity. In cases where the mechanism for selecting labels is known, such as in certain design-based studies, the proposed method only needs to estimate one source of nuisance function.
*Q3: [Theoretical Claims]*
**Response:**
- The regularity conditions on nuisance estimators are standard and widely advocated in the double/debiased ML literature, e.g., van der Laan (2011), Chernozhukov et al. (2018), Kennedy (2024), Chernozhukov et al. (2024). The convergence rate is achievable for many ML methods such as regression trees and random forests (Wager and Walther 2015) and a class of neural nets (Chen and White 1999). One can refer to Chernozhukov et al. (2018) for more examples.
- As we briefly mentioned in our answer to Q1, the independence assumption for ACP generation can be tested when $Y$ is available in the unlabeled data but not otherwise. When it cannot be tested, one can decompose the assumption as $p(\hat y|x) = q(\hat y|x)$ and $p(y|x, \hat y) = q(y|x,\hat y)$. It is clear that the untestable part $p(y|x, \hat y)=q(y|x,\hat y)$ is similar to the covariate shift assumption $p(y|x) = q(y|x)$. Note that this assumption is widely adopted in semi-supervised learning and distribution shift settings.
*Q4: [Experimental Designs or Analyses]*
**Response:**
- We implemented our proposed method and compared with some alternatives on three new datasets (income, politeness, wine). We provided a more comprehensive response in our answer to Q2 of reviewer mNmv, to Q4 of reviewer 6g7c, and to Q1 above. Briefly, we present the comparison results of confidence interval length for regression coefficients in Table 1 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing). The proposed method is generally a lot more efficient than PPI, PPI++, and RePPI, with the largest reduction reaching approximately 60%.
- We also expanded our simulation studies by comparing the proposed method with PPI, PPI++ and RePPI. We provided a more comprehensive response in our answer to Q2 of reviewer 6g7c. Briefly, the MSE results are presented in Tables 2 and 3 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing). Across all considered scenarios, the proposed method consistently outperforms the three alternatives.
*Q5: [Relation to Broader Scientific Literature]*
**Response:**
- As in our answers to Q1 and Q4 above, we extensively compare the proposed method with PPI, PPI++ and RePPI in both real data and simulated data.
*Q6: [Essential References Not Discussed]*
**Response:**
- Thank you for pointing out this important reference! Indeed, Yang et al. (2024) studies how to calibrate prediction sets in the presence of covariate shift and proposes a doubly robust approach to enhance the reliability and coverage of predictions. We also realized that there is work, such as Qiu et al. (2023), that explores prediction sets adaptable to unknown covariate shift. In the revised version, we will be sure to include these relevant, important, and exciting works!
*Q7: [Other Strengths And Weaknesses]*
**Response:**
- Same as Q3 above.
---
Rebuttal Comment 1.1:
Comment: The additional experiments are satisfactory, I would like to change my score to 4.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer pTuV,
Thank you once again for your thoughtful review of our work, your insightful comments, and for increasing your score from 3 to 4, as indicated in your response to the rebuttals.
We noticed, however, that the submission summary page still reflects your original score of 3. We’re wondering if the score change might not have been updated in the system.
Could you kindly take a moment to check and ensure the updated score is reflected? We truly appreciate your time and support.
Sincerely,
The Authors | Summary: This paper introduces a semiparametric framework for efficient inference under covariate shift by leveraging automated computational phenotypes (ACPs). The authors propose a doubly robust, semiparametrically efficient estimator for a target parameter $\beta$ by integrating ACPs, density ratios, and conditional expectations.
Theoretical results show that when ACPs offer extra predictive information beyond $\boldsymbol{X}$, their inclusion strictly reduces the estimator's asymptotic variance. Simulation studies and a diabetes case study empirically validate that the proposed method achieves lower mean squared error and yields narrower confidence intervals compared to methods without ACPs.
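A toy numerical illustration of the doubly robust template (our own minimal sketch for estimating a target-population mean under covariate shift, not the paper's estimator of $\beta$; the Gaussian design and the known density ratio are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Source (labeled) and target (unlabeled) covariates: a covariate shift.
x_src = rng.normal(0.0, 1.0, n)
y_src = x_src + rng.normal(0.0, 0.5, n)   # p(y|x) shared across domains
x_tgt = rng.normal(0.5, 1.0, n)

# Outcome regression m(x) = E[Y|X=x], fit on the source sample.
A = np.stack([np.ones(n), x_src], axis=1)
coef, *_ = np.linalg.lstsq(A, y_src, rcond=None)
m = lambda x: coef[0] + coef[1] * x

# Density ratio w(x) = q(x)/p(x); known here, estimated in practice.
def w(x):
    return np.exp(-(x - 0.5) ** 2 / 2) / np.exp(-x ** 2 / 2)

# Doubly robust estimate of E_q[Y]: plug-in on the target sample plus a
# density-ratio-weighted residual correction on the source sample.
mu_hat = m(x_tgt).mean() + (w(x_src) * (y_src - m(x_src))).mean()
print(mu_hat)  # close to the true target mean of 0.5
```

The estimate remains consistent if either the outcome regression or the density ratio is correctly specified, which is the double robustness the paper exploits.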
Claims And Evidence: Yes. Overall, the paper’s claims are supported by theoretical derivations and experiments.
Methods And Evaluation Criteria: Yes. I think the use of ACP and covariate shift problem should be a typical problem. And the experiment on real dataset looks like a reasonable setting.
However, the author should have indicated what $\boldsymbol{X}$, $\boldsymbol{Y}$, and $\boldsymbol{Z}$ mean in those experiments, especially for the diabetes experiment.
Theoretical Claims: I reviewed the theoretical claims provided in the paper.
I did not perform a line-by-line check of every technical detail.
I think the proofs for the main theoretical claims appear correct.
Experimental Designs Or Analyses: 1. The simulations compare non-ACP and using-ACP settings. While this is useful for demonstrating efficiency gains, a comparison with alternative methods from the literature (e.g., Prediction-Powered Inference) might further validate the practical benefits of the approach.
2. Why is SuperLearner used? Are there other options?
3. For real dataset, additional different datasets or disease contexts would strengthen the generalizability of the results.
4. For the diabetes dataset, how well does the covariate shift assumption hold?
5. For the diabetes dataset, what do $\boldsymbol{X}$, $\boldsymbol{Y}$, and $\boldsymbol{Z}$ mean?
Supplementary Material: No. I suggest the authors indicate in the main text where the reviewers should look for the needed details in the supplementary material.
Relation To Broader Scientific Literature: The paper’s contributions are related to established statistical theories such as semiparametric theory, double robustness, and semi-supervised learning under covariate shift. This paper focuses specifically on the use of ACP.
This paper also relates to some work about data imputation, the use of pseudo-labeled data, and epistemic uncertainty.
Essential References Not Discussed: [1] proposes an approach for semi-supervised learning algorithms that can address covariate shift. Their framework also recovers some popular methods, including entropy minimization and pseudo-labeling.
[2] discussed the use of pseudo-labeled data.
[1] Aminian, Gholamali, et al. "An information-theoretical approach to semi-supervised learning under covariate-shift." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
[2] Rodemann, Julian, et al. "In all likelihoods: Robust selection of pseudo-labeled data." International Symposium on Imprecise Probability: Theories and Applications. PMLR, 2023.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: In line 423-425, "We adopted a logistic regression to predict diabetes status with all selected variables,
and use the regression coefficients as association measures"
What linear/logistic regression is used? Did the authors try different models?
What kinds of models can be analyzed with the proposed theory? For example, can large deep learning models be explored with this theory? What about MLP-based regressors or classifiers?
Can $\boldsymbol{X}$ be images or other modalities?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: *Q1: [Methods and Evaluation Criteria]*
**Response:**
- Thank you. We provided a more comprehensive response in our answer to Q1 of reviewer mNmv. Please kindly refer to that section for details.
*Q2: The simulations...[Experimental Designs or Analyses]*
**Response:**
- Yes. We conducted comparisons between the proposed method and PPI, PPI++, and RePPI across a variety of settings with different values of $n$, $N$, $\alpha$, and $\zeta$. The MSE results are presented in Tables 2 and 3 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing). As you can see, across all considered scenarios, the proposed method consistently outperforms the three alternatives.
*Q3: Why SuperLearner...*
**Response:**
- We use Super Learner in our implementation, which integrates a library of flexible statistical learning tools to ensure consistent estimation of relevant nuisance functions. While these functions can be estimated using simple parametric models, such models are prone to misspecification, potentially introducing bias and reducing efficiency. More flexible machine learning methods mitigate this issue by avoiding strict parametric assumptions and enabling consistent estimation across a broader range of function classes. Instead of relying on a single estimation method, Super Learner leverages the strengths of multiple algorithms by constructing an optimally weighted combination, offering greater robustness and flexibility. Moreover, van der Laan et al. (2007) established a theoretical guarantee that the estimation error of Super Learner converges to that of the best-performing learner in the ensemble.
- Alternatively, we may use individual learning algorithms, such as random forests or XGBoost. Neural networks are also a potential option, though the size of labeled data in our dataset---such as the diabetes data---is relatively small compared to typical deep learning applications.
*Q4: For real dataset...*
**Response:**
- Yes, we completely agree with you. We provided a more comprehensive response in our answer to Q2 of reviewer mNmv; please kindly refer to that section for details. Briefly, we newly analyzed three datasets. For each of them, we present the comparison results of confidence interval length for regression coefficients in Table 1 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing). The proposed method is generally much more efficient than PPI, PPI++, and RePPI, with the largest reduction reaching approximately 60\%.
*Q5: For the diabetes dataset...*
**Response:**
- Figure 2 in our submitted paper demonstrates that the distribution of inpatient visit count indeed shifts between labeled and unlabeled data. To provide a more comprehensive comparison, Table 4 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing) presents the summary statistics of each variable in $X$. Overall, the covariate shift assumption appears reasonable in this dataset.
*Q6: For the diabetes dataset...*
**Response:**
- Same as Q1 above.
*Q7: [Supplementary Material]*
**Response:**
- Thank you for the suggestion, and we apologize for overlooking this issue. In the new version, we will make sure to appropriately reference the corresponding details in the supplementary material within the main paper.
*Q8: [Essential References Not Discussed]*
**Response:**
- Thank you for pointing out these references! Both are highly exciting works. In the new version, we will be sure to include them and conduct a more thorough literature review, particularly on the topics of covariate shift and pseudo-labeling.
*Q9: [Questions for Authors]*
**Response:**
- In the diabetes dataset, the scientific goal is to understand the relationship between $Y$ (diabetes status) and $X$ (7 variables representing patient characteristics). Since $Y$ is binary, we use logistic regression, which is arguably the most commonly used model for this type of analysis. More generally, other suitable regression or classification models, such as probit regression or support vector machines, could also be applied in this context.
- In a broader sense, as long as the parameter of interest can be defined as the minimizer of a loss function, as presented in Lines 144-146, the proposed method can be used. This includes the case of MLP-based regressors or classifiers, high-dimensional models, or large deep learning models in general. To address this question more explicitly, we explore the implementation of the proposed method on the benchmark dataset MNIST, where $X$ is an image and $\hat Y$ is generated using ResNet, and compare it with the standard PPI method for estimating the probability of the outcome being a certain label. In terms of the length of the 95\% confidence interval (the shorter, the greater the efficiency gain), the proposed method shortens the interval by about 15\% (from 0.0448 to 0.0376 when $Y$=1/7, and from 0.0440 to 0.0370 when $Y$=6/8).
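The "minimizer of a loss function" framing above can be sketched concretely. The snippet below is an illustration on synthetic data (not the authors' code): it recovers logistic-regression coefficients as the minimizer of the log-loss via a plain Newton-Raphson iteration, the simplest instance of the M-estimation setup the rebuttal describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([-0.5, 1.0])
Y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Newton-Raphson for the logistic log-loss: beta minimizes the negative log-likelihood
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)  # Fisher-information weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (Y - p))

print(beta)  # close to beta_true
```

Any model whose parameter is defined this way (probit, SVM with a convex surrogate loss, or an MLP head) slots into the same template, which is why the framework extends beyond logistic regression.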
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My questions are solved. I appreciate the additional experiments and illustrations about the dataset and experiment setting. I have increased my score to 4.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 6g7c,
Thank you so much for your acknowledgement, your thoughtful reviews of our work, and for increasing the score! We really appreciate it!
Sincerely,
The Authors | Summary: This paper proposes an approach that leverages both labeled and unlabeled data to estimate target parameters. The approach first uses the pre-trained model to estimate $Y$ for the unlabeled data and then uses the estimated $\hat{Y}$ to estimate the target parameters. The proposed approach is applied to both simulated and real-world data to demonstrate the efficiency gain as compared to the benchmark approach.
Claims And Evidence: The proposed approach is supported by theoretical analysis and empirical study.
Methods And Evaluation Criteria: It makes intuitive sense to augment the labeled data with unlabeled data to estimate target parameters. The paper shows in Section 3.2 that the unlabeled data can improve the estimation efficiency of target parameters. However, it is a bit unclear when the efficiency gain can actually happen, i.e., when the generation of $Y$ depends on covariates $X$ and other variables $Z$. Could the authors provide a concrete example? What are $X$, $Z$, and $Y$? What happens when $Z$ is used to generate $Y$ but is not included in $X$ itself?
Theoretical Claims: The theoretical claims look correct to me.
Experimental Designs Or Analyses: The experimental results make sense. However, I would appreciate more real-world experiments (on other data sets) to demonstrate that the proposed approach has a substantial efficiency gain.
Supplementary Material: I reviewed the proofs, and they look correct to me.
Relation To Broader Scientific Literature: This paper is related to the literature on semi-supervised learning and prediction-powered inference.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: Please address my comments on methods and experiments.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *Q1: However, it is a bit unclear...[from Methods and Evaluation Criteria]*
**Response**:
- Thanks for raising this question. In general, the outcome $Y$ and covariate $X$ are variables of scientific interest. The variable $Z$ represents additional information that may not be of direct scientific interest but is highly relevant to $Y$.
- For example, in our diabetes dataset, $Y$ indicates whether a patient has diabetes, and $X$ includes 7 variables measured at the year of diagnosis: age, type of insurance (self-pay vs others), counts of inpatient and outpatients visits, BMI, congestive heart failure (yes vs no), and the Charlson Comorbidity Index (CCI, larger than 2 or not). Our primary scientific interest is in understanding the relationship between $Y$ and $X$, which captures patient characteristics. Meanwhile, ACP $\hat Y$ is generated by a tree-based algorithm using diagnosis codes, medication history, and HbA1c lab test results from electronic health records---these variables form $Z$.
- In another example using newly analyzed income data, we fit a least squares model to study the relationship between $Y$ (log-income) and $X$ (age, sex). The ACP $\hat Y$ is obtained from an XGBoost model predicting log-income based on 14 variables, including education, marital status, citizenship, and race, among others---these variables constitute $Z$.
- In some cases where $\hat Y$ is generated by LLMs, defining $Z$ precisely is more challenging. For instance, in our politeness dataset, the goal is to study the relationship between $Y$ (politeness score) and $X$ (a binary indicator of hedging in requests), while $\hat Y$ is generated by OpenAI's GPT-4o mini model. Here, $Z$ represents some latent information used to produce $\hat Y$ but is not as explicitly defined as in previous examples. Similarly, in our wine dataset, the goal is to examine the relationship between $Y$ (rating) and $X$ (price and region), with $\hat Y$ also generated by GPT-4o mini.
- Finally, in a synthetic data example, consider the following data generating process $Y = \xi^T X + \alpha Z + \varepsilon$, where $\varepsilon \sim N(0,1)$, $\alpha>0$, as in our simulation studies. Suppose the parameter of interest is $\xi$, and $\hat Y = Z$. The magnitude of $\alpha$ determines whether efficiency gain occurs: as long as $\alpha\neq 0$, $Z$ correlates with $Y$, leading to an efficiency gain.
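The efficiency-gain claim in this synthetic example can be checked with a short Monte Carlo. The sketch below is an illustration only, not the paper's estimator: it targets the mean of $Y$ with a control-variate correction using $Z$ (with a large unlabeled sample standing in for $E[Z]$) rather than estimating $\xi$, but it exhibits the same mechanism: whenever $\alpha \neq 0$, $Z$ correlates with $Y$ and exploiting it reduces variance.

```python
import numpy as np

rng = np.random.default_rng(1)
xi, alpha = 1.0, 1.5           # alpha != 0 means Z carries information about Y
n, N, reps = 200, 20000, 2000  # labeled size, unlabeled size, Monte Carlo replications

naive, augmented = [], []
for _ in range(reps):
    X = rng.normal(size=n)
    Z = rng.normal(size=n)
    Y = xi * X + alpha * Z + rng.normal(size=n)  # labeled data: (X, Z, Y)
    Z_unlab = rng.normal(size=N)                 # unlabeled data: Z only
    c = np.cov(Y, Z)[0, 1] / np.var(Z, ddof=1)   # estimated control-variate coefficient
    naive.append(Y.mean())
    augmented.append(Y.mean() - c * (Z.mean() - Z_unlab.mean()))

print(np.var(naive), np.var(augmented))  # augmented variance is smaller when alpha != 0
```

Setting `alpha = 0` makes the correlation vanish, and the two empirical variances coincide up to Monte Carlo noise, matching the condition stated above.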
*Q2: However, I would appreciate...[from Experimental Designs or Analyses]*
**Response**:
- Yes. Thank you for this question. We implemented the proposed method on three new data sets, and compared with some existing methods: PPI (Angelopoulos et al. 2023a), PPI++ (Angelopoulos et al. 2023b) and RePPI (Ji et al. 2025).
- Income data: We analyze the relationship between wage (measured by log-income) and age, confounded by sex, based on US census data, under covariate shift. The ACP $\hat Y$ is generated by fitting XGBoost of log-income with 14 variables, including education, marital status, citizenship, and race. For covariate shift, we partition the labeled and unlabeled datasets using the probability $\exp(\alpha^T X)/\{1+\exp(\alpha^T X)\}$ where $\alpha=(0,1,0)$ and $X =(1,X_1,X_2)$ with $X_1$ age and $X_2$ sex, resulting in a ratio of 2:8.
- Politeness data: Using data comprising texts from 5,512 online requests posted on Stack Exchange and Wikipedia, we study the association between politeness score (ranging from 1 to 25) and a binary indicator for hedging within the request. The ACP $\hat Y$ is generated using OpenAI's GPT-4o mini model and has the same range as the politeness score. For covariate shift, we split the labeled and unlabeled data in a 1:9 ratio, following the same procedure as above, where $X=(1,X_1)$ with $X_1$ hedge and $\alpha=(0,1)$.
- Wine data: Using the Wine Enthusiast review dataset, we investigate the association between wine rating (ranging from 80 to 100) and wine price, adjusted by wine region. Similar to the politeness data, the ACP $\hat Y$ is generated by OpenAI's GPT-4o mini model, which produces predicted ratings on the same scale. To assess covariate shift, we follow the same procedure as in the previous experiments, splitting the labeled and unlabeled data in a 3:7 ratio. Here, $X=(1,X_1,X_2,X_3,X_4,X_5)$, where $X_1$ represents price and $X_2$ to $X_5$ represent California, Washington, Oregon, and New York, respectively, with $\alpha=(0,1,0,0,0,0)$.
- For each of these three data sets, we present the comparison results of confidence interval length (the shorter, the more efficient, the better) for regression coefficients in Table 1 of [More Results](https://drive.google.com/file/d/1ihn6OAqb26TZ0KVaelSpWO1bQzqGDQaH/view?usp=sharing).
As you can see, while in some cases the proposed method is slightly less efficient than PPI++ or RePPI, it is generally much more efficient than PPI, PPI++, and RePPI, with the largest reduction in confidence interval length reaching approximately 60%.
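The splitting scheme used in all three experiments can be sketched in a few lines. The snippet below uses hypothetical standardized covariates mimicking the income example (the exact 2:8 ratio above presumably involves additional subsampling); it shows how assigning labels with probability $\exp(\alpha^T X)/\{1+\exp(\alpha^T X)\}$ induces covariate shift by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
age = rng.normal(size=n)             # standardized age (placeholder covariate)
sex = rng.binomial(1, 0.5, size=n)   # binary sex indicator (placeholder covariate)
X = np.column_stack([np.ones(n), age, sex])
alpha = np.array([0.0, 1.0, 0.0])    # shift driven by age, as in the income example

# labeling probability exp(alpha^T x) / (1 + exp(alpha^T x))
p = 1 / (1 + np.exp(-X @ alpha))
labeled = rng.binomial(1, p).astype(bool)

# labeled units skew toward larger age values -> covariate shift by construction
print(age[labeled].mean(), age[~labeled].mean())
```

Because the labeling probability depends only on $X$, the conditional law of $Y$ given $X$ is unchanged across the two sets, which is exactly the covariate shift assumption.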
TeDS: Joint Learning of Diachronic and Synchronic Perspectives in Quaternion Space for Temporal Knowledge Graph Completion | Accept (poster) | Summary: This paper proposes TeDS, a temporal knowledge graph completion method considering both diachronic and synchronic flows within and between temporalized facts. Extensive experiments demonstrate that TeDS is capable of achieving state-of-the-art performance across multiple benchmarks.
Claims And Evidence: The performance improvement compared with baselines is more marginal on the first three datasets (Table 2); could the authors explain this phenomenon?
Methods And Evaluation Criteria: Yes, the paper is easy to follow, with clear writing and well-presented technical details.
Theoretical Claims: I didn't check all the proofs.
Experimental Designs Or Analyses: TPComplEx seems to be a very strong baseline, so could the authors use its source code to produce results on the remaining three datasets? It would be very helpful for making a fairer comparison.
Supplementary Material: It seems the authors did not upload their source code and did not commit to releasing the code after acceptance.
Relation To Broader Scientific Literature: Modeling the TKGC problem from both diachronic and synchronic perspectives is novel and well motivated. The use of quaternion theory is also interesting.
Essential References Not Discussed: There may exist some related works this paper has not included.
Other Strengths And Weaknesses: In Section 3.2, the authors may briefly introduce quaternions and the Hamilton rules with examples for readability.
Other Comments Or Suggestions: The font size of most figures is overly small.
Questions For Authors: Please see above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for all your valuable comments. Note: the pictures and tables used in this response are available at https://anonymous.4open.science/r/TEDS-033A/To_Re_rpDA.pdf (To_Re_rpDA.pdf).
Q1: The performance improvement compared with baselines is more marginal on the first three datasets (Table 2); could the authors explain this phenomenon?
A1: TeDS gets consistent performance improvements across three benchmark datasets—ICEWS14, ICEWS05-15, and ICEWS18—with particularly notable enhancements on ICEWS18. This disparity primarily stems from ICEWS18 containing approximately ten times more facts than ICEWS05-15 at the same temporal density, significantly increasing the complexity of temporal context modeling. By effectively capturing intricate dependencies among high-density facts through its Synchronous Perception (SP) mechanism, TeDS attains optimal performance on this dataset.
Furthermore, TeDS demonstrates even more pronounced advantages on larger-scale TKGs with greater challenges, including YAGO11k, Wikidata12k, and GDELT. For YAGO11k and Wikidata12k, which feature long temporal spans and distinct long-tail distributions, TeDS's Diachronic Perception (DP) employs dynamic smooth embedding techniques to simultaneously model short-term fluctuations and long-term evolutionary trends, effectively addressing the challenges of sparse temporal data modeling. Meanwhile, on the extremely fact-dense GDELT dataset, TeDS significantly enhances capture of complex temporal patterns by collaboratively optimizing diachronic and synchronous feature representations. These comparative results fully demonstrate TeDS's unique strengths in handling high-density facts and complex temporal dependencies, with its synchronous and diachronic perception capabilities being more fully realized.
Q2: TPComplEx seems to be a very strong baseline, so could the authors use its source code to produce results on the remaining three datasets? It would be very helpful for making a fairer comparison.
A2: For comparison, we reproduce TPComplEx (selecting the best result from rank $\in$ {1000, 1500, 2000}) on Wikidata12k, YAGO11k, and ICEWS18. Tables 1 and 2 show that TeDS outperforms TPComplEx across all datasets, with significant improvements on Wikidata12k and YAGO11k, which we attribute to TeDS's modeling of temporal scenarios.
Besides, we compare the two methods under strongly constrained datasets in the Appendix of our manuscript (B.6, Performance comparison between TPComplEx and TeDS).
Finally, we perform a computational complexity comparison between TPComplEx and TeDS. Table 3 shows TeDS's superior training speed: 43% faster per epoch on Wikidata12k (5.63s vs. 9.87s) and 35% faster on YAGO11k (2.82s vs. 4.33s) compared to TPComplEx. Crucially, these speedups are achieved alongside dramatic parameter reductions (e.g., 90% fewer parameters on YAGO11k). The marginally longer runtime on ICEWS05-15 (30.75s vs. 22.06s) is justified by TeDS using only 27% of baseline's parameters—a favorable tradeoff for memory-constrained applications. Table 4 reveals TeDS's most striking advantage: achieving superior performance with just 20.64M parameters versus TPComplEx's 201.2M on Wikidata12k—a 10× improvement in parameter efficiency.
The above analysis demonstrates that TeDS is particularly well-suited for large-scale TKG applications, maintaining competitive performance while significantly reducing memory overhead and computational resource requirements.
Q3: It seems the authors did not upload their source code and did not commit to releasing the code after acceptance.
A3: All source code used for conducting and analyzing the experiments will be publicly available upon the publication of the paper under a license that permits free use for research purposes.
Q4: It may exists some related works this paper hasn't included.
A4: We have added more important references to our paper (see our response to Reviewer C2mh's A2 for details). We continue collecting the latest (2025) references (e.g., MTE, Neo-TKGC, and GLARGCN) to further validate TeDS's superiority (Tables 5 and 6).
Q5: In Section 3.2, the author may briefly introduce Quaternion and Hamilton rule with examples for readibility.
A5: Based on your detailed comments, we have added the following content: the quaternion is a prominent example of a hypercomplex number system, extending the complex numbers into four-dimensional space. A quaternion $Q$ consists of one real component and three imaginary components, defined as $Q = a + e \mathbf{i} + f \mathbf{j} + g \mathbf{k}$, where $a, e, f, g$ are real numbers and $\mathbf{i}, \mathbf{j}, \mathbf{k}$ are imaginary units satisfying the Hamilton rules:
1. $\mathbf{i}^{2} = \mathbf{j}^{2} = \mathbf{k}^{2} = -1$
2. $\mathbf{i}\mathbf{j} = \mathbf{k},\; \mathbf{j}\mathbf{i} = -\mathbf{k}$
3. $\mathbf{j}\mathbf{k} = \mathbf{i},\; \mathbf{k}\mathbf{j} = -\mathbf{i}$
4. $\mathbf{k}\mathbf{i} = \mathbf{j},\; \mathbf{i}\mathbf{k} = -\mathbf{j}$
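These rules can be verified numerically. The sketch below is illustrative only (not the paper's implementation): it represents a quaternion as a 4-tuple $(a, e, f, g)$ and implements the Hamilton product directly from the rules above.

```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (a, e, f, g) = a + e*i + f*j + g*k
    a1, e1, f1, g1 = p
    a2, e2, f2, g2 = q
    return (a1*a2 - e1*e2 - f1*f2 - g1*g2,
            a1*e2 + e1*a2 + f1*g2 - g1*f2,
            a1*f2 - e1*g2 + f1*a2 + g1*e2,
            a1*g2 + e1*f2 - f1*e2 + g1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == (-1, 0, 0, 0)                      # i^2 = -1
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)  # ij = k, ji = -k
```

Note that $\mathbf{i}\mathbf{j} \neq \mathbf{j}\mathbf{i}$: the product is non-commutative, which is what makes quaternion rotations expressive for modeling asymmetric relations.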
Q6: The font-size of most figures is overly small.
A6: We have reviewed our manuscript and accepted your suggestions. In the revised version, we will adjust the font size of the figures to enhance the readability of our manuscript.
Claims And Evidence: The effectiveness of modeling TKGs from the perspectives of synchronic perception and diachronic perception has been validated in the experimental results.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the TKGC problem.
Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims.
Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs and analyses.
Supplementary Material: I have reviewed all contents in the supplementary material.
Relation To Broader Scientific Literature: Considering both diachronicity and synchronicity is beneficial for TKGC.
Essential References Not Discussed: Please see Weakness 1:
Some approaches that map diachronicity and synchronicity to Quaternion Space for modeling are missing:
[1] Combination of translation and rotation in dual quaternion space for temporal knowledge graph completion, IJCNN 2023
[2] TELS: Learning time-evolving information and latent semantics using dual quaternion for temporal knowledge graph completion, KBS 2024
Other Strengths And Weaknesses: Strength:
The experiments are relatively comprehensive, and TeDS achieves the state-of-the-art performance.
Weakness:
1. The concepts of diachronic timestamp and synchronic timestamp appear to correspond to the temporal and structural dependencies in TKGs, respectively, which are fundamental considerations in most TKG studies. The approach of mapping these dependencies to Quaternion Space for modeling is reasonable. However, there are notable similarities between this paper and ComTR [1] in terms of structure and equations, with the exception of the Diachronic Perception component. Despite this, ComTR is not cited or discussed, which seems inappropriate. The authors should explicitly clarify the differences and similarities between their work and ComTR. In addition, TELS [2] also adopts Quaternion Space for modeling TKGs, and a comparison with it would further strengthen the paper.
[1] Combination of translation and rotation in dual quaternion space for temporal knowledge graph completion, IJCNN 2023
[2] TELS: Learning time-evolving information and latent semantics using dual quaternion for temporal knowledge graph completion, KBS 2024
2. The Related Work section would be more informative if it included a clearer discussion on the relevance of previous studies to TeDS.
3. The visualizations of the temporal relation embeddings should include visualization results from some baseline methods, rather than only comparisons with variations of TeDS.
4. Adding a discussion on the limitations of the study and potential future directions would enhance the completeness of the paper.
5. The text in most figures is not easily readable. Enhancing the clarity of the images would improve overall readability and presentation quality.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for all your valuable comments. Note: the pictures and tables used in this response are available at https://anonymous.4open.science/r/TEDS-033A/To_Re_WGeZ.pdf (To_Re_WGeZ.pdf).
Q1: The concepts of diachronic timestamp and synchronic timestamp appear to correspond to the temporal and structural dependencies in TKGs, respectively, which are fundamental considerations in most TKG studies. The approach of mapping these dependencies to Quaternion Space for modeling is reasonable. However, there are notable similarities between this paper and ComTR in terms of structure and equations, with the exception of the Diachronic Perception component. Despite this, ComTR is not cited or discussed, which seems inappropriate. The authors should explicitly clarify the differences and similarities between their work and ComTR. Besides, TELS also adopts Quaternion Space for modeling TKGs, and a comparison with it would further strengthen the paper.
The Related Work section would be more informative if it included a clearer discussion of the relevance of previous studies to TeDS.
A1: We have added the above references and provide the following response.
1) Differences in underlying technical details: TeDS uses quaternions, while ComTR uses dual quaternions. In SP, TeDS achieves a thorough integration of temporal and relational information by reorganizing the synchronous timestamp $ W_{s\tau} = a_{s\tau} + e_{s\tau} \mathbf{i} + f_{s\tau} \mathbf{j} + g_{s\tau} \mathbf{k} $ and relation $ R_r $, forming two quaternions $ Q_{r\tau_{sp}} $ and $ Q_{\tau_{sp}r} $. In contrast, ComTR directly applies dual quaternions without deep information interaction. Details are given in Section 4 (TeDS for TKGC).
2) Different motivations: TeDS observes regularities of facts within a temporal context and summarizes two important temporal perspectives: synchronicity and diachronicity. In contrast, ComTR focuses on capturing multi-relational patterns in a temporal context.
3) TeDS integrates dual perception channels into a unified framework to handle multi-perspective temporal facts. It is not limited to quaternions and adapts to tensors, complex numbers, dual quaternions, and homogeneous transformations. In contrast, ComTR relies on rule modeling based on dual quaternion properties.
Like ComTR, TELS uses dual quaternions to: 1) model multiple relation patterns; 2) model evolutionary hierarchical patterns; 3) capture unique latent semantics based on an entity's position in a relation. Unlike ComTR, TELS maximizes the advantages of dual quaternions rather than merely applying them. Besides, TELS's components are more portable and less reliant on dual quaternion technology (e.g., latent semantic and evolutionary hierarchical awareness).
To show TeDS's portability and effectiveness, we implement it with dual quaternions as DTeDS (see Tables 1 and 2). Compared to baselines, DTeDS and TeDS consistently achieve the best and second-best results. We also observe that DTeDS and TELS outperform TeDS on GDELT. Due to GDELT's high data density at the same temporal granularity, many multi-relation patterns emerge. DTeDS and TELS leverage the strength of dual quaternions in capturing these patterns, aligning with our hypothesis. Besides, we implement TeDS with complex numbers as CDS, which also shows competitive results. This further confirms the portability and effectiveness of our framework.
Q2: The visualizations of temporal relation embeddings should include visualization results from some baseline methods, rather than only comparisons with variations of TeDS.
A2: We enhance the comparison by visualizing TeLM's temporal relation embeddings. First, we examine the distribution of the same relation over time, extracting the relation Consult between Obama and Netanyahu in 2014 (Fig. 1). TeLM outperforms HTM in classifying Consult across months, but is less effective than TeDS in aggregating data from multiple months, with blurred boundaries between adjacent months. Next, we observe various relations between Obama and Netanyahu from Jan-Jun 2014 (Fig. 2). TeDS and SP perform better than HTM and TeLM in distinguishing relations. TeDS surpasses SP in differentiating identical relations rather than clustering them together. Even within the same month, relations may show different trends based on context. Thus, clustering different relations while preserving the uniqueness of identical relations in specific contexts is key.
Q3: Adding a discussion on limitations of study and potential future directions would enhance completeness of paper.
A3: Our TKG research focuses on factual records, which lack clear cycles such as seasons or biological rhythms. Modeling cycles is key for prediction; in future work we may use Fourier transforms or seasonal decomposition to enhance TeDS.
Q4: The text in most figures is not easily readable. Enhancing the clarity of images would improve overall readability and presentation quality.
A4: We will revise the paper structure and adjust the font size in the figures in the revised version to enhance the readability of the manuscript.
Claims And Evidence: The authors provide comprehensive experiments on multiple datasets, showing that TeDS outperforms existing methods in terms of metrics such as Mean Reciprocal Rank (MRR) and Hits@n. The ablation studies further validate the effectiveness of the proposed dual perception channels. However, a more detailed discussion on the computational complexity and scalability of the model would strengthen the claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. The use of quaternion embeddings and the dual temporal perception channels are innovative approaches for temporal knowledge graph completion. The evaluation metrics (MRR, Hits@n) are standard in the field and suitable for assessing the model's performance.
Theoretical Claims: The paper does not present extensive theoretical claims beyond the quaternion-based representation and its application to temporal knowledge graphs. The correctness of the quaternion operations and their application to temporal reasoning appears sound, but a deeper theoretical analysis (e.g., convergence properties, bounds on performance) would be beneficial.
Experimental Designs Or Analyses: The authors conducted experiments on multiple benchmark datasets, including ICEWS, YAGO11k, and Wikidata12k, demonstrating the robustness of TeDS across different temporal scenarios. The ablation studies provide insights into the contributions of the synchronic and diachronic perception channels.
Supplementary Material: The supplementary material is comprehensive and supports the main findings of the paper.
Relation To Broader Scientific Literature: The paper builds on prior work in quaternion embeddings (e.g., QuatE) and extends it to temporal knowledge graphs. The dual temporal perception channels address limitations in existing methods that often treat temporal information as supplementary.
Essential References Not Discussed: No critical omissions noted
Other Strengths And Weaknesses: Strengths:
1. The dual temporal perception channels are an effective approach for capturing temporal dynamics.
2. Using quaternion embeddings provides a unique way to integrate temporal and relational information.
Weaknesses:
1. The paper lacks a detailed discussion on the computational complexity and scalability of the model.
2. Limited discussion of computational overhead compared to simpler models (e.g., TransE variants).
3. SP and DP modules underperform TeDS, but the combination’s superiority is attributed to "deep integration" without mechanistic explanation.
4. Quaternion operations introduce complexity without clear advantages over other methods.
Other Comments Or Suggestions: 1. Figure 3’s time distribution analysis could include more datasets.
2. Clarify whether TeDS can handle time intervals (e.g., [start, end]) beyond points.
Questions For Authors: 1. How does TeDS handle extremely sparse temporal data, and are there any limitations in such scenarios?
2. How does TeDS generalize to timestamps not seen during training (e.g., future events)?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for all your valuable comments. Note: The pictures and tables used in response are available at https://anonymous.4open.science/r/TEDS-033A/To_Re_fTHA.pdf See: To_Re_fTHA.pdf
Q1: A more detailed discussion on model's computational complexity and scalability would strengthen the claims. The paper lacks in-depth analysis, particularly regarding computational overhead compared to simpler models (e.g., TransE variants)
A1:
(1)Complexity Comparison. As Table 1 shows, TeDS has the same theoretical complexity as mainstream models (space: $\mathcal{O}(n_ed + n_rd + n_td)$, time: $\mathcal{O}(d)$). Compared to quaternion models (RotateQVS and EHPR, rank=2000), TeDS achieves the best results with rank=100, reducing dimensionality by 80-95% and improving storage and computation efficiency while maintaining theoretical completeness.
(2)Training Speed Comparison. Table2 shows TeDS's superior training speed.
(3)Parameter Comparison. Table3 compares actual parameter counts.
For (2) and (3), see our response to Reviewer cu6z's A6 for details. TeDS ensures optimal performance while significantly reducing computational overhead, making it ideal for large-scale TKGs.
Q2: The correctness of the quaternion operations and their application to temporal reasoning appears sound, but a deeper theoretical analysis (e.g., convergence properties, bounds on performance) would be beneficial
A2: We add experiments: 1) loss function convergence curves on ICEWS14 and ICEWS05-15 (see Fig1); 2) hyperparameter sensitivity analysis (see Fig2); 3) embedding dimension analysis (See Fig3). (See our response to Reviewer cu6z's A3 for details)
Q3: SP and DP modules underperform TeDS, but combination’s superiority is attributed to "deep integration" without mechanistic explanation
A3: For SP and DP, we further analyze the characteristics of the two modules through temporal relation embedding visualization (Section 6.1) and strong-constraint experiments (Section 6.2). Besides, we visualize and compare the temporal relation embeddings of the baseline TeLM (see our response to Reviewer WGeZ's A2 for details).
Q4: Quaternion operations introduce complexity without clear advantages over other methods
A4: We add the latest baselines (e.g., MTE (2025), Neo-TKGC (2025), GLARGCN (2025)) to continuously evaluate TeDS's performance (Tables 4 and 5). Compared to existing models, TeDS maintains a significant lead. Next, we find that TPComplEx achieves performance close to TeDS on ICEWS14 and ICEWS05-15. To compare the two, we reproduce the TPComplEx (rank=2000) results on Wikidata12k, YAGO11k, and ICEWS18, and perform a dual comparison based on both performance (see our response to Reviewer rpDA's A1 and A2) and efficiency (see our response to your A1). Besides, we extend TeDS to complex numbers and dual quaternions, named CDS and DTeDS, respectively. Tables 4 and 5 show that our framework achieves competitive performance, with DTeDS further improving performance when computational overhead is not a concern.
Q5: Fig3’s time distribution analysis could include more datasets. Clarify whether TeDS can handle time intervals beyond points
A5:
1) We add a data density comparison between ICEWS18 and ICEWS05-15 (Fig 4), along with a fact distribution chart for YAGO11k and a data density comparison between YAGO11k and Wikidata12k (Fig 5). These additions help better analyze TeDS's performance on different datasets.
2) For facts missing part of their time annotation (e.g., (s, r, o, [$t_b$, -]) or (s, r, o, [-, $t_e$])), the score is the same as for a quadruple with the known timestamp, i.e., $\phi$(s, r, o, [$t_b$, -]) = $\phi$(s, r, o, $t_b$) and $\phi$(s, r, o, [-, $t_e$]) = $\phi$(s, r, o, $t_e$). For facts with a complete interval (e.g., (s, r, o, [$t_b$, $t_e$])), we split the quadruple into (s, r, o, $t_b$) and (s, r, o, $t_e$), and the score is the average of the two, i.e., $\phi$(s, r, o, [$t_b$, $t_e$]) = $\dfrac{1}{2}$($\phi$(s, r, o, $t_b$) + $\phi$(s, r, o, $t_e$)). This ensures compatibility with different time annotation types while minimizing computational overhead, and is a common practice in existing models (e.g., TeLM and TPComplEx).
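The scoring rule above can be sketched as follows; `toy_phi` is a stand-in for the model's quadruple scoring function and is for illustration only:

```python
def score_with_time(phi, s, r, o, t_begin=None, t_end=None):
    """Score a fact whose time annotation may be a known point,
    a half-open interval, or a full interval [t_b, t_e]."""
    if t_begin is not None and t_end is not None:
        # Full interval: average the scores of the two endpoints.
        return 0.5 * (phi(s, r, o, t_begin) + phi(s, r, o, t_end))
    # One endpoint missing: score with the known timestamp only.
    t = t_begin if t_begin is not None else t_end
    return phi(s, r, o, t)

# Toy scoring function: the score simply equals the timestamp.
toy_phi = lambda s, r, o, t: float(t)
full = score_with_time(toy_phi, "s", "r", "o", t_begin=2, t_end=4)  # average of 2 and 4
half = score_with_time(toy_phi, "s", "r", "o", t_begin=2)           # known endpoint only
```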
Q6: How does TeDS handle extremely sparse temporal data, and are there any limitations in such scenarios?
A6: We further randomly remove 30% of the ICEWS14 training set (see Fig 6) to test TeDS's robustness under extremely sparse data conditions. Besides, to test TeDS's effectiveness in industrial and sparse scenarios, we use a company's historical cost-price time-series dataset A (see our response to Reviewer C2mh's A1 for details). TeDS consistently outperforms the baselines, showcasing the stable advantages of TeDS and its high-dimensional quaternion space.
Q7: How does TeDS generalize to timestamps not seen during training (e.g., future events)?
A7: TKGC refers to the task of completing a temporal knowledge graph by inferring missing facts from a given subset. We focus on recovering missing facts rather than predicting future ones. Predicting future facts is an interesting topic, and adapting our work to it will be a key focus of future research.
Claims And Evidence: The paper evaluates TeDS, a framework for temporal knowledge graph completion, using datasets like ICEWS14, ICEWS05-15, and ICEWS18. It conducts ablation studies to compare TeDS with models such as TPComplEx and HTM, employing metrics like MRR, H@1, H@3, and H@10. The motivation is to validate TeDS's effectiveness in managing temporal and relational information, showcasing its advantages in capturing complex temporal patterns. The results indicate that TeDS outperforms existing models, particularly in handling sparse and incomplete data, and the design integrating synchronicity and diachronicity significantly enhances knowledge graph completion. This motivation aligns with the experimental findings, emphasizing TeDS's innovation in processing time-related data. Most claims are well-supported by evidence, particularly regarding performance metrics and comparative analysis; however, some claims about the model's limitations and potential societal impacts are less thoroughly discussed, which could weaken their overall persuasiveness.
Methods And Evaluation Criteria: The proposed methods, including quaternion representation and a unified framework that integrates synchronic and diachronic perspectives, are highly relevant for addressing the complexities of temporal knowledge graphs. These methods effectively facilitate the analysis and summarization of intricate real-world scenarios involving time-dependent data. Furthermore, the evaluation criteria, which involve benchmark datasets like ICEWS14, ICEWS05-15, and ICEWS18, along with metrics such as MRR, H@1, H@3, and H@10, are appropriate for assessing the model's performance and provide a comprehensive evaluation of its effectiveness in this application.
Theoretical Claims: Yes, the correctness of the proofs for the theoretical claims has been verified, as the appendix includes complete proofs for the theoretical results presented in the main text. This thorough documentation is intended to ensure transparency and rigor regarding the model's performance and capabilities, allowing readers to validate the theoretical foundations. Additionally, the appendix outlines the full set of assumptions underlying these results, clarifying the conditions for the applicability of the proposed methods and contextualizing the findings, while also providing supplementary details on experimental setups and an extended error analysis to enhance reproducibility and guide future research directions. There are no reported issues with the proofs or assumptions, reinforcing the credibility of the findings.
Experimental Designs Or Analyses: The soundness and validity of the experimental designs and analyses have been thoroughly checked.
The ablation study effectively isolates the contributions of different perceivers in the TeDS model, and the results validate the model's effectiveness without any identified issues. The performance comparison across multiple datasets is robust, demonstrating TeDS's superiority over state-of-the-art models, with no apparent flaws in the experimental setup. The standard deviation analysis provides a clear assessment of the model's robustness, and the methodology for calculating standard deviations is sound. The detailed descriptions of the training process enhance reproducibility, and the thoroughness of the training settings supports the validity of the findings. Lastly, the approach to identifying model limitations through error analysis is well-structured, offering valuable insights for future research. Overall, the experimental designs are well-founded, and no significant issues have been identified.
Supplementary Material: I reviewed the supplementary material, which includes several important sections. I examined Appendix A, the Reproducibility Checklist, which outlines the steps taken to ensure research transparency and replication; Appendix B, which details the Theoretical Results and Assumptions, including complete proofs to validate the TeDS model; and Appendix C, which provides comprehensive information on the Experimental Setup and Details, including data splits and hyperparameters. Additionally, I looked at Appendix D, which contains Additional Figures and Tables for further insights into the experimental results, and Appendix E, which elaborates on the Error Analysis Details, highlighting model performance shortcomings and areas for improvement.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature on knowledge graph embeddings and temporal reasoning. The work builds upon foundational models like TransE and ComplEx, which established the importance of embedding techniques for static knowledge graphs, and extends these ideas to temporal contexts through models such as TTransE and TA-DistMult. By integrating insights from tensor decomposition methods like RESCAL and TuckER, the paper enhances the representation of temporal relationships, addressing limitations identified in prior research, such as the inability to effectively model dynamic interactions over time. Furthermore, the innovations presented in the TeDS framework align with recent advancements in temporal knowledge graphs, such as ChronoR and TeLM, by providing a unified approach that captures both synchronic and diachronic perspectives, thereby contributing to a more comprehensive understanding of knowledge representation in evolving contexts.
Essential References Not Discussed: While the paper discusses several foundational models and recent advancements in knowledge graph embeddings and temporal reasoning, there are essential references that could further contextualize its contributions. For instance, the work by Zhang et al. (2020) on "Temporal Knowledge Graph Completion" introduces a novel approach that leverages recurrent neural networks for dynamic relationships, which could provide insights into alternative methodologies for handling temporal data. Additionally, the recent advancements in graph neural networks (GNNs) for knowledge representation, such as the work by Kipf and Welling (2017) on semi-supervised learning with GNNs, could be relevant as they offer a different perspective on embedding techniques that may complement the TeDS framework. Lastly, the exploration of attention mechanisms in knowledge graphs, as seen in the paper by Wang et al. (2020) on "Graph Attention Networks," could provide valuable context for understanding how attention-based approaches can enhance the representation of temporal relationships in knowledge graphs.
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review and evaluate our manuscript. Your comments have not only helped us improve the manuscript but also given us the confidence to further enhance the quality of our work. Note: The pictures and tables used in this response are available at https://anonymous.4open.science/r/TEDS-033A/To_Re_C2mh.pdf See: To_Re_C2mh.pdf
Q1: Some claims about the model's limitations and potential societal impacts are less thoroughly discussed, which could weaken their overall persuasiveness.
A1: Thank you for your valuable comment regarding the need for a deeper discussion of TeDS's limitations and societal implications. To evaluate the practical effectiveness of TeDS in industrial scenarios, we use a historical cost-price time-series dataset A provided by a company. This dataset includes data on resource exploration, mining operations, material consumption, labor costs, logistics and transportation, and comprehensive costs, covering production cost data from January 2000 to December 2022. With a 12-hour sampling interval, we obtain a total of 16,031 valid samples, with all monetary values denominated in 10,000 yuan.
To simulate data-missing conditions in real-world mining scenarios, we randomly remove 10%, 20%, and 30% of the training data from dataset A, constructing three sparse datasets: A 10% SPARSE, A 20% SPARSE, and A 30% SPARSE. For comparison, we reproduce baseline models including TComplEx and TeLM. Fig 1 shows that TeDS achieves the best performance on the complete dataset A, with an MRR of 71.5, significantly outperforming the baseline models. On the sparse datasets, TeDS maintains high performance, demonstrating strong robustness. In contrast, TComplEx and TeLM exhibit more significant performance degradation under data-missing conditions, particularly on A 30% SPARSE, where their MRRs drop to 33.0 and 34.5, respectively. These results indicate that TeDS has a clear advantage in the task of missing-value imputation for mining cost-price data, validating its potential for industrial time-series data completion scenarios.
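The sparse-split construction described here amounts to uniformly dropping a fraction of the training quadruples; a minimal sketch with toy data (not the actual dataset A):

```python
import random

def make_sparse_split(train_facts, drop_ratio, seed=0):
    """Uniformly remove `drop_ratio` of the training facts to simulate
    data-missing conditions (drop_ratio = 0.1, 0.2, 0.3 here)."""
    rng = random.Random(seed)
    n_drop = int(round(len(train_facts) * drop_ratio))
    return rng.sample(train_facts, k=len(train_facts) - n_drop)

# Toy (subject, relation, object, timestamp) quadruples for illustration.
facts = [(s, 0, s + 1, t) for s in range(100) for t in range(10)]
sparse_30 = make_sparse_split(facts, 0.3)  # keeps 70% of the 1000 facts
```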
Q2: While the paper discusses several foundational models and recent advancements in knowledge graph embeddings and temporal reasoning, there are essential references that could further contextualize its contributions. For instance, the work by Zhang et al. (2020) on "Temporal Knowledge Graph Completion" introduces a novel approach that leverages recurrent neural networks for dynamic relationships, which could provide insights into alternative methodologies for handling temporal data. Additionally, the recent advancements in graph neural networks (GNNs) for knowledge representation, such as the work by Kipf and Welling (2017) on semi-supervised learning with GNNs, could be relevant as they offer a different perspective on embedding techniques that may complement TeDS framework. Lastly, the exploration of attention mechanisms in knowledge graphs, as seen in the paper by Wang et al. (2020) on "Graph Attention Networks," could provide valuable context for understanding how attention-based approaches can enhance the representation of temporal relationships in KGs.
A2: Thank you for your insightful comments on recent developments in foundational models, knowledge graph embedding, and temporal reasoning. We have carefully added the relevant references to our manuscript; they both inspire and complement the TeDS framework. The specific references added are as follows:
Kipf and Welling (2017) provide a different perspective on embedding techniques in their work on GNN-based semi-supervised learning, using an efficient graph convolution method for semi-supervised classification of graph-structured data. Wang et al. (2020) apply graph attention networks to KGC. Zhang et al. (2020) propose a relational graph neural network with hierarchical attention for KGC, which effectively uses local neighborhood information. Xiao et al. (2024) propose a new method that uses contrastive learning to decompose local and global views in TKGs for better reasoning. Zhu et al. (2021) use a copy-generation mechanism to predict future facts by referencing historical data or generating new facts. Wang et al. (2025) combine global historical event frequencies with local temporal relative displacements to efficiently learn query representations from TKGs. Qiu et al. (2025) enhance the capabilities of graph neural networks by integrating node weights and future information.
These additions not only strengthen the contextualization of our work but also provide a more comprehensive overview of the relevant literature, ensuring that our research is situated within the broader academic landscape. Thank you for your feedback, which has been instrumental in improving the quality and depth of our manuscript. | Summary: The paper proposes TeDS, a novel temporal knowledge graph completion (TKGC) model that jointly learns diachronic (temporal evolution) and synchronic (cross-relation interactions) perspectives in quaternion space. The key innovations include: 1) Dual temporal perception through synchronic (time-relation composite quaternions with Hamilton operators) and diachronic (continuous time encoding via trigonometric mapping) modules; 2) A unified quaternion-based framework that deeply integrates temporal and relational information. Experiments on six benchmarks show significant improvements over SOTA models (e.g., +27.4 MRR points on ICEWS14 vs CEC-BD). The paper demonstrates thorough ablation studies and visual analysis of temporal patterns.
Claims And Evidence: The main claims are well-supported:
- Claim of dual temporal perception: Validated through ablation studies (Table 4) showing SP and DP modules contribute 79.6/71.1 vs 90.7 combined MRR on ICEWS14
- Claim of time-aware representation: Supported by temporal pattern visualizations (Figures 4-6) showing improved relation clustering in temporal contexts
- Superiority over SOTA: Comprehensive comparisons across 19 baselines on 6 datasets (Tables 2-3) with clear performance gaps
Potential weakness: The claim about handling various temporal constraints (Section 3.1) lacks explicit evaluation on datasets with different temporal annotations (time points vs intervals).
Methods And Evaluation Criteria: Methods are appropriate:
- Quaternion operations naturally model temporal rotations and interactions
- Dual perception aligns with temporal KG characteristics (evolution + interaction)
Evaluation is rigorous:
- Standard TKGC metrics (MRR, Hits@n) used consistently
- Diverse datasets cover different temporal scenarios (ICEWS for events, YAGO/Wikidata for facts)
Missing: No evaluation on emerging temporal patterns like cyclical events.
Theoretical Claims: The paper contains no formal theoretical proofs. Mathematical components (quaternion operations in Section 3.2, Eqs 1-2) are correctly presented following standard quaternion algebra.
Experimental Designs Or Analyses: Strengths:
- Comprehensive comparisons with 19 SOTA methods
- Detailed ablation studies (Table 4) and component analysis
- Visualization of temporal patterns (Figures 4-7)
Weaknesses:
- No parameter sensitivity analysis
- Training time comparison limited to 4 models (Figure 9a)
- No statistical significance testing for reported improvements
Supplementary Material: The appendix contains reproducibility checklist but lacks:
- Implementation details for baselines
- Complete hyperparameter configurations
- Additional case studies
Relation To Broader Scientific Literature: Key connections:
- Extends quaternion KG embedding (QuatE, DualE) with temporal perception
- Improves upon temporal KG models (TComplEx, RotateQVS) through dual perspectives
- Combines advantages of tensor decomposition (CEC-BD) and neural approaches (SANe)
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Novel integration of dual temporal perspectives
- Effective quaternion-based temporal encoding
- Comprehensive evaluation across multiple datasets
Weaknesses:
- Limited analysis of computational complexity
- No evaluation on temporal constraint types (point vs interval)
- Potential scalability issues with quaternion operations
Other Comments Or Suggestions: None
Questions For Authors: How does TeDS handle time interval annotations compared to time points? The experiments only show results on datasets with time points.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for all your valuable comments. Note: The pictures and tables used in response are available at https://anonymous.4open.science/r/TEDS-033A/To_Re_cu6z.pdf See: To_Re_cu6z.pdf
Q1: The claim in Section 3.1 about handling various temporal constraints lacks explicit evaluation on datasets with different temporal annotations (time points vs. intervals). How does TeDS handle time interval annotations compared to time points? The experiments only show results on datasets with time points
A1: We add details on handling various temporal constraints (See our response to Reviewer fTHA's A5 for details). In fact, this is a commonly used processing method in existing SOTA models (e.g., TeLM and TPComplEx). In the future, finding better ways to handle various temporal types and model dataset characteristics more effectively will be a key focus of our work.
Q2: No evaluation on emerging temporal patterns like cyclical events
A2: In our current research on TKGs, such as ICEWS, YAGO, and Wikidata, the focus is primarily on factual records (e.g., news facts, concept facts), which do not follow clear cyclical patterns like seasonal changes, economic cycles, or biological rhythms. In fact, modeling cyclical events is crucial for prediction and reasoning. Your comment broadens our future scope. We may use Fourier transforms or seasonal decomposition to enhance TeDS’s cyclical modeling. Integrating external data and known patterns could further boost performance.
Q3: The appendix contains reproducibility checklist but lacks:
(1)Implementation details for baselines
(2)Complete hyperparameter configurations; No parameter sensitivity analysis
(3)Additional case studies
A3:
A(1)The results of all baselines involved in the comparison are taken from their original papers.
A(2)We add the complete hyperparameter configuration (see Fig 1). We find that $\lambda_a$ impacts the results more than $\lambda_b$. Even without regularization, TeDS remains highly competitive, proving its effectiveness. We also add an embedding dimension analysis (see Fig 2): TeDS outperforms the baselines at all dimensions, achieving the best cost-performance trade-off at rank=100; further increases bring minimal gains at higher cost.
A(3)We further randomly remove 30% of ICEWS14 training set (See Fig 3) to test TeDS's robustness under extremely sparse data conditions. Besides, to test TeDS's effectiveness in industrial and sparse scenarios, we use a company's historical cost-price time-series dataset A (See our response to Reviewer C2mh's A1 for details).
Q4: Limited analysis of computational complexity
A4: Table 1 shows complexity comparison of mainstream baselines (see our response to Reviewer fTHA's A1 for details).
Q5: Potential scalability issues with quaternion operations
A5: To avoid the impact of scaling, TeDS normalizes quaternions using Schmidt orthogonalization. Specifically, for SP, we normalize $Q_{\tau_{sp}r}$ to the unit quaternion $Q_{\tau_{sp}r}^{\Delta}$ by dividing it by its norm, eliminating scaling effects: $Q_{\tau_{sp}r}^{\Delta} =\frac{Q_{\tau_{sp}r}}{\left|Q_{\tau_{sp}r}\right|}$
Next, we use the Hamilton operator $\otimes$ to rotate $Q_{r\tau_{sp}}$ via $Q_{\tau_{sp}r}^{\Delta}$, obtaining $\mathscr{M} = Q_{r\tau_{sp}} \otimes Q_{\tau_{sp}r}^{\Delta}.$
Meanwhile, we normalize $\mathscr{M}$ to the unit quaternion $\mathscr{M}^{\Delta}$. Finally, we rotate $Q_s$ by applying $\otimes$ between it and $\mathscr{M}^{\Delta}$:
$Q_{sp} = Q_{s} \otimes \mathscr{M}^{\Delta}. $
Similarly, for DP, we normalize $R_{r\tau_{n}}$ to the unit quaternion $R_{r\tau_{n}}^{\Delta}$. Then, we rotate $Q_s$ by applying $\otimes$ between it and $R_{r\tau_{n}}^{\Delta}$:
$Q_{dp} = Q_s \otimes R_{r\tau_{n}}^{\Delta}.$
Details are in Section 4 (TeDS for TKGC).
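To illustrate the normalize-then-rotate pipeline described above, here is a minimal scalar-quaternion sketch (the actual implementation operates on batched embedding blocks; the values below are arbitrary). The property exercised is that rotation by a unit quaternion preserves the embedding's norm, which is why normalization eliminates scaling effects:

```python
import math

def hamilton(p, q):
    """Hamilton product of quaternions p = (a, b, c, d), q = (e, f, g, h)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def normalize(q):
    """Divide by the norm so the quaternion encodes a pure rotation."""
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

# Rotating an embedding quaternion by a unit quaternion keeps its norm.
q_s = (1.0, 2.0, 3.0, 4.0)                 # arbitrary entity embedding
rot = normalize((0.5, 0.5, 0.5, 0.5))      # unit rotation quaternion
q_sp = hamilton(q_s, rot)
```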
Q6: Training time comparison limited to 4 models (Fig 9a).
A6: We further compare TeDS with SOTA model TPComplEx (using the hyperparameters claimed by TPComplEx) to evaluate computational efficiency. In Table 2, we show that TeDS is 43% faster per epoch on Wikidata12k compared to TPComplEx (5.63s vs 9.87s), and 35% faster on YAGO11k (2.82s vs 4.33s). These speedups are achieved while significantly reducing the number of parameters (e.g., 90% fewer parameters on YAGO11k). The runtime on ICEWS05-15 is slightly longer (30.75s vs 22.06s), but TeDS uses only about 27% of the parameters of TPComplEx, demonstrating its efficiency advantage.
Parameter Comparison. Table 3 compares actual parameter counts of models: On Wikidata12k, TeDS gets superior performance with only 20.64M parameters, while TPComplEx requires 201.2M, resulting in a 10× improvement in parameter efficiency. TeDS ensures optimal performance while significantly reducing computational overhead, making it ideal for large-scale TKGs.
Q7: No statistical significance testing for reported improvements.
A7: In Section 6.4, we run experiments five times and calculate the standard deviation to show TeDS's stability across different datasets. | null | null | null | null |
Sample-efficient diffusion-based control of complex nonlinear systems | Reject | Summary: The paper presents SEDC, a new approach to improving how we control complex systems using limited data. Traditional methods struggle with high-dimensional spaces, nonlinear behaviors, and the challenge of learning from imperfect training data. SEDC tackles these problems with three key ideas: Decoupled State Diffusion, which separates state prediction from action generation to make learning more efficient; Dual-Mode Decomposition, which splits system dynamics into linear and nonlinear components for better modeling; and Guided Self-finetuning, which refines control strategies over time by generating improved training data. Experiments show that SEDC improves control accuracy over existing methods while needing less training data.
Claims And Evidence: The claims in the submission are generally supported by experimental results, but some aspects require further clarification. The claim that SEDC improves control accuracy by 39.5%-49.4% while using only 10% of the training samples is backed by quantitative comparisons across three nonlinear systems, showing improved performance over baselines. The effectiveness of Guided Self-finetuning could benefit from further theoretical justification or broader validation.
The submission includes comparisons with PID and data-driven methods but should also evaluate optimal control approaches like Model Predictive Control (MPC) and Linear Quadratic Regulators (LQR). These methods are widely used for nonlinear systems and offer strong theoretical guarantees. Comparing SEDC with MPC, which optimizes control inputs over a finite horizon, and LQR, which minimizes a quadratic cost function, would provide a more comprehensive benchmark. Additionally, methods like Hamilton-Jacobi-Bellman control or Pontryagin’s Minimum Principle could offer further insights. Including these would strengthen the claims and clarify SEDC’s position among control strategies.
Methods And Evaluation Criteria: The methods proposed in the paper lack clear motivation and detailed explanations for key design choices, making it difficult to fully understand their necessity and effectiveness. For instance, the rationale for isolating linear and nonlinear system components is unclear—while nonlinear decomposition is a common approach in control theory, the paper does not sufficiently explain why this improves the performance of diffusion-based methods or how it compares to alternative strategies. Similarly, the use of gradient guidance to refine control trajectories is not well justified; while gradients can theoretically provide optimization signals, it is unclear how they interact with the diffusion process or whether they introduce stability issues. The fine-tuning process is also confusing, particularly how the model reuses previously generated control sequences. The paper suggests that the generated trajectories are fed back into training, but it does not clarify whether this introduces compounding errors or biases. A clearer breakdown of the fine-tuning mechanism, its impact on sample efficiency, and how it avoids overfitting to its own generated data would help in understanding its effectiveness. More intuitive explanations, ablation studies, or comparisons to alternative fine-tuning approaches would make these methods easier to evaluate.
Theoretical Claims: The paper lacks formal proofs for its theoretical claims, relying mainly on empirical results.
Experimental Designs Or Analyses: The experimental design is generally well-structured, but there are some concerns about its validity and completeness. The paper evaluates SEDC on three nonlinear systems (Burgers, Kuramoto, and Inverted Pendulum), which provide a reasonable benchmark, but it lacks real-world datasets or more diverse nonlinear control tasks to test generalizability. The comparisons with baselines, including PID, reinforcement learning, and diffusion-based methods, are useful, but the absence of optimal control methods leaves gaps in the evaluation. While the paper reports improvements in control accuracy and sample efficiency, it does not thoroughly analyze potential trade-offs, such as computational cost, training stability, or sensitivity to hyperparameters. Additionally, the fine-tuning process relies on self-generated data, but the impact of compounding errors or overfitting to model-generated trajectories is not examined. A more robust evaluation with additional baselines, real-world validation, and deeper analysis of computational efficiency would strengthen the experimental soundness.
Supplementary Material: The supplementary material includes additional details on the proposed algorithm, dataset descriptions, implementation specifics, training and inference time analysis, baseline descriptions, and extended experimental results.
Relation To Broader Scientific Literature: The paper builds on prior work in diffusion-based control, reinforcement learning, and nonlinear system optimization but lacks connections to optimal control methods.
Essential References Not Discussed: The paper builds on prior work in diffusion-based control, reinforcement learning, and nonlinear system optimization but lacks connections to optimal control methods.
Other Strengths And Weaknesses: The definition of the symbol y in the paper is unclear and inconsistent, leading to confusion about whether it represents the observed state, system state, or observed output. In control theory, the system state refers to the internal variables that fully describe the system's dynamics, while the observed state or output is what is measurable from the system, which may be a function of the true system state. The paper seems to use y interchangeably as both the observed state and the system state, which is problematic because in many systems, the observed state does not directly correspond to the full system state. A clearer distinction between the true system state, the observed variables, and the control input is needed to avoid ambiguity. Definitions should explicitly clarify whether y is the full internal state of the system or just the observable portion and how it relates to the system’s evolution equations.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's feedback. Our responses are as follows.
Tip: Please visit the link(https://drive.google.com/file/d/1VWaCyEv0NPMPPqCdVfgJDPXoTiuN76MV/view?usp=sharing) for new Tables and Figures.
**1. New optimal control baseline**
We additionally compare SEDC with learning-based Model Predictive Control (MPC), the only data-driven method among suggested optimal control approaches. Results (link:Table I) show MPC achieves higher target losses across all tasks, likely due to error accumulation. MPC also has significantly longer inference times (e.g. >1000 vs. 0.5) that increase with control horizon. SEDC directly maps initial/target states to complete control trajectories, avoiding compounding errors and reducing computation time.
**2. The rationale for isolating linear and nonlinear components in DMD**
The design decomposes the clean trajectory prediction into linear and nonlinear components to overcome the limitations of single-network approaches that struggle to model both simultaneously with limited data. It is theoretically grounded in the Taylor expansion of vector-valued functions. Please refer to part 1\&2 of our response to reviewer **YCVy** for the explanation of DMD. We compare this to alternative strategies in the ablation studies. Table 1 shows that compared to single-UNet, the dual-mode architecture reduces error by 54-94\% when using only 10\% of data, and by 47-57\% with full data, confirming higher effectiveness under data scarcity. In Table 3, we also verified that as the nonlinearity of the system increases, the performance benefit gained by applying DMD becomes more pronounced.
**3. The use of gradient guidance**
Our gradient guidance method incorporates control cost optimization directly into the denoising process, steering each denoising step toward trajectories that minimize a cost function $J$ (e.g., control energy), using equation (3) in the paper to guide the sampling process $p_\theta(\mathbf{x}^{k-1} | \mathbf{x}^k, \mathbf{y}_0^*, \mathbf{y}_f)=\mathcal{N}(\mathbf{x}^{k-1}; \mathbf{\mu} _\theta(\mathbf{x}^k, k, \mathbf{y}_0^*, \mathbf{y}_f), \mathbf{\Sigma}^k).$
The gradient term $\nabla_{\mathbf{x}^k}J(\hat{\mathbf{x}}^0(\mathbf{x}^k))$ in equation (3) computes how changes in the current noisy state affect the cost and updates the sampled mean in a gradient-descent like way. The stability is guaranteed by setting appropriate guidance strength $\lambda$ and is proven by numerous previous works like classifier-guided diffusion (Dhariwal \& Nichol, 2021) and diffusion-based planning (Janner et al., 2022). We will make the description of gradient guidance clearer in the revised manuscript.
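To make the guided sampling step concrete, here is a minimal sketch. The posterior-mean coefficients are toy placeholders (the true coefficients come from the noise schedule, which is not specified here), and `predict_x0`, `cost_grad`, and `sigma_k` are hypothetical stand-ins for the denoiser, the cost gradient $\nabla_{\mathbf{x}^k}J(\hat{\mathbf{x}}^0(\mathbf{x}^k))$, and the posterior standard deviation.

```python
import numpy as np

def guided_denoise_step(x_k, k, predict_x0, cost_grad, sigma_k, lam=0.1, rng=None):
    """One reverse-diffusion step with gradient guidance (toy sketch).

    x_k:        current noisy trajectory at diffusion step k
    predict_x0: denoiser returning the clean-trajectory estimate x̂0(x^k)
    cost_grad:  gradient of the control cost J w.r.t. x^k, evaluated at x̂0(x^k)
    sigma_k:    posterior std at step k (Σ^k = sigma_k^2 · I assumed here)
    lam:        guidance strength λ
    """
    rng = rng or np.random.default_rng()
    x0_hat = predict_x0(x_k, k)
    # Toy posterior mean; the real coefficients depend on the noise schedule.
    mu = 0.5 * x_k + 0.5 * x0_hat
    # Shift the mean against the cost gradient (classifier-guidance style).
    mu_guided = mu - lam * sigma_k**2 * cost_grad(x_k, x0_hat)
    # Sample x^{k-1} from N(mu_guided, sigma_k^2 · I).
    return mu_guided + sigma_k * rng.standard_normal(x_k.shape)
```

Setting `sigma_k = 0` makes the step deterministic, which is a convenient way to inspect how the guidance term shifts the mean.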
**4. Compounding errors and overfitting of GSF**
GSF ensures physical consistency by re-simulating the system with generated controls (Section 4.3), ensuring all finetuning pairs $[\mathbf{u}^0_{\text{update}}, \mathbf{y}^0_{\text{update}}]$ follow system dynamics. Complementarily, our diffusion-based method considers entire trajectories holistically, allowing these two mechanisms to work together to avoid compounding errors. Moreover, generated trajectories with updated states differ from the original training data, preventing overfitting, as confirmed by the stable validation loss (link:Figures I,II) which demonstrates no upward trend.
**5. Response to the datasets used**
Our benchmark selection (Inverted Pendulum, Kuramoto, and Burgers) follows established control systems research practice, chosen for real-world relevance and diverse nonlinearity and complexity:
- Inverted Pendulum: state 2, control 1, timestep 128
- Kuramoto: state 8, control 8, timestep 15
- Burgers: state 128, control 128, timestep 10
We add experiments on the power grid system (please see part 5 of our response to reviewer **YCVy**) to demonstrate our generalizability on real-world scenarios.
**6. The potential trade-offs**
Computation costs in the form of training/inference times are reported in Appendix C.2, following AdaptDiffuser; other baselines do not report computational costs. The loss curve (link:Figure I) demonstrates our method's training stability. Sensitivity tests (link:Figure III) show performance is highly sensitive at low diffusion-step counts but yields only 5-10\% improvements at higher step counts, following the typical diminishing returns of diffusion models. These demonstrations will be included in the final version.
**7. The definition of the symbol $\mathbf{y}$**
In our paper, we assume full state observability throughout, with y representing the complete observable system state vector. This is reasonable for our evaluated systems. We will revise for consistent terminology and explicitly state this assumption. | Summary: This paper introduces SEDC (Sample-Efficient Diffusion-based Control), a novel diffusion-based framework designed for controlling complex nonlinear systems while addressing key challenges in sample efficiency, high-dimensional state-action spaces, and non-optimal training data. The proposed approach incorporates three major innovations: Decoupled State Diffusion (DSD) to improve efficiency in high-dimensional systems, Dual-Mode Decomposition (DMD) to enhance learning of nonlinear system dynamics, and Guided Self-Finetuning (GSF) to bridge the gap between suboptimal training data and near-optimal control policies. The model achieves remarkable performance improvements, demonstrating 39.5%-49.4% better control accuracy than baselines while using only 10% of the training data. Experiments across three nonlinear systems—Burgers dynamics, Kuramoto dynamics, and the Inverted Pendulum—validate the effectiveness of the proposed framework.
Claims And Evidence: The paper makes five key claims: (1) the ability to handle high-dimensional state-action spaces, (2) effective learning of nonlinear system dynamics, (3) overcoming the lack of optimal control training data by generating synthetic data, (4) achieving significant improvements in control accuracy, and (5) reducing training data requirements and energy consumption. These claims are strongly supported by extensive experiments. The authors have particularly demonstrated, through ablation studies, the role of the DSD structure in handling high dimensionality, DMD in effectively learning nonlinear dynamics, and GSF in energy efficiency.
Methods And Evaluation Criteria: The methods are well-structured, intuitive, and supported by strong theoretical grounding. The evaluation benchmarks SEDC against classical, reinforcement learning, and diffusion-based baselines, including PID, BC, BPPO, DecisionDiffuser, AdaptDiffuser, RDM, and DiffPhyCon. Performance is assessed using standard control metrics, such as Target Loss (MSE between predicted and actual target states) and Energy Consumption (integral of control effort over time). The selection of evaluation criteria is appropriate and effectively demonstrates the advantages of SEDC.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is robust and methodologically detailed. The study provides extensive information about the systems and datasets used, making it highly reproducible. The experiments systematically compare SEDC to existing methods across multiple nonlinear systems, ensuring comprehensive evaluation. Various key metrics are investigated, including control accuracy, sample efficiency, and energy consumption, providing a well-rounded assessment of the model’s performance. The study is further enhanced by clear and informative visualizations that effectively highlight the advantages of SEDC over benchmarks. The ablation studies are particularly impressive, significantly strengthening the paper’s impact by explicitly demonstrating the benefits of each architectural component, thereby validating the necessity of the proposed innovations.
Supplementary Material: I have reviewed the supplementary material, which effectively provides important additional details missing from the experimental studies.
Relation To Broader Scientific Literature: This paper is well-positioned within the broader literature on data-driven control, advancing diffusion-based models by building on works like DecisionDiffuser and DiffPhyCon while overcoming their limitations in sample efficiency and nonlinearity handling. Given the widespread applications of nonlinear system control and the strong performance demonstrated, this work has significant potential for impact.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper is exceptionally well-written and well-organized, with a clearly explained and well-visualized model architecture. As detailed in the Experimental Designs or Analyses section, the experimental studies are extensive and highly convincing. Additionally, the work has a broad range of applications, making it both impactful and valuable to the research community. Overall, I recommend clear acceptance.
Other Comments Or Suggestions: N/A
Questions For Authors: Are there any plans to extend this work to stochastic control settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer vifZ for thorough review and strong recommendation. We greatly appreciate your positive assessment of our work, particularly your recognition of our experimental design, ablation studies, and the potential impact of our work.
**Response to extending SEDC to stochastic control settings:**
Thank you for your question on extending our work to stochastic control settings. This represents an interesting direction for future research. Our present work focuses on deterministic non-linear systems where SEDC demonstrates significant advantages in sample efficiency and control accuracy. Although we believe the diffusion-based nature of our approach provides a conceptual foundation that could potentially be adapted to stochastic settings, this would require substantial theoretical modifications to our framework components (DSD, DMD, and GSF).
Extending to stochastic control would involve addressing additional complexities in modeling state transition probabilities and optimizing over distributions rather than deterministic trajectories. This remains an open research question we are interested in exploring. We will add a brief discussion of these potential extensions in our limitations section to acknowledge the current deterministic focus of our work.
We thank the reviewer again for their valuable feedback and encouraging assessment of our contribution. | Summary: 1. The paper proposes a diffusion-based controller for high-dimensional nonlinear systems.
2. A diffusion model is used to generate a sequence of states y, and an additional autoregressive MLP is used for learning the control inputs through inverse dynamics.
3. Gradient-guidance during the reverse process and inpainting are used to satisfy an optimal control objective.
4. A dual UNet-based denoising network is proposed to decouple linear and non-linear terms of the system dynamics.
5. Experiments indicate the proposed method achieves lower target loss than baseline methods on different systems.
Claims And Evidence: 1. Empirical Improvements: The reported improvements in target loss and energy cost are substantial. However, many of the core innovations—gradient guidance and in-painting for goal conditioning—are direct adaptations of known methods from diffusion models and image generation (e.g., DecisionDiffuser and RePaint).
2. Concerns on Novelty: The novelty appears incremental. While the integration of these modules yields performance gains, the lack of fundamentally new theoretical insights or novel decomposition guarantees is a significant drawback.
Methods And Evaluation Criteria: 1. DMD and mode decomposition: The denoising network uses a DMD-inspired architecture to predict clean states from conditions y_c and the noisy trajectory x^k. The notion of decomposing a noise-corrupted state trajectory into modes lacks clarity. In theory, for the simplified objective (eq. 14) in [1], the denoising network learns to predict the noise \epsilon. The authors directly predict the clean state (\hat{x^0}) as in most implementations; however, this raises questions about using DMD in this case. What does it mean to decompose a noise-corrupted state trajectory into modes?
[1]: Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." Advances in neural information processing systems 33 (2020): 6840-6851.
2. Sample Efficiency Experiments: A critical point for sample efficiency is whether the baselines (such as DecisionDiffuser, AdaptDiffuser, etc.) were allowed to perform any form of iterative fine-tuning during inference as SEDC does with GSF. If the baselines were not fine-tuned or augmented similarly, then the comparisons might be biased. The paper should clarify if all methods underwent comparable adaptation procedures; if not, the experiment is inherently flawed. If the baselines were finetuned, the authors should clearly mention the procedure in detail for clarity.
3. Target loss calculations: The authors seem to report performance on a single held-out test set. For control applications, especially when working with limited data, it is standard practice to use cross-validation or multiple train-test splits to ensure that the reported gains are not due to a particular split. I suggest the authors to report the mean and standard deviation of the performance metrics across multiple test seeds.
4. Figure 2 appears to be plotted with a specific value of the guidance strength \lambda which controls the satisfiability of the energy constraint. This does not give the full picture of the performance of the proposed method. To compare the ability of different methods to optimise an objective with an additional constraint, it is standard practice to compare the Pareto frontiers of the methods. This would bring out the ability of the methods to optimise the objective (target loss) while having the energy constraint.
5. The paper attempts to address high-dimensional problems; however, it considers the inverted pendulum and Kuramoto dynamics as benchmarks. Although the inverted pendulum is considered a classic benchmark in control theory and robotics due to its nonlinearity and underactuation, it remains a comparatively low-dimensional problem. The Kuramoto model is also often considered a canonical model for studying synchronisation phenomena and often does not capture the full complexity encountered in real-world nonlinear control problems. To demonstrate the success of the proposed method, I strongly recommend the authors consider some well-known high-dimensional control tasks, e.g.: a. Ant (105 states, 8 controls), Humanoid (348 states, 17 controls) from MuJoCo. b. Adroit or Shadow Hand tasks from gymnasium-robotics.
Theoretical Claims: 1. There are no theoretical contributions in this manuscript.
2. Please see point 1 in Methods and Evaluation Criteria.
Experimental Designs Or Analyses: Please see points 3 and 4 in Methods and Evaluation Criteria.
Supplementary Material: 1. Reviewed Supplementary sections A-D.
2. Code link https://anonymous.4open.science/r/DIFOCON-C019 does not work.
3. Inference time analysis in Table 5 is not insightful at all. For control tasks, the control signals need to be generated at a specified frequency for the platform to function properly (e.g., 30Hz, 50Hz, etc.). Authors should report the frequency at which the benchmark systems can be operated using the methods.
Relation To Broader Scientific Literature: The paper attempts to propose a data-driven method for model-based control of high-dimensional systems.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Please see Claims and Evidence
Other Comments Or Suggestions: 1. Writing can be improved in several places, for eg:
“We denote that x^k represents … ” can simply become “We denote x^k as the sequential data…”
2. This paper has some grammatical errors and would benefit from thorough proof-reading.
Questions For Authors: Please see Methods and Evaluation Criteria
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's feedback. Our responses are as follows.
Tip: Please visit https://drive.google.com/file/d/1JmK5ZuMIg0CJCf1L2fQqobK6gtueOgts/view?usp=sharing for new tables and figures.
**1. The core innovations**
The gradient guidance and in-painting for goal conditioning are not our core innovations. The innovation is a new data-driven framework for complex system control that significantly improves the sample efficiency of diffusion models in high-dimensional nonlinear system control. To tackle the nonlinearity of complex systems, we propose DMD, which decomposes trajectory prediction into linear and nonlinear components to better capture nonlinearity under data scarcity. The design is novel and theoretically grounded in the Taylor expansion. To handle high-dimensional complex physical systems, we are the first to incorporate inverse dynamics into diffusion-based control, maintaining physical consistency in trajectory generation with limited training data. Moreover, to address the non-optimal data problem unique to complex physical systems, we propose GSF, which enables exploration beyond the initial training data distribution.
**2. Explanation of decomposition into modes**
We are not decomposing a noise-corrupted state trajectory into modes. Instead, the DMD actually decomposes the prediction of the clean sampled trajectory into linear and nonlinear modes, overcoming the limitations of single-network approaches that struggle to model both simultaneously.
The theoretical foundation is as follows:
$\mathbf{y}_c$ is the conditional input, a learnable combination of initial and target states as in the paper. Our denoiser is designed to output the clean state trajectory $\hat{\mathbf{x}}^0$, expressed as a vector function $\mathbf{f}(\mathbf{y}_c)$. It admits a vector Taylor expansion at $\mathbf{y}_c=\mathbf{0}$:
$$
\hat{\mathbf{x}}^0=\mathbf{f}(\mathbf{y}_c) = \underbrace{\mathbf{C}_1 \mathbf{y}_c} _{\mathbf{O}_1:\text{1st-order}} + \underbrace{\mathbf{y}_c^T \mathbf{C}_2 \mathbf{y}_c} _{\mathbf{O}_2:\text{2nd-order}} + \mathcal{O}(||\mathbf{y}_c||^3)
$$
For linear systems, only the first-order term remains. For nonlinear systems, by neglecting higher-order terms for simplicity, we can decompose the prediction into linear and nonlinear quadratic modes. In our dual-UNet architecture, the first UNet learns to produce the linear coefficients ($\mathbf{C}_1$) from noisy trajectories $\hat{\mathbf{x}}^k$, while the second extracts the nonlinear modes ($\mathbf{C}_2$) that capture higher-order interactions.
We validate that as the nonlinearity of the system increases, the dual-UNet achieves greater benefits compared to the single-UNet (Table 3).
We will refine the explanation of DMD in the final manuscript.
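To make the decomposition concrete, here is a toy numerical sketch in which plain tensors stand in for the two UNets. All names and shapes are illustrative assumptions, not the actual architecture: the point is only the split of the clean-trajectory prediction into a first-order and a second-order mode.

```python
import numpy as np

class DualModeDenoiser:
    """Toy stand-in for the dual-branch denoiser: linear + quadratic modes.

    In SEDC both branches are UNets conditioned on the noisy trajectory;
    here fixed random tensors C1 and C2 play their roles.
    """

    def __init__(self, d_cond, d_traj, seed=0):
        rng = np.random.default_rng(seed)
        self.C1 = rng.normal(size=(d_traj, d_cond))           # linear mode O_1
        self.C2 = rng.normal(size=(d_traj, d_cond, d_cond))   # quadratic mode O_2

    def predict_clean(self, y_c):
        # x̂0 ≈ C1 y_c + y_c^T C2 y_c  (higher-order terms neglected)
        linear = self.C1 @ y_c
        quadratic = np.einsum('dij,i,j->d', self.C2, y_c, y_c)
        return linear + quadratic
```

For a purely linear system the quadratic branch can learn to output zero, recovering the first-order-only case described above.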
**3. Sample efficiency experiment**
Our experimental design is fair: our baselines include AdaptDiffuser, which uses fine-tuning yet performs worse (e.g., Figure 3 shows our method achieves lower target loss with just 10\% of training data). Unlike AdaptDiffuser, we collect generated trajectories for fine-tuning without reward/discriminator filtering, exposing the model to more diverse samples and better balancing exploration/exploitation. Following your advice, we test applying GSF on the most competitive baseline, DiffPhyCon. Results (link:Table III) confirm GSF improves baseline performance, validating the generalizability of our GSF framework.
**4. Experiment demonstration**
Similar to DecisionDiffuser and DiffPhyCon, we report mean/standard-deviation metrics (link:Table I) and Pareto frontiers (link:Figure I). Results confirm our method maintains competitive performance across all datasets. We will refine this demonstration in the final manuscript.
**5. Limitation of the problems chosen**
Our benchmark selection follows established research practice (DiffPhyCon, [1][2]), chosen for diverse nonlinear characteristics and real-world relevance. Although the Inverted Pendulum is low-dimensional, our evaluation on the Burgers system (128 states/controls) sufficiently represents our method's performance on high-dimensional dynamics.
Since Kuramoto may oversimplify real-world complexity, we conduct additional experiments on **swing dynamics** [1,3], which model real-world power grid behavior with higher fidelity and complexity (see link:Figure II). Results (link:Table II) show our method achieves the lowest target loss, outperforming DecisionDiffuser by 60%, confirming that our approach's benefits extend to practical complex scenarios.
[1] Data-driven control of complex networks. Nature Communications
[2] Closed-loop Diffusion Control of Complex Physical Systems. ICLR25
[3] How dead ends undermine power grid stability. Nature communications
**6. Problems in the supplementary materials**
We have fixed the version-compatibility error in the code at the code link. We also report the frequency at which the benchmark systems can be operated using each method in link:Table IV.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanations, in particular point 2. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind feedback and suggestions. We are glad that our rebuttal has addressed your concerns and deeply appreciate the raised score. We will incorporate them into the final paper following your advice.
Thank you again for your time and consideration. | Summary: The paper presents SEDC, a novel diffusion-based control framework designed to achieve sample-efficient and robust control of complex nonlinear systems. SEDC is developed to overcome challenges associated with high-dimensional state–action spaces, strongly nonlinear dynamics, and the scarcity of optimal training data.
Experimental results across several benchmark systems—including Burgers, Kuramoto, and inverted pendulum dynamics—demonstrate that SEDC achieves 39.5%–49.4% improvement in control accuracy over state-of-the-art baselines while requiring only 10% of the training samples. Additional ablation studies confirm the effectiveness of each key component.
Claims And Evidence: The claims made by the paper, from my perspective, are well-supported by its thorough numerical evaluations.
Methods And Evaluation Criteria: The proposed method makes sense for solving deterministic control problems, and it would be interesting to see whether this framework can be further extended to solve Schrödinger bridge problems or stochastic optimal control.
Theoretical Claims: The paper does not make any significant theoretical claims.
Experimental Designs Or Analyses: I think the experiments done in the paper are very thorough and the abalations are done nicely.
Supplementary Material: I looked at the whole appendices, which gave me a better understanding of the experimental tasks the authors tried and how they conducted numerical experiments.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: I am not very familiar with the literature related to this paper so I am not sure if the literature review of the paper needs any improvement or not. But it seems to me that the authors have successfully compared their methods to many existing methods in the literature.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I don't have any questions for authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer qYwS for positive assessment of our work and thorough review.
**Response to extending SEDC to stochastic control problems:**
We appreciate the reviewer's insightful suggestion regarding potential extensions to Schrödinger bridge problems and stochastic optimal control. This represents an interesting direction we had not fully explored yet.
Our framework is primarily designed for deterministic systems, and extending it to stochastic settings would require significant theoretical and architectural modifications. We believe our diffusion-based approach might provide a foundation for addressing stochastic control problems. For further extension to stochastic scenarios, the inverse dynamics and nonlinear decomposition components would have to be adapted to accommodate stochastic processes, which would require careful investigation.
We will add a limitation section in the final paper that acknowledges the current deterministic focus of our work and discusses potential future extensions to stochastic settings as an open research question.
We thank the reviewer for this valuable suggestion that opens up interesting avenues for future work. | null | null | null | null | null | null |
LAuReL: Learned Augmented Residual Layer | Accept (poster) | Summary: Authors propose extensions to ResNet blocks that can improve performance with minimal addition of parameters. The extensions modifies the residual connection by adding a learnable transformation to it and/or utilises the output of previous layers.
Claims And Evidence: OK, see below.
Methods And Evaluation Criteria: OK, see below.
Theoretical Claims: Does not apply.
Experimental Designs Or Analyses: OK, see below.
Supplementary Material: Was not available
Relation To Broader Scientific Literature: OK, see Strengths and Weaknesses.
Essential References Not Discussed: There is a study from Kaiming He et al. from 2016 (Identity Mappings in Deep Residual Networks) which also quite extensively considers different versions of ResNet blocks. The authors should check it and cite it in their paper.
Another study which might be relevant is Huang et al., Densely Connected Convolutional Networks, 2018. They also describe an approach to using previous activations, though not in the ResNet style.
Other Strengths And Weaknesses: The authors mainly consider augmented learned residual layers. Specifically they propose three variants: 1) residual weight, 2) low-rank approximation, and 3) one using previous activations.
About the first one (RW), I am not sure there is enough novelty. He et al. 2016 propose quite a similar form, even though they do not train the parameters in the same manner as this LAuReL version. Also, He et al. 2016 propose using a 1x1 convolution for the residual layer, which is especially useful if the $f(x)$ part changes dimension (e.g., different numbers of input and output channels in convolutions). This can perhaps be seen as an extended version of LAuReL, but of course it is also much more expensive and not exactly the same. On the other hand, I am not fully confident that introducing these two learnable parameters is meaningful in all cases. I would guess that $\alpha$ and $\beta$ could either be absorbed into the weights of $f$ in this or the next layer, or the effect would disappear if layer or batch normalisation is applied. Anyway, the results show some kind of improvement, but I am not sure if this depends on the version of ResNet block used (see He et al. 2016 for different options). The authors could perhaps experiment with this more.
The second one (low-rank, LR) seems novel and interesting, and there are clear gains shown by the results (albeit the results are for a combination of the first and second; considering the above, I think LR is the key factor).
The third one (LAuReL-PA) is again a question. They did not report experiments using the PA variant alone, only the version that combines all of these variants. It might be that all gains come from LR.
To summarize: as a whole, I think this study is interesting and the results look good. However, I would like to see He et al. 2016 cited and the present work reflected against that study. I would also like to see experiments with LR and PA alone (e.g., in the ResNet case) to see which one makes the highest contribution.
Other Comments Or Suggestions: The manuscript is well written. I didn't find any typos.
Questions For Authors: No questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
#### __Ablation study with practical footprint metrics__
Thank you for suggesting this experiment. This section demonstrates improvements of LAuReL variants individually, as well as when they are combined.
For the purpose of comparing different LAuReL variants on the LLM pre-training task, we set up the following baseline. We pre-trained on the C4 corpus with $\approx$ 10B tokens. We used a $4 \times 4$ Google Cloud TPU v6e topology, but we expect similar results with a comparable GPU setup.
In order to simplify the comparison across many ablations (and also to avoid the noise in downstream evals at 10B token scale), we report model performance using the test loss, which is a good proxy for downstream model quality.
We train our main baseline with 24 layers and 157.2M params, along with a larger baseline with 28 layers for comparison. We run all the LAuReL variants (RW, LR, PA) on top of the regular baseline with 24 layers. We also run two combinations of variants (RW+LR, RW+LR+PA). We report the number of params, test loss, peak memory as reported by profiling tools, and average step time. Lower is better for all metrics. The table below shows the results; note that all LAuReL experiments use L=24.
| Variant | Params(M) | Test Loss | Peak Mem(GB) | Avg. Step(sec) |
|-------------------------:|:---------:|:---------:|:-----------:|:--------------:|
| Baseline (L=24) | 157.20 | 3.0159 | 11.65 | 0.095 |
| Baseline-Large (L=28) | 179.23 | 2.9963 | 13.23 | 0.105 |
| LAuReL-RW | 157.20 | 2.9557 | 11.93 | 0.095 |
| LAuReL-LR | 158.40 | 2.9624 | 12.29 | 0.098 |
| LAuReL-PA | 157.22 | 2.9512 | 12.55 | 0.100 |
| LAuReL-RW+LR | 158.40 | 2.9531 | 12.57 | 0.099 |
| LAuReL-RW+LR+PA | 160.83 | 2.9499 | 12.90 | 0.104 |
All LAuReL variants perform better than the large baseline in terms of the test loss while using much fewer parameters, lower peak memory, and lower average step time. Given the above tradeoffs in loss, memory, step time, etc. we recommend trying the LAuReL variants in this order: RW $\rightarrow$ LR $\rightarrow$ PA / RW+LR $\rightarrow$ RW+LR+PA. We will add these numbers in the revision.
#### __Can α and β be learned by the network__
That’s a good question. Since $f(.)$ is non-linear, absorbing $\alpha$ within the function is not equivalent to having an explicit $\alpha$ outside $f(x)$. Additionally, the normalization would apply on top of the weighted combination of $f(x)$ and $x$, so the individual scalars would still be useful in learning the relative weights of the two components. We also demonstrate that the -RW variant is useful on its own in the above ablation study.
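For illustration, the -RW residual with explicit learned scalars can be sketched as follows. This is a minimal sketch, assuming `f` is an arbitrary nonlinear layer; the scalar names follow the discussion above.

```python
import numpy as np

def laurel_rw(f, x, alpha, beta):
    """LAuReL-RW sketch: learned scalar weights on the residual connection.

    A standard residual block computes f(x) + x; LAuReL-RW computes
    alpha * f(x) + beta * x, adding only two trainable scalars per layer.
    Because f is nonlinear, alpha cannot simply be absorbed into f's weights.
    """
    return alpha * f(x) + beta * x
```

With `alpha = beta = 1` this reduces to the ordinary residual connection, so the variant strictly generalizes the baseline block.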
#### __Relevant Work__
Thank you for the references. DenseNet (Huang et al., 2018) connects every pair of layers in the network and hence in the vanilla version, all the activations need to be in memory. This is prohibitively expensive for deep LLMs and other modern transformers. When introducing dense-blocks, all previous activations within the block need to be visible to any given layer within the block; this requires refactoring the model architecture into dense blocks.
On the other hand, LAuReL requires minimal changes. In fact, in LAuReL-PA, which is the most similar to DenseNet, we make three design choices to achieve memory efficiency and performance. Firstly, each layer only looks at the past $k$ activations. For the above experiments, $k=3$ was sufficient. Secondly, we also propose using low-rank linear functions to further reduce memory usage due to activations. Thirdly, the LAuReL-PA variant uses learned scalars ($\gamma_{i}$, $\gamma_{i-1}$, …) to learn the weights of the previous activations (which we found to be crucial), whereas DenseNet assumes a simple sum of the previous activations.
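The three -PA design choices above can be sketched in a few lines. This is an illustrative sketch only, with hypothetical shapes: `A` and `B` stand for the low-rank factors, `gammas` for the learned scalar weights of the past activations.

```python
import numpy as np

def laurel_pa(f, activations, gammas, A, B):
    """LAuReL-PA sketch: residual augmented with the past k activations.

    activations: the last k+1 layer outputs, newest last (only k past
                 activations are kept, unlike DenseNet's all-pairs wiring)
    gammas:      learned scalar weights for the k previous activations
    A, B:        low-rank factors (d x r and r x d) of the linear map
                 applied to past activations, keeping memory cost low
    """
    x = activations[-1]
    past = sum(g * (A @ (B @ h)) for g, h in zip(gammas, activations[:-1]))
    return f(x) + x + past
```

Setting all `gammas` to zero recovers the plain residual block, so the past-activation term is a learned, optional correction rather than a fixed sum as in DenseNet.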
Additionally, as seen above, LAuReL-RW and -LR variants provide significant improvements over naive-scaling, and can be combined with the -PA method.
Identity Mappings in Deep Residual Networks (He et al., 2016): This paper introduces variants of residual connections with different types of 'gating', which look similar to the -RW variant, except that they use a much larger number of parameters for either the exclusive gating or the 1x1 conv gating ($D_{in} \times D_{out}$ params per layer), which is much more expensive than the 1-2 params per layer of the -RW variant. For the rest of the paper, the authors introduce and focus on the pre-activation residual connection, which places the activation functions such that it helps with optimization.
We will include these discussions in the revision.
We hope these address your concerns - thanks! | Summary: The paper introduces Learned Augmented Residual Layer (LAUREL), a novel enhancement to residual connections in CNNs and Transformers. LAUREL enriches the residual stream by incorporating learned scalar parameters and low-rank transformations, improving efficiency and expressivity.
Key Contributions:
1. Three variants (LAUREL-RW, LAUREL-LR, LAUREL-PA) balancing expressivity and efficiency.
2. Improves performance in vision (ResNet-50, ImageNet-1K) and language models (1B, 4B LLMs) with minimal parameter, latency, and memory overhead.
3. Matches naïve scaling accuracy on ImageNet-1K while using 2.6× fewer parameters.
4. Boosts reasoning, math, reading comprehension, translation, and multimodal tasks with negligible overhead.
Claims And Evidence: The claims presented in the submission are strongly supported by experimental evidence provided. The key claims of superior performance and efficiency are substantiated through well-structured experiments comparing variants of LAUREL against standard baseline models and naïve scaling. Results across ResNet-50 and LLMs convincingly support the authors' assertions about improved model efficiency.
Methods And Evaluation Criteria: The claims presented by the authors—namely, LAUREL’s efficiency in parameter utilization, computational latency, and memory overhead relative to naïve scaling—are convincingly supported by clear empirical evidence presented through systematic experiments. The authors conducted extensive experiments, comparing multiple LAUREL variants with standard baselines across tasks from image classification (ImageNet-1K) and various LLM evaluation benchmarks (MATH, GSM8K, BoolQ, MMLU, etc.), thus substantiating their claims effectively. No problematic claims were identified.
Theoretical Claims: The paper does not contain any explicit theoretical proofs that require verification. Instead, the contributions are algorithmic and empirical. Mathematical formulations provided (equations defining LAUREL variants) appear correct and straightforward.
Experimental Designs Or Analyses: The experimental design is sound and well-structured. The evaluation on ImageNet-1K and multiple well-established LLM benchmarks is methodologically rigorous. The authors used clear baselines, performed multiple trials to report statistical significance (mean and standard deviation), and conducted parameter sensitivity analyses. One minor area for deeper exploration could be a broader ablation on initialization schemes, but the current treatment is already thorough and sufficient for the current claims.
Supplementary Material: The paper does not explicitly reference supplementary material, nor does it appear to have separate supplementary files included in the provided document. Thus, no supplementary material was reviewed.
Relation To Broader Scientific Literature: The paper is appropriately situated within the broader scientific literature on deep learning efficiency, clearly citing key related work such as LoRA, AltUp, transformer block simplifications, model compression techniques, and distillation methods. The authors articulate how LAUREL differs and complements these approaches, providing a clear perspective of novelty and positioning their contribution clearly as a generalization of residual connections with lightweight augmentations.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The proposed method is conceptually straightforward yet impactful, providing clear theoretical motivation and intuitive interpretation.
2. Significant empirical evidence across multiple model architectures (ResNet and transformers) and domains (vision and NLP) convincingly demonstrates general applicability.
3. Clear, thorough, and insightful experimental analysis, including ablation studies and analysis of the trade-offs regarding rank (r) parameters.
Weaknesses:
1. The paper could provide deeper insights or theoretical justification into why specific variants perform better under certain conditions.
2. Practical considerations for very deep networks (e.g., scaling to 100B+ parameter models) could be discussed to better position the approach for current state-of-the-art LLM scale.
Overall, the strengths outweigh the minor weaknesses mentioned above.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your kind comments and valuable feedback!
#### __Justification into why specific variants perform better under certain conditions__
Deep networks generally do better with reasoning, math, coding etc. tasks. However, residual connections are crucial for such networks, and LAuReL helps augment these residual connections with learned components. So at a high-level we expect LAuReL to help with reasoning/math/coding tasks.
Specifically we have the following intuition for why the different variants work:
- LAuReL-RW: Helps with learning the importance of the residual input ($x_i$). This might not be as useful in the earlier layers, but would be important in later layers to tackle the vanishing gradient problem using a higher learned weight for the residual input.
- LAuReL-LR: Helps with allocating learning capacity for the linear part of the network (the residual input, i.e., $x_i$ in Figure 2), such that the main network can use its capacity towards learning better nonlinear functions ($f(x)$), while LAuReL contributes the linear components ($x_i + ABx_i$) to the residual stream.
- LAuReL-PA: A hybrid of -RW and -LR variants, where multiple previous activations are used in a learned weighted manner, along with learning linear functions on top of them. This allows layers accelerated access to previous activations, along with learning their relative importance.
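For concreteness, here is a minimal numpy sketch of the three update rules as described above. The transform `f`, the scalar/low-rank initializations, and the sizes are illustrative placeholders, normalization is omitted, and this is a sketch of the formulation, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, k = 64, 8, 3            # hidden size, low rank, past-activation window (illustrative)

def f(x):                     # stand-in for a layer's non-linear transform
    return np.tanh(x)

# LAuReL-RW: learned scalar weights on the residual stream,
# x_{i+1} = alpha * f(x_i) + beta * x_i
def laurel_rw(x, alpha=1.0, beta=1.0):
    return alpha * f(x) + beta * x

# LAuReL-LR: augment the identity with a low-rank linear map,
# x_{i+1} = f(x_i) + x_i + A B x_i, with A of shape (D, r) and B of shape (r, D)
A = rng.normal(0.0, 0.01, (D, r))
B = rng.normal(0.0, 0.01, (r, D))
def laurel_lr(x):
    return f(x) + x + A @ (B @ x)

# LAuReL-PA: additionally add a learned weighted sum of up to k previous activations
gammas = np.full(k, 1.0 / k)
def laurel_pa(x, past):       # past: list of previous activations
    return f(x) + x + sum(g * h for g, h in zip(gammas, past[-k:]))

x = rng.normal(size=D)
y = laurel_pa(laurel_lr(laurel_rw(x)), past=[x])
```

Setting the -RW scalars to `alpha=1, beta=0` recovers a pure layer output, while `alpha=beta=1` recovers the standard residual connection, which matches the intuition that the scalars learn the relative weights of the two components.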
Overall, LAuReL provides a general formulation for the residual connection, along with practical variants that can operate on the residual stream in different ways as highlighted above. LAuReL variants can be combined with each other, as seen in the ResNet experiments, as well as the small-scale ablations on an LLM reported in response to Reviewer gnJr.
Interestingly, LAuReL variants consistently show better performance than naively adding a layer on top of the baseline (ResNet experiments in Table 1, small-scale LLM ablations in response to Reviewer gnJr, and LLM-2 naive scaling experiments in response to Reviewer jdcS). This demonstrates that operating on the residual stream using LAuReL has a non-trivial impact on model convergence, which cannot be matched by adding a full additional layer whereas LAuReL variants add $\leq$ 0.1% params in these experiments.
#### __Practical considerations for very deep networks (e.g., scaling to 100B+ parameter models)__
While it is hard to experiment with 100B+ params, we can extrapolate from the ablations done in the paper as well as in response to Reviewer gnJr and Reviewer jdcS.
- The -RW variant is straightforward to try and does not have a tangible impact on latency and memory. We expect initialization of these scalars to also be important. For earlier layers, the initial weight given to the non-linear component can be higher.
- The -LR variant is cheap enough to also be included. We expect a slight increase in memory and latency footprint. $r$ should be scaled up as $D$ grows. In our experiments up to 4B params, the ideal ratio of $D / r$ was between 24--32.
- The -PA variant is also helpful if the network is very deep. Using $k={3, 4, …}$ was helpful in smaller-scale LLM experiments. When keeping a larger value of $k$, the accelerator memory usage should be monitored and appropriate rematerialization strategies should be employed. Although we do not see a large increase in memory usage with the -PA variant in the smaller-scale experiments.
If the above variants work well with some headroom available in latency and memory, a combination of RW+LR or RW+LR+PA can be tried.
We hope these address your concerns - thanks! | Summary: The paper introduces a new method for residual connections with an additional layer called Laurel. Their method involves introducing learnable parameters into the residual stream, which the authors argue might be too restrictive in its original form. The learnable parameters allow the authors to decide how much information might be incorporated from different parts of the residual stream.
The authors introduce three methods for Laurel. Laurel-RW is the simplest version that only introduces two learnable weights. Laurel-LR introduces a low-rank approximation of the residual stream. Laurel-PA applies the same approach as LR except over activations from prior layers. The authors test variants that apply these methods in tandem. The authors show that their method allows for better scaling: it adds fewer parameters than naive scaling while yielding a larger quality increase.
Claims And Evidence: In general, authors report consistent improvements with Laurel in comparison to naive scaling of their base architectures. But, in general, I have the following concerns:
* In most settings, Laurel seems best with Laurel-RW+LR or Laurel-RW+LR+PA settings. In general, using the low-rank approach seems tricky to me because you now have a new hyperparameter to tune. The authors generally address this with Figure 3 -- but I remain concerned with the trend of selecting the best r. The Figure doesn't show consistent results on when the best r is actually applicable. In general, the results also seem sensitive to r if I read Figure 3 correctly.
* I found the description in Section 4.4 for LLMs to be vague. For instance, I think the argument with ResNets is convincing where the authors show that the addition of a single layer compared to Laurel saves 2.6x parameters. Could the authors provide the same numbers for language models? If I add an additional layer to language model i.e. their 1B/4B model, how many parameters am I adding? The authors should discuss depth/parameter scaling in LLMs as well. It would be useful to have these upper bound memory costs in the table if possible.
* Under what setting would I use Laurel-PA? It seems to add additional overhead for using sharding, etc.
* The results of Laurel in Table 2/3 are really confusing to me. See questions. Overall, could error bars be added to the results to make significance more obvious? Beyond this, do the authors have any intuition on why certain baselines improve significantly while others stay relatively similar?
Methods And Evaluation Criteria: Yes; Authors use traditional computer vision and NLP benchmarks for their evaluation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: * If the authors' aim is to have a parameter-efficient scaling technique, why not compare with techniques that allow for scaling, like parameter sharing or methods that save parameters over transformer blocks? I'm curious to see how much these methods save in comparison to Laurel.
Supplementary Material: No supplementary material was made available.
Relation To Broader Scientific Literature: The authors compare against other parameter efficient scaling methods, highlighting that theirs is parameter efficient.
Essential References Not Discussed: The authors may want to reference the many modifications that have been introduced to residual connections that overlap with their work in the related work. I think a section referring to HighwayNets or ResidualGates is needed. Many similar ideas have been proposed and Laurel should discuss and cite them. Space is available so I recommend an additional related work subsection.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: didn't find any typos
Questions For Authors: * What do green percentages mean in Table 2/3? I found this very confusing since the percentages didn't match improvements and some were highlighted green. This isn't explained in the text as far as I can tell although I may be missing something.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
#### __Tuning $r$__
Since $r$ is the rank of $A$, $B^T$, we expect $r \ll D$. Indeed, if $D = 512, 768, 1024, …$, this leaves a small range of discrete values for $r$ (unlike hyperparameters such as learning rate and weight decay, which can take continuous values). In our experience $r \in \{32, 48, 64\}$ works well for LLMs.
#### __LLM Depth / Parameter Scaling with Error Bars:__
Similar to the comparison with naive depth-scaling with ResNet (Sec 3.1), we ran scaling experiments with LLM-2. Originally, both baseline/LAuReL had 40 layers. Adding the 41st layer required turning off a minor architectural change (which enforced the number of layers to be divisible by 2). To be fair, we re-ran the baseline and LAuReL.
We present the number of params and average training step times.
| Model | Params(B) | Avg. Step (sec)|
|--------------:|:---------|:---------------|
| Baseline (40 layers) | 4.40 | 1.65 |
| Baseline$^{+1}$ (41 layers) | 4.56 (+3.63%) | 1.68 (+1.81%) |
| LAuReL | 4.44 (+0.1%) | 1.69 (+2.42%) |
Note LAuReL adds only 0.1% parameters and incurs a step time penalty of 2.42%. On latency, it is slightly above Baseline$^{+1}$. This is because LAuReL is invoked for each layer, but the per-layer invocation cost is small.
The table below shows the downstream quality of the baselines and LAuReL.
| Model | Math | MGSM | MMLU | Belebele | BookQA | WMT23 | MMMU | Coco-Cap | DocVQA | TextVQA |
|:------------------------------------------------------------------------------------|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:-----------------:|
| Baseline (40 layers) 4.40B Params | 14.20 ±0.88 | 20.29 ±3.16 | 48.83 ±0.81 | 57.92 ±3.42 | 47.11 ±4.06 | 67.72 ±0.20 | 33.77 ±3.11 | 97.29 ±4.41 | 66.87 ±2.67 | 60.86 ±2.86 |
| Baseline$^{+1}$ (41 layers) 4.56B Params | 14.50 ±0.9 | 20.29 ±3.15 | 49.10 ±0.82 | 59.30 ±3.34 | 42.77 ±4.15 | **67.74** ±0.21 | 35.33 ±3.12 | 98.50 ±3.53 | 66.18 ±2.76 | 60.23 ±2.87 |
| Laurel (40 layers) 4.44B Params | **15.11** ±1.01 | **23.12** ±3.51 | **50.32** ±0.82 | **62.65** ±3.15 | **57.22** ±3.81 | 67.71 ±0.19 | **37.57** ±3.10 | **99.27** ±5.03 | **66.92** ±2.65 | **63.15** ±2.82 |
| _Laurel %Change wrt Baseline_ | _(+6.48%)_ | _(+13.94%)_ | _(+3.05%)_ | _(+8.16%)_ | _(+21.46%)_ | _(-0.02%)_ | _(+11.25%)_ | _(+2.03%)_ | _(+0.07%)_ | _(+3.76%)_ |
LAuReL wins on all tasks except WMT23; it is more parameter-efficient than naive-scaling.
#### __When to use Laurel-PA?__
LAuReL-PA works well for deep networks where there is a risk of vanishing gradients. A suitable rematerialization and sharding, given the extra activations in memory, would help; the new ablation results (see Reviewer gnJr) are positive. We expect LAuReL-PA to do better than works such as DenseNet (Huang et al. '16), where the number of activations in-memory is quadratic.
#### __Comparison with Parameter Sharing__
Techniques like param-sharing are complementary, since they can be applied in conjunction with LAuReL.
#### __Comparison with Highway Nets/Residual Gates__
Highway Nets (Srivastava et al., 2015) is similar to LAuReL-RW, except that the Transform Gate requires $D^2$ (weight matrix) + $D$ (bias) params, in addition to the latency incurred by a full-rank matmul. But we use 1-2 scalars per LAuReL-RW layer, with no significant latency impact. Similarly, Residual Gates (Savarese et al., 2017) are also similar to LAuReL-RW, except that they use ReLU as the gating function. However, LAuReL is more general, including variants like -LR and -PA, which can be combined.
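The per-layer parameter comparison above can be made concrete with a toy count (the hidden size here is hypothetical; the formulas follow the counts stated in the reply):

```python
# Per-layer parameter cost of a Highway-style transform gate vs. the -RW scalars:
# a D x D weight matrix plus a D-dim bias, vs. 1-2 learned scalars.
def highway_gate_params(d):
    return d * d + d

def laurel_rw_params(num_scalars=2):
    return num_scalars

d = 1024                              # hypothetical hidden size
gate = highway_gate_params(d)         # 1,049,600 params per layer
rw = laurel_rw_params()               # 2 params per layer
```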
#### __Green Percentages in Table 2 / 3__
Apologies. Green and bold font indicated statistically significant improvements for ResNet and LLM-2. The percentages for a couple of tasks (TyDiQA, MGSM, etc.) had a typo, but the absolute values were correct.
#### __Intuition for asymmetrical improvement in tasks__
Deep networks generally do better with reasoning/math/coding. However, residual connections are crucial and LAuReL helps augment them with learned components. So we expect LAuReL to improve such tasks.
However, for certain other tasks the network might be bottlenecked on the number of parameters in the MLP layers. Thus it is hard to pinpoint why a particular task improves more than others with LAuReL.
We hope these address your concerns and you can reconsider the assessment/score - thanks!
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and the results in comparison to DenseNet are quite exciting as well as additional results reported here. I believe this addresses most of my concerns and I would recommend these results are added to the paper in the main paper or some appendix sections.
I just had one clarification to ask just to make sure I understand the results. When you report *Laurel %Change wrt Baseline*, what is this actually measuring? Is it a combination of accuracy and the number of parameters saved? I'm still a little confused where these numbers come from although I may have missed something in the paper.
One other remaining question I have is about scale. Do the authors have any intuitions about how Laurel will operate at larger scales? Of course the assumption is that at larger model sizes, the savings will be larger since adding additional layers may be expensive. But I'm curious whether Laurel will have the same model improvements. Will Laurel have any benefit in scaling models as well?
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to go through our response carefully - glad that it addressed most of your concerns! As you suggest, we will add these new numbers in the revision (since we cannot update the pdf now).
#### __New results__
Apologies for the confusion caused by brevity; please allow us to clarify.
The results reported above are from new pre-training runs that we started to verify the scalability of LAuReL with LLMs as requested. They are similar to the results from LLM-2 (Table 3, Section 3.2.2), except that we had to re-run the baseline and LAuReL runs (both with 40 layers). This was because starting a naively-scaled baseline with an additional layer (i.e., with 40 + 1 = 41 layers) violated an implementation assumption, which expected the number of layers to be even. Thus, we had to disable this implementation assumption, re-run the baseline (to allow for a fair comparison), the naively-scaled baseline, and LAuReL.
Regarding the metrics, the last row (‘LAuReL % Change w.r.t. Baseline’) in Table 2 reports the percentage improvement / regression on a downstream task achieved by LAuReL when compared to the Baseline. For example, in the ‘BookQA’ task, the baseline model scores 47.11, whereas LAuReL scores 57.22. This is an improvement of +21.46\% (i.e., 100 * ((57.22 / 47.11) - 1) = 21.46%). Similarly, on the MGSM task, the baseline model scores 20.29 while the model with LAuReL scores 23.12. This is an improvement of +13.94\%. Sorry about the non-descriptive label; we will clarify this in the revision.
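The percentage computation spelled out above is plain relative change; a one-line check with the BookQA numbers from the table:

```python
def pct_change(baseline, laurel):
    # % improvement of LAuReL over the baseline: 100 * ((laurel / baseline) - 1)
    return 100 * (laurel / baseline - 1)

bookqa = pct_change(47.11, 57.22)     # baseline 47.11 -> LAuReL 57.22, about +21.46%
```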
In terms of parameters, as seen in Table 1, LAuReL has +0.1\% more parameters than the baseline and +2.42\% more step-time latency (forward + backward pass) than the baseline. However, not only does LAuReL outperform the baseline, it also does significantly better than the naively scaled baseline which had one extra layer (referred to as Baseline$^{+1}$ with 41 layers in Tables 1 & 2) on almost all tasks except WMT23, while using a fraction of the additional params of the naively scaled baseline.
#### __Laurel at larger scales__
That’s a good question! As you pointed out we expect that with deeper networks naive scaling to be less useful, while the residual connection will become even more important. Therefore, we expect LAuReL to continue to perform an important role in augmenting the residual connection, and hence improving model quality.
For instance, we expect the -RW variant to learn that the weights of the linear component ($x$) and the non-linear component ($f(x)$) should vary across the layers, the -LR variant to free up more capacity for learning richer nonlinear functions ($f(.)$), and the -PA variant to further help with the vanishing gradient problem. For the -LR variant, we expect $r$ to be scaled sub-linearly as $D$ is scaled, since $r \ll D$.
We hope that we satisfactorily addressed your remaining concerns, and you can reconsider your score / assessment. Thank you for your time! | Summary: The paper introduces LAUREL (Learned Augmented Residual Layer), a new generalization of residual connections that can replace standard skip connections in neural networks. LAUREL outperforms traditional residual connections in both model quality and efficiency across vision and language tasks. When tested on ImageNet-1K, LAUREL achieved the same improvements as adding an entire extra layer while using 2.6× fewer parameters. In large language model experiments with 1B and 4B parameter models, LAUREL improved performance on downstream tasks by 2.54% to 20.05% while adding only 0.012% and 0.1% additional parameters respectively.
Claims And Evidence: To support the claim, the authors conduct experiments on ResNet-50 on ImageNet-1K, and LLMs.
Methods And Evaluation Criteria: The proposed methods make sense for the problem or application at hand. However, the authors only shows theoretical extra memory, and latency incurred for each LAUREL. Do the authors have practical numbers?
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: The experimental designs are reasonable, but why conduct experiments only with ResNet on ImageNet? Why not use ViT? ViT is very popular in computer vision.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper is related to DenseNet, but it does not discuss DenseNet-like works.
Essential References Not Discussed: This paper is related to DenseNet, but it does not discuss DenseNet-like works.
Other Strengths And Weaknesses: Strengths:
1. Figure 2 clearly shows the idea of the paper and it is easy to follow.
2. Experiments on ResNet and LLMs demonstrate the effectiveness of the proposed paper.
Weaknesses:
1. The paper does not report the practical memory usage and latency compared with other methods, especially on GPUs.
2. Lack of detailed ablation study for the design choices. How do different design choices influence practical memory usage and latency?
3. Lack of discussion of previous related methods, like DenseNet. In related work, the authors discuss Architectural Changes, Compression Techniques and Learning Techniques. Actually, they are not very related to this paper. It is necessary to discuss the relationship between this work and other DenseNet-like works.
Other Comments Or Suggestions: Please see weaknesses.
Questions For Authors: Please see weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
#### __Ablation study with practical footprint metrics__
Thank you for suggesting this experiment. For the purpose of comparing different LAuReL variants on the LLM pre-training task, we set up the following baseline. We pre-trained on the C4 corpus with $\approx$ 10B tokens. We used a $4 \times 4$ Google Cloud TPU v6e topology, but we expect similar results with a comparable GPU setup.
In order to simplify the comparison across many ablations (and also to avoid the noise in downstream evals at 10B token scale), we report model performance using the test loss, which is a good proxy for downstream model quality.
We train our main baseline with 24 layers and 157.2M params, along with a larger baseline with 28 layers for comparison. We run all the LAuReL variants (RW, LR, PA) on top of the regular baseline with 24 layers. We also run two combinations of variants (RW+LR, RW+LR+PA). We report the number of params, test loss, peak memory as reported by profiling tools, and average step time. Lower is better for all metrics. The table below shows the results; note that all LAuReL experiments use L=24.
| Variant | Params(M) | Test Loss | Peak Mem(GB) | Avg. Step(sec) |
|-------------------------:|:---------:|:---------:|:-----------:|:--------------:|
| Baseline (L=24) | 157.20 | 3.0159 | 11.65 | 0.095 |
| Baseline-Large (L=28) | 179.23 | 2.9963 | 13.23 | 0.105 |
| LAuReL-RW | 157.20 | 2.9557 | 11.93 | 0.095 |
| LAuReL-LR | 158.40 | 2.9624 | 12.29 | 0.098 |
| LAuReL-PA | 157.22 | 2.9512 | 12.55 | 0.100 |
| LAuReL-RW+LR | 158.40 | 2.9531 | 12.57 | 0.099 |
| LAuReL-RW+LR+PA | 160.83 | 2.9499 | 12.90 | 0.104 |
All LAuReL variants perform better than the large baseline in terms of the test loss while using much fewer parameters, lower peak memory, and lower average step time. Given the above tradeoffs in loss, memory, step time, etc. we recommend trying the LAuReL variants in this order: RW $\rightarrow$ LR $\rightarrow$ PA / RW+LR $\rightarrow$ RW+LR+PA. We will add these numbers in the revision.
We found that r={32, 48} work well for LAuReL-LR, and k={3, 4} works well for LAuReL-PA. We went with r=32, and k=3 respectively.
#### __Relationship to DenseNet__
Thank you for the reference. DenseNet connects every pair of layers in the network and hence in the vanilla version, all the activations need to be in memory. This is prohibitively expensive for deep LLMs and other modern transformers. When introducing dense-blocks, all previous activations within the block need to be visible to any given layer within the block; this requires refactoring the model architecture into dense blocks.
On the other hand, LAuReL requires minimal changes. In fact, in LAuReL-PA, which is the most similar to DenseNet, we make three design choices to achieve memory efficiency and performance. Firstly, each layer only looks at the past $k$ activations. For the above experiments, $k=3$ was sufficient. Secondly, we also propose using low-rank linear functions to further reduce memory usage due to activations. Thirdly, the LAuReL-PA variant uses learned scalars ($\gamma_{i}$, $\gamma_{i-1}$, …) to learn the weights of the previous activations (which we found to be crucial), whereas DenseNet assumes a simple sum of the previous activations.
Additionally, as seen above, LAuReL-RW and -LR variants provide significant improvements over naive-scaling, and can be combined with the -PA method.
We will include these discussions in the revision.
#### __ViT__
Unfortunately these experiments are still in-progress at the time of the rebuttal deadline; we will add them as soon as the runs finish. However, we expect LAuReL to provide improvements on top of ViT baselines, given LAuReL has shown improvements on ResNet as well as three LLM baselines (LLM-1, LLM-2, and the new small LLM baseline mentioned above). Note that the latter three are transformer-networks, very much like ViT.
We hope we have addressed your concerns and questions and we hope you can reconsider your assessment/score. Thank you. | null | null | null | null | null | null |
N2GON: Neural Networks for Graph-of-Net with Position Awareness | Accept (poster) | Summary: This paper introduces Graph-of-Net (GON), a novel graph structure where each node is itself a graph, enabling multi-level modeling of complex systems that involve hierarchical relationships. Examples include biological networks (e.g., protein-protein interactions, where individual proteins are represented as graphs within a larger network) and citation networks (where papers, modeled as text graphs, are interconnected). The authors propose N2GON, a position-aware neural network designed to learn node representations in GONs by jointly modeling intra-graph (within-node) and inter-graph (between-node) connections.
Claims And Evidence: The claims made in the paper are generally supported by clear evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are largely appropriate for the problem of multi-level graph learning, with strengths in architectural design and empirical breadth.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper generally appear to be sound and robust, supporting the claims made about the Graph-of-Net (GoN) model.
Supplementary Material: I have reviewed all of the supplementary material.
Relation To Broader Scientific Literature: 1. The key contributions of the paper, namely the introduction of the Graph-of-Net (GoN) structure and the position-aware neural network model, build upon several foundational ideas and methods from the broader graph learning and neural network literature. The paper situates itself within a well-established body of work, while introducing novel elements that extend existing techniques.
2. The idea of nodes being graphs themselves shares conceptual similarities with hypergraphs and heterogeneous graphs. In these graph structures, edges (or hyperedges) can connect more than two nodes, and the nodes can represent different types of entities or relationships. GoN extends this idea by introducing a more general notion where nodes are not just connected through edges but are, in fact, entire graphs with their own structure.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: **Strengths:**
1. The paper is well-written and easy to understand.
2. The proposed Graph-of-Net (GoN) structure is an innovative contribution. By representing each node as a graph, GoN enables multi-level representations of individual nodes.
3. The "position-aware neural network" mechanism introduced in the paper is a meaningful enhancement to model capabilities. This mechanism allows the model to not only focus on node features but also process interactions and dependencies between nodes.
4. The paper provides thorough experimental validation on multiple datasets, covering a range of domains such as social networks, citation networks, and biomedical networks. The experimental results are impressive.
**Weaknesses:**
1. In the introduction, the definition of Graph-of-Net (GoN) is somewhat vague. Although a general conceptual understanding is provided, a more precise mathematical definition may be needed. For instance, the definition mentions that "each node is itself a graph," but the theoretical aspects of how the "graph" size, structure, and features are defined might not be sufficiently clear. Further clarification of how GoN differs from traditional graphs, especially in terms of modeling multi-dimensional relationships in hierarchical node structures, would be helpful.
2. In some formulas (e.g., the PPR algorithm), the derivation process could benefit from more detailed explanations. For new readers, the PPR algorithm may not be immediately familiar, and providing a more comprehensive derivation would aid in understanding.
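As a possible aid to readers unfamiliar with PPR, a minimal power-iteration sketch of personalized PageRank ($\pi = \alpha e + (1-\alpha) P^{\top}\pi$, with row-stochastic $P$ and one-hot seed $e$) could look like the following; the toy graph and teleport probability are illustrative, not the paper's actual settings.

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power iteration for PPR: pi = alpha * e + (1 - alpha) * P^T pi,
    where P is the row-normalized adjacency and e is one-hot at the seed node."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    e = np.zeros(n)
    e[seed] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1 - alpha) * P.T @ pi
    return pi

# Toy 3-node chain 0-1-2; probability mass should concentrate near the seed node 0.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
pi = personalized_pagerank(adj, seed=0)
```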
Other Comments Or Suggestions: 1. The explanation of the GoN concept could be further strengthened, especially with regard to how the complexity of "graphs within nodes" is handled during the modeling process. For example, it would be helpful to clarify how GoN manages the hierarchical structures within the graphs of nodes.
Questions For Authors: Refer to the "Other Strengths and Weaknesses" part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: **Q1. The Introduction's definition of Graph-of-Net (GON) is vague and needs a precise mathematical formulation of each node-as-graph (size, structure, features) and clarification of how it differs from traditional graphs in modeling hierarchical, multi-dimensional relationships.**
>R1: Thank you for your insightful comments. In our framework, GON is designed as a hierarchical graph where each node is not a single data point but an entire graph. Formally, let the top-level graph (network) be defined as $\mathcal{G}^N = ( \mathcal{V}_G, \mathcal{E}_G )$, where $\mathcal{V}_G$ is the set of graph nodes and $\mathcal{E}_G$ denotes the edge set. For each node $v_G \in \mathcal{V}_G$, we associate a graph: $\mathsf{G}_v = (V, E, X)$. Here, $V$ represents the nodes within the graph, $E$ is the set of edges among these nodes, and $X$ is the feature matrix corresponding to the graph nodes. The size of each graph is determined by $|V|$, which depends on the inherent structure or the domain-specific construction of the graph. The graph structure, represented by $E$, captures the internal relationships among the nodes in the graph. This structure may vary depending on the level of detail or domain-specific insights desired. Each graph is equipped with a feature representation $X$ that encodes the characteristics of the nodes in $V$. The formation of these features can be based on raw data attributes or results from a prior processing step.
>
>Traditional graph models represent data as a general graph where each node corresponds to an atomic data point. In contrast, GON captures multi-dimensional relationships by explicitly modeling two levels of interaction. This dual-level representation allows GON to be particularly effective for complex hierarchical data, as it provides the capacity to model nested relationships and capture both local and global patterns within the data.
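To make the two-level formalism concrete, here is a minimal data-structure sketch of $\mathcal{G}^N = (\mathcal{V}_G, \mathcal{E}_G)$ with per-node graphs $\mathsf{G}_v = (V, E, X)$ (the class and field names are our own illustration, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class NodeGraph:
    # lower-level graph G_v = (V, E, X) attached to one top-level node
    nodes: list       # V: node identifiers
    edges: list       # E: (u, v) pairs among `nodes`
    features: dict    # X: node id -> feature vector

@dataclass
class GraphOfNet:
    # top-level network G^N = (V_G, E_G), where each node carries a graph
    node_graphs: dict = field(default_factory=dict)  # v_G -> NodeGraph
    edges: list = field(default_factory=list)        # E_G: (u_G, v_G) pairs

# toy instance: two papers, each represented as its own small graph
gon = GraphOfNet()
gon.node_graphs["paper_1"] = NodeGraph(nodes=[0, 1], edges=[(0, 1)],
                                       features={0: [0.1], 1: [0.4]})
gon.node_graphs["paper_2"] = NodeGraph(nodes=[0], edges=[], features={0: [0.9]})
gon.edges.append(("paper_1", "paper_2"))
```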
**Q2. In some formulas (e.g., the PPR algorithm), the derivation process could benefit from more detailed explanations. Providing a more comprehensive derivation would aid in understanding.**
>R2: The Personalized PageRank algorithm provides a way to measure how “close” or “important” one node is relative to another within a network. Imagine a random walker who starts at a specific node in the network. Instead of wandering the network entirely at random, the walker follows a rule: at each step, they decide either to move to one of the neighboring nodes or to jump back to the starting node. This jump-back mechanism ensures that the influence of the starting node remains strong throughout the walk.
>
>What makes PPR particularly useful is that it considers not only the direct connections between nodes but also the broader network structure. In simple terms, it captures the idea that even if two nodes share the same label or initial property, they can have varying levels of relatedness depending on how they are connected within the network.
>
>By adopting this method, our approach goes beyond simply saying, "nodes with the same label are similar." Instead, we are able to gauge the subtle nuances in how closely nodes are related based on both their labels and their positions within the graph structure. This allows our model to better capture complex relationships and provides a more refined way of measuring similarity among node graphs.
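As a concrete illustration of the random-walk-with-restart intuition described above, a simplified power-iteration sketch of Personalized PageRank might look like this (pure Python, for exposition only; `alpha` is the restart probability of jumping back to the source node):

```python
def personalized_pagerank(adj, source, alpha=0.15, iters=100):
    # adj: dict mapping each node to a list of its out-neighbors
    nodes = list(adj)
    pi = {v: 1.0 if v == source else 0.0 for v in nodes}
    for _ in range(iters):
        # with probability alpha the walker restarts at `source`
        new = {v: (alpha if v == source else 0.0) for v in nodes}
        for u in nodes:
            if not adj[u]:
                continue  # dangling node: its mass is dropped in this sketch
            share = (1.0 - alpha) * pi[u] / len(adj[u])
            for v in adj[u]:
                new[v] += share
        pi = new
    return pi
```

On a path graph `{0: [1], 1: [0, 2], 2: [1]}` with source 0, node 0 receives a higher score than node 2, reflecting its closer position to the source even though both are endpoints.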
**Q3. It would be helpful to clarify how GON manages the hierarchical structures within the graphs of nodes.**
>R3: Thank you for the valuable feedback. In a Graph-of-Net (GON), each high-level node represents an entire graph that can have its own structure and detailed relationships. Instead of treating every graph as a monolithic object, we decompose the problem into two levels: 1) At the lower-level, we focus on extracting meaningful representations from the individual graphs. We utilize graph neural networks to process each graph, effectively summarizing its properties into a fixed-size embedding. 2) Once each graph has been transformed into a representation, these embeddings serve as the nodes for the higher-level graph. The interrelations among these nodes are then modeled, combining both the abstract representation of the graphs and the positional information within the larger network. The two-stage process helps us manage the inherent complexity of graphs that reside within nodes. We will add above description in subsequent revisions of the paper.
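A minimal sketch of this two-stage process, with simple mean pooling standing in for the paper's graph neural network encoder (which we do not reproduce here):

```python
def mean_pool(node_graph_features):
    # stage 1: summarize one node-graph into a fixed-size embedding
    # (a stand-in for the GNN encoder used in the paper)
    vecs = list(node_graph_features.values())
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def build_top_level_features(node_graphs):
    # stage 2: the embeddings become node features of the higher-level graph
    return {v: mean_pool(feats) for v, feats in node_graphs.items()}

top = build_top_level_features({
    "g1": {0: [1.0, 0.0], 1: [0.0, 1.0]},
    "g2": {0: [2.0, 2.0]},
})
```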
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed reply. My concerns have been largely addressed, therefore I'd like to raise my score. Please include the discussions of relevant content in the new version of this paper.
---
Reply to Comment 1.1.1:
Comment: We are pleased to have addressed the reviewers' concerns and are grateful for your recognition of our work. We will incorporate the above discussion into the revised version of the manuscript.
---
Summary: The paper introduces a novel framework called Graph-of-Net (GON), which extends traditional graph structures by modeling each node as a graph itself, creating a multi-level perspective on relationships between objects. This approach enables the capture of both the internal structure of individual nodes and the broader network of dependencies. To learn node representations within GON, the paper proposes a position-aware neural network model, which processes both intra-graph and inter-graph connections. The model incorporates dual encoders and graph constructors to build and refine a constraint network, where nodes are adaptively arranged based on their positions, as determined by the network's constraint system.
Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The novelty of the Graph-of-Net (GON) structure is justified through the discussion and examples from general networks and biological systems. The position-aware neural network model is supported by the detailed methodology, including the dual encoders and graph constructors, which capture intra- and inter-graph relationships. The effectiveness of GON is empirically validated through extensive experiments on 16 network datasets, outperforming state-of-the-art methods.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I examined the soundness and validity of the experimental design and analyses, and overall, they are largely systematic.
Supplementary Material: Yes, I have reviewed the appendices, including Appendices A, B, and C.
Relation To Broader Scientific Literature: The paper successfully bridges gaps in hierarchical graph representation learning. Its innovations—particularly the GON structure and PPR constraints—address challenges in modeling multi-scale systems, positioning it as a meaningful contribution to the broader graph learning literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
1. The concept of treating each node as a graph is interesting. The GON structure extends traditional graph nodes into subgraphs (Graph-as-Node), enabling a shift from single-level (node-edge) modeling to multi-level modeling (subgraph internal structure + global network topology). This design is important in biological networks (such as protein-protein interactions).
2. The paper is well-written and easy to follow.
3. The experimental results are promising. The datasets are comprehensive and well-executed, covering a variety of domains including social networks, citation networks, and biomedical networks.
Weaknesses
1. The term "Position Awareness" is not strictly defined in the paper. A formal definition could be added for clarity.
2. The distinction between GON and existing hierarchical graph structures (e.g., Hierarchical Graph Networks, Hypergraphs) is not clearly quantified.
Other Comments Or Suggestions: 1. In the description below Equation (1) in the algorithm section, "funcitons" should be corrected to "functions."
2. The similarity function $\text{sim}(\cdot, \cdot)$ in Equation (6) lacks a clear explanation of its basis (e.g., if using cosine similarity, the vector normalization step should be specified).
3. "node graph" → It is recommended to standardize this term as "node-graph" (with a hyphen).
4. The paper would benefit from additional discussion on the future research directions. For example, investigating how GON could be integrated with other deep learning techniques (e.g., reinforcement learning, meta-learning) to adapt to new or evolving graphs could open up interesting lines of inquiry.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **Q1. The term "Position Awareness" is not strictly defined in the paper. A formal definition could be added for clarity.**
>R1: Thank you for highlighting the need for a more formal definition of the term "Position Awareness." In our work, we use "Position Awareness" to refer to the model’s ability to capture and integrate the relative placement and connectivity of nodes within the overall network structure. Position Awareness is the property of a network model whereby each node's representation explicitly encodes its structural context within the graph. This encoding captures not only the node’s intrinsic features (e.g., labels or attributes) but also its topological characteristics—such as how it is connected to other nodes, and its overall influence in the network. In our framework, this is operationalized by leveraging a scoring mechanism (i.e., via the Personalized PageRank algorithm) that quantifies the relative importance or influence of nodes.
>
>By assigning a score that measures the likelihood of reaching one node from another through random walks with restarts, our approach formalizes a node’s “position” within the network. This score effectively differentiates nodes that might share similar labels but occupy distinct roles depending on their connectivity pattern and centrality in the graph. These scores are then integrated into the node representations, ensuring that position-dependent information contributes to the final embedding. As a result, the model is better equipped to capture nuanced relationships that go beyond simple label similarity.
>
>We will include this formal definition in the revised manuscript to enhance the clarity and rigor of our contribution.
**Q2. The distinction between GON and existing hierarchical graph structures (e.g., Hierarchical Graph Networks, Hypergraphs) is not clearly quantified.**
>R2: Thank you for your comment. We distinguish our Graph-of-Net (GON) framework from other hierarchical graph structures as follows: 1) **Explicit Two-Level Representation:** In GON, each node is a complete graph that retains its full internal structure. This is different from many hierarchical graph networks, which typically process standard graphs where the nodes are represented as vectors rather than graphs, potentially losing fine-grained structural details. 2) **Preservation of Intra-Node Complexity:** Instead of merging all details into a single representation, GON processes each graph independently (using specialized graph neural networks) and then integrates these detailed embeddings into the higher-level network. This two-stage approach effectively captures both local (intra-node) and global (inter-node) relationships.
We will add these distinctions in the revised manuscript.
**Q3. In the description below Equation (1) in the algorithm section, "funcitons" should be corrected to "functions."**
>R3: Thank you for catching the typo. We appreciate your attention to detail, and we will correct it in the revised manuscript.
**Q4. The similarity function $\text{sim}(\cdot, \cdot)$ in Equation (6) lacks a clear explanation of its basis (e.g., if using cosine similarity, the vector normalization step should be specified).**
>R4: Thank you for your comment. We clarify that in Equation (6), we directly compute the cosine similarity on the final representations without any additional normalization. We will update the manuscript to clearly state this.
**Q5. "node graph" → It is recommended to standardize this term as "node-graph" (with a hyphen).**
>R5: Thank you for your comment. We appreciate your suggestion to standardize the term. We will update the manuscript to use "node-graph" consistently.
**Q6. The paper would benefit from additional discussion on the future research directions. For example, investigating how GON could be integrated with other deep learning techniques (e.g., reinforcement learning, meta-learning) to adapt to new or evolving graphs could open up interesting lines of inquiry.**
>R6: Thank you for the insightful suggestion. We plan to explore several integration approaches with GON in our future work. For example, integrating reinforcement learning could allow for dynamic graph adaptation, where RL agents iteratively rewire inter-graph connectivity in applications like drug discovery and evolving recommender systems. Additionally, we are considering meta-learning approaches to pre-train GON encoders on diverse tasks—enabling rapid adaptation to new scenarios, especially when encountering limited labeled data. Moreover, we can extend GONs to handle temporal dynamics by incorporating architectures for gradually updating both intra- and inter-graph connections in evolving systems. We believe these strategies will further enhance GON’s flexibility and robustness across various challenging, real-world applications. | Summary: The paper N2GON presents a new approach to graph learning, with a focus on the Graph-of-Net structure and a position-aware neural network model. The comprehensive experimental evaluation and detailed methodology are significant strengths. However, the paper could be further improved by including runtime comparisons, expanding the related work section, and correcting some grammatical errors.
Claims And Evidence: Yes, the claims made in the paper are well-supported by the evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria align with the problem's core challenges.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound.
Supplementary Material: I reviewed all the supplementary material provided with the paper.
Relation To Broader Scientific Literature: The key contribution of this paper extends traditional graph models by representing each node as a graph itself, enabling a more sophisticated multi-level representation, GON. It builds on prior work in graph learning, which has been used to model complex relationships in social and biological networks. However, existing graph models do not fully capture the hierarchical dependencies present in real-world systems. GON provides a more flexible and general framework, making it a valuable work in graph representation learning.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Pros
1. I appreciate the motivation behind this paper. The concept of representing each node as a graph within a larger network is innovative and extends traditional graph structures.
2. The proposed algorithm integrates both intra-graph and inter-graph connections, and incorporating PPR to capture the relative position of node graphs is interesting.
3. The experimental results also seem to validate the effectiveness of the proposed model.
Cons
1. Although the paper provides a complexity analysis in the appendix, it lacks experimental comparisons in terms of runtime performance. Including an intuitive comparison of computational efficiency could enhance the paper.
2. The related work section is not comprehensive enough. More discussion on position-awareness in graph learning should be included.
3. In the experimental section, although multiple datasets are mentioned, the paper does not provide sufficient details on their specific characteristics (e.g., the number of nodes, edges, and class distribution).
Other Comments Or Suggestions: 1. It would be helpful to expand the discussion on real-world applications, particularly in complex domains like drug discovery, bioinformatics, or social networks. While the paper covers some biomedical datasets, offering a broader perspective on potential use cases (including possible limitations in these domains) would give readers a more complete understanding of the impact of GON.
2. There are some grammatical errors and typos in the paper:
Line 23: "an" should be changed to "a".
Line 60 (left column): "applicable" should be removed.
Line 123 (right column): "are" should be "is".
Line 155 (right column): "possess" should be in plural form.
Line 267 (left column): "usally" should be corrected to "usually".
Line 261 (right column): "is" should be "are".
Questions For Authors: In addition to the weaknesses mentioned earlier, I have the following specific questions:
> 1. In Algorithm 1, the phrase "Sample all node graphs" seems contradictory. "Sample all" suggests sampling the entire set, whereas "Sample mini-batch" would indicate a subset. Could you clarify whether full-batch training is used or update the terminology accordingly?
> 2. The paper does not specify the weighting coefficient between the constraint loss \( L_{\text{con}} \) and the NLL loss. How is this coefficient set and adjusted in the algorithm?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: **Q1. It is recommended to add runtime comparisons.**
>R1: Thank you for your comment. We conducted runtime experiments; the results (average elapsed time per epoch), summarized in the tables below, indicate that on benchmark graphs our runtime is comparable with that of SOTA baselines, while on biomedical datasets our method is significantly more efficient than traditional algorithms.
*Table I. Running Time per epoch on Benchmark Graph Datasets (in seconds)*
| Algorithms | Cora | Citeseer | PubMed | Cornell | Texas | Actor | Squirrel | Chameleon |
| :---: | :---: | :-----: | :----: | :-----: | :---: | :---: | :------: | :-------: |
| Ours | 0.055 | 0.062 | 0.239 | 0.039 | 0.043 | 0.09 | 0.071 | 0.055 |
| H2GCN | 0.005 | 0.005 | 0.016 | 0.005 | 0.004 | 0.011 | 0.088 | 0.043 |
| DAGNN | 0.012 | 0.013 | 0.01 | 0.011 | 0.019 | 0.01 | 0.012 | 0.013 |
| HopGNN | 0.01 | 0.01 | 0.024 | 0.012 | 0.009 | 0.013 | 0.013 | 0.012 |
*Table II. Running Time per epoch on Biological Datasets (in seconds)*
| Algorithms | PEP-MHC | PPI | TCR | MTI |
| :------------: | :-----: | :----: | :---: | :-----: |
| Ours | 2.00 | 0.502 | 1.17 | 1.15 |
| ConjointTraid | 82.08 | 31.96 | 6.01 | 175.96 |
| Quasi-Seq | 117.15 | 30.91 | 5.97 | 174.99 |
| ESPF | 131.97 | 25.99 | 7.02 | 177.98 |
| CNN | 1023.49 | 310.96 | 76.96 | 1103.97 |
| Transformer | 351.97 | 88.49 | 25.29 | 1040.97 |
**Q2. More discussion on position-awareness in graph learning should be included.**
>R2: Thank you for the feedback. In our work, *position-awareness* is achieved by leveraging the Personalized PageRank algorithm, which computes the topological influence of node graphs on each other, implicitly revealing their *relative positions* in the network. In essence, once all pairwise similarities are determined, each node inherently carries information about its relative position within the similarity network structure.
**Q3. Providing dataset details (e.g., node/edge counts) for clarity.**
>R3: Thank you for your valuable suggestion. We have now added detailed descriptions of the benchmark datasets in the table below.
| Datasets | **Texas** | **Wisconsin** | **Actor** | **Squirrel** | **Chameleon** | **Cornell** | **Citeseer** | **Pubmed** | **Cora** |
| :----------: | :-------: | :-----------: | :-------: | :----------: | :-----------: | :---------: | :----------: | :--------: | :------: |
| **#Nodes** | 183 | 251 | 7,600 | 5,201 | 2,277 | 183 | 3,327 | 19,717 | 2,708 |
| **#Edges** | 295 | 466 | 26,752 | 198,493 | 31,421 | 280 | 4,676 | 44,327 | 5,278 |
| **#Classes** | 5 | 5 | 5 | 5 | 5 | 5 | 7 | 3 | 6 |
**Q4. Discuss more real-world applications (e.g., drug discovery) and potential limitations to better illustrate GON's impact.**
>R4: Thank you for your valuable suggestions. We will further expand the discussion of GON’s practical applications in the manuscript. For example, in drug discovery, GON can model drug molecules as graphs of atoms while representing proteins as residue graphs, capturing the binding patterns between them through a global network. In bioinformatics, GON enables multi-scale analysis of protein interaction networks, such as identifying functional modules at the residue level. For social networks, GON can model user communities (e.g., user-centric social subgraphs) and the relationships across communities, though real-world deployment must address privacy concerns and data sparsity issues. Common challenges include the cost of data construction (e.g., the need for expert annotations for molecular graphs) and computational overhead.
**Q5. There are some grammatical errors and typos.**
>R5: Thank you for your detailed feedback. We will review the manuscript and correct all the mentioned grammatical errors and typos.
**Q6. Could you clarify if Algorithm 1 uses full-batch training or mini-batch sampling?**
>R6: Thank you for your valuable feedback. For the GON datasets, we use full-batch training—processing the entire data at once—since it is feasible to handle these datasets in one go. In the revised manuscript, we will clarify this point.
**Q7. How is this coefficient set for the loss in the algorithm?**
>R7: Thank you for raising this important point. In our work, we deliberately did not introduce a weighting coefficient between the constraint loss ($L_{\text{con}}$) and the NLL loss. We found that both loss components naturally operate on comparable scales, allowing us to simply sum them without additional tuning. This design choice not only simplifies the loss function but also reduces the number of hyperparameters, streamlining the training process.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. The responds have largely addressed my concerns regarding the aforementioned issues. Additionally, concerning the Biomedical Datasets, I would like to better understand how the node features in the constructed graph were derived. Although Section 4.2 provides a general explanation, it does not specifically clarify how the atomic-level feature vectors are generated for drug molecules—are they based on SMILES sequences, for example? Providing a more detailed description of this process would enhance the clarity of the paper.
---
Reply to Comment 1.1.1:
Comment: We are pleased to have addressed most of the reviewers’ concerns and appreciate the recognition of our work. For the node features of drug molecule construction, we can use the transformation methods provided by the biomedical domain library Therapeutics Data Commons (tdcommons.ai) to extract node features from SMILES sequences. For example, each node (atom) feature is composed of five concatenated parts: a one-hot encoding representing the atomic symbol (type), a one-hot encoding representing the number of bonds connected to the atom (degree), a one-hot encoding representing the atom’s formal charge, a one-hot encoding representing the chiral tag, and a binary feature indicating whether the atom is aromatic. We thank the reviewers for your constructive suggestions and will include the above discussion in the paper.
---
Summary: This paper introduces Graph-of-Net (GON), a novel graph structure where each node itself is a graph, enabling multi-level representation and analysis of complex real-world systems. To effectively learn representations within GONs, the authors propose N2GON, a position-aware neural network that captures both intra-graph and inter-graph interactions using dual graph encoders and an implicit constraint network. Extensive experiments demonstrate that N2GON outperforms state-of-the-art models in graph learning tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes. However, the proposed method is designed for GON, but the way this structure is partitioned in the paper does not seem entirely reasonable. This issue is particularly evident in datasets such as CiteSeer, where the partitioning is performed directly using KNN, resulting in computed outcomes that merely replicate the information propagation process of GNN.
Theoretical Claims: N/A
Experimental Designs Or Analyses: In the first part of the experiments, the partitioning of GON does not seem entirely reasonable, whereas in the later part, the partitioning in the chemical datasets appears to be more appropriate.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper investigates GON and proposes a more generalizable approach. However, the experimental setup does not seem sufficiently comprehensive. I believe the authors provided excellent examples of GON in the introduction, but it is unclear whether experiments on related datasets are feasible. Of course, obtaining relevant data may be challenging.
Other Comments Or Suggestions: The full name of GON is mentioned twice in the paper.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **Q1. The partitioning of GON —particularly in datasets like CiteSeer—appears questionable, where the partitioning is performed directly using KNN, resulting in computed outcomes that merely replicate the information propagation process of GNN.**
>R1: Thank you for your feedback. Below, we provide clarification as follows:
>1. **Rationale for $k$-hop Sampling Strategy**. We would like to clarify that we adopt a $k$-hop sampling (instead of KNN, which only involves 1-hop) to construct a graph for each node. This method is motivated by the observation that in many real-world scenarios, a node’s full identity is not encapsulated solely by its features but also by the structural context provided by its neighboring nodes.
>For example, 1) in **Citation Networks** such as CiteSeer, a paper’s identity is defined not only by its content but also by its *references* and *cited-by* relationships. By sampling $k$-hop neighbors, we explicitly model a node’s "extended identity," which includes its local academic context (e.g., related works). This aligns with the GON philosophy of representing nodes as hierarchical structures. 2) in **Social Networks**, a user’s social graph (friends, followers) is a natural extension of their identity, and $k$-hop sampling preserves this contextual information. Therefore, sampling the $k$-hop neighborhood effectively captures the local context.
>The reviewer noted that, in the case of CiteSeer, direct partitioning by $k$-hop might induce outcomes very similar to standard GNN information propagation. We believe this observation is, in fact, **supportive** of our methodology. The successful aggregation of neighbor information by GNNs substantiates that a node’s neighborhood plays a crucial role in the learning process. Our method does not merely replicate the GNN propagation process but rather formalizes the node’s expanded representation through explicit graph construction. This ensures that our Graph-of-Net structure inherently reflects the multi-level composition of nodes—each node graph is a manifestation of both its own features and the collective representation of its neighbors.
>2. **Alternative Graph Construction Method**. Beyond $k$-hop sampling, we explored constructing graphs from raw node data. We appreciate the reviewer's note on the challenge of accessing original data. In response, we made significant efforts and successfully obtained the raw textual data for the CiteSeer data through the GitHub project (https://github.com/sivaramanl/Information-Retrieval/tree/master/Text%20Processing/citeseer). This repository provided us with nearly all the title and abstract information for each paper. For this alternative graph construction, we processed each paper’s text as follows: 1) **Text Preprocessing:** We used NLTK to split the text of each paper into sentences, treating each sentence as a node in the paper’s graph. 2) **Feature Extraction:** We generated embeddings for each sentence using the popular sentence transformer model `all-MiniLM-L6-v2`, which provided the node attributes. For papers where the original information is missing, we default to representing the paper as a single node with an all-zero attribute vector. 3) **Edge Construction:** Edges were formed by computing the cosine similarity between sentence embeddings and applying a threshold of 0.7 to retain only strong connections.
>We then conducted experiments on this newly constructed data. The resulting data statistics and performance comparisons are presented in the tables below. The comparable performance of $k$-hop and text-based GONs (**81.27% vs. 81.82%**) demonstrates that both methods validly capture hierarchical semantics. The $k$-hop approach is a **pragmatic and effective** proxy when raw data is unavailable, while text-based construction confirms GON($k$-hop)’s flexibility for domains with explicit substructures.
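For illustration, the $k$-hop sampling strategy discussed above can be sketched as a breadth-first traversal that collects a node's extended neighborhood (a simplified version; the paper's actual sampling procedure may differ):

```python
from collections import deque

def k_hop_subgraph(adj, root, k):
    # collect every node within k hops of `root` via breadth-first search
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if depth[u] == k:
            continue
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    nodes = set(depth)
    # keep only edges whose endpoints both fall inside the sampled set
    edges = [(u, v) for u in nodes for v in adj[u] if v in nodes and u < v]
    return nodes, edges
```

Running it on a small path graph `{0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}` with `k=2` from node 0 keeps nodes 0, 1, 2 and the edges among them.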
*Table I: Resulting GON Statistics on Citeseer*
| Metric | Value |
|-----------------------------|-------------|
| Avg. Nodes per Graph | 7.72 |
| Max Nodes | 336 |
| Min Nodes | 1 |
| Avg. Edges per Graph | 156.36 |
| Max Edges | 106,286 |
| Min Edges | 0 |
| Node Feature Dimension | 384 |
*Table II: Accuracy on data CiteSeer*
| Algorithm | Acc (%) |
| ---------------------- | ---------------- |
| APPNP | 77.06 ± 1.73 |
| GPRGNN | 75.56 ± 1.62 |
| MixHop | 76.26 ± 1.33 |
| FAGCN | 74.86 ± 2.42 |
| DAGNN | 76.44 ± 1.97 |
| HopGNN | 76.69 ± 1.56 |
| **N2GON($k$-hop)** | **81.27 ± 1.30** |
| **N2GON (Text-Based)** | **81.82 ± 1.46** |
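The text-based graph construction described in R1 (one node per sentence, edges between sentences whose embedding cosine similarity exceeds 0.7) can be sketched as follows; the paper uses sentence-transformer embeddings, which we replace here with arbitrary vectors for illustration:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_sentence_graph(embeddings, threshold=0.7):
    # embeddings: list of sentence vectors (one node per sentence);
    # retain only edges with strong semantic similarity
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                edges.append((i, j))
    return edges
```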
**Q2. The full name of GON is mentioned twice.**
>R2: Thank you for pointing that out. We will remove the extra name. | null | null | null | null | null | null |
High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions
Paper Decision: Accept (poster)
Summary: The paper studies the cross-learning contextual bandit setting and provides a new analysis of the algorithm in Schneider & Zimmert (2023), showing that the algorithm achieves a near-optimal bound with high probability. The analysis utilizes the weak dependency structure between different epochs, and this approach may generalize to the analysis of other algorithms.
## update after rebuttal
I thank the authors for the response. Nothing major has changed, and I keep my score.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I check the main theorem and its proof.
Experimental Designs Or Analyses: They don’t have an experiment part.
Supplementary Material: Yes, I checked the part that proves their main theorem.
Relation To Broader Scientific Literature: The paper provided theoretical improvement for an algorithm. If the technique can generalize to other algorithms, then the paper can be considered as a good contribution.
Essential References Not Discussed: Nothing particular.
Other Strengths And Weaknesses: They provide a new analysis and improve the previous result, from an expectation bound to a high-probability bound.
Other Comments Or Suggestions: The line 319 seems overfull.
Questions For Authors: The paper gives us a new analysis of algorithm in Schneider & Zimmert (2023) and I wonder whether this analysis approach can be applied to other bandit algorithms.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Dear review McaC:
Thank you for your positive feedback. We answer your question below.
---
**Question: The line 319 seems overfull**
Thanks for pointing this out. We will fix it in the new version.
---
**Question: whether this analysis approach can be applied to other bandit algorithms**
This is a good question. We believe that this analysis could potentially be applied to other algorithms that have an epoch-based execution structure and weakly dependent structures between epochs. We will further investigate this question in future research.
---
Once again, we sincerely thank you for your positive review. We hope our response addresses your concern. | Summary: This paper considers contextual bandits with cross learning, where the learner observes the loss associated with the action across all possible contexts. The losses are adversarial, but the contexts are stochastic.
The authors improve the analysis of Schneider & Zimmert (2023) to prove a high-probability regret bound, rather than the known expected regret bound, for the same algorithm. They do that by making use of the weak dependency structure between different epochs, which was overlooked in previous analyses. Additionally, they refine martingale inequalities to complete their analysis.
Claims And Evidence: Yes, they prove a high-probability regret guarantee.
Methods And Evaluation Criteria: The authors consider context regret, which is an appropriate measure of performance.
Theoretical Claims: I read the analysis sketch appears in the main paper, seems OK.
Experimental Designs Or Analyses: I read the analysis sketch appears in the main paper, seems OK.
Supplementary Material: No.
Relation To Broader Scientific Literature: Not sure that a relation to the broader literature exists.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The authors present novel and improved analysis to a known algorithm.
2. Results seem sound.
3. Well-written paper.
Weakness:
My worries are about the novelty – which is only the analysis; and about the broader impact of this work – will those techniques be useful in other settings?
Other Comments Or Suggestions: Line 319 left – overflow in inline equation.
Questions For Authors: See weakness.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer m6dx,
Thank you for your positive feedback. We answer your question below.
---
**Question: the novelty and the broader impact**
The reviewer questioned the novelty of our work, particularly noting that our contribution appears to be “only the analysis.” We would like to clarify this point. First, in the field of adversarial bandit research, providing a high-probability bound alone is often regarded as a substantial contribution. Second, we respectfully argue that enhancing the results of an existing algorithm through novel analysis—achieving the same outcome with a simpler approach—can, in a certain sense, be considered an advantage over proposing a more complex algorithm. Finally, our work demonstrates technical novelty, as decomposing regret by epochs is a relatively uncommon approach in high-probability bound studies.
Regarding the broader impact of our work, the reviewer asked, “Are those techniques useful in other settings?” This is an excellent question and aligns with the direction we aim to explore next. We believe our techniques could be beneficial to other algorithms that involve epoch-based subroutine and exhibit weakly dependent relationships between epochs. We are committed to further investigating this question.
---
Once again, we sincerely thank you for your positive review. We hope our response addresses your concern. | Summary: This paper provides a high probability bound for the cross-learning problem in Schneider & Zimmert, 2023, where a regret bound is provided in expectation.
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: No I didn't.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. I appreciate the detailed description of the literature and I think the current presentation is almost clear (but see my comments below).
Weakness:
1. On line 275, the authors mentioned that a term of $\sum\sum Pr(c) \langle \pi_c, \tilde{l}_{t, c} - l_{t, c}\rangle$ is saved. However, the significance of this term is not discussed. If the same technique in Schneider & Zimmert, 2023 is used to derive the high probability regret bound, will this term be of non-trivial order, leading to a term worse than sqrt(KT) and eventually to a nonoptimal bound? Or is this term hard to bound given the technique in Schneider & Zimmert, 2023? Or are there any other terms that cause difficulty for the technique in the prior literature? Some discussion would be helpful for me to understand.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer JF8C,
Thank you for your positive feedback. We answer your question below.
---
**Question: the significance of this term $\sum\sum Pr(c) \langle \pi_c, \tilde{l}_{t, c} - l_{t, c} \rangle$**
To achieve a high-probability bound, saving this term is essential. In Schneider & Zimmert (2023), this term is not saved. Thus they need to bound the term $\sum\sum Pr(c) \langle \pi_c, \hat{l}_{t, c} - \tilde{l}_{t, c} \rangle$ (implicitly in their $\text{bias}_2$ term). As a result, their approach yields a bound in expectation only. In our work, saving the term $\sum\sum Pr(c) \langle \pi_c, \tilde{l}_{t, c} - l_{t, c} \rangle$ allows us to counteract the random deviation, thereby strengthening the expectation bound into a high-probability bound.
---
Once again, we sincerely thank you for your positive review. We hope our response addresses your concern. | Summary: The submission studies contextual bandits with cross-learning, where the feedback information is the loss of the chosen action in all contexts. The main result is a regret bound that holds in high probability, which improves on a previous bound that holds only in expectation (Schneider & Zimmert 2023).
## Update after rebuttal
The authors' reply addressed my concerns. In light of this, I have revised my recommendation to weak accept.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The technical contributions are as follows.
- The submission proposed a novel way to decompose the regret. The new term contributes a negative term (lines 297R and 637) to make the high probability bound possible.
- The newly introduced term is unbounded, hence, a surrogate indicator function (lines 324R and 452) is designed to make the sequence of epoch values a martingale difference sequence.
With the two modifications above, Freedman’s inequality can be applied to provide a high probability bound.
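For context, a standard statement of Freedman's inequality (e.g., Freedman, 1975; not quoted from the submission): for a martingale difference sequence $X_1, \dots, X_n$ adapted to a filtration $(\mathcal{F}_i)$ with $X_i \le b$ almost surely,

```latex
\Pr\left[\sum_{i=1}^{n} X_i \ge a
  \ \text{ and } \
  \sum_{i=1}^{n} \mathbb{E}\left[X_i^2 \mid \mathcal{F}_{i-1}\right] \le v\right]
\le \exp\left(\frac{-a^2}{2\left(v + b a / 3\right)}\right).
```

The surrogate indicator construction mentioned above is what keeps the increments bounded so that an inequality of this form applies.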
Theoretical Claims: I have read appendices A, B1, and B2 and browsed through the rest of Appendix B.
Experimental Designs Or Analyses: There is no experimental result; the submission is a theoretical paper.
Supplementary Material: I have read appendices A, B1, and B2 and browsed through the rest of Appendix B.
Relation To Broader Scientific Literature: The way the unbounded sequence is handled, which makes martingale concentration analysis possible, could benefit concentration analysis.
Essential References Not Discussed: I am fine with the related work discussed in Section 1.2.
Other Strengths And Weaknesses: Weaknesses
- Although clever, the impact of the techniques developed is unclear. Currently, it seems to be a specific treatment to a specific algorithm and seems to be incremental (from the perspective of the result). That is, the impact of the proposed technique on the theoretical community is unclear.
- The main context mentioned several practical cross-learning applications. It is not clear whether the algorithm developed in this line of research (Schneider & Zimmert 2023) outperforms conventional contextual bandit algorithms. An empirical verification on realistic datasets would reinforce the need to develop a deeper theoretical understanding.
- Cross-learning is not defined mathematically in section 2 [110R].
Other Comments Or Suggestions: Please see the comments in the above parts.
Questions For Authors: What is the mathematical meaning of "weakly dependent" [110R]? Where is it used in the derivation (which inequality in the appendix)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Kzo3,
We sincerely appreciate your valuable suggestions and thoughtful feedback. Below, we address each of your concerns in detail.
---
**Question: The impact of our results on the theoretical community**
The reviewer noted that “the result seems to be a specific treatment to a specific algorithm and seems to be incremental.” We respectfully disagree with this assessment. We would like to emphasize that in the field of adversarial bandit research, providing high-probability bounds is a widely recognized and significant contribution in its own right (e.g., [1, 2, 3]). From a technical perspective, our argument for the high-probability bound is novel. Specifically, we decompose the regret across different epochs and leverage the weakly dependent structures between these epochs—a method that, to the best of our knowledge, is not commonly employed in existing high-probability bounds for adversarial bandits.
---
**Question: the suggestion for empirical verification on realistic datasets**
We thank the reviewer for raising this point, as it highlights an opportunity to strengthen our work. In response, we will include experimental data in the revised version of the paper to reinforce the need for a deeper theoretical understanding and to complement our theoretical findings.
---
**Question: the mathematical definition of cross-learning in Section 2 [110R]**
We are grateful to the reviewer for pointing out this oversight. While we provided an explanation of cross-learning in [122R], we acknowledge that a clearer presentation is warranted. Based on your suggestion, we will introduce a standalone definition of cross-learning in the revised version to ensure this concept is explicitly and effectively conveyed.
---
**Question: What is the mathematical meaning of "weakly dependent" [110R]? Where is it used in the derivation (which inequality in the appendix)?**
"Weakly dependent" is an informal term commonly used in the context of concentration inequalities([4]). It generally refers to a sequence of random variables $X_i$ that, while not fully independent, forms a martingale, allowing us to derive concentration inequalities akin to those for independent random variables. In our work, it means that we can decompose the regret into epochs and the regrets of different epochs form an martingale sequence.
This property is utilized wherever epoch-based decomposition appears in our proofs. For example, it is applied in bounding $\sum_e \text{Bias5}_e$ in Appendix B.2 and $\sum_e \text{Bias4}_e$ in Appendix B.3.
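A toy numerical illustration of this "weakly dependent" notion (a sketch, not the paper's construction): increments whose magnitude depends on the past but whose conditional mean given the history is zero form a martingale difference sequence, so their sum still concentrates even though the epochs are not independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs = 2000

xs = np.zeros(n_epochs)
scale = 1.0
for e in range(n_epochs):
    # The magnitude depends on history (weak dependence), but the sign is
    # a fair coin flip, so E[xs[e] | past] = 0 and |xs[e]| <= 1.
    xs[e] = scale * rng.choice([-1.0, 1.0])
    scale = 0.5 if xs[e] < 0 else 1.0

total = xs.sum()
# Azuma-Hoeffding for martingale differences bounded by b = 1, delta = 0.01:
# |sum| <= sqrt(2 * n * log(2 / delta)) with probability at least 1 - delta.
bound = np.sqrt(2 * n_epochs * np.log(2 / 0.01))
print(abs(total) <= bound)
```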
---
Once again, we express our gratitude to the reviewer for their insightful and valuable feedback. These comments have greatly assisted us in refining the presentation of our paper and better articulating its contributions. We hope that our responses adequately address your concerns and that you will consider increasing your support for our work in light of these revisions.
---
References:
[1] Luo, H., Tong, H., Zhang, M., & Zhang, Y. (2022). Improved High-Probability Regret for Adversarial Bandits with Time-Varying Feedback Graphs. International Conference on Algorithmic Learning Theory.
[2] Neu, G. (2015). Explore no more: Improved high-probability regret bounds for non-stochastic bandits. Neural Information Processing Systems.
[3] Bartlett, P.L., Dani, V., Hayes, T.P., Kakade, S.M., Rakhlin, A., & Tewari, A. (2008). High-Probability Regret Bounds for Bandit Online Linear Optimization. Annual Conference Computational Learning Theory.
[4] Pelekis, C., & Ramon, J. (2015). Hoeffding's inequality for sums of weakly dependent random variables. arXiv: Probability.
---
Rebuttal Comment 1.1:
Comment: Question: The impact of our results on the theoretical community
Thank you for your reply. I wasn't clear enough. Of course it is a contribution to provide a bound that holds with high probability. My point was, can your method be extended to analyse other algorithms/problems, or does your method simply provide "yet another" high probability bound?
Question: What is the mathematical meaning of "weakly dependent" [110R]?
Thank you very much. I think this answer is very helpful in understanding the logic behind your analysis. Please include it in the paper.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer Kzo3,
Thanks for your timely reply.
About the impact of our results on the theoretical community. Currently we have not identified a direct additional application of our method beyond what is currently presented. However, we believe that the techniques developed in our work have the potential to be applied to other algorithms with multiple epochs and a weakly dependent structure between epochs. The reason we believe this comes from the simplicity and generality of the key structure underlying our analysis—namely, the weakly dependent structure between epochs—which we believe may appear in other algorithms. We view this as a promising direction for future investigation and plan to explore it further in subsequent studies.
About the mathematical meaning of "weakly dependent". Thanks for your recognition, we will include a new paragraph explaining this term in the new version of our paper.
Thank you once again for your valuable input. We hope that our responses adequately address your concerns and that you will consider increasing your support for our work in light of these revisions. | null | null | null | null | null | null |
ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy | Accept (poster) | Summary: High content screening (HCS) involves subjecting cells to thousands of perturbations in parallel, and capturing subsequent morphological changes via fluorescent imaging. The scale of data generated by modern experimental workflows has since necessitated automated analysis. In light of the success of foundation models, which leverage generic datasets to create representations that are more useful than representations derived from task-specific data alone, the field of machine learning has developed a number of foundation models for HCS. Experimental data, however, is inherently limited, and hence methods are needed to address how to improve model performance without simply scaling to ever-larger amounts of data.
This work provides evidence to support three key claims i) that careful curation of data generated by HCS that balances a dataset in terms of morphological variation, while keeping diversity high, can improve the performance of HCS foundation models ii) that intermediate layers of models for HCS can provide stronger performing representations than the commonly used final layer and iii) that they train the largest HCS model to date, which significantly improves on a number of challenging biological benchmarks, and compare performance with models trained on non-biological images, and models trained on alternative HCS data.
## Update after rebuttal
I have reviewed the paper, the reviews, and rebuttals again. I am inclined to keep my score at a strong accept - this is a well written paper, with results which I think are noteworthy for the community working in machine learning for high-throughput screening data.
I believe that curating a dataset by filtering images with respect to perturbation consistency and diversity is sufficiently novel for publication at ICML. I also think the work showing that the linear probing task is correlated with performance on biological recall is noteworthy, and adds merit for publication for an Applications-Driven submission.
Claims And Evidence: There are four main claims in this paper. The experiment design and choice of benchmarks supports these claims with evidence that is sufficient for acceptance.
Claim 1: by curating the images in the proposed manner, and training for longer, better performance can be achieved from fewer epochs.
- The selection of baselines includes MAE-L/8 trained on either PP-16M or 93M. The performance in table 1 shows little change in the recall of known biological relationships between these two models, despite the large difference in number of images and epochs used in training. More importantly, the KS and CM statistics reflecting replicate consistency show large improvements when using the curated dataset.
- These claims would be better supported had the authors trained an MAE-L/8 with 93M for longer. How much impact did training length have on replicate consistency?
Claim 2: Intermediate layers can provide better representations for downstream tasks
- This claim is well supported throughout the manuscript, where representations from earlier blocks are found in many cases to outperform final layer representations. This is not only noteworthy for performance gains, but also reducing inference costs.
Claim 3: linear probing performance on a subset of genetic perturbations correlates strongly with downstream performance on whole-genome benchmarks
- Figure 4 shows a clear linear relationship, between linear probing and whole-genome benchmarks.
Claim 4: MAE-G/8 outperforms SOTA across biologically relevant benchmarks
- MAE-G/8 (trimmed) is consistently shown to outperform the well-selected baselines.
Methods And Evaluation Criteria: This work includes comparisons between a number of models. Each of these are variants of Vision Transformers (ViT), differing in number of parameters and training dataset. Including a MAE-L/8 trained on RPI-93M and comparing this to a MAE-L/8 PP-16M provides good evidence for the claim that effective models for HCS can be produced with curated datasets. It would have been interesting to include a comparison with the MAE-G/8 trained on the 93M dataset, albeit for fewer epochs, to show that it is the PP-16M dataset which is the decisive factor in scaling to the ‘giga’ family of ViTs for microscopy.
The use of linear probing to evaluate the performance of embeddings at earlier layers of a trained model, is well motivated in the paper, as it is computationally unfeasible to perform a whole genome evaluation. Using the 1139-class RxRx1 dataset for genetic perturbation prediction is a challenging task, and demonstrates the claim that earlier layers provide better embeddings with sufficient evidence.
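A minimal sketch of the linear-probing protocol described above (synthetic stand-in embeddings and illustrative sizes, not the paper's data or pipeline): freeze the encoder, fit a linear classifier on the frozen embeddings, and use held-out accuracy as a cheap proxy for representation quality.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, dim, n_classes = 600, 64, 6

# Stand-in for frozen ViT embeddings: class-dependent means plus noise.
labels = rng.integers(0, n_classes, size=n)
class_means = rng.normal(0.0, 1.0, size=(n_classes, dim))
embeddings = class_means[labels] + 0.5 * rng.normal(0.0, 1.0, size=(n, dim))

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0)

# The "probe" is just a linear classifier on top of frozen features.
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
accuracy = probe.score(X_te, y_te)
print(f"linear-probe accuracy: {accuracy:.3f}")
```

A real genetic-perturbation probe would have many more classes (e.g., 1139 for RxRx1), but the mechanics are the same.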
Perhaps most crucially, the model is evaluated on external data, namely the JUMP-CP dataset, which the authors describe as likely to be generated by different assay protocols. Recall of biological relationships using any model is lower, but their MAE-G/8 still shows best recall overall.
Overall, high quality datasets and models are used to provide evidence to support the author’s claims. At a high level, this work is about developing a high-capability model for data generated by high content screening. Due to the variety of sources of batch effects that challenge machine learning in this domain, the combination of evaluation on high quality datasets and focussing on replicate consistency and recall of biological relationships, positions this paper well in the literature.
Theoretical Claims: Theoretical proofs are not a point of discussion within this paper.
Experimental Designs Or Analyses: I checked the validity of the experiment testing linear probing across different ViT blocks. A new dataset, different to the pretraining datasets used to train the models, was used for linear probing. Comparing each of these pretrained models on a new evaluation dataset is a fair comparison, and the set of baselines used for comparison are strong. Hence, the experiment seemed to be well designed and supports the relevant claim that representations from earlier model layers can be more useful than the output layer.
Supplementary Material: I reviewed the appendices provided in the manuscript.
This included a detailed account of how the initial dataset was curated. This revealed the under/over sampling that was applied with respect to the negative and positive controls of the screen that generated the data. The consideration of negative/positive controls in the balance of the dataset is not a point that I think has been a focus of in the literature, and I think the impact of this manuscript could be improved if this detail was described in the main text. Perhaps rephrase lines 134-141 to be more specific about the issue that positive/negative controls introduce to the balance of the dataset generated from HCS.
The details on perturbation and replication consistency are also welcome, and could set a new standard for benchmarking foundation models for HCS.
Relation To Broader Scientific Literature: This paper provides a new model for HCS that scales to a 1.9 billion parameter model. Previous work [1] has shown that the downstream performance of HCS models is correlated with the number of FLOPs in training. This work has shown that this scaling continues into the regime of billion parameter models. Crucially, to achieve this, they did not have to scale the size of their dataset, by scaling their HCS data generation pipeline, but instead only needed to curate their dataset and scale the number of model parameters and training time.
This model is a ViT and is not channel agnostic as in works [1] and [2]. Previous work has shown that regular ViTs can outperform channel agnostic ViTs and it remains an open question whether a channel-agnostic model can be developed with SOTA performance.
Previous work has shown that representations of data from layers prior to the output layer have better downstream performance, for example the work [3]. This work also demonstrates that earlier layers of a trained ViT can provide representations that perform better on downstream tasks. As such, this is not a novel result in and of itself, but this is the first time this has been demonstrated in models for HCS. However, using linear probing to more efficiently search for the best performing model is a novel result, as it was not clear that linear probing performance on a genetic perturbation classification task would correlate with the whole genome biological relationship recall task.
- [1] Masked autoencoders for microscopy are scalable learners of cellular biology O. Krauss et al 2024
- [2] ChAda-ViT : Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images, N. Bourriez et al 2024
- [3] MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Masked Image Modeling Representations, B. Alkin et al 2025
Essential References Not Discussed: No essential references come to mind. The related work sections correspond to the core contributions of the paper, and mention the relevant literature.
Other Strengths And Weaknesses: The overall quality of authorship is very high, this was a good paper to read. The potential impact of this work is high, due to it addressing the issue of needing to generate HCS data at greater scales to scale ML models.
Other Comments Or Suggestions: Figure 5 is a little difficult to read due to the large number of overlapping lines. Would a table work as an alternative to highlight its main points?
Questions For Authors: The claim that the curated dataset allows to train effective models with higher performance, by training with a curated set of images for more epochs would be more clearly supported had the authors trained a MAE-L/8 with 93M for more epochs. In particular, the recall performance between the MAE-L/8's trained on PP16M and 93M is similar in Table 1. How much impact did training length have on replicate consistency?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and encouraging review.
**On training ViT-G/8 on RPI-93M**: We agree this would have been valuable. Due to compute constraints, and the insight gained from training ViT-L/8 on both RPI-93M and PP-16M, we prioritized training ViT-G/8 on the curated PP-16M dataset only. Based on the consistent improvements we observed with ViT-L/8 across replicate consistency metrics, we hypothesized that scaling model size on PP-16M would be the most effective use of resources.
**On recall differences on JUMP-CP**: Thank you for highlighting this point. The comparison to RxRx3 is complicated by both the reduced set of gene KOs in JUMP-CP (∼8,000 vs ∼17,000 in RxRx3) and the increased variability due to assay differences and batch effects. As such, recall values across datasets are not directly comparable, but we agree it's a valuable direction to further explore model generalization across assay types.
**On dataset curation and positive/negative controls**: We appreciate this suggestion and agree that the handling of positive and negative controls is important for shaping the final training distribution. We’ll clarify these decisions in the main text for the camera-ready version, particularly in the section describing dataset construction (lines 134–141).
**On Figure 5 readability**: Thank you for pointing this out. We’ll include a summary table in the appendix to complement Figure 5 in the camera-ready version. This benchmark is also now described in detail in RxRx3-core [Kraus et al. 2025].
**On impact of longer training for MAE-L/8 on RPI-93M**: This is a great question. While we did not extend training for MAE-L/8 on RPI-93M beyond what was reported in Kraus et al. 2024, the contrast between RPI-93M and PP-16M models trained for the same number of epochs suggests that dataset composition has a major effect on replicate consistency. We agree that further experiments exploring longer training on RPI-93M would help isolate the contributions of training length vs. data curation.
Thank you again for your supportive feedback and helpful suggestions.
**References**:
* Kraus et al. 2025 [RxRx3-core] – https://arxiv.org/abs/2503.20158 | Summary: This paper proposes a three-stage framework for pretraining foundation models on large-scale microscopy datasets to address measurement errors and enhance biological signal extraction. The framework involves (1) curating diverse and self-consistent training samples, (2) scaling a vision transformer architecture (ViT-G/8 MAE) trained on 8 billion microscopy image crops, and (3) evaluating intermediate model layers to optimize representations for downstream tasks. The authors introduce the largest known foundation model for cell microscopy (1.9B parameters), demonstrating a 60% improvement in linear separability of genetic perturbations compared to the prior ViT-L/8 MAE. The model achieves state-of-the-art performance across four benchmarks: whole-genome relationship recall, batch effect correction consistency, compound-gene activity prediction, and perturbation analysis. Key innovations include systematic error mitigation through pretraining, intermediate layer feature optimization, and scaling laws applied to biological imaging.
## update after rebuttal
Thanks for the clarification. I have no further questions and will keep my score.
Claims And Evidence: All the claims are well-supported.
Methods And Evaluation Criteria: I am not familiar with the application and evaluation criteria, but they seem to make sense. The evaluation leverages multiple biologically relevant benchmarks, including Linear probing tasks (RxRx1 gene classification, Anax functional group classification) to assess representation quality. Whole-genome relationship recall using public databases (e.g., CORUM, StringDB) to measure biological relevance. Replicate consistency tests (Kolmogorov-Smirnov and Cramer-Von Mises) to ensure model robustness.
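As a concrete illustration of the replicate-consistency tests mentioned above (the data here are synthetic placeholders; `ks_2samp` and `cramervonmises_2samp` are real SciPy functions): compare the distributions of an embedding statistic across two replicates of the same condition.

```python
import numpy as np
from scipy.stats import ks_2samp, cramervonmises_2samp

rng = np.random.default_rng(42)
# Hypothetical per-sample values of one embedding dimension for two
# replicates of the same perturbation.
replicate_a = rng.normal(0.0, 1.0, size=500)
replicate_b = rng.normal(0.0, 1.0, size=500)

ks_stat, ks_p = ks_2samp(replicate_a, replicate_b)
cvm = cramervonmises_2samp(replicate_a, replicate_b)

# Consistent replicates should yield small test statistics.
print(f"KS statistic: {ks_stat:.3f}, CvM statistic: {cvm.statistic:.3f}")
```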
These benchmarks are well-suited for assessing biologically meaningful representations and align well with the intended application.
Some potential issues are:
The computational cost of whole-genome benchmarking is extremely high (e.g., 80M forward passes). While this provides strong empirical evidence, an analysis of whether a subset of perturbations could yield similar insights with lower computational demands would be valuable.
The study provides some interpretability through linear probes and analysis of intermediate ViT layers, which is a notable contribution. While the correlation between linear probing results and whole-genome benchmarks is strong, further discussion on the biological interpretability of learned features would enhance the paper. For example, are certain gene classes more separable than others? Do embeddings capture known biological pathways?
Theoretical Claims: There are no theoretical claims made in the paper.
Experimental Designs Or Analyses: The use of self-supervised masked autoencoder (MAE) training aligns with state-of-the-art methods in computer vision and biological imaging. The authors explore different model sizes and configurations, providing valuable insights into scaling behavior. Some potential issues are that the justification for certain hyperparameters (e.g., patch size, mask ratio) is not explicitly discussed. A brief ablation study or sensitivity analysis on these parameters would strengthen the experimental design. Furthermore, the computational cost of training the 1.86B parameter model (48,000 GPU hours) is substantial, and an efficiency analysis comparing model performance versus computational cost would be beneficial.
The study uses a diverse set of evaluation metrics, but some alternative distance metrics (e.g., Euclidean, Mahalanobis) could be explored to ensure robustness.
Supplementary Material: The paper does not include supplementary material.
Relation To Broader Scientific Literature: This work makes significant contributions to the field of biological representation learning by scaling self-supervised Vision Transformer (ViT) models for cell microscopy and improving the quality of learned embeddings. The work is related to the areas of self-supervised learning (SSL), biological image analysis, and model scaling in deep learning.
Essential References Not Discussed: The paper appears self-contained, with citations covering relevant literature.
Other Strengths And Weaknesses: The paper presents the largest-scale foundation model for cell microscopy to date, leveraging self-supervised ViTs and a curated dataset (Phenoprints-16M). While self-supervised learning (SSL) has been explored in biological imaging, the combination of large-scale MAE training, dataset curation, and systematic layer-wise probing represents a novel and impactful approach.
My major concern is about technical novelty. Given that the study primarily applies existing methodologies, its contribution in terms of methodological novelty may be limited.
The study focuses on cell microscopy images, but its applicability to other biological imaging modalities (e.g., histopathology, electron microscopy, organoid imaging) is not discussed. A brief analysis of transferability would improve the paper’s broader impact.
The paper focuses on ViT-based MAE models but does not compare against CNN-based architectures (e.g., EfficientNet, ResNet, U-Net), which are still widely used in biomedical imaging.
While ViTs have advantages in self-supervised learning and scaling, a baseline comparing MAE-G/8 to a strong CNN or hybrid transformer-CNN model would provide additional perspective.
Other Comments Or Suggestions: I have no other comments or suggestions.
Questions For Authors: I have no more questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and supportive comments.
**On the cost of whole-genome benchmarking**: This was a key motivation for developing lightweight classification proxy tasks. Selecting a small subset of perturbations for genome-wide benchmarks is challenging due to the random distribution of gene KOs across HCS plates, which limits the number of relationships we can evaluate without embedding thousands of full plates. That said, a promising direction is RxRx3-core [Kraus et al 2025], a curated subset of RxRx3 that may provide more efficient evaluation going forward.
**On interpretability**: We appreciate this suggestion. Related work explored mechanistic interpretability in these embeddings and directly motivated our use of intermediate layer analysis [Donhauser et al. 2024]. We agree that further investigation into which biological pathways or gene classes are best captured by these models is an important next step.
**On hyperparameter choices and CNN baselines**: Patch size, mask ratio, CNN baselines (e.g., ResNet, EfficientNet), and SSL vs. supervised training regimes were thoroughly benchmarked in Kraus et al. [2024]. Our work builds directly on those findings by focusing instead on scaling ViTs, refining the training dataset, and evaluating intermediate representations.
**On computational cost**: We agree the training cost is substantial (48K GPU hours), but we believe the investment is justified. Improved microscopy representations can accelerate early biological discovery and drug development. We are exploring ways to make training and evaluation more efficient in future work.
**On distance metrics**: We considered alternatives but chose cosine distance due to its popularity and strong performance in deep embedding–based image retrieval. It is scale-invariant and more robust in high-dimensional spaces compared to Euclidean or Mahalanobis distance (Zhe et al. 2018, Deng et al. 2018).
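To make the scale-invariance point concrete, here is a minimal pure-Python sketch (not the authors' evaluation code; the vectors are illustrative): two embeddings pointing in the same direction have zero cosine distance regardless of their magnitudes, while their Euclidean distance grows with the scale difference.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; invariant to rescaling either vector
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]  # same direction, twice the magnitude
print(round(cosine_distance(a, b), 6))  # ~0.0: scale does not matter
print(euclidean_distance(a, b) > 0)     # True: Euclidean is scale-sensitive
```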
**On applicability to other biological imaging modalities**: We would have loved to explore this further but were limited by space. Recent work has trained ViT-G with DINO on histopathology data (Virchow 2), which suggests promising transferability to other imaging domains. We're excited by the possibility of adapting our approach to other modalities like H&E, organoids, or EM.
**On novelty and contribution**: While our techniques build on existing methods, we identify three underexplored strategies that, when combined, significantly improve performance on microscopy-specific benchmarks:
* Curation: Our dataset curation focuses on selecting diverse, experimentally consistent conditions across replicates. This contrasts with typical SSL curation strategies that aim to reduce redundancy by removing similar examples. We show that this approach enhances model consistency and downstream performance.
* Scaling: We demonstrate that further scaling ViTs beyond ViT-L/8 continues to yield improvements, validating the neural scaling hypothesis in microscopy representation learning.
* Intermediate representations: While intermediate layers have been used in CNN transfer learning (Moshkov et al. 2024), they remain under-utilized in ViTs trained with SSL. We show that selecting the right layer, determined via a fast, low-cost proxy task, leads to meaningful improvements in zero-shot biological recall.
Together, these findings offer a generalizable framework for building scalable, biologically relevant foundation models across experimentally derived datasets. We hope our study serves as a foundation for further work at the intersection of self-supervised learning and foundation model development for scientific datasets.
Thank you again for your constructive feedback and support.
**References**
* Kraus et al. 2024 – https://arxiv.org/abs/2404.10242
* Kraus et al. 2025 [RxRx3-core] – https://arxiv.org/abs/2503.20158
* Donhauser et al. 2024 – https://arxiv.org/abs/2412.16247
* Zhe et al. 2018 – https://arxiv.org/pdf/1802.09662
* Deng et al. 2018 – https://arxiv.org/abs/1801.07698
* Zimmermann et al. 2024 [Virchow2] – https://arxiv.org/html/2408.00738v1
* Moshkov et al. 2024 – https://www.nature.com/articles/s41467-024-45999-1 | Summary: The authors present a framework for training large-scale computer vision models for microscopy imaging data.
Claims And Evidence: The authors claim to train a large-scale ViT model that should work better than a previous model by Kraus et al, 2024.
The evidence for that is quite unclear and scarce: the tables report performance values for a set of architectures, where it is unclear which architectures come from the authors and which are from previous works. Furthermore, almost all performance metrics are presented without error bars and confidence intervals (and also without statistical tests), such that it is unclear which method performs best. The effect size (difference) between methods is very small. For some metrics, it even remains unclear what they are (e.g., "KS" and "CM" in Table 1).
Also the methodological contributions that are claimed are unclear: it is unclear what the main contributions to this framework are and how they are justified and motivated. It appears that the proposed steps (data curation, scaling, and selecting a block for features) are ad-hoc decisions.
Methods And Evaluation Criteria: The work does not propose a method, but a framework.
The benchmark datasets make sense, like RxR3 and JUMP-CP, because those are large imaging datasets. However, the evaluation metrics are not well motivated, or even completely unclear: e.g. Table 1 has only recall without mentioning precision (a method can always trade-off recall against precision), Table 2 has "REactome" and "Stringdb" as metrics, which is also unclear what kind of metric this should be and why it is relevant. Table 3 again only reports only precision and not recall (or AUC-PR). Across the whole paper, it remains unclear what the main evaluation criteria for foundation models for microscopy images are.
For comparison, ref [1] sets up a battery of zero- and few-shot downstream tasks, for which clear evaluation criteria and metrics exist. The authors should set up a set of zero- and few-shot downstream tasks (together with metrics and evaluation criteria), which should be solved by the foundation models. Also Kraus et al [2], should be an inspiration for the authors.
References:
[1] Sanchez-Fernandez, A., Rumetshofer, E., Hochreiter, S., & Klambauer, G. (2023). CLOOME: contrastive learning unlocks bioimaging databases for queries with chemical structures. Nature Communications, 14(1), 7339.
[2] Kraus, O., Kenyon-Dean, K., Saberian, S., Fallah, M., McLean, P., Leung, J., ... & Earnshaw, B. (2024). Masked autoencoders for microscopy are scalable learners of cellular biology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11757-11768).
Theoretical Claims: There are not theoretical claims in this work.
Experimental Designs Or Analyses: The experimental analysis mostly checks how good intermediate layers of the ViTs are w.r.t. performance. This is a reasonable analysis.
Question: Is this analysis done on a separate validation set (distinct from all other downstream tasks), such that selecting the best layer here does not introduce a bias for the later comparisons and benchmarks?
Supplementary Material: No supplementary material provided. Appendix read, but not reviewed in detail.
Relation To Broader Scientific Literature: The paper is well placed within the broader scientific literature of ViT, MAE, etc. However, early works on using deep neural nets for microscopy images are completely absent, and other self-supervised learning approaches for microscopy images are also missing (see below).
Essential References Not Discussed: There are many earlier works on deep learning, e.g. CNNs, for microscopy images, e.g. ref [3], which are not mentioned and referred to. I only provide an exemplary reference here, but the authors should re-do the literature research to give a better view on this field:
[3] Ciresan, D., Giusti, A., Gambardella, L., & Schmidhuber, J. (2012). Deep neural networks segment neuronal membranes in electron microscopy images. Advances in neural information processing systems, 25.
Also, contrastive learning approaches (e.g. SimCLR, CLIP, etc) are hardly mentioned, which have also been successfully applied to microscopy imaging data. The authors should provide at least a paragraph in related work on papers around that topic.
Other Strengths And Weaknesses: Strengths:
- Effort to scale ViTs to large-scale microscopy datasets
- Substantial computational resources used to develop the model
Weaknesses:
- Unclear what the novelty is; mainly a framework is proposed
- Limited relevance because of missing error bars and statistical test; unclear benchmark task and metrics
- Main analysis steps are not well justified and motivated
- Unclear what the machine learning aspect of this work is; maybe better suited for a bio-venue
Other Comments Or Suggestions: Citing Micrographia by Hooke is really a great one!
Questions For Authors: What is the "recall % @ 0.05-0.95 cosine threshold" ???
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful comments.
**On tables and metrics clarity**: We apologize for not clearly marking the Kraus et al. 2024 model in the tables. As noted in the “Models/prior work” section, MAE-ViT-L/8+ trained on RPI-93M is from Kraus et al. In Table 1 and 3, we report mean ± standard deviation computed from 3 and 100 random seeds, respectively. Table 2 caption notes a maximum standard deviation of ±0.0023. Perturbation consistency is evaluated using Kolmogorov-Smirnov (KS) and Cramér-von Mises (CVM) statistics, which compare replicate similarities to an empirical null. These are detailed on line 281 and Appendix 7, and were introduced in Celik et al [2024].
**On evaluation and metrics**: Our relationship recall task evaluates whether embeddings can zero-shot identify known interactions between perturbations based on cosine similarity. Because we use genome-wide CRISPR KO screens, many true interactions may be missing from annotation databases. Thus, precision is not meaningful in this context, as we cannot assume non-annotated gene pairs are false positives.
The “recall % @ 0.05–0.95 cosine threshold” measures the fraction of annotated gene-gene relationships found in the most similar (top 5%) or dissimilar (bottom 5%) embedding pairs. We use multiple knowledge bases (CORUM, hu.MAP, Reactome, StringDB) and report results across them. These metrics are described in Appendix 6 and used in both Celik et al. and Kraus et al. 2024.
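A toy sketch of this style of metric may help (hypothetical gene names and 2-D embeddings; the real evaluation uses genome-wide embeddings and the annotation databases named above): rank every perturbation pair by cosine similarity, then count what fraction of annotated relationships fall among the most similar or most dissimilar pairs.

```python
import math
from itertools import combinations

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def relationship_recall(embeddings, annotated_pairs, frac=0.05):
    # Rank all pairs by cosine similarity; an annotated relationship is
    # "recalled" if it lands in the top `frac` or bottom `frac` of pairs.
    pairs = list(combinations(sorted(embeddings), 2))
    ranked = sorted(pairs, key=lambda p: cosine(embeddings[p[0]], embeddings[p[1]]))
    k = max(1, int(len(ranked) * frac))
    extreme = set(ranked[:k]) | set(ranked[-k:])
    hits = sum(1 for p in annotated_pairs if tuple(sorted(p)) in extreme)
    return hits / len(annotated_pairs)

emb = {"g1": [1.0, 0.0], "g2": [0.99, 0.01], "g3": [0.0, 1.0],
       "g4": [0.5, 0.5], "g5": [-1.0, 0.1]}
print(relationship_recall(emb, [("g1", "g2")]))  # 1.0: the annotated pair is the most similar one
```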
**On methodological contributions and motivation**: This work extends efforts to scale ViT architectures and microscopy datasets. We found three key strategies that meaningfully improve performance:
* Curation: Surprisingly, training ViT-L/8 on a smaller, curated dataset improved performance vs. RPI-93M. The curation focuses on selecting diverse experimental conditions with consistent replicate phenotypes, rather than removing redundancy as done in typical SSL data prep. This led us to train MAE-G/8 on the curated PP-16M set.
* Scaling: Training larger ViTs continues to yield benefits, consistent with the neural scaling hypothesis. Our largest model, MAE-G/8, outperforms smaller ones in both replicate consistency and biological relationship recall.
* Intermediate representations: We found intermediate layers often outperform penultimate layers for zero-shot tasks. Since fine-tuning is not feasible in our setting, we propose lightweight proxy tasks (RxRx1, Anax classification) to select the best layer. We show these proxy tasks strongly correlate with more expensive genome-wide evaluations.
These insights, while based on well-known techniques in general vision or language settings, are novel in their application and combination for microscopy-specific representation learning. They provide a practical and reproducible framework applicable to any experimental dataset with replicate measurements.
**On prior work and references**: We appreciate the suggestion to cite earlier microscopy-specific deep learning work. Due to space constraints, we omitted a drafted section discussing such papers and instead prioritized prior work most relevant to our methods: dataset curation for SSL, layer selection, and embedding evaluation in microscopy. While we did not focus on contrastive approaches like CLOOME or MolPhenix, we agree they are valuable contributions. Our work differs in focus as we do not relate images to molecular structures, but instead aim to build general-purpose embeddings for microscopy data.
We hope this clarifies the design and motivation behind our choices, and the contributions our framework makes to the field. Thank you again for your detailed and constructive feedback.
**References**:
* Celik et al. 2024 – https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012463
* Kraus et al. 2024 – https://arxiv.org/abs/2404.10242
* Fradkin et al. 2024 [MolPhenix] – https://arxiv.org/abs/2409.08302 | Summary: This paper provides a framework to improve biological representation learning on large-scale microscopy datasets. Three steps are introduced: (1) Curating the training dataset to have a better distribution of samples over the phenotypic spectrum, (2) scaling training to larger models, and (3) evaluating intermediate representations to identify the best representation for downstream tasks.
Superior performance of the largest model trained on the curated dataset is shown over previous baselines in terms of biological relationship recall and zero-shot generalization. It is shown that finding the best layer for downstream tasks using linear probing improves accuracy, and linear probing accuracy on a smaller validation set correlates well with whole genome evaluation results.
Claims And Evidence: I will address each claim one by one:
(1) Curating the training dataset to have a better distribution of samples over the phenotypic spectrum leads to improved performance: I’m not sure if this is necessarily true. Imbalanced datasets do affect the model’s ability to learn about certain semantic concepts, but I doubt this is true at the scale of models and datasets we are talking about. It might happen that certain phenotypes have a very low frequency and the model will not learn about these, but in general imbalances in the number of samples per phenotype should not affect the semantic quality of the model’s embeddings. The results in Table 1 reflect this – the differences between the MAE L/8 trained on the RPI-93M and PP-16M are minor, and in fact are the same in terms of biological relationship recall for the untrimmed versions. Overall I don’t think there is enough evidence to suggest that the curation described here leads to meaningful results in performance.
(2) Scaling model size leads to improved performance: This is not surprising, but the improvements shown in Tables 1, 2, and 3 are pretty minor given that the model size increase 6 times between MAE L/8 and MAE G/8. It is hard to evaluate whether these differences are meaningful, and even if they are meaningful, if they are worth the increase in model size.
(3) Evaluating intermediate representations to choose the best layer for downstream tasks improves performance: Again, this is not really a novel idea. It is well known since the early days of representation learning that successive layers learn information at different levels of abstraction. While the penultimate layer is generally used for downstream tasks, it is not uncommon to use another intermediate layer. That said, mostly consistent, if somewhat minor, increases in accuracy are shown for most models, which does support the claim that trimming models is useful in the context of representation learning for cell microscopy.
Methods And Evaluation Criteria: The methods generally make sense for the problem of representation learning for microscopy, and the evaluation is shown on multiple datasets and for multiple models. None of the methods in the paper are necessarily novel, but experiments are performed at scale and a novel foundation model is introduced.
I would prefer it if the evaluations of model scaling and dataset curation were done independently of each other in Tables 1, 2, 3, i.e., train models of the same size on the 3 datasets (RxRx3, RPI-93M, Phenoprints-16M) and also train models of different sizes on all 3 datasets. The scale of improvement is not drastic enough for me to determine whether either of these contributes significantly.
The additions of out-of-distribution generalization on JUMP-CP and RxRx3 are good contributions to the evaluation section. Biological validation is also mostly robust. While I will say that it is not surprising that linear probing accuracy on the manually curated Anax dataset correlates well with the whole-genome score, I think this is a good addition and could be a benchmark for future studies.
Theoretical Claims: There are no theoretical claims made in the paper.
Experimental Designs Or Analyses: NA
Supplementary Material: I did not review the supplementary material in detail
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
I think the main strength of this paper is that it provides baseline results related to the effect of 3 components of the model building pipeline in the context of representation learning for microscopy – dataset curation, scale, and best intermediate layer to choose for downstream tasks. The trained foundation model as well as the results would provide a good benchmark for future work in this area. The paper is well written and understandable, and generally thorough in terms of evaluation and metrics.
Weaknesses:
I have two main issues with the paper: (1) The methodological contributions are minimal and the paper is pretty bland in terms of novel ideas. All of the methods discussed here have been widely evaluated in different settings, and I'm not sure if it has enough methodological novelty to be published at this venue. (2) The performance improvements on the whole seem pretty minor to me, for example ~1-2% for the microscopy MAEs in Table 1. Combined with the lack of novelty, this makes me question whether the contribution is that strong.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback.
**On the significance of data curation**: Filtering is critical even in large language models. Penedo et al [2023] highlight the value of de-duplication in RefinedWeb. While such techniques (e.g., MinHash) don't directly apply to images, our curation serves a similar role. Dataset curation had a strong impact on replicate consistency (KS): ViT-L/8 improved from .52 (RPI-93M) to .59 (PP-16M). We hypothesize that RPI-93M contains many control/inactive perturbations, leading to overfitting to subtle batch effects even after alignment. In contrast, PP-16M better captures phenotypic diversity and yields more consistent representations.
**On the significance of model scaling**: We show that combining curation, scaling, and layer selection yields strong gains. In Table 1, ViT-G/8 on PP-16M-trimmed improves replicate consistency (KS: .52 → .63, +21%; CM: 12.3 → 18.2, +48%) over prior SOTA (ViT-L/8 on RPI-93M). This matters as biological recall is less meaningful without consistent perturbation representations.
Table 2 shows improved zero-shot recall on an OOD dataset (JUMP-CP): ViT-G/8 trimmed recalls more interactions in the 0.05/0.95 cosine thresholds [CORUM (134), HuMAP (65), Reactome (42), STRING (157)] demonstrating generalization across experimental labs.
Table 3 reinforces this with gains on a benchmark for compound-gene relationships, showing ViT-G/8 more effectively encodes biological interactions.
**On the novelty of evaluating intermediate representations**: Layer selection has precedent in supervised transfer learning, but its use in SSL-trained transformers is rare. Prior work typically fine-tunes the penultimate layer, but in microscopy, supervised fine-tuning often reduces performance [Kraus et al. 2024]. We show that selecting intermediate layers in ViTs trained with SSL improves zero-shot performance without retraining. This holds across models, including those pre-trained on natural images, and offers a practical contribution to microscopy-based representation learning. The only related work we’re aware of is MIM-Refiner [Alkin et al. 2024], which appeared contemporaneously.
While intermediate layers were known to sometimes yield better representations, their inference compute advantages became significant with recent model sizes. For ViT-G/8, metrics from layer 38 required ~3,000 L4 GPU hours vs. ~4,000 from the final layer.
**On model/dataset ablations**: We appreciate the suggestion. ViT-G/8 training required 256 H100 GPUs for over a week, so full cross-comparisons were infeasible. However, we provide targeted ablations:
* Layer selection: All models were evaluated at multiple layers. Intermediate layers consistently outperform penultimate. Proxy tasks like RxRx1 and Anax classification correlate with genome-wide evaluations, offering cost-effective benchmarks.
* Dataset comparison: We compare ViT-L/8 trained on RPI-93M vs. PP-16M across all metrics, motivating our choice to train ViT-G/8 on PP-16M.
* Model scaling: Scaling benefits in microscopy SSL have been demonstrated [Kraus et al. 2024, Fig. 5], supporting our decision to go beyond ViT-L/8.
**On the novelty of contributions**: While curation, scaling, and layer selection have been explored individually in other domains, we demonstrate their combined effectiveness for representation learning in microscopy. These insights also apply to other biological settings with repeated measurements.
* Curation strategy: Unlike prior SSL curation that removes duplicates, we identify experimental conditions with consistent phenotypes across replicates, improving signal and reducing batch effects. This is a novel form of statistical deduplication.
* Model scaling: Scaling ViTs beyond ViT-L/8 provides measurable gains and supports the neural scaling hypothesis in microscopy. At this scale, sharing what works is crucial. Reproducing one ViT-G/8 run would cost ~$470K (43,000 H100 GPU hours × $11/hour). Prior scaling papers (e.g., Touvron et al. 2023, DeepSeek-V3) have proven valuable by sharing such insights.
* Layer selection: While intermediate layers are used in supervised CNNs [Moshkov et al. 2024], they’ve not been systematically explored in SSL ViTs. We show that layer-wise evaluation improves performance without fine-tuning, highlighting a practical and underused method for biology-specific tasks.
We appreciate your recognition that the paper provides strong baselines and a new foundation model, and hope our responses clarify the significance and generalizability of our contributions.
**References**:
* Alkin et al. 2024 – https://arxiv.org/abs/2402.10093
* Kraus et al. 2024 – https://arxiv.org/abs/2404.10242
* Moshkov et al. 2024 – https://www.nature.com/articles/s41467-024-45999-1
* Touvron et al. 2023 – https://arxiv.org/abs/2302.13971
* DeepSeek-V3 – https://arxiv.org/pdf/2412.19437
* Penedo et al. 2023 – https://arxiv.org/abs/2306.01116 | null | null | null | null | null | null |
Bayesian Weight Enhancement with Steady-State Adaptation for Test-time Adaptation in Dynamic Environments | Accept (poster) | Summary: This paper proposes steady-state adaptation (SSA), a novel test-time adaptation (TTA) method that can be combined with existing ones.
SSA aims to reduce noise accumulation in gradients caused by the unsupervised nature of a TTA loss (e.g., entropy).
SSA models the distribution of the model weights and estimates it using the Bayesian weight enhancement.
Experimental results show that SSA improved classification accuracy under image corruption, label shifts, and domain shifts.
## update after rebuttal
I appreciate the author's further experiment. My concerns have been addressed. I have updated my score to 3.
Claims And Evidence: Existing TTA methods accumulate noise in gradients due to the unsupervised nature of TTA losses, which results in the model weights being no longer reliable (weight degradation).
SSA robustly estimates the posterior weights by modeling their distribution by Gaussian, which improves the stability and efficiency of TTA.
Experimental results show that existing TTA methods degrade accuracy when adaptation is continually performed, while SSA retains high accuracy.
Methods And Evaluation Criteria: Strength: Modeling and updating the posterior weights with the Bayesian weight enhancement is compelling and intriguing.
Weakness: The modeling is too simple. Does it sufficiently represent the weight distribution?
- According to Eq. (6), $p({w}_{k+1}|{u})$ is not affected by ${u}$.
- Is it sufficient to represent the covariance with a single scalar as $\Sigma_k=\sigma_k^2 \mathbf{I}$? Intuitively, each model weight has a different variance. For example, modeling the covariance as diagonal, i.e., $\Sigma_k = \text{diag} (\sigma_{k,1}^2,\ldots, \sigma_{k,d}^2)$, would be interesting.
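To make the scalar-vs-diagonal distinction concrete, here is a toy sketch of sampling a weight vector from a Gaussian posterior (illustrative only; `mean` and the variances are made up, and this is not the paper's SSA update):

```python
import random

def sample_posterior(mean, var_diag):
    # Draw one weight vector from N(mean, diag(var_diag)).
    # A scalar covariance sigma^2 * I is the special case in which
    # every entry of var_diag is the same sigma^2.
    return [m + random.gauss(0.0, v ** 0.5) for m, v in zip(mean, var_diag)]

mean = [0.1, -0.3, 0.7]            # hypothetical posterior mean weights
scalar_var = [0.01] * 3            # sigma^2 I: one variance shared by all weights
diag_var = [0.01, 0.04, 0.0001]    # diagonal: a separate variance per weight
w_scalar = sample_posterior(mean, scalar_var)
w_diag = sample_posterior(mean, diag_var)
```

With a diagonal covariance, weights with small variances stay close to their means while high-variance weights are perturbed more, which is the extra flexibility (and, per the rebuttal below, the extra overfitting risk) being debated.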
Theoretical Claims: I have checked the derivation of SSA using the SDE approximation of SGD.
Explaining how the proposed method addresses label shifts would be interesting.
Experimental Designs Or Analyses: - In Tables 1 to 3, combining SSA with not only ROID and CMF but also the other methods would be interesting. Specifically, combining SSA with simple TTA methods like TENT would enhance SSA's generality and efficacy, which would also support the hypothesis.
- A comparison of each method's performance with and without SSA would provide valuable insights.
- The accuracy gains achieved by SSA (e.g., ROID vs. ROID+SSA, CMF vs. CMF+SSA) appear to be marginal.
- Experiments on other tasks, e.g., semantic segmentation, would strengthen the SSA's generality.
Supplementary Material: I have checked the details of SSA, experiments, and additional results.
Relation To Broader Scientific Literature: The Bayesian weight enhancement can be integrated with various TTA methods.
It can raise the baseline of existing TTA literature.
Essential References Not Discussed: BACS[a] also models the posterior weight distribution of the model using SWAG[b].
[a] Zhou and Levine, Bayesian Adaptation for Covariate Shift, NeurIPS 2021.
[b] Izmailov et al., Averaging Weights Leads to Wider Optima and Better Generalization, UAI 2018.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: There are many typos, which are confusing:
- Eq. (1): $\hat{w}_o \rightarrow \hat{w}_0$
- Eq. (5): $w_{+1} \rightarrow w_{k+1}$
- Eq. (16): Is the $\int$ missing?
- The right column of L315, L658, etc.: $1.0^{-12} \rightarrow 10^{-12}$?
- Title: Stead -> Steady
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable review and insightful comments. We have carefully examined your feedback, fully understood the concerns and questions you raised, and have made every effort to reflect them thoroughly and faithfully in our manuscript.
**Methods And Evaluation Criteria:**
> According to Eq. (6), $p({w}_{k+1}|{u})$ is not affected by ${u}$.
→ In Eq. (6), the mean, represented by the source weight $w_0$, corresponds to the enhanced weight $u_0$ at time step 0, and thus is indirectly influenced by $u$.
> Is it sufficient to represent the covariance with a single scalar as $\Sigma_k=\sigma_k^2 \mathbf{I}$? Intuitively, each model weight has a different variance.
→ In Appendix D, we explicitly mention exploring more complex and potentially better-suited weight distributions for the TTA process as future work, which includes considering the diagonal covariance structure suggested by the reviewer. However, the rationale behind retaining a scalar covariance in our current work is motivated by the considerable noise inherent to unsupervised learning within the TTA process. Under such noisy conditions, scalar covariance helps mitigate potential overfitting from covariance flexibility. The following results substantiate this claim:
| Method | covariate shifts | label shifts ($\gamma=0.1$) | label shifts ($\gamma=0.0$) |
| --- | --- | --- | --- |
| SSA (scalar) | **42.2±0.16** | **13.1±0.29** | **35.9±0.04** |
| SSA (diagonal) | 42.6±0.11 | 13.2±0.30 | 36.3±0.03 |
**Theoretical Claims:**
> Explaining how the proposed method addresses label shifts would be interesting.
>
→ Under label shift scenarios, models often converge to trivial solutions, excessively increasing the probability assigned to specific classes. In such situations, gradients become excessively large, causing model weights to enter narrow optimal points (Section 3.3). SSA prevents the model from falling into narrow optimal points by decreasing the learning rate in response to gradients that significantly enlarge the variance.
**Experimental Designs Or Analyses:**
> Specifically, combining SSA with simple TTA methods like TENT would enhance SSA's generality and efficacy, which would also support the hypothesis.
>
→ In Table 5 of Section 4, we have demonstrated the effectiveness and compatibility of combining SSA with TENT in the covariate-shift scenario. As shown in that table, even when combined with TENT, SSA yields a substantial performance improvement.
→ Additionally, beyond the covariate-shift scenario presented in Table 5, we measured the average error rates (%) for TENT combined with SSA in the label-shift scenarios (corresponding to Tables 2 and 3) as follows:
| Method | covariate shifts | label shifts ($\gamma=0.1$) | label shifts ($\gamma=0.0$) |
| --- | --- | --- | --- |
| TENT | 53.3±0.22 | 55.4±1.58 | 56.2±0.98 |
| TENT+SSA | **49.3±0.30** | **49.9±0.03** | **50.3±0.02** |
**Essential References Not Discussed:**
→ In Appendix C (Bayesian Deep Learning), we have discussed SWAG [b], as you have suggested. SWAG has demonstrated robustness to out-of-distribution data, particularly when considering multiple models. Based on SWAG, BACS [a] obtains a posterior distribution over the data by marginalizing parameters through a prediction ensemble. However, such an approach is computationally inefficient because it requires making $N$ predictions for each inference, using $N$ models stored beforehand.
→ In contrast, SSA directly infers a time-varying posterior weight distribution in environments subject to gradient noise. It achieves this by performing Bayesian filtering through dynamics derived via an SDE approximation. Subsequently, SSA selects a single weight from the posterior distribution, thus requiring only one prediction per inference. Therefore, our proposed algorithm enables significantly more efficient inference than existing methods. Furthermore, SSA theoretically derives and accounts for the temporal evolution of the weight distribution (Section 3.3). We will include this discussion explicitly in Appendix C to further clarify and strengthen the contribution of our manuscript.
**Other Comments Or Suggestions:**
We sincerely appreciate your careful proofreading for typographical errors. Following your valuable suggestions, we have corrected the identified errors to enhance the readability of our manuscript. However, for the following cases, we either maintained our original notation or revised it differently from your suggestion for the reasons explained below:
→ In Eq. (16), we first computed the joint distribution $p(w_{k+1},u_{k+1}|w_{0:k})$ and then converted it into $p(u_{k+1}|w_{0:k},w_{k+1})=p(u_{k+1}|w_{0:k+1})$ with $Z_k=p(w_{k+1}|w_{0:k})$, which aligns with Bayes' rule. Hence, we decided to maintain our original notation.
→ Regarding $1.0^{-12}$, this notation was intended to represent $1^{-12}$. We have revised this to indicate $10^{-13}$ to prevent potential confusion.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's response and additional experiments.
> → In Table 5 of Section 4, we have demonstrated the effectiveness and compatibility of combining SSA with TENT in the covariate-shift scenario.
I'm curious about the versatility of SSA.
Can SSA be combined with other methods, e.g., LAME, RoTTA, SAR, and EATA?
Adding a +SSA score for each method to Tables 1-3 would be interesting.
> Regarding $1.0^{-12}$, this notation was intended to represent $1^{-12}$. We have revised this to indicate $10^{-13}$ to prevent potential confusion.
$1^x=1$ regardless of $x$. Is the notation really correct?
-----
6 Apr 2025
I appreciate the author's further experiment. My concerns have been addressed. I have updated my score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful and detailed review of our manuscript. We sincerely appreciate your insightful questions and would like to provide the following responses:
- **Additional Experiments**
→ LAME is a parameter-free method, and RoTTA is a consistency training approach based on the student-teacher framework (see Appendix B.1). Therefore, they are not encompassed by the Bayesian weight enhancement framework that forms the basis of SSA. Excluding these two methods, the results of applying SSA to SAR and EATA are as follows:
| Method | covariate shifts | label shifts ($\gamma=0.1$) | label shifts ($\gamma=0.0$) |
| --- | --- | --- | --- |
| SAR | 51.3±0.35 | 48.0±0.10 | 50.5±1.38 |
| SAR+SSA | **49.0±0.09** | **47.3±0.21** | **46.6±0.08** |
| EATA | 50.2±0.06 | 43.3±0.03 | 47.0±0.11 |
| EATA+SSA | **47.6±0.06** | **41.3±0.02** | **45.2±0.01** |
→ As you insightfully anticipated, SSA demonstrates strong compatibility and generality when combined with a variety of existing methods, improving performance across both SAR and EATA under various types of distribution shifts.
- **Notation**
→ Thank you for pointing this out. Following your suggestion, we have revised the notation to $10^{-12}$ accordingly.
Your detailed review has significantly contributed to improving the rigor and clarity of our manuscript. Once again, we are deeply grateful for your careful evaluation and constructive feedback.
---
Summary: This manuscript proposes a novel Bayesian-based framework to enhance existing weight-based TTA methods. They investigate the distribution shifts issue and reflect the reason behind gradient noise. A tailored steady-state adaptation algorithm shows SOTA performance on several benchmarks.
Claims And Evidence: The claims made in the submission are well-supported by a robust analytical framework and experimental results, which collectively validate the effectiveness of the work.
Methods And Evaluation Criteria: 1. The proposed framework employs the SDE approximation to ensure steady-state covariance, which can align with the discrete TTA process.
2. The authors design a dynamic algorithm for the step size calculation, which can balance covariance based on the posterior weight distribution.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The authors have conducted several experiments that demonstrate the ability of the proposed SSA algorithm to enhance the generalization capabilities of recent ROID and CMF methods. The experimental design appears to be comprehensive, with thorough consideration of various submodules and hyperparameters.
Supplementary Material: NA
Relation To Broader Scientific Literature: The study of online TTA presented in this paper is of significant relevance to both computer vision and protein prediction fields. However, the paper's exclusive focus on weight-based methods limits the scope of its contributions and may overlook other potential approaches.
Essential References Not Discussed: None
Other Strengths And Weaknesses: + The manuscript is well-organized with clear illustrations.
Other Comments Or Suggestions: None.
Questions For Authors: An additional issue is that, based on the experimental data, the performance improvement of SSA on ROID appears to be less significant than that of CMF. Have the authors analyzed the reasons for this difference?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We deeply appreciate your valuable review and insightful comments. We have carefully considered your feedback, fully understood the concerns and questions raised, and endeavored to address them thoroughly and sincerely.
**Relation To Broader Scientific Literature:**
> However, the paper's exclusive focus on weight-based methods limits the scope of its contributions and may overlook other potential approaches.
>
→ As you rightly pointed out, our study primarily focuses on the mechanisms underlying weight-based methods. Nonetheless, our contribution goes beyond merely analyzing weight-based techniques; we uncover and advance the underlying probabilistic framework. By doing so, we effectively mitigate instability issues commonly encountered in TTA methods, especially those caused by increasing learning rates (Section 5). Furthermore, as demonstrated in Table 5, the SSA method has shown notable generality by significantly improving performance when combined with fundamental TTA methods such as TENT.
**Questions For Authors**
> An additional issue is that, based on the experimental data, the performance improvement of SSA on ROID appears to be less significant than that of CMF. Have the authors analyzed the reasons for this difference?
>
→ The issue raised by the reviewer can be explained through the discussion presented in Appendix A.1. As mentioned in that section, CMF provides a distribution for the hidden source weights that evolves in real-time based on observed weights. This approach contrasts with the typical scenario in Eq. (16), where the mean of the likelihood $p(w_{k+1}|u_{k+1})$ is fixed as the source weights (i.e., combining ROID with SSA). In other words, CMF's modeling of the hidden source weights facilitates a more sophisticated likelihood estimation.
→ This advantage explains why the combination of CMF and SSA achieves superior performance. We will incorporate this explanation into Section 4.2 of the manuscript to further clarify the superior performance of the CMF and SSA combination, thereby enhancing the robustness of our paper.
---
Summary: This paper proposes using a stochastic differential equation (SDE) to handle temporal distribution shifts in test-time adaptation scenarios. The SDE is applied to handle the temporal dynamics of stochastic gradient descent, balancing the current updates of the model weight with that of the pre-trained model. The experiment results give the performance gains of the proposed method over two datasets.
Claims And Evidence: One of the key claims is a "Bayesian weight enhancement framework that unifies and generalizes existing weight-based TTA methods." However, the approach appears to primarily apply an SDE to SGD dynamics, which was a major contribution of prior work (Li et al., 2019). This work extends it by introducing Bayesian filtering to estimate the posterior of weight distributions.
Methods And Evaluation Criteria: The proposed methods make sense, and the datasets for evaluation seem reasonable.
Theoretical Claims: N.A.
Experimental Designs Or Analyses: - The method achieves state-of-the-art (SoTA) results on two datasets, ImageNet-C and D109. However, a more in-depth analysis is lacking, particularly regarding the computational cost introduced by the new method or analysis that indicates noise covariance has been effectively handled.
- The ablation study is insufficient. For example, is Bayesian filtering truly necessary (empirically and theoretically), or would directly applying the SDE from Li et al. (2019) suffice?
Supplementary Material: Yes, just roughly go through the supplements.
Relation To Broader Scientific Literature: In the broader scientific context, this work builds upon SDE+SGD (Li et al., 2019) by incorporating Bayesian filtering to handle distribution shifts in TTA. Since both components are well-established with existing solutions, the theoretical contribution remains unclear. While combining these two techniques is reasonable, the overall contribution does not appear particularly significant.
Essential References Not Discussed: The key contribution lies in the application of SDE or a dynamic system to address online distribution shifts in TTA. However, this general idea has been explored before, albeit with different definitions of SDE or dynamic systems. For example:
- Huang et al. (2022): Extrapolative continuous-time Bayesian neural network for fast training-free test-time adaptation (NeurIPS 2022).
- Schirmer et al. (2024): Test-time adaptation with state-space models (ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling).
While this work builds on these foundations, the novelty may be limited given prior investigations in similar directions. But I do appreciate the practical contribution of this work.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: N.A.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable review and insightful comments. We have carefully examined your feedback, fully understood the concerns and questions raised, and have earnestly endeavored to address them.
**Claims And Evidence:**
> However, the approach appears to primarily apply an SDE to SGD dynamics, which was a major contribution of prior work (Li et al., 2019).
**Experimental Designs Or Analyses:**
> The ablation study is insufficient. For example, is Bayesian filtering truly necessary (empirically and theoretically), or would directly applying the SDE from Li et al. (2019) suffice?
>
→ Below, we address your comments across both Claims And Evidence and Experimental Designs Or Analyses. We understood your comments to revolve around whether modeling the TTA process solely via an SDE approximating SGD is sufficient. Relying exclusively on the SDE approximation is inadequate for capturing weight evolution based on discrete observations, and it prevents the use of empirically validated weight-based TTA methods. The detailed reasoning is as follows:
- **The SDE approximation** describes how weights evolve without data during the TTA process.
- Additionally, while the SDE approximation provides dynamics in continuous time, actual observations occur only at discrete time steps $k$, necessitating the discretized transition distribution we derived in Eq. (12).
- **Bayesian filtering** explains how the TTA process evolves given observations (i.e., weights).
- The TTA process deals explicitly with noisy observations (Section 3.1). To model these noisy observations effectively, it is essential to interpret weight-based TTA methods probabilistically through Bayesian weight enhancement, as discussed in Section 3.2. This necessity results in the likelihood $p(w_{k+1}|u_{k+1})$ in Eq. (16), implicitly incorporating noise.
- Combined with the transition distribution, this likelihood integrates into Bayesian filtering in Eq. (13) and Eq. (16), thus yielding the posterior weight distribution incorporating past observations.
→ These points underscore the significance of integrating SDE approximation with Bayesian filtering and highlight our contributions.
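To make the interplay just described concrete (the transition distribution acting as a prior over the latent weight, noisy observed weights entering via a likelihood, and Bayesian filtering combining the two), here is a minimal scalar Kalman-filtering sketch. All dynamics coefficients, noise levels, and the source-weight anchor below are hypothetical illustrations, not the paper's SSA equations:

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.01, 0.25   # transition coeff., process noise, observation noise (made up)
source_w = 1.0               # hypothetical source weight the dynamics are anchored to
mu, P = 0.0, 1.0             # posterior mean and variance of the latent weight

variances = []
for k in range(200):
    w_obs = source_w + rng.normal(scale=np.sqrt(r))   # noisy weight from a TTA-style update
    # predict: the transition distribution acts as the prior at step k+1
    mu_pred = a * mu + (1 - a) * source_w
    P_pred = a**2 * P + q
    # update: fold the noisy observed weight into the posterior (standard Kalman update)
    K = P_pred / (P_pred + r)
    mu = mu_pred + K * (w_obs - mu_pred)
    P = (1 - K) * P_pred
    variances.append(P)

# The posterior variance settles to a steady state -- the qualitative behaviour
# that a steady-state (balancing) covariance is designed around.
```

Note that the variance recursion is independent of the observed values, so its convergence to a steady state is deterministic in this toy setting.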
**Experimental Designs Or Analyses**
> However, a more in-depth analysis is lacking, particularly regarding the computational cost introduced by the new method or analysis that indicates noise covariance has been effectively handled.
>
→ As shown in Algorithm 1, the SSA method requires only simple arithmetic operations among weights, resulting in minimal additional computational overhead. In Table 5, we addressed your concern regarding computational cost by analyzing GPU wall time and relative execution time before and after applying SSA. Our analysis reveals that SSA incurs only an additional relative computational cost ranging from 1% to 5%, depending on the specific approach.
→ The SSA method incorporates a mechanism to drive the noise covariance towards a steady-state over time. As demonstrated in Figure 4, we measured the posterior covariance over time for TENT and CMF methods without SSA applied. Our results show that existing methods exhibit significant temporal fluctuations in covariance. In contrast, such fluctuations decrease significantly when SSA is used, and the covariance converges towards a particular steady-state value.
**Essential References Not Discussed**
> The key contribution lies in the application of SDE or a dynamic system to address online distribution shifts in TTA. However, this general idea has been explored before, albeit with different definitions of SDE or dynamic systems. For example:
→ In Appendix C (Bayesian Filtering), we have cited the reference you recommended [Huang et al. (2022)]. This reference employs a particle filtering approach, which requires offline training using source and target data and a sampling process to learn the distribution parameters of weights and their importance. In contrast, SSA uses only online learning and requires only one sampling, resulting in significantly higher computational efficiency (Section 3.3).
→ Another reference you suggested [Schirmer et al. (2024)] utilizes a state-space model to capture the distribution of representation and weight changes. Consequently, their method leads to a complex design for the posterior of the weight distribution, resulting in additional computational overhead. On the other hand, SSA directly infers the posterior of the weight distribution using a transition model derived by introducing an SDE approximation. This approach allows SSA to perform inference through simple arithmetic operations, achieving high computational efficiency (Table 5). We will include this discussion in Appendix C (Bayesian Deep Learning) to clarify the contributions of SSA further.
---
Rebuttal Comment 1.1:
Comment: Most of my concerns have been resolved. However, in this work, the SDE approximation seems to function as a prior (am I understanding right?), not incorporating data or observations during the TTA process. It does, however, include some methods for using observations to parameterize the variables in the SDE. Additionally, discretization can be achieved by solving the equations at discrete time intervals. Nonetheless, it is still necessary to conduct an ablation study to clarify the contribution of each component.
---
Reply to Comment 1.1.1:
Comment: → Thank you for your constructive comments. We can interpret the transition distribution derived from the SDE as serving as a prior over the weights over time. Specifically, the transition distribution enables Bayesian filtering to infer the posterior weight distribution at the current time step by integrating past weight information. This posterior then acts as the prior distribution at the next time step (see Section 3.3, “Online Posterior Weight Distribution Inference”). We derived the balancing covariance using the posterior distribution and steady-state condition (Section 3.3, “Balancing Covariance”).
→ In light of this background, we have conducted the additional ablation study you suggested on the balancing covariance and the transition distribution. The results are summarized as follows:
| Method | covariate shifts | label shifts ($\gamma=0.1$) | label shifts ($\gamma=0.0$) |
| --- | --- | --- | --- |
| SSA | 42.2±0.16 | 35.9±0.04 | 13.1±0.29 |
| No Balancing Covariance | 43.0±0.20 | 37.9±0.10 | 14.0±0.31 |
| No Transition Distribution | 43.5±0.04 | 38.2±0.05 | 14.4±0.24 |
→ Thanks to your insightful suggestion, the individual contributions of each component in our method have become more apparent. Once again, we sincerely appreciate your thoughtful feedback.
---
Summary: Test-time adaptation assumes that only the inputs of the test dataset are given for adaptation, where the model parameters are updated using an unsupervised loss without labels. Consequently, the model parameters are inevitably updated by a noisy gradient, which differs from the gradient obtained using true labels. Thus, this work considers the weight parameters as random variables and applies the Kalman filtering approach to update the mean and covariance of the random weight parameters. Unlike previous Bayesian approaches, this work presents how to adapt the learning rate/step size of the mean using the covariance so that the weight parameters are updated in a way that ensures the covariance of the noisy gradient remains stable, meaning it does not change significantly compared to its previous update. Empirically, the effectiveness of the proposed method is demonstrated on ImageNet using various scenarios of covariate shift and label shift, as well as in terms of learning rate robustness.
Claims And Evidence: The claim is well supported by empirical results.
Methods And Evaluation Criteria: The proposed method seems to make sense.
Theoretical Claims: Although this work does not have the theoretical claim, the claim of the proposed scala parameter in Eq. (19) seems valid.
Experimental Designs Or Analyses: The experimental design and analysis are well conducted.
Supplementary Material: I checked the derivation in Algorithm Section A of the supplementary material.
Relation To Broader Scientific Literature: This work presents a clever approach to updating model parameters in test-time adaptation while considering a realistic adaptation setting. In this context, the proposed method has the potential to enable DNN models to adapt effectively and continually to new environments.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### Strengths
* This work recognizes an important problem in previous test-time adaptation algorithms, where parameter updates using noisy gradients could result in performance degradation of the adapted model due to an improperly tuned learning rate.
* This work presents a clever way to address the learning rate issue by considering that the filtered covariance of the noisy gradient remains stable, and then determining the adaptive step size depending on the filtered covariance.
### Weaknesses
* In my view, this work does not seem to have any clear weaknesses.
Other Comments Or Suggestions: N/A
Questions For Authors: I do not have any further question on this.
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We deeply appreciate your valuable review and insightful comments. In line with your suggestions, we are extending our research to address general problems of DNNs. Once again, thank you very much for your thoughtful feedback.
A Sample Efficient Conditional Independence Test in the Presence of Discretization
Paper Decision: Accept (poster)
---
Summary: This paper addresses the critical challenge of conducting conditional independence tests on discretized data, where traditional methods often fail due to information loss from binning. The authors propose DCT-GMM, a new method leveraging the Generalized Method of Moments to infer latent continuous variable relationships without binarization. Theoretical guarantees for asymptotic normality and reduced estimator variance are provided, and experiments demonstrate superior performance over existing methods in both Type I and Type II error rates and causal discovery tasks.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes, I checked the experiments in Section 4.
Supplementary Material: Yes, I reviewed Additional Experiments in Appendix E.
Relation To Broader Scientific Literature: The key contributions of the paper can advance methodologies in conditional independence testing, causal discovery, and handling discretized data.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: The integration of GMM to resolve over-identification in discretized data is novel and theoretically sound. The two-step GMM approach optimally weights moment conditions, enhancing efficiency.
Weaknesses: The method assumes latent variables follow a multivariate Gaussian distribution. This greatly limits its applicability to non-Gaussian settings.
Other Comments Or Suggestions: None.
Questions For Authors: 1. Given that each experimental configuration was replicated 2,000 times, Figure 2 suggests that the proposed method exhibits slight size inflation. What might be causing this phenomenon?
2. In all experiments, the data are discretized into three levels. How does the number of discretization levels impact the results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: > Multivariate Gaussian distribution.
Thank you for raising the concern. We fully agree that the assumption of Gaussianity will limit the generality of the proposed test. At the same time, please allow us to share a few points regarding its reasonableness:
1. **Challenges in Conditional Independence in the Presence of Discretization**: Inferring the conditional independence of latent variables based on their discretized values is indeed challenging, despite being a common practical issue. _Discretization significantly reduces the available information._ Without introducing mild assumptions, it is particularly difficult—if not overly ambitious—to construct statistics that correctly reflect conditional independence of latent continuous variables, let alone develop valid inference procedures. In this work, we rely on two key properties:
(1) The Gaussian assumption enables consistent estimation of the latent covariance matrix from discretized observations thanks to its parametric structure.
(2) Under the Gaussian model, conditional independence can be inferred solely from the covariance matrix.
2. **Popularity of nonparanormal Model**: The assumption of latent variables following multivariate Gaussian, also called the nonparanormal model, is well-studied and widely accepted in the community. There is a substantial body of work demonstrating the effectiveness of the nonparanormal model in various scenarios [1,2].
3. **Empirical Performance:** To alleviate your concern, and thanks to the insightful suggestion from Reviewer Nwpp, we conducted experiments investigating the Type I and Type II errors of the proposed test when the data-generation process violates our assumption. Specifically, the data are generated as either linear or nonlinear non-Gaussian, where the linear parameters follow the same setting as the main experiment and the nonlinear functions are randomly chosen from $(a)~f(x) = \sin(x)$, $(b)~f(x) = x^3$, $(c)~f(x) = \tanh(x)$, $(d)~f(x) = \mathrm{ReLU}(x)$. Figure 1 in the link https://anonymous.4open.science/r/DCT-GMM-0D6D shows the comparison.
From the experimental results, DCT-GMM demonstrates comparable or superior Type I error control relative to DCT. In terms of Type II errors, it also outperforms DCT under most distributions. Overall, the discretization-aware tests clearly outperform the other baselines.
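To make property (2) above concrete, here is a minimal, hypothetical sketch: a toy three-variable Gaussian chain $X_1 \to X_2 \to X_3$ with made-up coefficients (not the paper's model), showing that a zero entry of the precision matrix $\Omega = \Sigma^{-1}$ encodes conditional independence, so a consistent estimate of $\Sigma$ alone suffices for the test:

```python
import numpy as np

b12, b23 = 0.8, 0.6  # illustrative edge coefficients of the chain X1 -> X2 -> X3
# Population covariance with unit-variance noise terms:
# X1 = e1, X2 = b12*X1 + e2, X3 = b23*X2 + e3
Sigma = np.array([
    [1.0,       b12,                 b12 * b23],
    [b12,       b12**2 + 1,          b23 * (b12**2 + 1)],
    [b12 * b23, b23 * (b12**2 + 1),  b23**2 * (b12**2 + 1) + 1],
])
Omega = np.linalg.inv(Sigma)

# Omega[0, 2] == 0  <=>  X1 _||_ X3 | X2 for this chain,
# while Omega[0, 1] != 0 since X1 and X2 remain dependent given X3.
```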
> Slight size inflation. What might be causing this phenomenon?
We appreciate the reviewer's valuable question. The most plausible explanation for this phenomenon is the neglect of the second- and higher-order terms in deriving the distribution of $\hat{\sigma} - \sigma^*$ (Theorem 3.1 and Lemma 3.2). Specifically, our derivation relies on a Taylor expansion of the form (kindly refer to lines 1045-1048):
$$ \hat{g}(\mathbf{\theta}^*) = \hat{g}(\hat{\mathbf{\theta}}) + \hat{\mathbf{G}}(\mathbf{\theta}^* - \hat{\mathbf{\theta}}) + \dots,$$
In our derivation, we omit higher-order terms, which might influence the accuracy of the derived distribution. To validate this hypothesis, we conducted an experiment in which we further increased the sample size and strengthened the influence of the conditioning set on the observed pairs, thereby reducing the impact of higher-order terms. As shown in Figure 4 (link: https://anonymous.4open.science/r/DCT-GMM-0D6D), the proposed test effectively controls the Type I error rate under these conditions, supporting the validity of our explanation.
We will acknowledge this approximation as a limitation of our approach in the revised version of the paper.
> How does the number of discretization levels impact the results?
Thank you for the insightful question. To address it, we conducted additional experiments, shown in Figures 2 and 3 in the anonymous link https://anonymous.4open.science/r/DCT-GMM-0D6D.
Figure 2 compares the Type I and Type II errors of DCT-GMM and the baselines, using discretization level $M=5$ and conditioning-set cardinality $D=1$, while varying the sample size $n=(100,500,1000,2000)$. Similar to the main experiment, both DCT and DCT-GMM control the Type I error well, while DCT-GMM achieves higher power.
Figure 3 varies $M=(4,5,6,7,8)$ with $D=1$ and $n=2000$ fixed. DCT-GMM consistently maintains Type I error control and high power. Notably, the two-step DCT-GMM outperforms the one-step version, supporting our theoretical results.
---
[1] Fan, J., Liu, H., Ning, Y., and Zou, H. High dimensional semiparametric latent graphical model for mixed data. Journal of the Royal Statistical Society Series B: Statistical Methodology, 79(2):405–421, 2017.
[2] Zhang A, Fang J, Hu W, et al. A latent Gaussian copula model for mixed data analysis in brain imaging genetics[J]. IEEE/ACM transactions on computational biology and bioinformatics, 2019, 18(4): 1350-1360.
---
Thank you again for your thoughtful question. We hope the additional results clarify our findings. Please feel free to reach out if you have further questions or feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have raised my score.
---
Summary: This paper introduces a Conditional Independence (CI) test designed for scenarios where continuous data is represented at a discretized level due to measurement limitations. In such settings, applying standard CI tests directly can lead to incorrect conclusions. Assuming the continuous data follows a multivariate normal distribution, the paper addresses this issue by proposing a Discretization-Aware CI test that accounts for these limitations and can identify independence relations between continuous latent variables given their discretized versions. This is achieved using the Generalized Method of Moments (GMM), which leverages all available sample information to estimate the covariance matrix of the continuous variables.
By assuming the original continuous variables follow a Gaussian model, performing the CI test reduces to an inference problem for the precision matrix $\Omega = \Sigma^{-1} = (w_{jk})$, since under this model, $w_{jk} = 0$ implies $X_j \perp\kern-0.3em\perp X_k \mid X_{-jk}$, where $X_{-jk}$ represents all other variables in $X$ except $X_j$ and $X_k$. The paper provides asymptotic error bounds on estimating these parameters and empirically demonstrates the improved performance of the proposed method compared to other CI tests.
Claims And Evidence: Yes; however, the multivariate normal distribution assumption should be stated more explicitly and earlier in the paper, as it is a key part of the analysis. It may be worth mentioning it in the abstract or contributions section since, as it stands, the language suggests a more general claim up until the assumption is introduced in line 130.
Methods And Evaluation Criteria: yes
Theoretical Claims: I went over the proofs but haven't checked line by line.
Experimental Designs Or Analyses: Yes, results sounds reasonable
Supplementary Material: I checked Appendix A, B, C, D, E and went over the proofs (but not in details).
Relation To Broader Scientific Literature: The paper is related to the literature on Discretization-Aware CI tests (DCT), particularly the work in (1). However, the test in (1) relies on binarizing the observed data. The key contribution of this paper is proposing a sample-efficient CI test that does not require binarization, making it more effective in preserving information from the original data.
(1) Sun, B., Yao, Y., Hao, H., Qiu, Y., and Zhang, K. (2024). A conditional independence test in the presence of discretization.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths:
- Proposing a sample-efficient CI test that does not require binarization by utilizing GMM, making it more effective in preserving information from the original data.
- Promising empirical results
Weaknesses:
- The multivariate normal distribution assumption limits the implications of the theoretical results
- From a technical perspective, the contribution is rather limited, since the main results either use standard asymptotic tools or rely on results from (1)
- No guarantees for the Type I or Type II errors, since the results are asymptotic and rely on the Gaussian assumption, which may not hold in practice
(1) Sun, B., Yao, Y., Hao, H., Qiu, Y., and Zhang, K. (2024). A conditional independence test in the presence of discretization.
Other Comments Or Suggestions: Minor comments:
- In Equation (1), it should be clarified that $m$ ranges from $2$ to $M-1$, or dots should be added in the brackets to indicate the full range. Initially, it appears as if there are only three cases instead of $M$ cases.
- In Lemma 3.3, Equation (8), the definitions for $\Sigma_{-j j}$ and $\Sigma^{-1}_{-j -j}$ should be stated explicitly in the lemma statement or at least referenced in the appendix (Equation 23). Currently, it is not immediately clear how these terms are defined until reaching Equation (23).
Questions For Authors: In the two-step procedure, is there a specific reason for limiting the process to only two steps, rather than iterating further until some form of convergence in the weight matrix is achieved? or the convergence itself is not guaranteed ?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: > The multivariate normal distribution assumption.
Thank you for raising the insightful concern. Please kindly refer to our Response 1 to Reviewer 9WLL for a detailed explanation. Due to the 5000-character limit, we were unable to include the full response here. We apologize for the inconvenience.
> The technical contribution is rather limited since the main results use standard asymptotic tools.
Thank you for this important comment. We believe this comment arises from a lack of sufficient discussion in our paper. We have included additional discussion to clarify our use of asymptotic tools.
1. **Standard Practice in CI Testing**: Asymptotic approaches are a cornerstone of CI testing. Many well-established tests, including the classical Chi-square test [2], Fisher-z test [3], kernel-based tests [4] and more recent approaches [5], rely on asymptotic theory. While permutation-type tests may be constructed controlling Type I error in some special cases [1], there are currently no non-asymptotic methods available for CI tests involving hidden variables under nonparanormal models.
2. **Finite Sample Solution**: Recognizing the limitations inherent in asymptotic analysis, our framework can be readily extended with resampling techniques, such as the bootstrap. These methods offer a viable route to improve finite-sample performance without deviating from the core asymptotic framework.
3. **Empirical Support from Simulation Studies**: Extensive simulation studies in the literature demonstrate that asymptotic approximations yield accurate results even at moderate sample sizes (e.g., n>100 [4, 6]), supporting their practical applicability.
> results from DCT
We are sorry for the confusion. Both DCT and our method involve inferring the precision matrix, for which both employ the standard technique of nodewise regression [7,8]. This is why the derived CI test in Theorem 3.4 has the same form (though the analytical solutions $\xi$ involved are entirely different). We have properly acknowledged this.
Apart from the use of nodewise regression, all other components employ entirely different techniques.
Our aim is to make the CI test sample-efficient in the presence of discretization. To achieve this, we cannot follow DCT, which further binarizes observations and conducts estimation and inference based on the binarized data, a limitation explicitly acknowledged in DCT (Appendix G in [9]). Instead, we reformulate the problem as an over-identified one and leverage GMM to provide a principled solution with theoretical guarantees. The rationale and techniques are fundamentally different.
We will carefully discuss the above points in the paper to avoid further confusion. Thanks for the valuable feedback.
> In Equation (1), it should be clarified that m ranges from 2 to M−1,
Thanks for your great advice. We have included "$m$ is an integer ranging from $2$ to $M-1$" in our revised paper.
> definitions for Σ−jj and Σ−j−j−1 should be stated explicitly
Thanks for your great suggestion. Kindly note that we define the notation in the right part of lines 99-102: "Similarly, $\mathbf X_{-j-j}$ is the submatrix of $\mathbf X$ without $j$th column and $j$th row, and...". However, your suggestion made us realize that it might not be sufficiently clear. We have included the specific definitions of $\mathbf \Sigma_{-j-j}$ and $\mathbf \Sigma_{-jj}$ in Lemma 3.4 to improve the clarity.
> In the two-step procedure, is there a specific reason for limiting the process to only two steps, rather than iterating further until some form of convergence in the weight matrix is achieved? or the convergence itself is not guaranteed?
Thanks for your insightful question. Yes, we can iterate the procedure alternately until convergence, and convergence is indeed guaranteed. We do not adopt the iterative update because the two-step estimator is already consistent and variance-efficient; additional iterations would offer limited benefit while incurring higher computational cost.
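For intuition, the two-step estimator can be illustrated on a toy over-identified problem: estimating a common mean from two measurement series with different noise levels. Note this is a hypothetical example of ours, not the paper's actual moment conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.0
x = theta_true + 1.0 * rng.standard_normal(2000)  # low-noise measurements
y = theta_true + 5.0 * rng.standard_normal(2000)  # high-noise measurements

def gmm_step(w):
    # Minimize g(t)' W g(t) with g(t) = [mean(x) - t, mean(y) - t];
    # closed form: a W-weighted average of the two sample means.
    d = np.array([x.mean(), y.mean()])
    ones = np.ones(2)
    return ones @ w @ d / (ones @ w @ ones)

theta1 = gmm_step(np.eye(2))             # step 1: identity weight
g = np.stack([x - theta1, y - theta1])   # per-sample moment functions
w_opt = np.linalg.inv(g @ g.T / x.size)  # step 2: inverse moment covariance
theta2 = gmm_step(w_opt)                 # consistent and variance-efficient
```

Iterating the two steps further would keep re-estimating the weight matrix until convergence, but, as noted above, the gain over the two-step estimator is limited relative to the added computation.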
---
[1] Berrett et al., The conditional permutation test for independence while controlling for confounders.
[2] Pearson, on the criterion that a given system of deviations from the probable in the case of a correlated system ...
[3] Fisher, On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample.
[4] Zhang et al., Kernel-based conditional independence test and application in causal discovery
[5] Azadkia & Chatterjee, A Simple Measure of Conditional Dependence
[6] Gretton et al., A kernel two-sample test
[7] Qiu et al., Inference on multi-level partial correlations based on multi-subject time series data
[8] Chang et al., Confidence regions for entries of a large precision matrix
[9] Sun et al., A conditional independence test in the presence of discretization
---
Please feel free to let us know if any part remains unclear or if you have further questions. Thank you again for your time and valuable feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for your helpful response. I have increased my score. | Summary: This work tackles the problem of detecting conditional independence among hidden continuous variables, while the observed variables are discrete. Specifically, the authors rely on a recent work, the Discretization-Aware CI Test (DCT), which establishes a workflow to estimate covariances between continuous variables. This workflow leads to a system of equations that the DCT uses to over-identify the covariance. To address the issue of over-identification, the authors propose using a Generalized Method of Moments (GMM) to leverage all equations constructed with various discretization boundaries to acquire an accurate estimate of the covariance. Subsequently, nodewise regression is used, combining the covariance estimates to determine the conditional independence between variables.
Claims And Evidence: Most of the claims presented are clear, except for Theorem 3.5, which is critical to demonstrating why the DCT-GMM approach is superior to the standalone DCT method. It would be beneficial if the authors could provide additional details about Theorem 3.3 in the main content to enhance understanding and support the claims made.
Methods And Evaluation Criteria: The evaluation criteria include Type I and Type II errors, F1 Score, precision, recall, etc., which makes sense for evaluating the proposed method.
Theoretical Claims: I did not check.
Experimental Designs Or Analyses: I have reviewed the experiments, and they are generally convincing. However, I am a bit concerned about the experimental settings for the application in causal discovery. The parameters described in lines 407 to 410 on the left differ from those used in the original DCT work.
Supplementary Material: I did not.
Relation To Broader Scientific Literature: I believe the proposed method could prove particularly useful in digital data settings, where data is often discretized for storage in devices.
Essential References Not Discussed: I am not aware of.
Other Strengths And Weaknesses: I have noticed substantial overlap between lines 91 to 102 and lines 297 to 304 on the right-hand column with the corresponding sections of the DCT work. Please rephrase these sections to ensure your work is original.
Other Comments Or Suggestions: Please see the weaknesses section.
Questions For Authors: What are the benefits of using nodewise regression rather than constructing and inverting a covariance matrix? Is it because the inversion is expensive?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Most of the claims presented are clear, except for Theorem 3.5.
Thank you for these constructive comments. We followed your comment and carefully revised Theorem 3.5 to make it more intuitive and clear. Note that the formal theorem demonstrating the superiority of DCT-GMM over DCT is provided in Appendix G. We avoided including it in the main body of the submission because presenting the claim requires a detailed introduction to DCT, which may detract from the overall flow of the main text.
> It would be beneficial if the authors could provide additional details about Theorem 3.3 in the main content
Thank you for your thoughtful feedback. We believe you are referring to Lemma 3.3, which is intended to illustrate a property of nodewise regression that supports the derivation of the CI test. Inspired by your comments, we have added sentences to clarify the motivation behind using nodewise regression in this context:
" While Theorem 3.1 and Lemma 3.2 effectively address the independence test, they do not directly resolve the CI testing problem. A seemingly straightforward approach is to invert the estimated covariance matrix; however, this does not provide a valid solution for inference."
> The parameters described in lines 407 to 410 on the left differ from those used in the original DCT work.
Thanks for your careful review. Since DCT is the only method specifically designed to handle the discretization scenario, we generally followed their experimental setup for a fair comparison. However, our parameter setting for the experiment is **more challenging**, involving **fewer samples** and **more variables**, to validate the superiority of DCT-GMM.
We summarize the comparison between the causal discovery settings of the DCT paper and our own:
| Setting | DCT | DCT-GMM |
| -------------- | --------------------------------- | ----------------------------- |
| varying sample | $p=8, n=(500, 1000, 5000, 10000)$ | $p=10, n=(100,500,1000,2000)$ |
| varying nodes | $n=5000, p=(4,6,8,10)$ | $n=2000, p=(4,6,8,12)$ |
> overlap between lines 91 to 102 and lines 297 to 304 on the right-hand column with the corresponding sections of the DCT work.
We appreciate the reviewer for the careful review. We have rephrased the related work part (line 91 to 102) with citing additional references [1, 2, 3] (thank again to reviewer NWpp for his great suggestion).
For the experiment part (line 297 to 304), our intention is to follow the previous work DCT. Given your concern, we have rephrased it as
"
In the first part, we evaluate the Type I and Type II errors of DCT-GMM across various scenarios, comparing it with baseline methods including DCT (Sun et al., 2024), the Fisher-z test (Fisher, 1921), and the Chi-square test (F.R.S., 2009). In the second part, we assess the performance of DCT-GMM on causal discovery tasks and compare it with the same baseline methods. In the third part, we directly compare DCT-GMM and DCT to empirically validate Theorem 3.5.
"
> Benefits of using nodewise regression rather than inverting a covariance matrix?
Thanks for your insightful question. The problem with inverting the covariance matrix is that **it does not address the issue of inference**, i.e., **deriving the distribution** of $\hat \omega_{jk} - \omega_{jk}^*$.
While the GMM effectively solves the estimation of the covariance $\mathbf{\hat{\Sigma}}$, and we can directly invert it to obtain $\mathbf{\hat{\Omega}}$, whose entry $\hat \omega_{jk}$ captures the relation of $X_j$ with $X_k$ conditioning on all other variables, this does not provide a tractable way to infer the distribution of $\hat \omega_{jk} - \omega_{jk}^*$.
To address this, we adopt a standard technique---nodewise regression [4, 5]. This allows us to:
1. Show that $\beta_{j,k}$ acts as an effective surrogate of $\omega_{jk}$, so that the target distribution transfers from $\hat \omega_{jk} - \omega_{jk}^*$ to $\hat \beta_{j,k} - \beta_{j,k}^*$;
2. Express $\hat \beta_{j,k}-\beta_{j,k}^*$ as a linear combination of the known distribution of $\hat{\mathbf{\Sigma}} - \mathbf{\Sigma}^*$, as in equation (11).
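As a self-contained numerical illustration of this surrogate relation (using our own toy precision matrix, not from the paper), nodewise regression recovers a row of the precision matrix from ordinary least squares under Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 4
prec_true = np.array([[2.0, 0.8, 0.0, 0.0],
                      [0.8, 2.0, 0.5, 0.0],
                      [0.0, 0.5, 2.0, 0.3],
                      [0.0, 0.0, 0.3, 2.0]])
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec_true), size=n)

j = 0
beta, *_ = np.linalg.lstsq(X[:, 1:], X[:, j], rcond=None)  # regress X_j on X_{-j}
resid_var = np.mean((X[:, j] - X[:, 1:] @ beta) ** 2)
omega_jj = 1.0 / resid_var      # Omega_jj = 1 / Var(residual)
omega_jrest = -beta * omega_jj  # Omega_jk = -beta_{j,k} * Omega_jj
```

A zero regression coefficient corresponds to a zero precision entry, i.e., conditional independence under Gaussianity, so the inference step only needs the distribution of the regression coefficients rather than of the inverted matrix.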
[1] Li S, et al. K-nearest-neighbor local sampling based conditional independence testing[J]. Advances in Neural Information Processing Systems, 2023.
[2] Cai Z, et al. A distribution free conditional independence test with applications to causal discovery[J]. Journal of Machine Learning Research, 2022.
[3] Kim I, et al. Local permutation tests for conditional independence[J]. The Annals of Statistics, 2022.
[4] Qiu, Yumou, et al. "Inference on multi-level partial correlations based on multi-subject time series data." Journal of the American Statistical Association, 2022.
[5] Chang J, et al. Confidence regions for entries of a large precision matrix[J]. Journal of Econometrics, 2018. | Summary: The paper proposes a conditional independence (CI) test for testing CI relations in discretized data. It does not rely on binarizing the data to infer the CI relations between latent variables. The paper argues that it does not need to rely on binarization like the previous work to establish correct CI relations between the latent continuous variables. It addresses an issue known as over-identifying restriction problem by using Generalized Method of Moments. The authors derive the test statistic and establish its asymptotic distribution. The key limitation of this work is that it assumes all latent continuous variables follow a multivariate normal distribution.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, the paper uses F1 score, precision, and recall of the skeleton to evaluate the effectiveness of the proposed CI test on learning conditional independence. The authors also show the robustness of the test based on a range of sample sizes and graph sizes. They further test the algorithm on denser graphs and a real-world experiment.
Theoretical Claims: Yes. I have checked sections F and G. I don’t see any issue.
Experimental Designs Or Analyses: Yes. Originally, I thought the paper intentionally avoided many recent advanced baselines. However, I later understood that it is not needed because of the neat idea that the conditional independence among the original continuous variables may be wrongly tested as conditional dependence due to discretization. With faithfulness, it makes sense. The experiment demonstrates the contributions of the proposed CI test, which is more sample efficient than the previous work called DCT and outperforms it.
However, I do not follow how the authors came up with the ground truth graph to verify their test performance in the real-world experiment. I tried to look up the reference, but I don't see anything describing the ground truth on the subset of variables.
Supplementary Material: Yes, Sections A.1, A.2, B, F, G, E.
Relation To Broader Scientific Literature: The paper aims to solve the overidentification issue (see lines 184-192) found in one recent related work called DCT (Sun et al. 2024) without the need for binarization like DCT. The key contribution of this paper is the content starting from line 197 all the way to proving Theorem 3.1 and Lemma 3.2, along with Theorem 3.5. Theorem 3.1 establishes the asymptotic normality of the covariance estimator derived via GMM. Lemma 3.2 further claims that choosing an optimal weighting matrix (namely, one converging to the inverse of the covariance of the moment functions) reduces the asymptotic variance compared to a one-step estimator. Then, the authors follow the same framework of using nodewise regression (Sun et al. 2024) to derive the CI tests. The authors also argue that DCT-GMM, the proposed CI test, achieves lower variance than DCT because it leverages additional valid moment conditions via Theorem 3.5.
Essential References Not Discussed: I think the paper should cite the recent work on conditional independence tests though these tests are not primarily designed for the exact same setup. I think at least citing CI tests that work with discrete data is a fair game e.g. [1]
Reference:
- [1] Li, Shuai, et al. "K-nearest-neighbor local sampling based conditional independence testing." Advances in Neural Information Processing Systems 36 (2023): 23321-23344.
Other Strengths And Weaknesses: Strengths:
- The paper is quite well-written.
- The theoretical claims are logically sound. I appreciate the detailed explanation for the proofs in section F.
- The experiment supports the contributions and claims of the paper. Both synthetic and real-world data are used.
Weaknesses:
- Some sentences are difficult to understand, e.g., "the proportion of both observed variables exceeding their means reflects the underlying covariance, solved using a single equation."
- The practical significance of the proposed CI tests is quite limited. Although the previous work (Sun et al. 2024) has been published in ICLR'25, I find it difficult to come up with practical scenarios where one knowingly discretizes some continuous variables, and those continuous variables become unobserved. This is further questioned, especially when it assumes normality for those unobserved continuous variables.
- The paper uses results heavily from (Sun et al. 2024) to derive the CI tests via nodewise regression. The presentation structure e.g., experiments, also closely follows Sun et al. 2024 (see section 3 and the rest in Sun et al. 2024).
Other Comments Or Suggestions: - Inferring \tilde{X_{1}} being conditionally dependent on \tilde{X_{3}} given \tilde{X_{2}} is via the faithfulness assumption, not the causal Markov condition.
- Minor typo in figure 3b: “Numb”
- Lines 964-968: DCTG-> DCT-GMM. Also, the algorithm is named DCTG in the appendix, but the main paper refers to it as DCT-GMM.
- It would be better to describe how the DAG structures are generated in the experiment.
- I think it will be interesting how robust the proposed CI test is when the unobserved continuous variables do not follow a normal distribution and run the same comparison with the baselines.
Questions For Authors: 1. In Figure 3b, does ‘Fisherz’ mean that the authors directly apply the Fisher z test to the discretized data? If so, why is that appropriate?
2. How do the authors obtain the ground truth for the real-world experiment to verify the experimental result, especially for the selected three variables?
3. Can the authors further give some practical scenarios where some variables are discretized and are known to be both continuous and unobserved?
4. Can the authors test on an experiment where the unobserved continuous variables do not follow a normal distribution?
I will raise my score if the authors address my questions well.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Some practical scenarios
Thank you for your valuable question. We appreciate the chance to highlight the **common and often unavoidable discretization** due to practical measurement constraints. In principle, **any variable measuring "degree" or "intensity"** (e.g., happiness, severity) is inherently continuous but often recorded in discrete form. For example:
- In medical diagnostics, the **severity of cancer** is commonly categorized into stages like _"Stage I"_ or _"Stage II"_, not because the disease progresses in discrete jumps, but because there is no equipment that can exactly quantify its severity on a truly continuous scale.
- Similarly, in questionnaires—including the real-world dataset used in our paper—latent continuous variables such as **level of depression** are often measured using Likert scales (e.g., 1 to 5) for fast assessment and clinical usability.
We hope these practical scenarios clarify the practical significance of DCT-GMM.
> Groundtruth of real-world dataset
Thanks for your question. You're right—there is no ground truth in the Big Five Personality Dataset. We followed the setup from DCT. Despite the absence of ground truth for reference, the results of the discretization-aware CI tests (DCT and DCT-GMM) appear more reasonable. Specifically, the conclusion that $N4 \perp N10 \mid N3$ is intuitively plausible---since N10 ("I often feel blue") is a stronger indicator of mood and should naturally subsume the information in N4 ("I seldom feel blue").
At the same time, we would greatly appreciate it if the reviewer could suggest any datasets that align well with our setting.
> Normal assumption
Thank you for this very insightful question. Please kindly refer to our **response~1 to Reviewer 9WLL** for a detailed explanation. Due to the 5000-character limit, we were unable to include the full response here. We sincerely apologize for the inconvenience.
> Citing CI tests that work with discrete data...
Thanks for your great suggestion. We have included the suggested reference [1] and other recent CI tests for discrete variables in our revised version [2,3].
> Sentences are difficult to understand. e.g, the proportion...
Thank you for the helpful comment. We have followed your comment and revised it to: “The empirical probability of observed discrete pairs reflects the covariance of the underlying continuous variables.”
> Use heavily from DCT of nodewise regression
Thank you for discussing the connection between DCT and ours. Nodewise regression is a standard technique for addressing the inference problem of the precision matrix [4,5]. Both DCT and our method involve the inference of the precision matrix. By using nodewise regression in the nonparanormal model, the derived CI test in Theorem 3.4 would have the same form (but the analytical solutions involved are entirely different). We have properly acknowledged this in the main text and references.
Moreover, it is worth noting that apart from the use of nodewise regression, all other components employ entirely different techniques.
Our unique contribution lies in making the CI test sample-efficient in the presence of discretization. To achieve this, we cannot follow DCT, which further binarizes observations and conducts estimation and inference based on the binarized data—a limitation explicitly acknowledged in DCT (Appendix G in [6]). Instead, we reformulate the problem as an overparameterized one and leverage GMM to provide a principled solution with theoretical guarantees. The rationale and techniques are fundamentally different.
> - DAG generation
Kindly refer to lines 380–382, where we state that the DAG is generated using the BP model.
> "Fisherz" meaning..
Thank you for your insightful question. Yes, "Fisherz" refers to directly applying the Fisher z test to discretized data. Our goal is twofold:
1. To compare it with the Fisher z test on the original continuous data and highlight how discretization distorts causal discovery.
2. To show that even if users know the variables are inherently continuous, directly applying a CI test designed for continuous data—such as the Fisher z test—on discretized values can significantly degrade the resulting causal graph.
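This distortion is easy to reproduce. The sketch below (our own illustration; the thresholds, chain coefficients, and sample size are arbitrary) applies the Fisher-z test to a chain X1 -> X2 -> X3 before and after discretization; on the discretized data the test tends to spuriously reject the true relation X1 ⊥ X3 | X2:

```python
import math
import numpy as np

def fisher_z_p(x, y, z):
    """Fisher-z p-value for testing x independent of y given z (1-D arrays)."""
    prec = np.linalg.inv(np.corrcoef(np.stack([x, y, z])))
    r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    stat = math.sqrt(len(x) - 1 - 3) * abs(0.5 * math.log((1 + r) / (1 - r)))
    return math.erfc(stat / math.sqrt(2))  # two-sided p-value under N(0, 1)

rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)
x2 = x1 + rng.standard_normal(n)  # chain: X1 -> X2 -> X3
x3 = x2 + rng.standard_normal(n)

disc = lambda v: np.digitize(v, [-1.0, 0.0, 1.0]).astype(float)  # 4-level discretization
p_cont = fisher_z_p(x1, x3, x2)                    # X1 and X3 are truly CI given X2
p_disc = fisher_z_p(disc(x1), disc(x3), disc(x2))  # conditioning on discretized X2 leaks
```

Because the discretized X2 only coarsely blocks the path from X1 to X3, residual dependence remains and the test rejects; this is exactly the failure mode that motivates discretization-aware tests.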
[1] Li et al., K-nearest-neighbor local sampling based conditional independence testing
[2] Cai et al., A distribution free conditional independence test with applications to causal discovery
[3] Kim et al., Local permutation tests for conditional independence
[4] Qiu et al., Inference on multi-level partial correlations based on multi-subject time series data
[5] Chang et al., Confidence regions for entries of a large precision matrix
[6] Sun et al., A conditional independence test in the presence of discretization
---
We sincerely thank the reviewer for the thorough and constructive review, which has greatly improved our paper. Please let us know if there are any further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response and the additional experiments. I have read other reviewers' comments and responses. Overall, I think the paper makes a decent theoretical contribution and it's evident in the experimental results. I am happy to raise my score to support this paper to be accepted.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful review and for raising your score. Your constructive feedback has greatly improved our paper. Thank you for taking the time to carefully consider our rebuttal and for acknowledging the contributions of our research. | null | null | null | null | null | null |
Are Sparse Autoencoders Useful? A Case Study in Sparse Probing | Accept (poster) | Summary: This paper studies the benefits and limits of Sparse Autoencoders (SAEs). The topic is quite relevant given they are attracting more and more research in Large Language models, specifically in the context of Mechanistic Interpretability (MI). The authors propose a set of benchmarks to study both probing and interpretability of SAEs, concluding that they often do not meet their promises.
Claims And Evidence: Being an experimental verification of the advantage of SAEs, claims are about the investigation. The authors cover different settings, including 5 settings for probing, and 3 settings for interpretability of SAEs.
The claims are clear, and evidence and counter-evidence have been well studied.
I have a few remarks on the interpretability side, see next box.
Methods And Evaluation Criteria: Linear probing seems convincing (section 3). After obtaining the last token embeddings, the investigation resembles a standard machine learning pipeline. The evaluation criteria are good. The authors' methodology is precise and both plots and settings are clear and easy to understand. Also connected, section 5 contributes to highlighting SAEs are not useful for downstream tasks.
The investigation on the SAEs activations interpretability is less rigorous compared to the previous one. And whether SAEs extract interpretable concepts remains essentially unsolved.
0. The authors start from the assumption that, somewhat, activations of SAEs are interpretable (Point 2, lines 87-93; First paragraph, lines 300-303). I am not convinced this is the case as the authors also highlight that "we lack a ground-truth to know whether SAEs truly extract interpretable concepts..." (lines 40-44). The purpose of Section 4 is to investigate interpretability by measuring usefulness for other tasks, but this does not seem to provide clear evidence in support of or against interpretability of SAEs.
1. The details about the `autointerpr` method are missing and they should be discussed at least in the supplementary. I am not even convinced that using this method is the best practice: it looks a bit circular to study the LLM concepts with another LLM, and in the best possible scenario that would have been done with a user study. In the discussion of Section 4.1, the authors focus on the latent 122774 of the SAE, whose supposed meaning is "mentions living room". This "concept", however, is not invariant to language variations (the French example), so is it really encoding the concept of "presence of living room"? Is it activating also on unrelated sentences?
2. In section 4.2, the authors further suggest that some latents are correctly labelled and some others are mislabelled by `autointerpr`. This conclusion is drawn by restricting to the dataset of interest for these latents. E.g., for positive evidence of 81210 on the dataset 5, for negative evidence of 50817 on the dataset 125. Despite that, is it the case that latents are disentangled from other potentially unrelated concepts? Can they activate in other tasks as well? Also, from the conclusion in lines 372-384 it is not clear what the authors mean.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental setup seems sound and the analysis in Section 3 is accurate. Section 4 is less convincing. Section 5 is very clear.
Also, **the code is not available and reproducibility cannot be verified**.
Supplementary Material: I mainly covered Section A, G, and H.
Relation To Broader Scientific Literature: Some works are worth mentioning. I would suggest the authors to connect to other representation learning works relevant to study interpretability and advantages of probing with SAEs. There is an interesting link to identifiability and disentanglement research:
[1] Are Disentangled Representations Helpful for Abstract Visual Reasoning?, van Steenkiste et al., NeurIPS 2019 - discusses whether disentangled (which means sometimes interpretable) representations are helpful for tasks, concluding they are not. \
[2] Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning, Lachapelle et al., ICML 2023 - considers sparsity and label classification to extract disentangled representations. This seems to aid in classification over new tasks. \
[3] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts, Joshi et al., arXiv 2025 - studies identifiability of SAEs and connects to the linear representation hypothesis.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: It is sidelined that SAEs are not truly intrinsically interpretable. This has to be verified somehow.
Other Comments Or Suggestions: Repetition of reference Bricken 2024a/b.
Questions For Authors: All those appearing in Methods and Evaluation Criteria. All questions revolve around the interpretability of SAEs. The main question is: Are SAE neurons interpretable?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work! We are grateful for your time and help, especially related to missing discussion of related work and questions about the automated interpretability techniques we use. We were especially glad to hear that you appreciated the depth to which we studied evidence and counter-evidence.
---
>And whether SAEs extract interpretable concepts remains essentially unsolved.
>.. but this does not seem to provide clear evidence in support of or against interpretability of SAEs…All questions revolve around the interpretability of SAEs. The main question is: Are SAE neurons interpretable?
This is an important point that we would like to clarify. In this paper, we do not directly investigate whether SAEs discover an interpretable basis of latents. This statement is difficult to falsify or prove because there is no ground-truth for SAE latents, as you note. Instead, the goal of our work is to attempt to evaluate how “good” the basis of features that SAEs discover is by studying how helpful they are on the downstream task of probing. This question is both more useful for practitioners and easier to answer than pure interpretability metrics. Thus, our work also provides a more objective (if indirect) measure of SAE latent interpretability.
> The details about the autointerpr method are missing and they should be discussed at least in the supplementary. I am not even convinced that using this method is the best practice: it looks a bit circular to study the LLM concepts with another LLM, and in the best possible scenario that would have been done with a user study.
Thank you for bringing this up! We have added the following description of autointerp in Section 4.1: “For this and all subsequent experiments, we generate autointerp labels using Neuronpedia, which leverages a language model to produce natural language explanations for a latent based on its top activating tokens (the Neuronpedia autointerp implementation is based on [4]).” Autointerp is a standard procedure for evaluating SAE latents [1, 2, 3, 4 ], although you are correct in stating that it is somewhat circular to study an LLM with an LLM! More specifically, our procedure is dependent on the efficacy of autointerp. To remove this confounder in Section 4.1 (about probe pruning), we ran an additional experiment where two of the authors independently labeled latents and ranked their relevance for all three tasks, with no change to the results.
[1] "Language models can explain neurons in language models."
[2] "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning"
[3] "Scaling and evaluating sparse autoencoders."
[4] "Automatically interpreting millions of features in large language models."
> In the discussion of Section 4.1, the authors focus on the latent 122774 of the SAE, whose supposed meaning is "mentions living room". This "concept", however, is not invariant to language variations (the French example), so is it really encoding the concept of "presence of living room"? Is it activating also on unrelated sentences?
This is correct! We use latent 122774 as an example of a latent that doesn’t represent the true living-room feature (which we expect to be the same across languages). This indicates that the underlying SAE is imperfect, and is a possible reason why SAE probes did not generalize as well to covariate shifts. This latent generally activates only when "living room" appears in the sentence, but other latents are less precise.
> In section 4.2, the authors further suggest that some latents are correctly labelled and some others are mislabelled by autointerpr… Despite that, is it the case that latents are disentangled from other potentially unrelated concepts? Can they activate in other tasks as well? Also, from the conclusion in lines 372-384 it is not clear what the authors mean.
Thank you for mentioning this! If SAEs worked perfectly, then we would expect latents to specialize and only fire on a single concept (which could be quite complex itself). However, since SAEs are imperfect, many latents remain polysemantic (are active on multiple concepts). We have adjusted the language of lines 372-384, we agree that it was complex! Please see our response to reviewer 3 (gmH6), which has the revised paragraphs.
>Also, the code is not available and reproducibility cannot be verified.
Thank you for pointing this out! We have uploaded our code anonymously here: https://anonymous.4open.science/r/SAE-Probes-B404 We also have added a link to the de-anonymized github repo in the non-anonymous version of our paper.
Thank you for providing the additional citations, we will add them to our related work.
---
Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? And are there any further clarification or modifications we could make to improve your score?
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply and for considering my requests.
> the goal of our work is to attempt to evaluate how “good” the basis of features that SAEs discover is by studying how helpful they are on the downstream task of probing. This question is both more useful for practitioners and easier to answer than pure interpretability metrics. Thus, our work also provides a more objective (if indirect) measure of SAE latent interpretability.
Yes, I understood this point, but I am not convinced this is a "more objective, indirect measure of interpretability".
My concerns were more about Sec. 4.2 and as I already expressed my concerns, the results do not complete the picture on SAE interpretability. | Summary: The paper comprises of several experiments evaluating the efficacy of sparse autoencoder (SAE) approaches to probing.
The paper first focuses on the accuracy of probes under various settings (such as imbalanced data), and finds that SAEs do not improve upon baselines. The paper introduces a "quiver of arrows" methodology, in which the SAE approach is evaluated according to its ability to improve upon a "toolkit" of other methods (specifically, the best method is chosen on a validation set, and the test accuracy is computed).
The paper then examines other potential benefits of SAEs beyond accuracy improvements, such as interpretability. Here, the paper demonstrates that many of the tasks that can be achieved by SAEs can also be achieved by simple baselines (like logistic regression).
Claims And Evidence: The paper's primary claim is that SAE-based probing does not outperform simple baselines. This claim (when restricted to the paper's implementation of SAE-based probing) is supported, on one hand, by a large set of experiments. However, the method by which SAE-based probing is evaluated ("quiver of arrows") is nonstandard. The claim that such an approach is needed for robustness is not convincing to me, as it is widely practiced to compare the individual accuracies of different models.
At a higher level, the claim that SAE-based probing does not have advantages for interpretability is limited by the fact that there are not comparisons to baselines for the most standard interpretability task (interpreting latents). The paper claims that "these findings may be possible using baseline classifiers," (line 373), but this is limited to theoretical speculation. An evaluation here would be more convincing. In general, the other evaluations in Section 4 seemed to be somewhat ad hoc / anecdotal.
Methods And Evaluation Criteria: See above.
Theoretical Claims: NA
Experimental Designs Or Analyses: See above
Supplementary Material: NA
Relation To Broader Scientific Literature: The paper contributes to a growing literature studying SAEs as a tool for mechanistic interpretability. While there has been a lot of excitement regarding the potential of SAEs, as the paper notes, the literature evaluating the practicality/usefulness of SAEs is limited. This paper adds to this literature by considering SAE probing performance on a large number of probing datasets.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper focuses on an important task, evaluating SAEs against baselines, and provides evidence (through many datasets) that SAE-based probing may not offer the improvements that some past work has suggested. The breadth of experiments is a strength, exploring the settings that may particularly illustrate the benefits of SAEs. This kind of fair evaluation is important.
The primary weaknesses of the paper are the nonstandard evaluation strategy in Section 3, and the incomplete and somewhat ad hoc results in Section 4 (in particular, the lack of comparison to baselines in Section 4.2, since interpretability is the primary strength of SAEs).
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: We are thankful for your time and help, especially related to your points about the quiver of arrows and the clarity of our interpretability experiments. We were glad to hear that you appreciated the breadth of our experiments and found the problem we are investigating important.
---
> However, the method by which SAE-based probing is evaluated ("quiver of arrows") is nonstandard. The claim that such an approach is needed for robustness is not convincing to me, as it is widely practiced to compare the individual accuracies of different models.
Thank you for bringing up this point! The quiver of arrows approach we introduce is non-standard in the literature, but we adopt it to make the strongest possible case for SAEs. Since we select the best method using validation AUC, we expect to choose SAEs only for tasks where they perform best. To verify this, in the three settings where we employ the quiver of arrows—standard conditions, data scarcity, and class imbalance—we compare its performance to that of using a single SAE across all tasks, as shown below.
| **Setting** | Baseline Quiver | SAEs Quiver | SAEs + Baselines Quiver | LogReg | SAE 16k k=16 | SAE 16k k=128 | SAE 131k k=16 | SAE 131k k=128 | SAE 1m k=16 | SAE 1m k=128 |
|------------------------|------------------|-------------|---------------------------|--------|---------------|----------------|----------------|------------------|---------------|----------------|
| **Standard Conditions**| 0.940 | 0.930 | 0.939 | **0.941** | 0.904 | 0.921 | 0.899 | 0.918 | 0.889 | 0.913 |
| **Data Scarcity** | 0.819 | 0.806 | 0.812 | **0.836** | 0.800 | 0.816 | 0.794 | 0.810 | 0.785 | 0.801 |
| **Class Imbalance** | 0.921 | 0.906 | 0.916 | **0.929** | 0.898 | 0.909 | 0.890 | 0.906 | 0.882 | 0.899 |
Clearly, the quiver of arrows serves as an upper bound on the performance of any individual SAE. As an alternative counterfactual, instead of comparing against a single SAE probe, we select the best SAE for each task using validation AUC (a “quiver of SAEs”) and compare this to the overall quiver of arrows with both baselines and SAEs. Again, the baseline + SAEs quiver outperforms the SAE-only quiver.
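As a minimal, hypothetical sketch of the quiver-of-arrows selection described above: for each task, the method with the best validation AUC is chosen, and its test AUC is reported. All method names and AUC values below are illustrative, not results from the paper.

```python
# Hypothetical sketch of "quiver of arrows" selection: per task, pick the
# method with the best validation AUC, then report that method's test AUC.
# Method names and scores are illustrative, not taken from the paper.

def quiver_select(val_auc, test_auc):
    """val_auc/test_auc: {task: {method: auc}} -> {task: (method, test_auc)}."""
    chosen = {}
    for task, scores in val_auc.items():
        best = max(scores, key=scores.get)
        chosen[task] = (best, test_auc[task][best])
    return chosen

val = {"taskA": {"logreg": 0.93, "sae_16k_k128": 0.95},
       "taskB": {"logreg": 0.90, "sae_16k_k128": 0.88}}
test = {"taskA": {"logreg": 0.92, "sae_16k_k128": 0.91},
        "taskB": {"logreg": 0.89, "sae_16k_k128": 0.87}}

picked = quiver_select(val, test)
# On taskA the SAE wins on validation but can still trail the baseline on test,
# which is why a chosen point in a plot like Figure 4 can fall below the diagonal.
```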
> At a higher level, the claim that SAE-based probing does not have advantages for interpretability is limited by the fact that there are not comparisons to baselines for the most standard interpretability task (interpreting latents). The paper claims that "these findings may be possible using baseline classifiers," (line 373), but this is limited to theoretical speculation. An evaluation here would be more convincing. In general, the other evaluations in Section 4 seemed to be somewhat ad hoc / anecdotal.
Thank you for pointing out that line 373 was confusing! In section 4.3, we zoom into two datasets as case studies and show that this is not just a theoretical worry: baseline classifiers can find spurious correlations and noisy labels as well as SAEs. We have modified this paragraph to be as follows:
```
The spurious latent category seems especially promising because finding a spurious latent may help us identify spurious features in the dataset. However, in a case study in \cref{sec:ai_vs_human}, we find that similar findings may be possible using baseline classifiers: we apply a logistic regression probe to model hidden states on tokens from the Pile \cite{pile} and show that maximally activating examples also exhibit the spurious correlation.
However, a practical advantage for SAEs is that the infrastructure to perform autointerp is pre-existing through platforms like Neuronpedia, and a theoretical advantage is that the baseline classifier can only identify the single most relevant coarse-grained feature, while the decomposability of SAE probes into latents allows for identifying many independent features of various importance.
```
For your first point, we are not sure what you mean by comparisons to baselines for interpreting latents. Because SAEs are an unsupervised method, it is not clear what a baseline for interpreting a latent would be, as other methods do not have an equivalent to latents (i.e. units a probe can be broken down into). This decomposition is an advantage of SAEs, but it is unclear how much value it gives. In section 4.2, we are investigating what we can learn about different SAE latents by examining the datasets that they are discriminative for (as opposed to auto-interp or human interpretability of latents, which typically looks at top activating examples).
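A tiny sketch of the decomposability point made above: a linear probe over k SAE latents breaks into per-latent contributions that can be inspected individually, which a probe on raw activations does not offer per feature. The latent names and numbers here are illustrative assumptions, not values from the paper.

```python
# Illustrative: a linear probe over SAE latent activations decomposes into
# per-latent contributions w_i * z_i, so each latent's role can be inspected.
# Latent names and values are made up for illustration.
weights = {"latent_101": 2.0, "latent_57": -0.5, "latent_9": 0.1}
activs  = {"latent_101": 0.8, "latent_57": 1.2, "latent_9": 0.0}

contrib = {name: weights[name] * activs[name] for name in weights}
logit = sum(contrib.values())
top = max(contrib, key=lambda n: abs(contrib[n]))  # most influential latent
```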
---
Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? If not, what further clarification or modifications could we make to improve your score?

---

Summary: In this work, the authors propose a fair evaluation of SAEs by modelling them as a tool in a practitioner's toolkit, or “quiver of arrows”, with the overall question of asking “When is it useful for a practitioner to incorporate SAE probes into their downstream application?”
Claims And Evidence: The authors’ claims are supported by relatively clear evidence. The claims are simple and easily validated by the provided experiments. While I do observe evidence that SAE probes contribute less to downstream use cases than other probes, the gap is not incredibly large, especially in Fig. 5. I appreciate the authors’ claim that autointerpretability methods could easily be applied to probes to achieve similar latent interpretability benefits. However, I was unconvinced by the claims in Section 5 (see strengths/weaknesses). I don’t think the section adds a lot to the impact of the paper and I would encourage removing or deferring it to the appendix.
Methods And Evaluation Criteria: The quiver of arrows methodology makes sense, especially in a setting where one has multiple SAE probes. Additionally the authors choose a comprehensive suite of probing tasks to cover a wide range of potential use cases for SAE probes. I also think their experiments were thorough and together paint a clear picture of their argument and conclusion. However, I would like to see more elaboration of why the imbalanced dataset settings give the SAE’s inductive bias an advantage.
Theoretical Claims: The authors make no theoretical claims in this work.
Experimental Designs Or Analyses: I checked the general quiver-of-arrows AUC comparison setup which is used throughout their experiments and found it to be sound.
Supplementary Material: I reviewed the quiver of arrows and a few tables and charts in the Supplemental Material.
Relation To Broader Scientific Literature: This work ties in well with existing discussion on Sparse Autoencoders in the interpretability literature.
Essential References Not Discussed: I think the work covered most essential literature.
Other Strengths And Weaknesses: Section 5 is a little confusing to me. Bricken et al. also use max pooling on their baseline probes, and find that the performance is similar. Thus, it is hard to argue they present an “illusion” of SAE probes being better, and thus the need for the argument presented in Figure 11 is unclear. Additionally, it is hard to understand whether your results in the third graph of Figure 11 came from softmax-pooling or the quiver approach. It seems as though you use the quiver approach to “select between pooled and last-token strategies”; does this mean that you are considering all possible strategies {softmax, last} x {SAE, activations} and picking the best? Also, why do the win rates not sum to 100%?
Other Comments Or Suggestions: * In 2.3, when introducing your probing methods, you seem to be listing the hyperparameters for each method. However, they are introduced as sentence fragments and without context. The writing should be improved here.
* "throughout the paper we train probes using the largest L0 for SAEs width = 16k, width = 131k, and width = 1M. We use k = 16 to construct easily interpretable probes that potentially overfit less and use k = 128 for performance." What is the “largest l0”?
* I feel like the “Quiver of Arrows” setup is a core part of your model for practitioner usefulness, yet you describe it only in the Appendix. I would appreciate seeing this discussed in the main body.
* You may consider flipping the x-axis of Figure 19 and relabel it “Number of Pruned Latents”, as I intuitively read the graph as further right meant more pruning.
* “We find that ~25% of CoLA labels…” There is a typo here with the ~
Questions For Authors: Why did you have to run autointerp on your latents, for example in 4.1 and 4.2? Did your GemmaScope and LlamaScope SAEs not come with already-interpreted latents? Also, the autointerp process you used is not described in your paper. There are various strategies to do this proposed by different works so it is important you explain which method you did.
In section 4.2 when you consider the top 128 latents is this again by mean difference across positive and negative samples in your binary classification tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for your time and help! We were very glad to hear that you appreciated our use of the quiver of arrows technique and found our evidence and claims clear.
---
> While I do observe evidence that SAE probes contribute less to downstream use cases than other probes, the gap is not incredibly large, especially in Fig. 5.
Thank you for pointing this out! Figure 5 shows the result of the quiver of arrows approach, and so the SAE quiver achieving the same performance as a baseline quiver only shows that SAEs do not actively make the practitioner worse off. We have added a new table in the appendix which directly compares each SAE to logistic regression.
| **Setting** | Baseline Quiver | SAEs Quiver | SAEs + Baselines Quiver | LogReg | SAE 16k k=16 | SAE 16k k=128 | SAE 131k k=16 | SAE 131k k=128 | SAE 1m k=16 | SAE 1m k=128 |
|------------------------|------------------|-------------|---------------------------|--------|---------------|----------------|----------------|------------------|---------------|----------------|
| **Standard Conditions**| 0.940 | 0.930 | 0.939 | **0.941** | 0.904 | 0.921 | 0.899 | 0.918 | 0.889 | 0.913 |
| **Data Scarcity** | 0.819 | 0.806 | 0.812 | **0.836** | 0.800 | 0.816 | 0.794 | 0.810 | 0.785 | 0.801 |
| **Class Imbalance** | 0.921 | 0.906 | 0.916 | **0.929** | 0.898 | 0.909 | 0.890 | 0.906 | 0.882 | 0.899 |
> However, I was unconvinced by the claims in Section 5 (see strengths/weaknesses)...
Thanks! We agree and have moved it to the appendix.
> However, I would like to see more elaboration of why the imbalanced dataset settings give the SAE’s inductive bias an advantage.
Thank you for pointing this out! We have created a new Appendix section to detail the general SAE probing intuition and for each section. We paraphrase below:
```
...We argue that if SAEs are successful at this task [creating a sparse interpretable model basis], requiring a probe to only use a sparse set of directions in this basis should serve as a beneficial inductive bias to prevent overfitting with limited data.

Class Imbalance: Because SAE latents are sparsely activating, choosing SAE latents that are positive on the minority class and negative on the majority class may generalize well.
```
> Section 5 is a little confusing to me. Bricken et al. also use max pooling on their baseline probes… Thus, it is hard to argue they present an “illusion” of SAE probes being better…
Thank you for pointing this out! We attempt to argue that max pooling activations is not the strongest baseline possible. That being said, we agree that our language regarding an “illusion” is too strong, and we have toned down the language in the paper.
> Additionally, it is hard to understand if your results in the third graph of Figure 11 came from softmax-pooling or the quiver approach… Also, why do the win rates not sum to 100%?
We apologize for not being clearer here! The graph compares two quivers, quiver(SAE max pool, SAE last token) vs. quiver(activations softmax, activations last token). The win rates do not sum to 100% because we consider test AUCs within 0.005 to be tied, which counts as a win for neither method. We have added additional clarification to the text and figure caption.
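A tiny illustrative sketch of the tie rule just described: AUC pairs within 0.005 of each other count for neither side, so the two win rates need not sum to 100%. The threshold matches the one stated above; the AUC values themselves are made up.

```python
# Win rates with a tie band: test AUCs within tie_eps of each other count as
# a tie, so rates for the two methods can sum to less than 1. Values are
# illustrative, not from the paper.

def win_rates(auc_a, auc_b, tie_eps=0.005):
    wins_a = sum(a > b + tie_eps for a, b in zip(auc_a, auc_b))
    wins_b = sum(b > a + tie_eps for a, b in zip(auc_a, auc_b))
    n = len(auc_a)
    return wins_a / n, wins_b / n

a = [0.90, 0.852, 0.70, 0.95]
b = [0.88, 0.850, 0.80, 0.95]
ra, rb = win_rates(a, b)  # the 2nd and 4th tasks are ties
```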
> In 2.3, when introducing your probing methods, you seem to be listing the hyperparameters for each method…The writing should be improved here.
We agree, thank you! We have now formatted this section as a table.
> …”we train probes using the largest L0”...What is the “largest l0”?
Thank you for noting this! For any given SAE width, we use the largest available L0 from GemmaScope (and noted this in the paper).
> …the “Quiver of Arrows” setup is a core part of your model… I would appreciate seeing this discussed in the main body.
We agree! We moved it to the main body.
> …consider flipping the x-axis of Figure 19…
Thanks, we just made this change!
> …~25%… There is a typo here with the ~
Good catch! We have updated the paper to replace it with the correct value (22%)
> 1) …Did your GemmaScope and LlamaScope SAEs not come with already-interpreted latents?
> 2) Also, the autointerp process you used is not described in your paper.
1) Gemma/LlamaScope SAEs do not come with already-interpreted latents; they just contain the SAEs themselves. 2) We have now added the following language to Section 4.1: “For this and all subsequent experiments, we generate autointerp labels using Neuronpedia, which leverages a language model to produce consistent natural language explanations for a latent based on its top activating tokens.” Thank you for pointing this out!
> In section 4.2 when you consider the top 128 latents is this again by mean difference across positive and negative samples in your binary classification tasks?
Yes, we have fixed this in the text, thank you!
---
Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper?

---

Summary: This paper deals with the problem of evaluating the downstream utility of sparse autoencoders (SAEs). SAEs have recently gained popularity as a means to disentangle concepts learnt by layers of a model, particularly LLMs, in order to gain a better mechanistic understanding of their workings. However, evaluating their utility systematically has been challenging. This work examines this by comparing the utility of SAEs with other baseline approaches over a variety of binary tasks. To simulate real world utility, it considers cases of data scarcity, class imbalance, label noise, and covariate shift, and finds that SAE probes do not beat existing baselines. It then performs an analysis over learnt SAE latents to better understand why this happens, and to understand differences in the observations made as compared to prior work.
## Update after rebuttal
Thank you for your response. The most important concerns I had were addressed, so I am increasing my score to accept.
Claims And Evidence: The claims made are generally thoroughly supported by evidence. Some concerns, such as generalization to other layers and datasets, have been raised in the Weaknesses section below.
Methods And Evaluation Criteria: The methods and evaluation criteria generally make sense. Some concerns about evaluation have been raised in the weaknesses below, and a critical concern about the method formulation has been raised in Weakness 1 below.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Generally the experimental design and analyses appear to be sound and thorough. The paper first explores whether SAEs outperform baselines, and then (in Section 4) looks into possible reasons for not doing so. It also explores why results shown in this work may differ from previous findings (Section 5).
Supplementary Material: I skimmed over parts of the supplement referred to in the main text, particularly the figures, but did not read the supplement otherwise in detail.
Relation To Broader Scientific Literature: This work focuses on evaluating the downstream utility of SAEs trained on LLMs. It builds upon recent work (e.g. Bricken et al. 2024, Gao et al. 2024, etc.) that aim to show that SAEs can disentangle concepts learnt by each layer of such models. However, evaluating SAEs for their utility has been limited, for which this work provides an important contribution.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: ## Strengths
1. This paper deals with the important problem of assessing the utility of SAEs and to understand if they truly provide an added benefit to existing methods. This has been one of the key promises of SAEs, but has not been evaluated properly so far, and is thus a valuable contribution.
2. The evaluation appears to be very thorough, covering 113 datasets, five baselines, and with several ablations for each.
3. Analyses have also been performed to understand why SAEs may be underperforming (Sections 4 and 5), which can help understand the root cause of the problem and direct future research.
## Weaknesses
1. The inputs to the SAE probes are the top $K$ latents in the trained SAE that have the maximum absolute difference for the two binary classes (Equation 1). However, since SAE latents (e.g. in TopK SAEs, Gao et al. 2024, L086 right) are not on the same scales, it seems like this could be misleading. For example, consider two latents $A$ and $B$ and classes $0$ and $1$. Let the values of $A$ for a hundred data points each of classes 0 and 1 be $\{1,2,\cdots,100\}$ and $\{51,52,\cdots,150\}$ respectively. Then the class-wise mean activations would be $50.5$ and $100.5$ respectively, giving a difference (as per Equation 1) of $50$. Now, suppose the values of $B$ for these data points are $\{0.01, 0.02, \cdots, 1.00\}$ and $\{10.01, 10.02, \cdots, 11.00\}$ respectively. Then the class-wise mean activations would be $0.505$ and $10.505$ respectively, giving a difference of $10$. Clearly, latent $B$ is more discriminative of the two classes than latent $A$, but the scheme proposed in Equation 1 would pick latent $A$ instead. This could be fixed for instance by normalizing using the mean activations of each latent, and could help avoid misleading conclusions.
2. It is unclear how to interpret Figure 4. If the SAE was "chosen" based on its performance for 14 tasks (L184-188 right), shouldn't the Figure have 14 points above the diagonal? Or is this because the SAE outperformed the baselines in the validation set but underperformed in the test set? If so, the fact that this happened so consistently seems surprising, and a discussion on this would be useful.
3. In Section 4.2 (and Section 4) in general, results from autointerp are assumed to be "ground truths". It would be helpful if this could be evaluated, e.g. using humans for a small subset of latents. As of now it is unclear if the performance loss is due to wrong latents being used (as claimed in Section 4.2) or by autointerp labelling them incorrectly.
4. All evaluation is performed at layer 20 because this is where the baselines performed the best (e.g. as per Section D.1). However, could it be possible that this choice harms the SAEs in the comparison, and that SAEs would perform well at a different layer? A discussion on this would be useful.
5. Results in Section 4, and particularly Section 4.3, are on specific handpicked datasets. Do they generalize? Alternatively, is there a reason for picking these specific datasets? Comment on this would be helpful.
6. Why is a different SAE config used for experiments with label noise (L263) and covariate shift (L294)?
7. In Section 4.1, why is $k=8$ used, when $k=16$ and $k=128$ have been used everywhere else?
8. L158, right: why only use logistic regression for SAE probes?
Other Comments Or Suggestions: - L027-033: sentence is hard to read, please rephrase.
Questions For Authors: Please refer to the Weaknesses section. Overall, I believe this paper provides a valuable and important contribution, and does a thorough and interesting analysis. I strongly lean towards accept, but I believe the issue with scales of the metric as discussed in Weakness 1 is critical and needs to be addressed. I would be happy to raise my score if this is adequately discussed in the rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for your insightful questions and comments! We are especially grateful for your suggestions on better latent selection methods, and we are glad you feel our paper is a valuable contribution.
---
> However, since SAE latents (e.g. in TopK SAEs, Gao et al. 2024, L086 right) are not on the same scales, it seems like this could be misleading.
We agree your method of choosing latents is better, thank you! We find this technique improves test AUC when k is small (<32) but not when k is large (see https://anonymous.4open.science/r/SAE-Probes-B404/rebuttal_plots/comparing_new_and_old_mean_diff_auc.png). Intuitively, your method better finds the “correct” latents for small k, but the old method is “good enough” for large k.
We reran our main experiments with this technique. Unfortunately, the baseline + SAE quiver still fails to improve over baselines. Investigating further, we find that we select k = 128 probes from the quiver 80% of the time, so quiver performance is dominated by k = 128 probes, which do not improve that much.
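The reviewer's two-latent example from Weakness 1 can be checked numerically. The normalization shown here (dividing the mean difference by the latent's overall mean activation) is one plausible variant assumed for illustration, not necessarily the exact formula used in the rerun experiments.

```python
# Numeric check of the two-latent example from Weakness 1: the raw mean
# difference (Eq. 1) prefers latent A, while a scale-normalized difference
# prefers latent B. The normalization (dividing by the latent's overall mean
# activation) is an assumption made for illustration.

def mean(xs):
    return sum(xs) / len(xs)

A0 = [float(i) for i in range(1, 101)]      # latent A, class 0: 1..100
A1 = [float(i) for i in range(51, 151)]     # latent A, class 1: 51..150
B0 = [i / 100 for i in range(1, 101)]       # latent B, class 0: 0.01..1.00
B1 = [10 + i / 100 for i in range(1, 101)]  # latent B, class 1: 10.01..11.00

raw_A = abs(mean(A1) - mean(A0))  # 50.0, as in the reviewer's example
raw_B = abs(mean(B1) - mean(B0))  # 10.0

def normalized_diff(c0, c1):
    overall = mean(c0 + c1)  # mean activation of the latent over all points
    return abs(mean(c1) - mean(c0)) / overall

norm_A = normalized_diff(A0, A1)
norm_B = normalized_diff(B0, B1)
# The ranking flips: raw selection picks A, normalized selection picks B.
```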
> If the SAE was "chosen" based on its performance for 14 tasks (L184-188 right), shouldn't [Figure 4] have 14 points above the diagonal?
When an SAE is chosen the point may be on or below the diagonal, since the test AUC might decrease or stay about the same. We’ve added “Datasets not directly on the diagonal signify that an SAE method was chosen from the quiver” to the Figure 4 caption.
> In Section 4.2 (and Section 4) in general, results from autointerp are assumed to be "ground truths". It would be helpful if this could be evaluated, e.g. using humans for a small subset of latents.
This is a valid point, thank you! To remove this confounder in Section 4.1, two of the authors independently labeled and ranked latents, with no change in results. However, manually labeling the latents for all datasets in Section 4.2 is labor intensive (and thus a good use case for autointerp).
>could it be possible… that SAEs would perform well at a different layer?
This is a great point that we did not think of, thank you! We checked the width=16k largest L0 SAEs on the 4 layers in Figure 12a (layers 9, 20, 31, and 41), and found that layer 20 was also the best for SAEs as well, see https://anonymous.4open.science/r/SAE-Probes-B404/rebuttal_plots/comparing_sae_test_auc_by_layer.png. We have added this plot and a discussion to Appendix D.
> Results in Section 4, and particularly Section 4.3, are on specific handpicked datasets. Do they generalize? Alternatively, is there a reason for picking these specific datasets?
We apologize for not being clearer here; we intended for Section 4.3 to be a set of case studies complementing Section 4.2. We selected these datasets after an investigation of five datasets whose top latent representations exhibited strong performance. During this analysis, we discovered that both 87\_glue\_cola and 110\_aimade\_humangpt3 contained labeling errors. We have altered the introduction to section 4.3 to reflect this.
> Why is a different SAE config used for experiments with label noise (L263) and covariate shift (L294)?
For label noise, we use the SAE with smallest width (16k) and maximal L0, which we found to be most performant in standard conditions (we have added this justification to the paper). We would have used this SAE for the covariate shift domain as well, but there is no support for generating autointerp explanations through Neuronpedia for this SAE, so we instead use the width = 131k, L0 = 114 SAE. We have added additional clarification around this choice in the paper, thank you for this comment!
> In Section 4.1, why is k=8 used, when k=16 and k=128 have been used everywhere else?
We do so because the probe pruning experiment is a proof-of-concept experiment and is significantly simpler with smaller k.
> L158, right: why only use logistic regression for SAE probes?
This is a great question! We think this is a good choice because logistic regression is common in practice and is the best baseline activation methods. This is still a valid concern, so we have added the following to our limitations section: "Finally, it is possible that further optimization of the SAE probe baseline might increase performance such that it beats baseline methods. For example, we only tried logistic regression on SAE probes, and it is possible that other probing techniques could perform better."
> L027-033: sentence is hard to read
We agree and have rephrased to: “However, although SAEs occasionally perform better than baselines on individual datasets, we are unable to ensemble SAEs and baselines to consistently improve over just baseline methods.”
---
Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper, especially with regard to the better top k latent selection? If not, what further clarification or modifications could we make to improve your score? | null | null | null | null | null | null |
---

Large Displacement Motion Transfer with Unsupervised Anytime Interpolation
Accept (poster)

Summary: This paper presents an anytime interpolation framework for flexible and accurate motion-driven frame generation. Specifically, during training, the model searches for an optimal intermediate time step that produces the highest-quality interpolated frame for training. To ensure valid motion transfer, the authors design an unsupervised bidirectional training strategy, effectively preserving appearance and structural consistency in the generated frames.
Claims And Evidence: The majority of the content is clear, but some detailed design aspects need further justification.
Methods And Evaluation Criteria: The evaluation criteria appear to be appropriate and sufficient.
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: Yes, the authors provide several experimental details to validate the algorithm, but some ablation analyses are missing.
Supplementary Material: There is no supplementary material in the appendix.
Relation To Broader Scientific Literature: Provide insights into unsupervised large motion transfer, such as the search for optimal interpolated time steps and constraints on appearance and structural consistency for effective decomposition.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
The motion decomposition strategy alleviates the challenge of directly transferring large displacement motions.
To enhance supervision, the authors propose an optimal interpolation selector, which identifies the interpolated frame with the best identity consistency and minimal warping distance.
The method incorporates appearance and structural consistency constraints, improving the overall motion transfer quality.
Weaknesses & Suggested Revisions
The use of L1/L2-based reconstruction loss may lead to blurry interpolated results. Consider integrating GAN-based refinement or a perceptual loss to improve sharpness.
The design of the interpolation selector lacks sufficient justification. A more detailed discussion on the impact of different weighting strategies on final performance would strengthen the argument.
The model requires pre-generating multiple interpolated frames and selecting the best one, which may introduce significant computational overhead. It would be beneficial to provide a comparison of inference/training costs with other baseline methods.
The paper should discuss and compare conditional diffusion-based approaches, which are gaining popularity in motion transfer. Highlighting the advantages of the proposed method over such alternatives would further clarify its contributions.
Other Comments Or Suggestions: See the ``Other Strengths And Weaknesses"
Questions For Authors: NA
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Privacy and Security']
Ethical Review Concerns: The visual examples and the training/evaluation datasets may raise concerns regarding privacy and security.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for the valuable comments.
Q1: The authors provide several experimental details to validate the algorithm, but some ablation analyses are missing.
A1: Thank you for your comments. Reviewer 3 (LLC) asked the same question in Q3.
In the ablation experiment, we mainly validate three modules. The first is the interpolation method: we added the interpolation method on top of the baseline (TPSMM), shown as Model 1 in Table 2. In the unsupervised case, the lack of constraints on the interpolated image makes the generated interpolated image very poor, which seriously affects the quality of the downstream target image generation. To solve this problem, we add a second module, the bidirectional training strategy, shown as Model 2 in Table 2. By adding consistency constraints to the intermediate generated interpolated image pairs, the generation range of the interpolated images is narrowed down, significantly improving the generation quality of the target images. To further improve the generation quality of the target image, we added a third module that applies appearance consistency loss and structural consistency loss to the image using the pre-trained ViT model, shown as “Ours” in Table 2, which is our complete model.
Q2: The use of L1/L2-based reconstruction loss may lead to blurry interpolated results. Consider integrating GAN-based refinement or a perceptual loss to improve sharpness.
A2: Thank you for your comments. In the unsupervised scenario, we aimed to improve the quality of the interpolated images through bidirectional training by incorporating additional loss functions to constrain their generation. Initially, we used L1 constraints on the generated pairs of interpolated images; however, our experiments showed that these L1 constraints were too strict. As a result, minimizing the loss on these image pairs often led to the generation of all-white images. We applied L1 loss to their multiscale features to address this issue, as demonstrated in Equation (6).
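A minimal sketch of the multiscale-feature L1 idea described above (using simple average pooling as a stand-in for the encoder's multiscale feature maps; the paper's Eq. (6) operates on learned features):

```python
import numpy as np

def avg_pool2(x):
    # 2x average-pool downsampling, a stand-in for coarser feature maps
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def multiscale_l1(a, b, scales=3):
    # L1 accumulated over several spatial scales: looser than a strict
    # pixel-level L1, which the rebuttal found collapses to all-white images
    loss = 0.0
    for _ in range(scales):
        loss += float(np.mean(np.abs(a - b)))
        a, b = avg_pool2(a), avg_pool2(b)
    return loss / scales

assert multiscale_l1(np.ones((8, 8)), np.ones((8, 8))) == 0.0
```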
To further enhance image quality, we utilized a pre-trained Vision Transformer (ViT) model to introduce appearance consistency loss and structure consistency loss to the interpolated image pairs, as outlined in Equations (7) and (8). Additionally, we attempted to refine the images using a super segmentation network, but the improvements were minimal.
Q3: The design of the interpolation selector lacks sufficient justification. A more detailed discussion on the impact of different weighting strategies on final performance would strengthen the argument.
A3: The interpolation network generates multiple interpolated images that progressively adopt the poses of the driving image as time (t) increases from 0 to 1. However, we observe that as the pose gets closer to that of the driving image, the visual quality of the interpolated images tends to deteriorate. To address this issue, we have designed an interpolation selector aimed at identifying the optimal interpolated image from the set of generated images. This optimal image should not only have a pose that closely matches the driving image but also maintain visual consistency with the source image. In motion migration, preserving appearance is just as crucial as learning the pose; therefore, we assign equal weights in Equation (4). In our future studies, we plan to explore how different weighting strategies might affect overall performance.
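A hedged sketch of such an equal-weight selector (the distance inputs are hypothetical scalars; the paper's Eq. (4) defines the actual scores):

```python
def select_interpolation(pose_dists_to_driving, appearance_dists_to_source):
    """Hypothetical equal-weight selector in the spirit of Eq. (4): pick the
    interpolated frame whose pose is closest to the driving image while its
    appearance stays closest to the source image."""
    scores = [0.5 * p + 0.5 * a
              for p, a in zip(pose_dists_to_driving, appearance_dists_to_source)]
    return scores.index(min(scores))

# Candidate 1 best balances pose closeness and appearance preservation.
assert select_interpolation([0.9, 0.4, 0.1], [0.1, 0.3, 0.9]) == 1
```

Different weightings would simply change the 0.5/0.5 coefficients, which is the weighting study the authors defer to future work.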
Q4: The model requires pre-generating multiple interpolated frames and selecting the best one, which may introduce significant computational overhead.
A4: Thank you for your valuable comments. Pre-generating multiple interpolated frames and selecting the optimal one does impose a significant computational overhead on the model. Since ours is an unsupervised interpolation method, there is no guarantee that the generated interpolated images can satisfy the subsequent downstream tasks without labels. In our future research, we will improve the proposed model to reduce this computational overhead.
Q5: The paper should discuss and compare conditional diffusion-based approaches, which are gaining popularity in motion transfer. Highlighting the advantages of the proposed method over such alternatives would further clarify its contributions.
A5: Thank you for your valuable comments. In recent years, diffusion-based motion migration methods have shown satisfactory performance. However, these methods are supervised and often rely on a pre-trained model to extract a priori conditions for the target image, such as human body keypoints. In the case of the Taichi dataset, accurately extracting human body poses can be challenging due to the low resolution of the images. Additionally, the diffusion process requires extensive computational resources, which may limit the model's performance in environments with restricted resources.

Summary: The proposed method advances unsupervised motion transfer by addressing the challenge of large displacement motions through interpolation and strategic training. While it excels in pose accuracy, it faces minor challenges in maintaining appearance details, particularly in complex scenarios. This work provides a robust framework for applications in image animation, with potential for further refinement in appearance preservation.
Claims And Evidence: The linear motion assumption is too strict and not empirically validated.
The lack of comparison with recent works makes it uncertain if the method is truly state-of-the-art.
The ablation study does not clearly prove the necessity of each module.
Methods And Evaluation Criteria: The method is reasonable but assumes linear motion, which may not hold for complex cases. The datasets are appropriate, but lack comparisons with newer methods.
Theoretical Claims: Since the paper primarily relies on empirical validation, no formal proof of correctness check is required.
Experimental Designs Or Analyses: Comparisons should include recent methods (2023-2024) to validate performance.
Ablation studies need more detailed analysis to confirm the contribution of each module.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper extends prior motion transfer work with a two-step strategy and ViT-based consistency enforcement, but its linear motion assumption, outdated baselines, and missing perceptual metrics limit its broader impact.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
1. The author proposed to decompose the traditional motion transfer pipeline into two steps to solve the difficulty when there is large displacement of motion between S and D.
2. A ViT is introduced to ensure the consistency of appearance and motion.
Weakness:
1. The assumption that motion between S and D of each keypoint is linear seems to be strict, especially in cases such as non-linear human actions in Tai-Chi-HD.
2. In quantitative comparison, the newest work is from 2022. More recent works are better be included.
3. In ablation study, the effectiveness of the proposed module is not clearly proved according to Table.2.
4. Some related works [1-2] are not discussed and compared.
[1]. Structure-aware motion transfer with deformable anchor model. CVPR 2022.
[2]. Motion Transformer for Unsupervised Image Animation. ECCV 2022.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions you gave. We have incorporated your feedback into the paper.
Q1: The method is reasonable but assumes linear motion, which may not hold for complex cases.
A1: In the interpolation method, we assume that the local keypoints' motion is linear to obtain keypoints that do not exist between the source and driver images. The corresponding interpolated images are then generated from these interpolated keypoints. The pose of these interpolated images is between the source and driver images. The motion from the interpolated image to the driver image is smaller than the magnitude from the source image to the driver image. And for modeling the motion, we use the nonlinear transform (TPS) from the literature [1] to approximate the motion.
[1] Zhao J, Zhang H. Thin-plate spline motion model for image animation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 3657-3666.
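The linear keypoint interpolation described above can be sketched as follows (a minimal numpy illustration, not the TPSMM code):

```python
import numpy as np

def interpolate_keypoints(kp_source, kp_driving, t):
    """Linear interpolation of local keypoints: at t in (0, 1) the pose sits
    between the source (t = 0) and driving (t = 1) poses, so the residual
    motion from the interpolated frame to the driving frame is smaller than
    the original source-to-driving motion."""
    return (1.0 - t) * kp_source + t * kp_driving

kp_s = np.array([[0.0, 0.0], [1.0, 0.0]])  # keypoints in the source image
kp_d = np.array([[2.0, 2.0], [3.0, 2.0]])  # keypoints in the driving image
assert np.allclose(interpolate_keypoints(kp_s, kp_d, 0.5),
                   [[1.0, 1.0], [2.0, 1.0]])
```

Only the keypoint trajectories are assumed linear here; the image-level motion between each pair of frames is still modeled with the nonlinear TPS transform of [1].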
Q2: The datasets are appropriate, but lack comparisons with newer methods.
A2: We have added the 3 most recent comparison methods (DAM [2], MTIA [3], and CPABMM [4]) as shown in Table 1 in A2 of Reviewer 1 LsVD. Relevant qualitative results have also been added, as shown in the link (https://github.com/ICML2025Anonymity/Anonymity), which complies with the double-blind policy.
[2] Tao J, Wang B, Xu B, et al. Structure-aware motion transfer with deformable anchor model[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 3637-3646.
[3] Tao J, Wang B, Ge T, et al. Motion transformer for unsupervised image animation[C]//European conference on computer vision. Cham: Springer Nature Switzerland, 2022: 702-719.
[4] Wang H, Liu F, Zhou Q, et al. Continuous piecewise-affine based motion model for image animation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(6): 5427-5435.
Q3: In ablation study, the effectiveness of the proposed module is not clearly proved according to Table.2.
A3: In the ablation experiment, we validate three modules. The first is the interpolation method, which we added on top of the baseline (TPSMM), shown as Model 1 in Table 2. In the unsupervised case, the lack of constraints on the interpolated image makes the generated interpolated images very poor, which seriously degrades the quality of the downstream target-image generation. To address this, we add the second module, the two-way training strategy, shown as Model 2 in Table 2. By imposing consistency constraints on the intermediate interpolated image pairs, the generation range of the interpolated images is narrowed, significantly improving the quality of the target images. Finally, to further improve target-image quality, we add a third module that applies appearance-consistency and structure-consistency losses using a pre-trained ViT model, shown as "Ours" in Table 2, which is our complete model.

Summary: This paper proposes an unsupervised motion transfer algorithm that transfers the pose in a driving video to the object of a source image, so that the source image copies the movement of the driving video. Specifically, the method decomposes complex large-displacement motion into many small-displacement motions, improving the accuracy of motion estimation. A bidirectional training strategy is used to constrain the intermediate interpolated images.
Claims And Evidence: The experimental results are not convincing, such as when comparing with state-of-the-art qualitatively, it only shows one state-of-the-art method (TPSMM) in figure 3, and figure 5. It did not compare with X2Face, FOMM, MRAA, I wonder why?
Methods And Evaluation Criteria: Evaluation follow criteria that used by state-of-the-art. But the experiment lack a few key datasets.
Theoretical Claims: looks correct.
Experimental Designs Or Analyses: When comparing with state-of-the-art, it's essential to compare with the most up to date and relevant methods. X2Face is not referred, so I don't know which paper it is referring to. FOMM was published in 2019. MRAA was published in 2021, TPSMM was published in 2022. Why not compare with any more recent papers such as
Wang, H., Liu, F., Zhou, Q., Yi, R., Tan, X., and Ma, L. Continuous piecewise-affine based motion model for image animation. In In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI, pp. 5427–5435, 2024.
From this paper's experiment results, it seems that this paper has much better performance.
Qualitative comparison seems not convincing, as shown in Figure 3-6, the proposed method's results look quite blurry, and images are of low resolution. Also since the paper claims to deal with large displacement motion transfer, I find that in the figures, a lot of examples have small displacement motion transfer, such as Figure 3 (first row), Figure 4 (second and third rows), Figure 6 (second the third rows). Why not showing more results that have large motion displacement to highlight the paper's main claim?
When conducting the quantitative experiment, the proposed methods leave out a few key evaluation datasets, such as "TED-talks" , "VoxCeleb", "MGif". I wonder why this paper did not compare with state-of-the-art methods on these datasets.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The proposed method address a small research area of motion transfer.
Essential References Not Discussed: In the experiment part, the paper compares with a few state-of-the-art algorithms including X2Face, but X2Face is not mentioned/referred in the reference. It's difficult to understand what is exactly X2Face.
Other Strengths And Weaknesses: The main strength is that the paper claims to deal with a challenging tasks of large displacement motion transfer. The weakness is the lack of experimental results to support the claim,
Other Comments Or Suggestions: There seems to be a grammar issue in section 3.2 "Inspired by it, a keypoint-based anytime interpolation method", is this an un-finished sentence?
Questions For Authors: My question are mainly related to the experimental design and comparison with more recent/relevant papers.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We have incorporated your feedback into the manuscript. We believe it will help strengthen the work and present it better.
Q1: The experimental results are not convincing, such as when comparing with state-of-the-art qualitatively, it only shows one state-of-the-art method (TPSMM) in figure 3, and figure 5. It did not compare with X2Face, FOMM, MRAA, I wonder why?
A1: Thank you for the valuable feedback. We have provided more comparative results in the following link: https://github.com/ICML2025Anonymity/Anonymity. This is fully compliant with the double-blind policy. In response to the reviewer's question, our decision to compare the proposed method primarily with TPSMM is based on existing literature, which shows that TPSMM outperforms X2Face, FOMM, and MRAA. Therefore, we have chosen to validate the superiority of our method by comparing it only to TPSMM.
Q2: X2Face is not referred. Why not compare with any more recent papers such as Wang, H., et al. Continuous piecewise-affine based motion model for image animation.
A2: Thank you for your valuable comments, and the manuscript has been revised and cited more recent literature papers such as Wang, H., et al. Also, we added comparison results with CPABMM, as the answer to Reviewer 1 LsVD's A3.
Q3: Qualitative comparison seems not convincing, as shown in Figure 3-6, the proposed method's results look quite blurry, and images are of low resolution. Also since the paper claims to deal with large displacement motion transfer, I find that in the figures, a lot of examples have small displacement motion transfer, such as Figure 3 (first row), Figure 4 (second and third rows), Figure 6 (second the third rows). Why not showing more results that have large motion displacement to highlight the paper's main claim?
A3: Thanks to the reviewers' insightful feedback, we show more results with large motion displacements to highlight the paper's main points at this link (https://github.com/ICML2025Anonymity/Anonymity). We have also modified the presentation of Figures 3-6 so that the motion-transfer process consists entirely of large-displacement motion, as shown at https://github.com/ICML2025Anonymity/Anonymity/tree/main/Figues%203-6.
Q4: Why are datasets such as "TED-talks", "VoxCeleb", "MGif" not used when quantifying experiments?
A4: Thank you for your comments, and Reviewer 1 LsVD also raised a similar question in Q1, and we addressed it in our response A1 to Reviewer LsVD. The unsupervised optimal interpolation method proposed in this paper aims to address the large motion problem in motion migration. To validate the effectiveness of this method, we selected two datasets—TaiChiHD and Fashion—that exhibit significant motion amplitudes.
While the face dataset typically demonstrates a smaller range of motion compared to larger human datasets, we included the UvA-Nemo dataset to assess the method’s performance in scenarios with small motion amplitudes. Although VoxCeleb is another face dataset, its size—approximately 308 GB—restricts its usability.
The Ted Talks dataset has a resolution of 384×384, which is higher than the resolution of the datasets we used. However, the motion of the human body in the Ted Talks videos is minimal, as illustrated at https://github.com/ICML2025Anonymity/Anonymity/tree/main/TedTalks, in compliance with the double-blind policy. Due to the time constraints of the rebuttal period, we were only able to conduct experiments on the Ted Talks dataset.
Regarding the MGif cartoon animal dataset, we were unable to evaluate motion-related metrics like AKD, which means that the dataset is not appropriate for validating our claim that our method enhances postural accuracy.
Q5: The main strength is that the paper claims to deal with a challenging tasks of large displacement motion transfer. The weakness is the lack of experimental results to support the claim.
A5: In this paper, we deal with the problem of large displacements between source and driver images by interpolating between them. As in Fig. 4 and Fig. 6, we show the interpolated image under the same identity and the interpolated image under different identities, respectively. Visually, it is observed that the movements of the interpolated image are closer to those of the driving image than those of the source image, which significantly alleviates the large displacement problem in the motion migration task. More experimental results will be shown in the following link (https://github.com/ICML2025Anonymity/Anonymity).
Q6: There seems to be a grammar issue in section 3.2 "Inspired by it, a keypoint-based anytime interpolation method", is this an un-finished sentence?
A6: Thank you for your careful review. The error has been revised: "Inspired by it, a keypoint-based anytime interpolation method is proposed".

Summary: This paper proposes a novel method for transferring large motion from a driving image to a source image. The proposed method finds a middle step, which essentially adds non-linearity to the motion transfer. The method generates a set of interpolated in-between images based on keypoint transfers, selects the optimal interpolation point, and then transfers the motion from the interpolated image to the driving image, in effect taking a shorter final step. To constrain the optimal interpolations, the method adopts a bidirectional training scheme in which both the (source -> driving) and (driving -> source) directions are considered, with the hypothesis that the optimal interpolation point is the same for the two directions.
The results show improvements in terms of keypoint accuracy.
Claims And Evidence: The claim is that the interpolation helps with large motion transfer. Overall, the accuracy of the keypoints is improved, but there are no ablations on large or excessively large motion beyond what is usually considered in other works. Moreover, some of the typical datasets, such as Ted-Talks and VoxCeleb, are missing.
Methods And Evaluation Criteria: yes, the method makes sense for motion transfer and the evaluation criteria are well established.
Theoretical Claims: no theoretical claim.
Experimental Designs Or Analyses: The experiments are missing results on TedTalks and VoxCeleb. The current evaluations show that the bidirectional training improves the accuracy of the keypoints significantly. There is not much discussion of why the L1 and AED metrics are not on par with SoTA; especially considering CPABMM, it seems that the quality of the generated images might be suffering from the interpolation.
Supplementary Material: no supplementary.
Relation To Broader Scientific Literature: The notion of using interpolation and incremental steps is already well established. But the idea of finding an optimal interpolation point and using bidirectional training to fix that point is novel and seems to be significantly improving the keypoint accuracy.
Essential References Not Discussed: none
Other Strengths And Weaknesses: The idea is novel and significantly improves the results in terms of keypoint accuracy.
The write-up has many syntax and grammatical errors, but it still reads fine.
The main weaknesses are not quantifying what a "large" motion is, not selecting datasets that show "larger" motion than typical ones, and not reporting results on Ted Talks.
Other Comments Or Suggestions: TPSMM on taichiHD is usually 4.57 AKD, why is it higher here? It would be better to report the official AKD for TPSMM and add CPABMM to the table as well.
The paper is not in review format.
Page 6, 2nd paragraph, [21 - 22] seems to be a citation syntax error.
eq. 5: break so that it doesn't go over text width.
Questions For Authors: what is the difference between features used in the appearance loss and structure loss? both of them seems to be extracted from the same network. are they from different layers? which layers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions.
Q1: The experiments are missing results on TedTalks and VoxCeleb.
A1: The unsupervised optimal interpolation method proposed in this paper aims to address the large motion problem in motion migration. To validate the effectiveness of this method, we selected two datasets—TaiChiHD and Fashion—that exhibit significant motion amplitudes.
While the face dataset typically demonstrates a smaller range of motion compared to larger human datasets, we included the UvA-Nemo dataset to assess the method’s performance in scenarios with small motion amplitudes. Although VoxCeleb is another face dataset, its size—approximately 308 GB—restricts its usability.
The Ted Talks dataset has a resolution of 384×384, which is higher than the resolution of the datasets we used. However, the motion of the human body in the Ted Talks videos is minimal, as illustrated at https://github.com/ICML2025Anonymity/Anonymity/tree/main/TedTalks, in compliance with the double-blind policy. Due to the time constraints of the rebuttal period, we were only able to conduct experiments on the Ted Talks dataset.
Q2: There is not much discussion why the L1 and AED metrics are not on par with SoTA, the quality of the generated images might be suffering from the interpolation.
A2: Although the interpolated image's quality significantly impacts the final target image's quality, it helps reduce the substantial motion between the source and driving images to a smaller motion between the interpolated and driving images, significantly improving the AKD metric.
To address the issues of L1 and AED degradation, we have included enhancements to the model in the manuscript and added relevant content to subsection 4.4. Specifically, the method based on optimal interpolation provides better poses for the model, while the motion transfer technique from the source image to the driving image enhances the appearance. By combining the advantages of these two approaches, we achieved significant improvements demonstrated in Ours-V2 in Table 1, linked to https://github.com/ICML2025Anonymity/Anonymity/tree/main/Table%201.
Q3: This paper does not compare with the recent paper CPABMM
A3: Although CPABMM performed well in the motion migration task, the paper's authors did not provide pre-training checkpoints, and the training process is quite time-consuming. In our experimental setting, reproducing the original results was challenging. As a result, we utilized the official results. Thanks again for your suggestion. Given the rebuttal time limitation, we will try to reproduce the results when we have the time. On the other hand, our method still achieves optimal performance in the AKD metric compared to CPABMM on the TaiChiHD dataset.
Q4: What is the difference between features used in the appearance loss and structure loss? Both of them seems to be extracted from the same network. Are they from different layers? Which layers?
A4: Appearance loss involves comparing two images' [CLS] token. The [CLS] token serves as the global semantic descriptor in Vision Transformers (ViT), akin to the “full-image feature” found in Convolutional Neural Networks (CNNs). This token is derived from the last layer of ViT and emphasizes the overall semantics of the images.
On the other hand, structural loss focuses on comparing the self-similarity of two images using the Key matrix from the 11th layer of ViT. The self-similarity matrix generated by the Keys at this deeper layer reveals the global correlation patterns among different image regions. As such, it can be utilized to assess the structural similarity between the two images.
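A rough numpy sketch of the two losses as described (the real implementation operates on features extracted by a pre-trained ViT; the shapes and names here are illustrative):

```python
import numpy as np

def appearance_loss(cls_a, cls_b):
    # [CLS] tokens from the last ViT layer act as global semantic descriptors
    return float(np.mean(np.abs(cls_a - cls_b)))

def self_similarity(keys):
    # cosine self-similarity of the Key matrix (rows = patch tokens),
    # e.g. taken from the 11th ViT layer
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    return k @ k.T

def structure_loss(keys_a, keys_b):
    # compare global correlation patterns between image regions
    return float(np.mean(np.abs(self_similarity(keys_a) - self_similarity(keys_b))))

# Identical key matrices imply identical structure, hence zero loss.
keys = np.random.default_rng(0).random((16, 64))
assert structure_loss(keys, keys) == 0.0
```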
Q5: TPSMM on taichiHD is usually 4.57 AKD, why is it higher here? It would be better to report the official AKD for TPSMM and add CPABMM to the table.
A5: For TPSMM and FOMM, the experimental results presented in Table 1 are those we reproduced under the existing experimental conditions. We have updated the results to reflect the official findings and included several recent methods, such as CPABMM, in the experiments, as shown in Table 1.
Q6: Page 6, 2nd paragraph, [21 - 22] seems to be a citation syntax error. Eq. 5: break so that it doesn't go over text width.
A6: The manuscript has been revised, and the errors have been corrected. | null | null | null | null | null | null |
Features are fate: a theory of transfer learning in high-dimensional regression

Decision: Accept (poster)

Summary: The manuscript theoretically analyzes transfer learning from a feature-centric viewpoint. Specifically, the authors consider the deep linear model and two transfer schemes, i.e. linear transfer and fine-tuning. Multiple theoretical results, such as phase diagrams, are established to uncover when transfer learning will outperform training from scratch. Some insights are then briefly extended to nonlinear networks.
## update after rebuttal
The authors' additional results for finite-sample pre-training are interesting. Although I can't see the revised version and the problem setting in the linear form is limited, I think this work will be a fair addition to the transfer learning community. Please make sure the finite-sample pre-training results are added to the final version. As my original score is already positive, I will keep it as is.
Claims And Evidence: The authors claimed those distributional measures between source and target data distributions are not enough to predict the success of transfer learning over training from scratch. This is also listed as one of the contributions of this manuscript. However, the authors showed that positive transfer can happen even if source and target distributions are far apart only in Appendix A. Since this is the motivation to consider the feature-centric viewpoint (and even the claimed contribution), the authors should not put it in the appendix.
Methods And Evaluation Criteria: The evaluation criteria are reasonable.
As for the training procedure of this paper, I’m not convinced. Specifically, the authors considered the pre-training over the source domain was conducted on the population distribution rather than empirical one, as the authors would like to mimic the practical setting where the sample size over the source domain greatly exceeds the sample size over the target domain. Therefore, the authors can show the pre-trained coefficient $\beta_{s}$ indeed converges to ground truth as $t\rightarrow\infty$.
As for the training procedure of this paper, I'm not convinced. Specifically, the authors consider pre-training over the source domain conducted on the population distribution rather than the empirical one, as they would like to mimic the practical setting where the sample size of the source domain greatly exceeds that of the target domain. Therefore, the authors can show that the pre-trained coefficient $\beta_{s}$ indeed converges to the ground truth as $t\rightarrow\infty$.
[1] Du, Simon S., et al. "Hypothesis transfer learning via transformation functions." *Advances in neural information processing systems* 30 (2017).
[2] Li, Sai, T. Tony Cai, and Hongzhe Li. "Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality." *Journal of the Royal Statistical Society Series B: Statistical Methodology* 84.1 (2022): 149-173.
Theoretical Claims: I only went through the proofs roughly (which seem correct), but I did not check them in great detail.
Experimental Designs Or Analyses: Experimental designs seem correct, and the results seem to align with the theoretical results.
Supplementary Material: I went through all appendices, but more focus on A, B, and C.
Relation To Broader Scientific Literature: The result of data similarity (distributional measure) might not be enough to predict the success of transfer learning and motivate the feature-centric viewpoint can produce a broader impact in transfer learning research.
Essential References Not Discussed: I do not heavily work in transfer learning within over-parametrizing and feature learning regimes, but it seems like the authors have cited some reasonable reference in this field.
Other Strengths And Weaknesses: Strengths:
1. The manuscript is well-written and very easy to follow.
2. It is interesting to see how feature/model similarity can be used to predict the success of transfer learning, which is now gaining attention and investigation in the statistics community.
3. Results in Theorems 3.7, 3.8, and 3.9 seem novel in the field (transfer learning with deep linear network).
Weakness:
1. The pre-training process. Please refer to the “Methods And Evaluation Criteria" part.
2. The deep linear model is a very simple setting. Although I understand this may be due to the lack of available theoretical tools or techniques in the community to investigate the nonlinear network, this can limit how the theoretical analyses in this manuscript can provide insights into practical settings.
3. The argument for “distributional measure is not predictive of the success of transfer learning” is confined to only using KL divergence and Dudley Metric. It does not seem fully convincing, as there may be other measures/metrics.
Other Comments Or Suggestions: The authors could be clearer about some of the notations. For example, explain what $\bar{W}_{l}$ is and how to set it instead of just citing other papers, and state that the signal strength $\|\beta\|_{2}$ equals 1.
Questions For Authors: Questions:
1. Is it possible to make Theorem A.2 measure-free, or to extend it to a broader class of distributional measures? I know this can be challenging, but it would make the motivation for the feature-centric viewpoint more convincing.
2. If the pre-training is not on population distribution, and the setting is $n_{s} \gg n_{t}$ and $t\neq \infty$, which part of your analyses will no longer hold?
3. Isn't the label shift referred to the case where the marginal distribution of $Y$ is unchanged across domains? I believe the case you studied is usually referred to as concept or model shifts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer jfTK,
Thank you for your thoughtful review of our work. We are grateful you found the manuscript easy to follow and we appreciate your feedback on areas that can be improved. We respond to your smaller comments below and leave our response to finite source datasets to the end, as this is more involved.
First, thank you for pointing out these two references we have not cited. They will be included in the Related Works section of the introduction
While we agree that deep linear networks are particularly simple, we chose to prioritize analytic tractability in this work. However, as we mention in our response to Reviewer 65EM, many of the takeaways are the same in the case of linear transfer with nonlinear networks. In this case we can describe the feature space using Reproducing Kernel Hilbert Space theory as in Appendix D, where we carry out a calculation for two-layer ReLU networks.
You are correct that Theorem A.2 is proven only for the Dudley Metric and KL divergence. However, we note that there is a hierarchy of integral probability metrics. For example, the theorem also holds for The Wasserstein metric since it is lower bounded by the Dudley Metric. Although we expect it to be the case, we were not able to prove that the relation holds for any IPM or $\phi-$divergence.
You are correct that we have called our model label shift, but it appears that the term “concept drift” is more common in the literature. We will adjust our terminology accordingly in the revised manuscript.
As for the question of finite source datasets, we agree that this is a richer setting to study transfer learning. To this end, we have carried out a calculation for the **full fine tuning** transferability surface in the case of a finite source dataset, the results of which are described below. Let $\gamma_s = n_s/d$ and $\gamma_t = n_t/d$, where $n_s$ and $n_t$ are the number of source and target data points respectively. Let $\sigma_s$ be the standard deviation of the Gaussian label noise in the source task (defined analogously to Assumption 3.1). Then the transferability is
$$
\begin{cases}
\frac{(\gamma_t - 1)}{1-\gamma_s}\sigma_s^2 \gamma_s + \gamma_s (\gamma_t - 1)(1 - 2\cos\theta) & \gamma_s, \gamma_t < 1, \\
(\gamma_t - 1)(1 - 2\cos\theta) - \frac{(\gamma_t - 1)}{1-\gamma_s}\sigma_s^2 & \gamma_s > 1 > \gamma_t, \\
0 & \gamma_t > 1
\end{cases}
$$
The outline of the proof, which we will include in the final version of the paper, is similar in spirit to that of Theorem 3.9. In particular, we know that gradient flow will converge to the minimum-norm least-squares solution during source training, and we reason about the norm of its projection in the space orthogonal to the row space of the target data. There are a few interesting aspects of this expression. First, note that as long as the target task is overparameterized ($\gamma_t < 1$), the negative transfer boundary is completely determined by $\gamma_s$. Second, in the case of no label noise in the source task, the phase diagram is the same as in Fig. 2. That is to say that fine tuning does not depend on the amount of source data if there is no label noise. The case of general noise level and source dataset size is more interesting: for some values of $\sigma_s$ there are disconnected regions of positive transfer. We will include the analysis of these equations, as well as plots of the phase diagram, in the final version of the manuscript. We thank the reviewer for this suggestion.
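As a quick numerical sanity check, the piecewise expression above can be evaluated directly. The sketch below is ours, not from the paper; the function name is invented, and the boundary cases $\gamma_s = 1$ or $\gamma_t = 1$ are left unhandled.

```python
import math

def transferability(gamma_s, gamma_t, sigma_s, theta):
    """Sketch of the rebuttal's piecewise full-fine-tuning transferability."""
    if gamma_t > 1:
        return 0.0
    if gamma_s < 1 and gamma_t < 1:
        return ((gamma_t - 1) / (1 - gamma_s)) * sigma_s**2 * gamma_s \
            + gamma_s * (gamma_t - 1) * (1 - 2 * math.cos(theta))
    if gamma_s > 1 > gamma_t:
        return ((gamma_t - 1) * (1 - 2 * math.cos(theta))
                - ((gamma_t - 1) / (1 - gamma_s)) * sigma_s**2)
    raise ValueError("boundary cases gamma_s = 1 or gamma_t = 1 not handled")

# With no source label noise, the sign of the transferability (and hence
# the phase boundary) should not depend on gamma_s when gamma_s < 1:
signs = {math.copysign(1.0, transferability(g, 0.5, 0.0, math.pi / 2))
         for g in (0.2, 0.5, 0.8)}
```

Consistent with the claim that the noiseless phase diagram does not depend on source dataset size, only the magnitude of the first branch scales with $\gamma_s$; its sign does not change.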
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I appreciate the part where you extend your results to pre-train over a finite source sample case. If there is room for the final revised version (if accepted), please ensure you add this **full fine-tuning** scenario in the main paper with sufficient discussion.
Besides, citing the paper I mentioned is unnecessary as they are just examples of how I view the importance of finite sample pre-train. There is some work (to my knowledge) in this line that can be more related to your work. Authors can consider giving credit to such works. | Summary: This paper theoretically analyzes transfer learning under a multi-layer neural network model. The exact setting considered is a label shift setting with Gaussian noise and linear targets. The paper analyzes the features learned by the penultimate layer of the linear network and studies how the learned features relate to transferability. Simulations on two-layer ReLU networks trained in a mean-field regime show that the theoretical results on linear networks also transfer to non-linear ReLU networks.
**Post rebuttal update:** The authors' rebuttal addresses most of my concerns. While I still feel that the technical contribution of the paper is a bit limited, I agree that analyzing transfer learning with the inductive biases in feature learning considered, even in a simplified linear setting, holds some significance. I thus keep my score as is.
Claims And Evidence: All claims made in the paper are supported by clear evidence.
Methods And Evaluation Criteria: The paper does not propose a new method. The experiments mainly serve as proof-of-concept demonstrations of theoretical results, which make sense to me.
Theoretical Claims: I did not carefully check the proofs. Yet, many results (e.g., Theorem 3.4 and 3.5) are built on prior work (Yun et al., 2021), and based on my understanding of the problem, they seem correct.
Experimental Designs Or Analyses: Experimental designs are OK.
Supplementary Material: I skimmed the supplementary material but did not check the proofs carefully.
Relation To Broader Scientific Literature: Most of the theoretical results of this work are built upon Yun et al. (2021): since the trained model can be fully characterized in the considered multi-layer linear network setting, analyzing the generalization error/transferability seems rather straightforward based on the trained model. That being said, I think the transferability results presented are a decent addition to the existing literature. I also found the results showing the insufficiency of distributional source-target measures interesting.
Essential References Not Discussed: To my knowledge, most of the related works are properly cited and discussed.
Other Strengths And Weaknesses: **Strengths:**
- The theoretical parts are well-written.
- Simulations are comprehensive and some results are interesting to me.
**Weaknesses:**
- The technical novelty is limited. Multi-layer linear networks are a rather well-understood setting with much prior work, and the paper mainly uses the results in prior work to further analyze transfer learning.
Other Comments Or Suggestions: - I think the first contribution listed in Lines 104-109 is a bit of an overclaim: as I have mentioned above, multi-layer linear networks are not a novel setting itself, so the claim "We develop an analytically solvable model of transfer learning that captures training dynamics, implicit bias, and generalization error in deep linear networks" feels too big for me.
- In the "Feature learning" paragraph of related work, I think several prior works should also be discussed (yet they do not belong to "essential references" so I did not list them above): [1] and [2] are among the first works theoretically analyzing the feature learning process beyond the neural tangent kernel regime. More related to transfer learning, [3] and [4] analyze the feature learning process of neural networks and its impact on generalization to new test distributions.
- Lines 271-273: "This condition requires that there is more data than the there is target function power in the direction learned during pretraining." (non-comprehensible sentence)
---
[1] Allen-Zhu et al. Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. arXiv, 2020.
[2] Allen-Zhu et al. Feature Purification: How Adversarial Training Performs Robust Deep Learning. FOCS, 2021.
[3] Chen et al. Understanding and improving feature learning for out-of-distribution generalization. NeurIPS, 2023.
[4] Zhang et al. Feature contamination: Neural networks learn uncorrelated features and fail to generalize. ICML, 2024.
Questions For Authors: Based on the transferability analysis, can you comment on the benefits of linear probing/fine-tuning? For example, when should we choose fine-tuning/linear probing for transfer learning?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer AyJw,
Thank you for your review of our work. We are glad you found the paper well written and we thank you for your feedback on how to improve the manuscript. First, we appreciate your pointing out the additional references. We will discuss their relevance to our work in the Related Works section of our updated draft. Thank you for pointing out moments that felt unclear or overstated. In regard to Lines 104-109, we primarily want to highlight that training dynamics, implicit bias, and generalization error are all relevant to a theory of transfer learning and that deep linear models are an ideal test bed to explore the interplay of these phenomena since their manifestations are analytically solvable. We agree that many works have explored deep linear networks, and we will change these lines to highlight that our results build on prior work to tackle these multifaceted aspects of the transfer learning problem. Thank you for also pointing out the lack of clarity in Lines 271-273. We will change this sentence to: “We can view $\gamma$ as a dimensionless measure of the amount of target data, and $\cos^2 \theta$ as the amount of power that the target function has in the subspace of the pretraining task. The condition for negative transfer is satisfied when there is more target data than there is power in the pretrained subspace”. In response to your final question, there indeed is a regime where linear probing will outperform fine tuning. We can solve for this condition by subtracting Equation 15 from Equation 11 and looking for points in the $(\theta, \gamma)$ plane where this function is positive. In the noiseless case, the expression simplifies a bit. In the overparameterized regime ($\gamma < 1$) linear probing outperforms fine tuning as long as $\gamma < \sin^2(\theta/2)$. The intersection of this condition with that for positive transfer ($\gamma < \cos^2{\theta}$) is satisfied when there is limited target data.
We will include a plot of this region in the appendix of the final draft. For the underparameterized case ($\gamma > 1$) linear probing always induces negative transfer whereas fine tuning has zero transferability. So, although fine tuning has better transferability than linear probing, it’s not worth doing any pretraining in the underparameterized regime if there is no label noise.
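For readers who want to trace out the region described in this exchange, the two noiseless conditions can be written as simple predicates (the function names are ours, purely for illustration):

```python
import math

def lp_beats_ft(gamma, theta):
    """Noiseless, overparameterized regime: linear probing outperforms
    fine tuning when gamma < 1 and gamma < sin^2(theta / 2)."""
    return gamma < 1 and gamma < math.sin(theta / 2) ** 2

def positive_transfer(gamma, theta):
    """Noiseless condition for positive transfer: gamma < cos^2(theta)."""
    return gamma < math.cos(theta) ** 2

# With scarce target data (small gamma), both conditions can hold at once,
# matching the claim that linear probing wins when target data is limited:
both = lp_beats_ft(0.1, 1.0) and positive_transfer(0.1, 1.0)
```

This is only a pointwise check of the stated inequalities, not a derivation; the full region plot is what the authors propose to add to their appendix.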
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which addresses most of my concerns. While I still feel that the technical contribution of the paper is a bit limited, I agree that analyzing transfer learning with the inductive biases in feature learning considered, even in a simplified linear setting, holds some significance. I thus keep my score as is.
- On the comparison between linear probing and fine-tuning: Thank you for the response on this point. I think the dependency of the advantage of linear probing/fine-tuning on the learning setting is indeed interesting. Perhaps the authors could also consider discussing the relation between your results and those in [1], which also compares linear probing with fine-tuning in an overparameterized setting and has been cited in your paper, but seems without detailed discussion.
---
[1] Kumar et al. Fine-tuning can distort pretrained features and underperform out-of-distribution. ICLR, 2022. | Summary: The paper analyzes the transferability capability of deep linear networks. Specifically, it theoretically analyzes the generalization error of deep linear networks when they are trained from scratch versus linear transfer and fine-tuning in a regression problem. The paper also extends this study to the use of the ReLU activation function. In particular, it focuses on a theoretical analysis of performance based on a feature space learned by the pre-trained model. The aim is to evaluate when a model can be beneficial, for which the theoretical assumptions consider a mathematically tractable model to study transfer learning, introducing deep linear networks, which, although a simplification, capture how neural networks learn features.
The findings of this article demonstrate that transfer improves performance if the overlap of the feature space between the source task and the target task is sufficiently strong. Furthermore, approaches such as linear transfer and fine-tuning are compared, showing that the efficiency of transfer depends on the structure of the features learned during pretraining.
## update after rebuttal
After reading other reviews and responses, I prefer to keep my score. There are no critical points to decrease/increase my evaluation.
Claims And Evidence: Yes, they are.
Methods And Evaluation Criteria: Yes, they make sense. The paper is theoretical, and some empirical examples are used to corroborate the theory.
Theoretical Claims: No, I did not have time to analyze the proofs corresponding to Appendix C.
Experimental Designs Or Analyses: Yes, the analysis seems fine.
Supplementary Material: Partially, Appendix A, B, E, and F. Appendix E is not mentioned in the main paper.
Relation To Broader Scientific Literature: It's quite important. Linear transfer and fine tuning are some of the most used methods in the literature. Unfortunately, people use them without any knowledge and do not understand that sometimes, the model can do even worse. Even though the paper makes several assumptions, this can be a very important theoretical contribution.
Essential References Not Discussed: No, I can not think of any paper that must be included.
Other Strengths And Weaknesses: Strengths:
-The writing is clear, and the theoretical formulations are well explained, allowing the article to be understood even by readers without deep knowledge of the topic. The logical organization of the sections also facilitates comprehension, and the article is presented in an accessible manner, maintaining an appropriate balance between formality and clarity.
-The choice of a linear network-based approach simplifies the mathematical formulation and facilitates the demonstration of phenomena related to model transfer. This theoretical approach provides clarity in the concepts and makes the article accessible to a broader audience.
-The article provides a detailed analysis of the conditions for successful transfer, especially by considering the similarity between the source and target tasks. It also clearly discusses whether the representations learned in both domains are relevant, which is essential for transfer success. This analysis is a strong point, as it helps to connect ideas with practical situations in Transfer Learning.
Weaknesses:
-Equation (6), a parenthesis is missing in f(x)^2?
Other Comments Or Suggestions: -While the article is based on a robust theoretical analysis, it would be helpful to include a more detailed section on experiments or simulations that support the theoretical conclusions, providing more concrete validation of the approach.
Questions For Authors: No comments or questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer UEjP,
Thank you for taking the time to review our paper. We are happy to hear that you found our results interesting and the presentation clear. We agree that we could be more detailed in our presentation of the simulations. In the final draft of the manuscript we plan to make the code publicly available, including a jupyter notebook that readers can use to explore the concepts of the paper for themselves. In addition, we will mention Appendix E in the main text, per your recommendation. | Summary: The authors develop a a feature-centric theory of transfer learning, based on their insight that transferability is a property of the learned
feature space and not only of the source and target datasets.
The theory is developed for a deep linear networks and analytically characterizes the transferability phase diagram as a
function of the target dataset size and the feature space overlap.
The authors report a few insights based on their theoretical analysis, and demonstrate potential applicability in two-layer ReLU networks
Claims And Evidence: Partially. I am unsure about the claim about lazy vs. rich training mentioned in Section 5, and more generally about claims related to training dynamics.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Partially
Experimental Designs Or Analyses: N/A. The analysis is mainly theoretical.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Sufficient novelty
Essential References Not Discussed: Please cite the following reference for lazy vs rich training:
Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR 2014
Other Strengths And Weaknesses: The authors provide many profound insights, however:
- Taken strictly, the results are limited to deep linear networks.
- The practical implications are not clear.
- The manuscript is rather difficult to follow. I understand that this is a theory paper, but I would appreciate including an example that guides the reader through the theorems and what they mean for that example.
- A visualization of the feature space would be very helpful, especially to explain the overlap.
- I miss the number of epochs and learning rate as influential parameters that impact transferability in the rich regime.
Other Comments Or Suggestions: A few typos:
- intital => initial
- discrepency => discrepancy
- and model operates => the model
- Gram matrix => always capitalize
Questions For Authors: - How do you measure feature space overlap ($\theta$)? And why do you use radians as its unit?
- I found it surprising to see a Goldilocks effect w.r.t. target dataset size ($\lambda$), as illustrated in Figure 1. Is this because a large target set negates any benefits of transfer learning and tips the scale towards training from scratch?
- Do the results extend beyond two-layers in ReLU networks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 65EM,
Thank you for your thoughtful review and feedback. You are correct that the rigorous proofs are limited to deep linear networks. This architecture exhibits certain symmetries which generate a conserved quantity in gradient flow that we exploit heavily to prove convergence. However, we believe the main takeaway of the work is applicable to other architectures, as demonstrated by the experiments in shallow nonlinear networks. Namely, during gradient descent in the feature learning regime, the model will learn features present in the source task. This defines a subspace of functions that the model can represent. When transferring to a target task with linear transfer, the model can only fit the portion of the target function that lives in the space spanned by the learned features of the target task. Since we can show this effect precisely in the deep linear setting we focus on this as the primary running example in the manuscript. The main practical take-away is that feature space overlap is the relevant object for predicting transferability. The focus of this paper is to demonstrate this phenomenon precisely in a solvable model, but we believe that interesting directions for future work include designing feature-level metrics that are predictive of transfer performance, or designing optimal sampling protocols for the source task when source sampling is readily available, but target data is scarce. Per your request of practical implications, we will add these ideas to our discussion section. While the space constraints keep us from including a simple example in the main body of the paper, we plan to make our code publicly available, including a Jupyter notebook with a simple example to help readers explore the concepts of our work. Additionally, we will include an additional figure with a cartoon depiction of the feature space overlap concept in the main body of the paper. We also thank you for raising the question of finite training time and learning rate. 
We show in Appendix C2 that in deep linear networks convergence to a global minimum is subexponential in time, so at long times the results of our paper hold with very small error. However, optimal early stopping is an interesting and likely useful regularization technique that would prevent the model from sparsifying to the source features. We predict that the results of source task early stopping would be similar to those for weight decay, which we demonstrate in the appendix (Figure 5). That is, keeping the model from completely learning the source task may actually help transfer if the source and target features are very different. As for the learning rate, we believe that this framework would also describe models trained with other optimization algorithms such as finite-step-size gradient descent or SGD. The challenge lies in describing their dynamics precisely. For this reason, we focused on the analytically tractable setting of gradient flow. However, similar theorems on the global convergence of gradient descent for finite step size are proven in “Global Convergence of Gradient Descent for Deep Linear Residual Networks” (Wu 2019). The main takeaway is that with a sufficiently small learning rate the network converges to a global optimum, in which case our results would hold exactly as written in the paper. Finally, we respond to your specific questions below:
We define the feature space overlap in the following way: during the source training the model will learn a function $f = \sum_{i} c_i \phi_i(x)$. We call span($\{\phi_i\}$) the feature space of the source task. The feature space overlap is the norm of the orthogonal projection (in the $L_2$ sense) of the target function $f_t$ into this space. In the case of our model, the learned feature space is $\beta_{s}$ (see Theorem 3.4), and the projection is simply $\beta_{s}^T \beta_{t}$. Since both vectors have unit norm, this can be viewed as the cosine of the angle between the two tasks, so we plot the overlap as an angle in radians. In the case of two-layer ReLU networks, computing the overlap is also analytically tractable. We describe how this is done in Appendix D.
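As a concrete illustration of this definition in the deep linear case, the overlap reduces to the angle between two unit-norm task vectors. The vectors below are made up for illustration:

```python
import math

def overlap_angle(beta_s, beta_t):
    """Angle in radians between unit-norm source and target task vectors;
    the inner product beta_s . beta_t is the cosine of this angle."""
    dot = sum(s * t for s, t in zip(beta_s, beta_t))
    # Clamp for floating-point safety before taking arccos.
    return math.acos(max(-1.0, min(1.0, dot)))

theta_same = overlap_angle([1.0, 0.0], [1.0, 0.0])        # identical tasks
theta_orthogonal = overlap_angle([1.0, 0.0], [0.0, 1.0])  # disjoint features
```

Identical tasks give an angle of 0 and fully disjoint features give $\pi/2$, matching the role $\theta$ plays in the phase diagrams discussed throughout the reviews.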
That is the correct intuition. We measure the success of transfer learning by its generalization relative to training from scratch on the target dataset. If the target dataset size is very large, there is little benefit to pretraining.
Yes we believe our results extend beyond two layer ReLU networks. Equation 18 holds generally for any Reproducing Kernel Hilbert Space. We choose two layer ReLU networks with Gaussian data since the projection in Equation 18 can be computed exactly.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response by the authors which clarified many issues.
I will raise my score to 4 accordingly. | null | null | null | null | null | null |
RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals | Accept (poster) | Summary: **Post-rebuttal edit: In light of the discussion with the authors, in which they were very engaged and forthcoming, I spent a lot of time debating whether I should increase my score from from 2 to 3. I continue to share the concerns of Reviewer pJdZ that the two key assumptions motivating the method are unlikely to hold in practice, meaning this should be considered a primarily empirical paper rather than one that has strong theoretical backing. In the end, I decided that the problem being tackled was sufficiently novel, interesting and important that the positives narrowly outweigh the negatives, so I decided to move my score to 3. This was a really close call though, and the meta reviewer should consider my position to be a "weak weak accept".**
---
This paper aims to quantify the causal treatment effects of high-level semantic and syntactic attributes (e.g. length, sentiment, helpfulness) on the predictions of language reward models (RMs). A promising approach (which has been explored in prior work) is to prompt a language model (LM) to rewrite text to modify each attribute, then measure the change in reward. This paper argues that such estimates can be biased if the LM inadvertently modifies other attributes as a byproduct. The proposed method is to use "double-rewrites" (i.e. change the attribute, then change it back) to cancel out such off-target changes and obtain more reliable treatment effect estimates.
Claims And Evidence: See "Theoretical Claims" for my main feedback on the claims.
Regarding the empirical results, they do align with the claimed benefit of the method. That said, I note that even the single-rewrite method produces much better results than the naïve baseline. It makes me wonder whether the issue you're trying to solve is particularly important in practice. In particular:
- In your third experiment (Appendix C.1), both single- and double-rewrite methods yield constant treatment effects. I don't quite understand your claim that the double-rewrite results are more stable.
- On the "Real World Reward Models" Experiments in Section 5.2, there appears to be **no comparison to the single-rewrite method at all**. Why is this not included? It makes the reader wonder whether it actually gives very similar results to your method here.
Methods And Evaluation Criteria: My main critiques do not belong in this section so I don't have much to say here. The method is well-motivated (provided the assumptions hold; see below), well-explained, and simple to implement, which is always an advantage!
Theoretical Claims: Overall, I think your main idea is quite an elegant one, and I can see how it should work in theory, **provided all the required assumptions hold**. That said, I'm a little sceptical that they actually do hold in practice.
- **Assumption 1:** I'm fairly happy that the changes to off-target attributes $\xi$ won't typically depend on the target attribute $W$, but what if they depend on the *current values* of $\xi$? As an illustrative example, suppose that $\xi$ denotes the length (number of words) of the response, and that the LM always tends to make a response 20% shorter than the current length every time it rewrites. In that case, the rewrite will be shorter than the original, and the rewrite$^2$ will be *even shorter*. Your method would not cancel out any spurious length effect here, but rather would invert it. I note that my example uses a numerical off-target attribute (number of words) rather than a binary one as you consider throughout your paper, but I don't think that should be critical.
- **Assumption 2:** I see no particular reason to expect this additivity assumption to hold. It seems just as plausible that the interaction between attributes is *multiplicative*, e.g. an RM gives high reward if the sentiment is positive *and* there are no spelling mistakes, and low reward otherwise.
I'm open to any more persuasive arguments that these assumptions should hold in practice, and would be interested to hear if you have any.
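The length example in the Assumption 1 bullet can be made concrete with a toy simulation (entirely hypothetical numbers, written by us for illustration): if the rewriter shortens every input by a fixed 20%, the double-rewrite does not restore the original length, so the comparison carries a systematic length gap instead of cancelling it.

```python
def rewrite_length(current_length):
    # Hypothetical rewriter that always shortens the text by 20%.
    return 0.8 * current_length

original = 100.0
single = rewrite_length(original)      # roughly 80 words
double = rewrite_length(single)        # roughly 64 words

# If double-rewriting cancelled off-target length changes, this gap
# would be near zero; instead the length effect compounds:
spurious_gap = single - double
```

Any length sensitivity in the reward model would then be attributed to the target attribute rather than cancelled out, which is exactly the failure mode the bullet describes.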
A second concern is that your method ends up evaluating the treatment effects of attributes using entirely synthetic data (i.e. the two LM rewrites of each original data point). Since some distribution shift will exist between the original and rewritten data, does this mean that the results don't properly quantify the effect of each attribute in the original data distribution? I'm not quite sure how worried to be about this, but I thought it was worth mentioning (perhaps including in the paper itself).
Experimental Designs Or Analyses: Overall, the main experimental design using datasets with a known ground truth is appropriate, and tests what you would want to test for a method like this. It also makes sense to end with a comparison on real world models where no ground truth exists, but again, why no single-rewrite method in Figure 5?
Supplementary Material: Appendix reviewed, but Supplementary Material not reviewed.
Relation To Broader Scientific Literature: This paper extends a very small (but important!) literature on evaluating and interpreting language RMs, which are critical components in the alignment pipeline. The specific problem of confounding factors biasing interpretations is a novel one in this area.
Essential References Not Discussed: I took a look at your first citation in the Related Work section (Jiang et al. 2024) and noticed some significant overlaps in the core method, i.e. interpreting RM predictions by getting an LM to rewrite text to modify certain high-level attributes. Obviously you are citing this work already, which is good, but I do think you should acknowledge these similarities earlier in the paper, e.g. in the introduction and method sections. Doing so would help to clearly delineate the novelty of your method, namely the use of double-rewrites to cancel out off-target changes. This comment is not intended to diminish your own contribution; in fact, you could frame the possibility of off-target changes as an important limitation of Jiang et al.'s work, which your proposal (partially) rectifies.
Other Strengths And Weaknesses: Overall, I find the paper to be well-written, with a good discussion of the key issues and decisions made as the method is introduced.
Other Comments Or Suggestions: - I don't think the term "imperfect counterfactuals" (in the title and elsewhere) quite captures the essence of the problem of off-target attribute changes. I don't have a great suggestion for an alternative, but maybe a word like "imprecise" or "confounded" or "poorly isolated" could be better.
- Figure 1 isn't currently doing much to aid understanding of the problem, and is quite confusingly laid out. For example, it's not particularly clear what the red question mark is meant to represent. You should consider another attempt at this figure, or even removing it entirely.
Questions For Authors: 1. Can you give any reassurance to my concerns in the "Theoretical Claims" section?
2. Can you explain why the single-rewrite method is not shown in Figure 5, or better, actually add those results to the Figure?
3. Can you elaborate on the claim that "the double-rewrite estimator is more stable than the single-rewrite estimator" in Appendix C.1? Do you just mean that the values are closer to zero?
4. At a high-level, I understand your method as compensating for imperfect instruction following on the part of the rewriting LM (i.e. you ask it to only change one attribute, but it changes others too). As these models get better over time, would you expect your method to become less necessary?
**I would like to emphasise that I'm very open to raising my score if you're able to address my concerns. I think RM interpretability is a very important problem and it's great to see novel ideas in this area.**
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review!
A critical point to clarify is that Assumptions 1 and 2 in Section 4 are merely *sufficient* conditions. There’s no reason a priori to expect that imperfect rewrites can be used in causal estimation, so the role of Theorem 4.1 is simply to show that the approach is not entirely vacuous! That being said, we also note that all causal estimation is based on strong assumptions, and the assumptions in Section 4 have the virtue of being largely checkable! Unlike most assumptions in causality, a practitioner can actually just manually inspect a random subset of the rewrite data to see if it matches the desired counterfactual elicitation.
Those are some clever toy examples, let's apply the considerations above to them:
1. Suppose you visually inspected some samples and had a suspicion that the rewriter was cutting the length by 20% each time it rewrites, as you describe. Unlike e.g. adjustment-based causal methods, you can just test your hypothesis directly, which we consider a major advantage of our method. To illustrate this, we ran two-sample t-tests on the rewrite vs. double-rewrite and found no significant differences between the mean length when rewriting on sentiment. Specifically, p-values were 0.1014 (IMDB) and 0.7496 (hh-rlhf).
2. We agree that in general Assumption 2 (additivity) may not hold, as more complex interaction effects could occur, but note that the ATE is itself only a sensible causal estimand in the absence of interaction effects (generally, not just in our setting). When such interaction effects are present, other estimands like the conditional ATE are more appropriate. Our work here aims to cleanly spell out a simple case, which we hope lays the foundation for more complicated causal estimands.
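The check described in point 1 reduces to a standard two-sample (Welch) t-test over response lengths. The helper below is a stdlib-only sketch of the test statistic; the reported p-values would then come from the t-distribution, e.g. via `scipy.stats.ttest_ind` with `equal_var=False`.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch two-sample t statistic, e.g. for comparing mean lengths of
    rewrite vs. double-rewrite responses."""
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Identical length distributions give t = 0 (no detectable mean shift):
t_null = welch_t([10, 12, 14, 16], [10, 12, 14, 16])
```

A large-magnitude statistic (small p-value) would flag the kind of systematic length drift hypothesized in the reviewer's toy example; the p-values the authors report (0.1014 and 0.7496) correspond to no significant drift.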
Responses to other comments and questions:
- Good point about Figure 1, we have edited it. Edit: Here is a link to the new figure https://postimg.cc/9wwXZ1bQ
- The key distinction is that Jiang, et al. focus on altering examples to change a classification score while RATE focuses on estimating a treatment effect of a specific latent attribute on a reward score. We now clarify this earlier in the manuscript per your suggestion.
- All single-rewrite and double-rewrite comparisons for the "Real World Reward Models" Experiments in Section 5.2 are already provided in Appendix D.1; we’ll note this in the main body. We don’t show the single rewrites in Figure 5 because the semi-synthetic experiments demonstrated they don’t properly cancel out rewrite errors.
- We would expect models to become better at generating “true” counterfactuals as they get better (i.e. better at following instructions, more similar to human-generated text).
- While it’s true that the double-rewrite method ends up using entirely LLM-generated data, it's not obvious that the original data distribution has any privileged status if the goal is to understand alignment—it might even be the case that LLM-rewrites are closer to the generations of the downstream fine-tuned model. A benefit of the rewrite method (as compared to e.g. adjustment) is the ability to spotcheck rewrites to make sure they look typical of the domains we care about.
- For our third experiment (Appendix C1), we will edit this section to give quantitative details supporting our claim that the double-rewrite method is more stable. In particular, the slopes of lines interpolating the data points differ: the single-rewrite method has a slope of -0.0413 while the double-rewrite method has a slope of -0.0114. Thus, while both are fairly stable in absolute terms, the double-rewrite is relatively more stable.
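As a sketch of how such stability slopes can be computed (with made-up data points standing in for the Appendix C1 results, reusing the slope magnitudes quoted above), one can fit a line through each estimator's estimates across the sweep:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.1, 0.9, 9)   # hypothetical sweep parameter

# Made-up estimates drifting with the quoted slopes, plus tiny noise.
single = 0.5 - 0.0413 * x + rng.normal(0, 0.001, x.size)
double = 0.5 - 0.0114 * x + rng.normal(0, 0.001, x.size)

slope_single = np.polyfit(x, single, 1)[0]
slope_double = np.polyfit(x, double, 1)[0]
# A slope closer to zero means the estimator is more stable across the sweep.
```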
---
Rebuttal Comment 1.1:
Comment: Thanks for your response; much of this is satisfying to me. I respond only to points where I feel I have more to add right now.
- **Assumptions as merely sufficient conditions:** I accept your point here; it might be worth including this discussion in the paper.
- **Length reduction example:** I agree that it is advantageous that one could simply test for 'compounding' rewrite effects of the type I mentioned. Do you think there would be any way to cancel those out, as an extension of your method? How about *averaging* the single- and double-rewrite treatment effects? Or maybe averaging the double- and triple-rewrite ones? I'd be interested in your perspective on this.
- **Distinction from Jiang et al.:** I'm not convinced that the distinction is quite as sharp as you claim. While it seems that Jiang et al. focus on changing individual classification decisions in most of the paper, their Section 4.1 ("Global Sensitivity") performs an aggregated analysis across many data points that looks somewhat similar to your measurement of attribute treatment effects. From my understanding, the key methodological difference really lies in the single- vs double-rewrites.
---
I'll wait until I get a response from you before updating my review and making my final assessment.
---
Reply to Comment 1.1.1:
Comment: Regarding the length reduction example, these are great ideas for extensions! We understand the “triple-rewrite” estimator to refer to <taking the second and third rewrites as counterfactual pairs> and the “double-rewrite” estimator to be RATE as in Algorithm 1. Averaging the double- and triple-rewrite estimators seems better than the single- and double-estimators for similar arguments as we discuss in the paper: there are more differences between the original sample and the first rewrite than between subsequent rewrites.
It would be interesting follow-up work to explore this in detail; we agree that RATE is laying a methodological foundation for estimation using imperfect rewrites! Fully addressing this particular length-reduction example seems non-trivial. For instance, whether the length bias induces higher or lower reward overall depends on the frequency P(W=1) of the target attribute (page 4, lines 205-215). At the extreme, the ATT (of the double rewriter) consists entirely of <double - single> rewrites, while the ATU consists entirely of <single - double>. Hence for this length-reduction example, whether a (single-, double-, or triple-) rewriter ends up erring high or low depends on the frequency of W in the data.
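To make the cancellation argument concrete, here is a toy simulation (our own illustration, not the paper's code): the first rewrite both flips the attribute and moves the text into an "LLM style" that a hypothetical reward model likes, while subsequent rewrites only flip the attribute, so pairing rewrites with rewrites-of-rewrites cancels the style shift. The averaged double-/triple-rewrite estimator from the discussion above is included for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n, effect, style_shift = 5000, 0.5, 0.3   # made-up ground-truth quantities

r0 = rng.normal(0.0, 1.0, n)              # reward on originals (attribute ON)
# First rewrite flips the attribute but also adds the "LLM style" bonus.
r1 = r0 - effect + style_shift + rng.normal(0, 0.1, n)   # attribute OFF
# Later rewrites start from already-LLM-styled text, so only the attribute changes.
r2 = r1 + effect + rng.normal(0, 0.1, n)                 # attribute ON
r3 = r2 - effect + rng.normal(0, 0.1, n)                 # attribute OFF

single = np.mean(r0 - r1)            # biased by construction: effect - style_shift
double = np.mean(r2 - r1)            # style shift cancels between rewrites
triple = np.mean(r2 - r3)
averaged = 0.5 * (double + triple)   # averaging idea from the discussion
```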
Lastly, regarding Jiang et al.: There are two key reasons that their results are not about the ATE:
- Firstly, their use of “counterfactual” is different from its use in our paper. The authors do rewrites by specifying an attribute (e.g. “correctness”), have an LLM mark words in the response associated with the attribute, then have the LLM change only those words to make the response more / less like the attribute. No restriction is placed on whether this rewrite affects other attributes as well. **Hence, “counterfactual” in Jiang et al. refers to whether the attribute flips the RM’s preference, not whether “no other attributes have changed” as we use here.**
- Secondly, in Section 4.1, they calculate rates for how often generating a counterfactual on a specific attribute flips a preference classification. These aggregated flip rates are interesting, although slightly different than calculating an ATE. You could imagine a situation where, say, "correctness" was correlated with "know-it-all-ness" in the data in dispreferred responses. So, even if you change "correctness," the reward model would not flip its preference classification due to the "know-it-all-ness." In the calculation of the ATE, however, the rewrite procedure can compare the reward model's scores on "correct" + "know-it-all" to "incorrect" + "know-it-all" to isolate the RM response to "correctness."
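The "correctness" vs. "know-it-all-ness" scenario can be made concrete with a toy linear reward model (our own illustration; the coefficients are arbitrary):

```python
def reward(correct, know_it_all):
    # Toy reward model: rewards correctness (+1) but penalizes know-it-alls (-2).
    return 1.0 * correct - 2.0 * know_it_all

preferred = reward(correct=1, know_it_all=0)
dispreferred = reward(correct=0, know_it_all=1)

# Flip-rate view: making the dispreferred response "correct" does NOT flip the
# preference, because know-it-all-ness still dominates the score.
flips = reward(correct=1, know_it_all=1) > preferred          # False

# ATE view: holding know-it-all-ness fixed isolates the effect of correctness.
ate_correctness = reward(1, 1) - reward(0, 1)                 # +1.0
```

Here the flip rate for "correctness" is zero even though the reward model responds to correctness with a full +1, illustrating how the two quantities can diverge.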
We really appreciate your engagement with our paper and the ideas you've had! Please let us know if you have any other questions. | Summary: The manuscript presents Rewrite-based Attribute Treatment Estimator (RATE) as a novel approach to estimate the causal effect of response attributes on reward models. It addresses the challenge of reward model opacity by leveraging LLM-generated counterfactual rewrites. While the work is theoretically grounded and experimentally validated, some areas require improvement for clarity, rigor, and completeness.
Claims And Evidence: Strengths
1. Relevance and Novelty: The paper tackles an important problem in reinforcement learning and NLP model alignment—reward model explainability. The proposed method (RATE) effectively corrects biases introduced by imperfect rewrites.
2. Strong Theoretical Foundation: The formalization of RM explainability as a causal inference problem (through ATE, ATT, and ATU) is rigorous and aligns with best practices in causal learning.
3. Well-Designed Experiments: The semi-synthetic experiments provide clear validation of RATE’s superiority over naive and single-rewrite estimators.
4. Open Source and Reproducibility: The study emphasizes reproducibility by using publicly available datasets and reward models, and by sharing code implementation details.
Weaknesses and Areas for Improvement
1. Clarity of Conceptual Framework: Some theoretical concepts are presented too briefly, making them difficult to follow for readers unfamiliar with causal inference.
2. Incomplete Discussion on Practical Implications: The study does not sufficiently explore how RATE would perform in real-world applications (e.g., LLM alignment tasks).
3. Limited Discussion on Rewrite Quality: The paper acknowledges but does not fully address the potential quality degradation in LLM-generated rewrites.
4. Unclear Computational Cost Analysis: The method involves multiple rewrites, increasing computational demands. The paper lacks a clear discussion on efficiency and feasibility.
Methods And Evaluation Criteria: Methodology
Section 2: Setup
• Subsection 2.1, Paragraph 2: The explanation of naive estimators does not provide explicit examples or empirical failure cases.
○ Suggestion: Introduce an illustrative failure case to help contextualize why naive estimators are unreliable.
• Subsection 2.3, Equation 1: The derivation of the ATE assumes the existence of perfect counterfactuals but does not explicitly discuss hidden confounders. Additionally, none of the formulas that follow are labeled with equation numbers; please fix that.
○ Suggestion: Discuss potential unobserved confounders and their impact on causal effect estimation.
Section 3: RATE Procedure
• Subsection 3.2, Paragraph 3: The paper discusses "imperfect rewrites" but does not define quality metrics for evaluating rewrite reliability.
○ Suggestion: Introduce formal metrics (e.g., similarity scores, embedding distances) to assess rewrite fidelity.
• Subsection 3.3, Algorithm 1: The pseudocode for RATE is well-structured but lacks computational complexity analysis.
Suggestion: Provide an analysis of the computational cost and memory usage, comparing RATE with baseline methods.
Theoretical Claims: Section 2: Setup
• Subsection 2.3, Equation 1: The derivation of ATE assumes the existence of perfect counterfactuals but does not explicitly discuss hidden confounders.
Suggestion: Discuss potential unobserved confounders and their impact on causal effect estimation.
Section 3: RATE Procedure
• Subsection 3.2, Paragraph 3: The paper discusses "imperfect rewrites" but does not define quality metrics for evaluating rewrite reliability.
○ Suggestion: Introduce formal metrics (e.g., similarity scores, embedding distances) to assess rewrite fidelity.
• Subsection 3.3, Algorithm 1: The pseudocode for RATE is well-structured but lacks computational complexity analysis.
Suggestion: Provide an analysis of the computational cost and memory usage, comparing RATE with baseline methods.
Experimental Designs Or Analyses: Experiments
Section 5: Experimental Setup
• Subsection 5.1, Paragraph 4: The authors use GPT-4o for rewrites but do not evaluate how different LLMs impact performance.
○ Suggestion: Include a comparison of rewrite effectiveness across multiple models (e.g., GPT-4 vs. LLaMA).
• Figure 2: The off-target changes in rewrites are discussed, but there is no quantitative measure of rewrite accuracy.
○ Suggestion: Report statistical metrics (e.g., perplexity shift, KL divergence) to assess rewrite fidelity.
Section 5.2: Real-World Reward Models
• Subsection 5.2, Figure 5: The results show that naive estimators significantly overestimate effect sizes, but do not discuss how these findings translate to practical applications.
○ Suggestion: Provide case studies or real-world examples where RATE can correct reward model biases in deployed systems.
• Paragraph 6: The text briefly mentions bias in common reward models but does not explore whether bias correction affects downstream model performance.
○ Suggestion: Conduct an ablation study to analyze how RATE impacts final model behavior.
Supplementary Material: Yes. Appendix A, Reproducibility Statement, and Appendix C, Additional Semi-Synthetic Experiment Details.
Relation To Broader Scientific Literature: Introduction
• Section 1, Paragraph 3: The paper states that "Naively, one might attempt to estimate RM responsiveness to an attribute..." but does not explicitly introduce the causal inference framework early enough.
○ Suggestion: Introduce the concept of counterfactual analysis earlier to provide context for the naive approach’s limitations.
• Paragraph 5: The introduction claims that RATE is "empirically effective," but lacks an explicit research question or hypothesis.
○ Suggestion: Clearly define the research objectives (e.g., "We aim to develop an estimator that isolates causal effects while correcting for LLM rewrite biases").
Essential References Not Discussed: None
Other Strengths And Weaknesses: Please refer to the Claims and Evidence
Other Comments Or Suggestions: Abstract
Revision Suggestion: The abstract effectively highlights the problem and proposed solution but lacks quantitative results. Include key findings (e.g., how much RATE outperforms naive approaches in estimation accuracy).
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and your support!
We will address some of your higher-level comments and questions first.
- We appreciate your kind words about relevance and novelty. We agree that practical implications are important, which is why we demonstrate the utility of our method on real-world reward models drawn from RewardBench. Following the reviewer’s suggestion, we will emphasize this application in the abstract.
- We are happy to hear that you find the experiments well-designed. On the topic of rewrite quality, quantitative metrics are fundamentally proxy measurements for linguistic similarity. As such, we believe that simply allowing the reader to see random samples of our rewrites (Appendix D) is a more direct way to demonstrate rewrite quality.
- While we agree that the ultimate application is in downstream tasks, for the purposes of debugging it’s important to have modular evaluations. For instance, our work allows practitioners to identify whether issues identified in alignment (e.g., length bias) are due to the reward model itself or the alignment procedure. On the other hand, if we were to evaluate our method just by its impact on downstream performance, we would not be able to disentangle these two components.
Responses to other comments and questions:
- The computational complexity of RATE remains O(n) since we generate two rewrites per example via API calls and score them with a reward model, where the primary cost comes from generation. Using the gpt-4o-2024-08-06 model, processing 25K IMDB samples (including rewrites and rewrites-of-rewrites) costs only $60, and in practice, a relatively small n suffices for reliable confidence bounds on treatment effects.
- Regarding using other LLMs for rewrites, our priority was to estimate the treatment effect of concepts on reward model scores, so we used the best rewriter available at the time of writing. As LLMs improve in quality and efficiency, we imagine that smaller, more efficient LLMs will be able to perform acceptable rewrites, but considering the cost is so small, and the emphasis here is on precise evaluation, there is little benefit to using less capable LLMs. | Summary: The paper proposes to evaluate the responsiveness of reward models used for LLM training on certain attributes of interest via average treatment effect and average treatment effect on the (un)treated. To simulate interventions on the interested attributes accurately, the paper proposes to rewrite the response twice to cancel out certain rewrite errors.
## Update after rebuttal
The authors' rebuttal is not sufficient to change my current rating. They cannot justify the two unrealistic assumptions in practice or evaluate empirically how far a reward model deviates from those two assumptions. I would have given them a higher score if 1) they didn't use those two assumptions or 2) they framed this as a purely empirical paper.
Claims And Evidence: Yes. However, I remain skeptical that the rewrite errors can realistically be canceled out, rather than accumulating, after rewriting twice.
Methods And Evaluation Criteria: The ATE approach used in the paper needs to be compared with the probability of causation line of literature as I discussed in more detail in the later section.
Theoretical Claims: Other than the overly strong assumptions, the proof itself is legit.
Experimental Designs Or Analyses: The experiments look valid to me.
Supplementary Material: I checked the proof part.
Relation To Broader Scientific Literature: The paper discusses how one should provide explanations to reward model sensitivity on certain attributes of the responses. It extends the previous work on naively estimating this sensitivity via average conditional reward differences.
Essential References Not Discussed: The paper proposes to use ATE families to evaluate the effect of text rewrites on the reward outcomes. However, in the literature, people also use probability of causation (PNS) extensively to derive explanations. How would the authors compare their approach to this line of work?
- [Probabilities of causation: Bounds and identification](https://ftp.cs.ucla.edu/pub/stat_ser/r271-A.pdf) in Annals of Mathematics and Artificial Intelligence 28 (2000).
- [Towards Trustworthy Explanation: On Causal Rationalization](https://proceedings.mlr.press/v202/zhang23ap/zhang23ap.pdf) in ICML 23.
- [Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models](https://openreview.net/pdf?id=b1ylCyjAZk) in NeurIPS 24.
Other Strengths And Weaknesses: - The usage of ATE is well motivated and the associated practical concerns are clearly explained and addressed.
- The strong assumption on LLM rewrite quality is the only flaw in the theory. While the proposed double-rewrite procedure seems effective in the provided experiments, it still leaves doubt about whether this is always the case, especially given that the LLMs rewriting the texts are themselves trained with some black-box reward models.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How do you think the quality of the reward model that the rewrite LLMs were tuned with will affect your approach's outcomes?
2. How can one possibly measure the "invalidness" of your assumptions with black-box rewrite LLMs? If rewrite errors cannot cancel out, how will Theorem 4.1 change?
3. How would you compare your ATE approach to PNS-based approaches in the literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review and your detailed comments.
We agree that the assumptions in Theorem 4.1 are somewhat strong, albeit less onerous than they may first appear: the cancellations only need to occur *in expectation*, which is a weaker condition than cancellation for each unit.
However, we emphasize that these are merely *sufficient* conditions whose purpose is to show that there exist any situations under which RATE is consistent: after all, there’s no reason a priori to expect that imperfect rewrites can provide a causal estimation—hence, the purpose of the theorem is simply to show that the approach is not vacuous. The substantive evidence of the paper are empirical: that the causal view must be considered (e.g., the experiments in Section 5.1 showing a mismatch with the naive and single-rewrite estimates) and that the double-rewrite method provides better counterfactual pairs than the single-rewrite method (e.g., based on Figure 3 and inspection of the candidate counterfactual pairs in the appendix).
That being said, we also note that all causal estimation is based on strong assumptions, and the assumptions in Section 4 have the virtue of being largely checkable! Unlike most assumptions in causality, a practitioner can actually just manually inspect a random subset of the rewrite data to see if it matches the desired counterfactual elicitation.
Regarding the specific questions:
1. RATE is agnostic to the particulars of how the rewriter-LLM was trained (e.g. with RLHF or not): all that matters are the produced rewrites. In particular note that humans also produce imperfect rewrites, and the same estimation procedure could be applied in principle to these expensive human-generated rewrites (subject to the same validity tests above), even though humans are not trained via any explicit reward model.
2. See main response above.
3. Probabilities of causation are appropriate for binary outcome variables like legal decisions, which is why we opted for the ATE to formalize how much the reward model (continuous-valued) is responding to attributes of text—hence, the cited papers use a fundamentally different notion of explanations. It’s an exciting direction for future work to explore whether the double-rewrite trick might improve estimation for other counterfactual estimands.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttals. I can resonate with your argument that the proof is mainly to show the ideal situation for RATE to be valid. However, given the purpose of providing explanations to LLMs which is already a giant black box, I don't think we should compound the errors & confusions by involving another rewrite LLM without practically auditable justifications.
I will maintain my score but there are two directions for further improvements. 1) It would be nice if the authors can show thm. 4.1 without the strong assumption and adding some error analysis; Or 2) you may avoid having imperfect theories but increase the amount of empirical evidence that the proposed approach works.
**Update after authors' reply**:
Thanks for the clarification. But still, to measure a reward model's responsiveness to high-level textual attributes (or to understand the reward model behavior, as stated in your related work), I am expecting a more formal justification for the causal estimand rather than proofs based on assuming that the rewrite error doesn't depend on the rewrite direction. Again, Assumption 1 is indeed strong because of its counterintuitive nature. Additionally, "no unobserved confounders" is not commonly assumed in causal theory work (it is indeed commonly used in some less justified empirical work that tries to sugarcoat the results, though).
And similarly for assumption 2, the additivity of reward model, how can you evaluate how far a reward model deviates from that assumption empirically?
Building upon both **hard-to-verify and unrealistic Assumptions 1 and 2**, the proof becomes **practically unauditable**. And my suggestion remains: either **removing the assumptions and adding error analysis** or **framing the work as purely empirical**.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply.
We would like to reiterate our position that our experiments are sufficient and the assumptions are quite weak, especially relative to common assumptions in causality (e.g., no unobserved confounders). We would also like to clarify that the purpose is not to "provide explanations to LLMs," as the reviewer states, but to **measure a reward model's responsiveness to high-level textual attributes.**
The key obstacle to methodological validation in causality is the challenge of getting ground truth for the causal estimands. The semi-synthetic experiments we have in Section 5 and Appendix C have both a clear ground truth and realistic premises. They illustrate not only that our method works, but also that the double rewrite approach is necessary. See Figures 3 and 4 (Section 5.1) and Figure 6 (Appendix C), which illustrate that the single rewrite method (unlike the double rewrite method) is sensitive to distribution shift and consequently yields poor estimates of the ATE. **These are practically auditable justifications for the double rewrite method.**
On the assumptions, we would like to emphasize that they serve the role of showing that we can get strong theoretical guarantees under non-vacuous conditions. We would also like to reiterate that Assumption 1 does not require cancellation in every case, but only in expectation, so the assumption is much weaker than it might at first seem.
**What specifically does the reviewer believe is lacking in the assumptions or the experiments? Thus far the reviewer has only repeated vague concerns. We look forward to addressing the reviewers' specific concerns. Would it resolve the reviewer's concerns if we clarified these points in the exposition of the manuscript?** | Summary: The paper introduces RATE, a framework for understanding reward models in the context of LLMs. The core idea is to do rewrites of rewrites, so that confounding factors (such as typos and sentence length etc.) can be filtered out. Some experiments are done showing the method appears better than the alternative of doing a single rewrite counterfactual.
# After rebuttal
Many thanks to the authors for their direct and professional rebuttal, and thanks for clarifying some misunderstandings. (the new Figure 1 looks improved!)
* Can you clarify what you mean by “significant” regarding the difference in distributions in figure 2? “Significant” as in hypothesis testing or as in “meaningful”? -- I meant in terms of "meaningful" really, but this is just an aside.
My main issue with the paper is the lack of demonstration in application. I understand this is a tall order for any XAI paper, but what I really want to see is that this method is better than e.g. Jiang et al. (2024) on a task such as model improvement in a fair comparison; in such a situation I would accept with a 4/5 score. So I will keep my current recommendation, thank you.
Claims And Evidence: I don't think there are any hugely problematic claims in the paper. I would argue, however, that the abstract's claim that RATE measures the "causal" effect of an attribute is not true. Just because we change attribute x and the reward flips, that doesn't necessarily mean the attribute "caused" that to happen. It may simply be other confounding factors in the text, which are brought out by modifying that attribute, while the attribute itself (when causally traced, which we cannot currently do) had no effect.
In essence, it is more accurate to say "RATE measures the causal effect of **removing** an attribute on the reward", but not of the attribute itself.
Methods And Evaluation Criteria: Yes, no issues here.
Theoretical Claims: Did not check.
Experimental Designs Or Analyses: The experiments are fine for what the authors are trying to show.
Supplementary Material: No.
Relation To Broader Scientific Literature: Explainability of reward models is mostly a novel research area, so it does help to fill a gap in that sense. Work by Jiang et al. (2024) did just do something similar, but as far as I know those authors did not consider the idea of rewriting counterfactual edits.
Essential References Not Discussed: This is ok as far as I know.
Other Strengths And Weaknesses: ### Strengths
* This is an important problem, we should ideally better understand reward models if we are chasing human-AI alignment.
* The authors have thought carefully about common issues in counterfactual generation and are trying to directly address them.
### Weaknesses
* Clarity was a bit of an issue for me. I found it very difficult to understand the process going from original -> edit -> edit of edit. Perhaps the other reviewers got it a bit better, but I feel the paper would really have benefited from a clear motivating example as "Figure 1" to make the problem setup, and the issues with prior approaches, clear; as the paper stands, I had to work very hard mentally to (I think) understand the setup. Figure 1 currently doesn't make it clear how edits can really help.
* It's difficult to judge Figure 2 also, is the difference in distributions significant? I believe the only way to really evaluate this is with pairwise comparisons to show a change/no change.
* At a deeper level, my concern with the paper is that the authors don't show how the method is useful in application. What would make the paper much better in my eyes would be to demonstrate its usage for e.g. model improvement. Can you show it improves LLMs in practice? That would be the only truly compelling evaluation in my opinion.
Other Comments Or Suggestions: Line 76 typo
Questions For Authors: * Can you absolutely guarantee that the counterfactuals (and additional edits) form reasonable text? The method seems to depend upon the second edit just "minimally" editing the first edit? How do you know it does that? Creating a counterfactual from a counterfactual seems to multiply the issue of confounding factors rather than help? But maybe I just misunderstand.
* Can you add an evaluation showing your method helps with model improvement? (or anything else useful)
If you can address these convincingly, I will raise my score to accept.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your review. We agree that the behavior of reward models is an important and understudied area of alignment research.
First, we will respond to some higher level points:
- Regarding your concern about the causal claims, we think there may be a confusion here. We are not defining our counterfactual examples as examples that result in a changed classification decision, as in Jiang, et al. 2024. Rather, we are creating counterfactual examples with our rewrite process that differ on one binary attribute (and rewrite noise). That is, our counterfactuals are of the form “what would the text of this response be if it were generated with binary latent variable W set to 1 rather than 0.” We then use these counterfactual examples to estimate an ATE on the reward score itself. That is, RATE does not remove attributes, but rather changes the value of attributes (e.g., swapping sentiment from positive to negative) in order to understand how the reward model’s output is affected by the attribute. So, our treatment effect is in “units of reward score” regardless of any classification decision. We agree that we should mention the Jiang work earlier and more clearly distinguish our notion of counterfactual from the notion of counterfactual explanations addressed in Jiang. We have updated the manuscript to clarify this in the introduction.
- We agree that the ultimate goal of RATE is model improvement. However, the current alignment pipeline involves several different components. If an aligned model is producing undesirable behavior (e.g. excessively long responses, annoying bullet pointed lists, etc.) it’s not obvious where in the pipeline this behavior is coming from. Is this from the data used or generated during alignment? The reward model? An unintended consequence of the alignment objective? We focus on a modular evaluation of reward models themselves in order to localize failure points in the alignment procedure, and we plan to examine the impact of reward model bias on aligned models in future work.
Here are responses to some other comments and questions:
- Thank you for suggesting improving figure 1. We agree that this leaves something to be desired. Edit: Here is a link to the updated figure https://postimg.cc/9wwXZ1bQ
- Thanks for catching the typo on line 76. This is fixed on the updated version of the paper.
- Can you clarify what you mean by “significant” regarding the difference in distributions in figure 2? “Significant” as in hypothesis testing or as in “meaningful”?
- It’s not possible to *guarantee* that the rewrite procedure produces sensible text, but examination of (many) randomly selected examples shows the procedure is robust. When applying the method, it is, as always, important for practitioners to spot-check the data.
- Regarding your question about Assumption 1, note that Assumption 1 is actually much weaker than the $\xi$ terms cancelling out for each unit: rather, it suffices for the cancellations to occur in expectation, which is very permissive. This is easy to check via visual inspection on a random subset. | null | null | null | null | null | null |
SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models | Accept (poster) | Summary: This submission proposes SelfCite, an approach for LLMs to improve their citation of sentences from the input context to support their responses. SelfCite evaluates the quality of a citation in terms of the LLM's probability of generating the same response without the cited sentences and with only the cited sentences. This self-supervised reward is used for best-of-N sampling, which is in turn used to generate training data for preference optimization to improve the LLM's intrinsic citation capability. SelfCite is evaluated on the LongBench-Cite benchmark (composed of 5 datasets) against baselines that use prompting, fine-tuning, or contributive context attribution, showing superior citation F1 scores. The paper also presents several studies of ablations/alternatives.
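As a rough sketch of the self-supervised reward described in this summary (all names here are our own; `logp` stands in for the LLM's log-probability of the response given a context), the score combines a necessity term (probability drop when cited sentences are removed) and a sufficiency term (probability retained with only the cited sentences):

```python
def citation_reward(logp, response, context, cited):
    """Toy necessity + sufficiency score; `logp(response, context)` is assumed
    to return an LLM's log-probability of `response` given `context`."""
    full = logp(response, context)
    without_cited = logp(response, [s for s in context if s not in cited])
    only_cited = logp(response, [s for s in context if s in cited])
    necessity = full - without_cited      # drop when citations are ablated
    sufficiency = only_cited - full       # retention with citations alone
    return necessity + sufficiency

# Dummy model: the response is likely iff the supporting sentence "B" is present.
def dummy_logp(response, context):
    return 0.0 if "B" in context else -5.0

context = ["A", "B", "C"]
good = citation_reward(dummy_logp, "ans", context, cited=["B"])   # 5.0
bad = citation_reward(dummy_logp, "ans", context, cited=["A"])    # -5.0
```

A reward of this shape can rank candidate citation sets for best-of-N sampling, as the summary describes; the exact formula in the paper may differ.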
### Update after rebuttal
The rebuttal provided additional discussions, clarifications, and a latency table that I trust will be added to further improve the paper.
Claims And Evidence: Yes, I think the experimental design and results are sound.
Methods And Evaluation Criteria: The proposed approach makes sense as it judges the quality of cited sentences based on their contribution to the LLM's probability of generating the given response, which is mostly aligned with the degree to which the cited sentences support the response in a general sense (not from the LLM's point of view). The evaluation is done on LongBench-Cite, which is a recognized benchmark for citation quality.
Theoretical Claims: The submission does not make theoretical claims.
Experimental Designs Or Analyses: I read the experimental sections in full and did not find issues with soundness.
My only major comment is that the discussion of experimental results should also address the different notions of "citation" targeted by ContextCite on the one hand and evaluated by LongBench-Cite (and similar benchmarks) on the other hand. The former is a contributive context attribution method, i.e., one that aims to find the sources "that a model actually uses when generating a statement" (lines 423-424), whereas the latter is based on GPT-4o annotations of whether context sentences "support" a statement in a general sense. I think the paper in general could discuss this distinction more clearly or more often. Regarding Section 3.4 specifically, given these somewhat different objectives, I think it is expected that the LongCite fine-tuned models in Table 1 are already slightly better than ContextCite in terms of LongBench-Cite performance. Similarly, this might explain the decent but inferior results of SFT (supervised fine-tuning) on ContextCite. What I find interesting and a bit unexpected is that using contributive attribution to re-rank citations can then improve LongBench-Cite performance, despite the (small?) mismatch in objectives.
Supplementary Material: I read Appendix B on the use of ContextCite and Appendix D on the comparison with Claude Citations.
Relation To Broader Scientific Literature: Section 5 discusses the relationships with 1) other work on teaching LLMs to generate citations as well as with 2) contributive context attribution and 3) self-supervised alignment in general. The main distinction with respect to 1) is the use of the techniques of 2) to further improve citation quality in a self-supervised manner. A longer-term goal, which the submission reports some results on, is to produce high-quality citations (as measured by benchmarks like LongBench-Cite) in a completely self-supervised manner.
Essential References Not Discussed: I cannot think of essential references that were not discussed, but please see "Other Comments or Suggestions" for additional references on contributive context attribution.
Other Strengths And Weaknesses: ### Strengths
- I think (as mentioned above) that it is a very interesting idea and finding that leveraging a somewhat different notion of "citation", namely importance in *causing* a model to generate a certain statement, can improve performance with respect to the evaluated notion of citation, namely logically supporting a statement in a general sense.
- The paper reports on many ablations and alternatives: the fully self-supervised case of SFT on ContextCite, and all the subsections of Section 4 (different rewards, citation length balancing, preference optimization vs. SFT, etc.). The set of baselines considered (shown in Table 1) is also comprehensive.
### Weaknesses
Some aspects of best-of-N sampling (Section 2.3) are not clear to me:
1. I am wondering why the text implies that best-of-N sampling has to be done "after generating the full response" (lines 154-155). Could it not also be done after generating each statement $r_i$?
1. In re-sampling citation sequences in position $e_i$, are the future statements and citations $r_{i+1}, e_{i+1}, ...$ removed? The notation in Algorithm 1 suggests yes?
1. Lines 132-134, right column, "additional inference cost of generating candidates and re-ranking": This inference cost should be quantified more precisely somewhere.
Other Comments Or Suggestions: Minor questions and comments:
- What constitutes a statement $r_i$ in the response? How is the response divided into statements?
- Lines 112-113, right column, "we exclude the BoN candidates that cites more than 384 tokens in total": I believe this refers to the set of context sentences $c_{e_i^1}, ..., c_{e_i^m}$, not the sequence of identifiers $e_i^1, ..., e_i^m$, but it is not completely clear.
- Equation (1): Were weighted sums of the probability drop and probability hold metrics also considered?
- Table 2: Are the numbers in the second Llama-3.1-8B-Instruct row better than the ones in the first Llama-3.1-8B-Instruct row because the second one is for answering with citations (i.e. generating citations also improves answer correctness)?
- The Limitations section could also note that SelfCite assumes access to the LLM's predicted probabilities, which may not always be available.
Additional references on contributive context attribution:
1. G. Sarti et al. "Inseq: An Interpretability Toolkit for Sequence Generation Models." ACL 2023.
1. L. Monteiro Paes et al. "Multi-Level Explanations for Generative Language Models." https://arxiv.org/abs/2403.14459
Questions For Authors: Please see Weaknesses under Other Strengths and Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer PWPX for the constructive comments!
> …different notions of "citation" targeted by ContextCite on the one hand and evaluated by LongBench-Cite (and similar benchmarks) on the other hand.
Thanks for pointing out the mismatch between the objectives of corroborative (sources that support a statement, e.g., LongCite & LongBench-Cite benchmark) and contributive context attribution (e.g., ContextCite). As you said, SelfCite applies contributive alignment (using context ablations) to a method for corroborative evaluation (LongBench-Cite). Our intuitions are:
1. Among citation candidates proposed by LongCite, the candidates actually “used” by the model are also more likely to “support” the statement—other candidates may just be semantically related. Although such support is certainly not always guaranteed, the two objectives are still aligned to some extent.
2. Current corroborative methods (LongCite) have significant room for improvement, even via a contributive method, despite the discrepancy in their goals. In other words, if LongCite were already near perfect, enforcing it to be “more contributive” may not help much.
We will discuss this nuanced point more clearly throughout our paper!
> …why the text implies that best-of-N sampling has to be done "after generating the full response" (lines 154-155). Could it not also be done after generating each statement $r_i$?
Yes, BoN only needs each statement $r_i$ before sampling citation $e_i$. Our implementation here is mostly for convenience. We generate a full response to get $r_1, ..., r_S$ first, then re-sample each $e_i$, considering the weak dependence between $r_{>i}$ and $e_i$.
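In pseudocode terms, this per-statement re-sampling can be sketched as follows (a minimal illustration with hypothetical helper names, not our actual implementation; `sample_citation` re-samples a citation sequence for a fixed statement, and `score_citation` stands in for the SelfCite reward):

```python
def best_of_n_citation(sample_citation, score_citation, statement, n=10):
    """Sample N candidate citation sets for one statement and keep the
    highest-scoring one; the response text itself is left unchanged."""
    best, best_score = None, float("-inf")
    for _ in range(n):
        candidate = sample_citation(statement)
        score = score_citation(statement, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

Because each statement's citation is re-sampled independently, this only requires the statements generated so far, not the full response.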
> In re-sampling citation sequences in position $e_i$, are the future statements and citations $r_{i+1}, e_{i+1}$,… removed? The notation in Algorithm 1 suggests yes?
Yes, future statements aren't used when sampling citation $e_i$.
> Lines 132-134, right column, "additional inference cost of generating candidates and re-ranking": This inference cost should be quantified…
We measured latency per example (8*A100 GPUs, batch size 1, model parallelism) on LongBench-Cite. Direct decoding of LongCite-8B vs. SelfCite SimPO shows similar latency. BoN sampling+reranking is ~7x slower. We'll add this to our paper.
|Method|Avg latency (s)|
|-|-|
|LongCite-8B|24.3|
|SelfCite:|
|BoN sampling|149.0|
|BoN reranking|34.0|
|SimPO model|26.2|
> What constitutes a statement $r_i$ in the response? How is the response divided into statements?
In LongCite-45k, statements are split based on semantic integrity in data generation, so fine-tuned models will naturally learn to produce statements. We'll clarify this in our paper.
> Lines 112-113, right column, "we exclude the BoN candidates that cite more than 384 tokens in total": I believe this refers to the set of context sentences $c_{e_i^1}, ..., c_{e_i^m}$, not the sequence of identifiers $e_i^1, ..., e_i^m$…
Yes, the length limit was applied on the cited texts, not identifiers. We'll clarify this in our paper.
> Equation (1): Were weighted sums of the probability drop and probability hold metrics also considered?
Good suggestion! We only tested 1:1 weights for simplicity but will explore this further.
> Table 2: Are the numbers in the second Llama-3.1-8B-Instruct row better than the ones in the first Llama-3.1-8B-Instruct row because the second one is for answering with citations?
After carefully checking our experiments, we found the second Llama-3.1-8B-Instruct row (avg 71.7) was actually mistakenly taken from the ContextCite result, which uses greedy decoding and answering without citations, and is thus not directly comparable.
We reran the experiments and show the full results in the table below. The first Llama-3.1-8B-Instruct row in Table 2 of our paper should be updated with row (2) below (avg 68.9). Its original scores with "†" (avg 60.2) in Table 2 of our paper are taken from Table 3 in the LongCite paper and thus have some prompt/implementation differences (they didn't open-source this part of the code). We will update it to our own results for now. The second Llama-3.1-8B-Instruct row should be updated with row (4) (avg 63.3).
In summary, answering with citations hurts accuracy (68.9 -> 63.3), which is expected and consistent with the same trend from Table 3 in the LongCite paper (all non-LongCite models have such degradation when asked to answer with citations.) We'll update Table 2 of our paper.
||Long.|Multi.|Hot.|Dur.|Gov.|Avg|
|-|-|-|-|-|-|-|
|**Answering without citations**|
|(1) Greedy (CC)|67.4|87.9|73.5|67.8|62.1|71.7|
|(2) Sampling|66.0|83.7|65.8|62.8|66.1|68.9|
|**Answering with citations**|
|(3) Greedy|61.2|79.0|68.8|60.0|54.9|64.8|
|(4) Sampling|58.4|75.3|67.3|59.3|56.4|63.3|
> The Limitations section could also note that SelfCite assumes access to the LLM's predicted probabilities…
> Additional references on contributive attribution: …
We'll add these points and references to our paper. Thanks again for the valuable suggestions!
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their responses. I trust that the additional discussions, clarifications, and latency table will be added to the paper.
Regarding the correction to Table 2, it seems then that there is no longer a degradation in answer correctness due to SFT on ContextCite data (lines 283-285).
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer PWPX for the very detailed and thoughtful review and the reply. Your in-depth questions and constructive feedback greatly helped us improve every detail of the paper!
We confirm that we will include all the above additional discussions, clarifications, and the latency table in the final version of our paper. Regarding the correction to Table 2, you’re right—there is no longer a degradation in answer correctness due to SFT on ContextCite data, and we will revise lines 283–285 accordingly to reflect this correction.
Thank you again for your time and valuable insights throughout the reviewing process! | Summary: This paper proposes an attributable response generation strategy, SelfCite, which cites the sentences in the context that support the generated response. SelfCite can operate either during inference or during training. For inference, SelfCite picks the best of N sampled candidates using a newly designed reward/score composed of probability drop and probability hold, computed by ablating the cited context. For training, the reward signal can be leveraged to curate DPO preference data to train the generation model. Experimental results on LongBench-Cite demonstrate the effectiveness of the proposed strategy in terms of citation quality.
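The training-time use of the reward summarized above can be sketched as follows (hypothetical helper names, for illustration only; the actual pipeline trains with SimPO rather than this simplified chosen/rejected pairing):

```python
def curate_preference_pairs(statements, sample_citation, score_citation, n=10):
    """For each statement, sample N citation candidates and pair the
    highest-scoring one (chosen) with the lowest-scoring one (rejected)
    for preference optimization (DPO/SimPO-style training data)."""
    pairs = []
    for stmt in statements:
        scored = [(score_citation(stmt, c), c)
                  for c in (sample_citation(stmt) for _ in range(n))]
        scored.sort(key=lambda t: t[0])
        worst, best = scored[0][1], scored[-1][1]
        if worst != best:  # skip degenerate cases with no preference signal
            pairs.append({"prompt": stmt, "chosen": best, "rejected": worst})
    return pairs
```

The key point is that the pairs are fully self-supervised: both sides come from the model's own samples, ranked by the ablation-based reward.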
Claims And Evidence: Yes, the claim that "using the combination of probability drop and probability hold during inference and training can boost the citation quality" is supported by the experimental results.
Methods And Evaluation Criteria: 1. The proposed method makes sense but is not entirely novel, as the ablation technique was proposed by Cohen-Wang et al. in ContextCite. The authors adopt this technique to produce a reward or score that guides the selection of the best generation or is used in preference optimization.
2. The evaluation mostly focuses on citation quality but overlooks answer accuracy, which is also quite important. From Table 2, it seems the answer accuracy drops substantially on Llama-3, which is concerning.
3. Given that many RL algorithms could be explored to improve generation quality (both citation quality and answer accuracy), it would be beneficial to investigate these algorithms for this problem.
4. In the field of attributable generation, there are also other benchmark datasets, such as the ALCE benchmark (Gao et al., 2023) and ExpertQA (Malaviya et al., 2023). Is the proposed method applicable to these benchmarks?
Gao et al., 2023: "Enabling large language models to generate text with citations".
Malaviya et al., 2023: "Expertqa: Expert-curated questions and attributed answers".
Theoretical Claims: There is no theoretical claim
Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. There are no issues.
Supplementary Material: Yes, I have reviewed all the supplementary materials.
Relation To Broader Scientific Literature: The idea is quite related to the field of attributable generation in general. The methodology is inspired from the work of ContextCite.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The proposed approach using self rewards instead of manual annotations is meaningful. The results in terms of citation quality show the effectiveness of the proposed reward.
- Extensive experiments are conducted over a well-known benchmark on both inference-only and training-based scenarios. Comprehensive analysis on model components, length balancing, data size and iteration is provided.
Weaknesses:
- There is a lack of detailed discussion comparing SelfCite against existing approaches such as Zhang et al., 2024 and Cohen-Wang et al., 2024. How does SelfCite differentiate itself from these studies?
- Given that many existing works have already investigated attributable generation, there is a lack of empirical comparison against these baselines. From the related work, both Huang et al., 2024 and Gao et al., 2023, for example, have proposed methods to tackle this problem, but there are no comparisons with them in the experiments.
- Besides LongBench-Cite, what about other benchmarks such as the ALCE (Gao et al., 2023) datasets?
- One key limitation is the degradation of answer accuracy. While citation quality is crucial for reliable generation, we should not expect to trade answer capability for enhanced citation quality.
Gao et al., 2023: "Enabling large language models to generate text with citations".
Other Comments Or Suggestions: NA
Questions For Authors: Refer to the above sections
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer Cy66 for the constructive comments!
## Ours vs ContextCite
> The proposed method makes sense, but not entirely novel, ablation technique has been proposed by ContextCite
While inspired by ContextCite (CC)’s context ablation (L029, right column), our key contribution differs notably: **SelfCite enables an LLM to directly generate its own citations, while CC does not.**
CC is post-hoc, relying on external linear models trained from scratch for each new example and requiring heavy inference per example (32 to 256 ablated generations) after a response is produced.
In contrast, SelfCite directly teaches LLMs to produce accurate citations in responses with a few tokens in one pass. The context ablation signals become internalized capabilities of LLMs after SimPO; no context ablations needed at inference.
Extra distinctions:
- CC measures citation “necessity” by “prob drop”; SelfCite adds “sufficiency” by “prob hold” to catch missing citations.
- CC’s linear model assumes context sentences independently impact responses; SelfCite directly learns from nonlinear “prob drop/hold” rewards, handling sentence interactions.
CC’s details were in Appendix B due to space. We'll move them to the main text.
## Answer Accuracy Drops?
> Table 2, it seems answer accuracy drops on Llama-3 which is concerning. (avg 71.7 vs 64.6)
Good catch! We carefully checked our experiments and found the high score (avg 71.7) of Llama3 baseline under “Answering with citations” was mistakenly copied from our ContextCite (CC) experiment. We reran the experiments to get a corrected baseline of avg 63.3 in row (4) of the table below. Compared to this correct baseline, both our `+ SFT on CC` (avg 64.6) and `+ SimPO (Ours)` (avg 64.7) in fact show slightly higher accuracy. We apologize for the mistake.
||Long.|Multi.|Hot.|Dur.|Gov.|Avg|
|-|-|-|-|-|-|-|
|**Answering without citations**|
|(1) Greedy (CC: wrong baseline)|67.4|87.9|73.5|67.8|62.1|71.7|
|(2) Sampling|66.0|83.7|65.8|62.8|66.1|68.9|
|**Answering with citations**|
|(3) Greedy|61.2|79.0|68.8|60.0|54.9|64.8|
|(4) Sampling (true baseline)|58.4|75.3|67.3|59.3|56.4|63.3|
|**Table 2 from paper**|
|+ SFT on CC|58.8|83.4|65.8|57.8|57.5|64.6|
|+ SimPO (Ours)|56.8|80.9|65.3|59.5|60.9|64.7|
### Why is CC higher? (avg 71.7)
CC’s higher score (71.7) comes from:
1. CC uses greedy decoding; we follow LongCite [1] to use sampling (top_p=0.7; temp=0.95). See rows (1) vs (2): avg 71.7 vs 68.9
2. CC’s citations are post-hoc, so it’s “answering without citations”, making answer generation easier and more accurate. See rows (1) vs (3): avg 71.7 vs 64.8
The correct baseline to be used is Sampling + Answering with citations in row (4): avg 63.3, which is slightly better than the same results from LongCite ([1], Table 3, row Llama-3.1-8B, column C’s). And it confirms no accuracy drop after our fine-tuning (avg 64.6 & 64.7). We’ll update Table 2 in our paper.
[1] LongCite: https://arxiv.org/pdf/2409.02897
## More RL algorithms?
> …beneficial to investigate these RL algorithms...
Our main goal is to validate a novel "reward" for citation. Prior work ([2], Figures 4 & 5) shows that Best-of-N (BoN) closely approximates the upper-bound scores of RL without introducing training artifacts that would confound comparisons. Following established practices [3, 4], we used BoN as the main evaluation, and further verified it using training-based alignment, SimPO, which achieves the same improvement as BoN. While additional RL algorithms may offer improvements, we believe they wouldn't qualitatively change our observations given the BoN results.
We also acknowledge SelfCite doesn’t aim to boost answer accuracy (but not to decrease it either). Combining it with answer-matching rewards to jointly improve citations & answers is an exciting direction we want to explore in future work!
[2] Controlled Decoding from Language Models, Mudgal et al., ICML 2024
[3] Scaling laws for reward model overoptimization, Gao et al., ICML 2023
[4] Let’s Verify Step by Step, Lightman et al., ICLR 2024
## More Benchmarks/Baselines?
> …other benchmark datasets, such as ALCE...
> …both Huang et al., 2024 and Gao et al., 2023 have proposed … there are no comparisons in experiments.
Following your advice, we evaluated on ALCE and compared to Huang et al. 2024 & Gao et al. 2023. Due to space, please see our rebuttal to Reviewer WRpv. Spoiler: SelfCite outperforms baselines on ALCE even on cross-domain transfer!
## More Discussion?
> lack of detailed discussion on comparison against existing approaches such as Zhang et al., 2024 (LongCite) & Cohen-Wang et al., 2024 (ContextCite).
Comparison with ContextCite is at the top of this reply; will be added to our paper.
Comparison with LongCite is in our Section 5. Briefly, LongCite uses data from proprietary APIs for SFT only. SelfCite performs further alignment steps without external supervision.
We also made a table to contrast key distinctions among prior works; see our rebuttal to Reviewer ci8i.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses to my questions with additional experiments. They have mostly addressed my concern and I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer Cy66 for the thoughtful and constructive feedback and for taking the time to revisit our response and raising the score. We’re very glad to hear that the additional experiments already addressed your concerns!
If there are any remaining concerns, we’d be happy to clarify. Thanks again for your valuable input throughout the process!
--------
p.s. We also wanted to note for Reviewer WRpv, who raised similar questions about adding the baselines of Huang et al. (2024) and Gao et al. (2023) on the ALCE benchmark: since those additional experiments have been added in our rebuttal and already acknowledged by Reviewer Cy66, we hope our responses help resolve Reviewer WRpv's concerns as well! | Summary: This paper proposes a method ("Self-cite") to automatically evaluate cited text using context ablation -- i.e., changing the context and comparing the probability of generating a given sentence. It then proposes to use this signal as a reward to enhance citation quality via two approaches: (1) best-of-N sampling and (2) preference learning. Experiments show that citation F1 can be enhanced for both LongCite-8B and Llama-3.1-8B-Instruct.
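The context-ablation scoring described in this summary can be sketched as follows (a minimal illustration, not the paper's actual implementation; `seq_logprob` is a hypothetical helper returning the LLM's log-probability of a statement given a context, and the simple 1:1 additive combination only approximates the paper's Equation (1)):

```python
def selfcite_reward(seq_logprob, context_sents, cited_ids, question, statement):
    """Score a set of cited sentences via context ablation.

    Probability drop (necessity): how much the statement's log-probability
    falls when the cited sentences are removed from the context.
    Probability hold (sufficiency): how well the log-probability is kept
    when only the cited sentences remain.
    """
    ablated = [s for i, s in enumerate(context_sents) if i not in cited_ids]
    only_cited = [s for i, s in enumerate(context_sents) if i in cited_ids]

    lp_full = seq_logprob(context_sents, question, statement)
    lp_ablated = seq_logprob(ablated, question, statement)
    lp_only = seq_logprob(only_cited, question, statement)

    prob_drop = lp_full - lp_ablated   # large if the citations are necessary
    prob_hold = lp_only - lp_full      # large if the citations are sufficient
    return prob_drop + prob_hold       # 1:1 combination; equals lp_only - lp_ablated
```

Under these definitions, the score is maximized when the cited sentences alone preserve the statement's probability and removing them destroys it.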
Claims And Evidence: Experimental results on LongBench-Cite with two models (LongCite and Llama-3.1-8B) demonstrate that the proposed reward signal does improve citation quality.
Methods And Evaluation Criteria: * **Proposed method**: The proposed method leverages the difference in the probability of generating a given answer when a piece of cited text is included / not included in the context, which is an intuitive way to approximate citation quality.
* **Dataset**: The experiments are mainly conducted on the LongBench-Cite benchmark, which adopts sentence-level citations. However, my understanding is that the proposed method is not limited to sentence-level citation by design and can be adapted to chunk-level citation. Therefore, it would be better to also include experiments on datasets such as ALCE [0].
[0] Enabling Large Language Models to Generate Text with Citations. Gao et al., EMNLP 2023.
Theoretical Claims: N/A
Experimental Designs Or Analyses: * **Baselines chosen**: Experiments are conducted against three baselines: a prompting baseline, ContextCite, and fine-tuned models. I think the ContextCite baseline is not appropriate. More specifically, ContextCite is a method used to attribute a generated response to the context (i.e., **generating** citations), and is thus applied to Llama-3.1-8B-Instruct, while the proposed method is used to **improve** citation quality and is applied to a model that is already fine-tuned to generate citations (either using the LongCite-SFT data or data generated with ContextCite in the experimental setting). Thus, it is a bit unfair to compare these two methods. On the other hand, a previous method [1] has been proposed that leverages NLI models to measure citation precision / recall, which seems to be a more appropriate baseline for the proposed method. While it is true that this method requires an external NLI model whereas the proposed method is "self-supervised", it would be helpful to compare these two approaches (if there is a gap, how big is it?).
[1] Training Language Models to Generate Text with Citations via Fine-grained Rewards. Huang et al., 2024 ACL.
Supplementary Material: I briefly skim through the submitted code.
Relation To Broader Scientific Literature: The proposed method contributes to the line of work that enables language model to generate citation / attribution to its generation, which is an important research direction. The proposed "self-supervised" method is interesting in that it leverages context ablation to improve citation quality.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: * Strength: Overall I think the proposed method is an effective approach to improve citation quality given an answer generated. The `SimPO then BoN` results are pretty strong on both LongCite-8B and Llama-3.1-8B SFT on ContextCite.
* Weakness: While the proposed method is "self-supervised", it would be helpful to compare it with some "supervised" methods -- for instance, an NLI model as the reward, as mentioned before, or SFT with the data from LongCite that is used to create the preference pairs. It is OK if SelfCite does not outperform these methods, but it would be helpful to understand the gap (if there is any).
Other Comments Or Suggestions: N/A
Questions For Authors: * Since the method (especially with BoN) aims to improve citation quality while keeping the answer unchanged, are there cases where the generated answer is not faithful (and thus can't be supported by the context) and how does SelfCite deal with that?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer WRpv for the constructive comments!
## A Better Baseline: SimPO with NLI Rewards (Also for **Reviewer Cy66**)
> …ContextCite baseline is not appropriate. … [1] has been proposed to leverage NLI models to measure citation precision / recall, which seems to be a more appropriate baseline
We agree that ContextCite’s mechanism is much different from SelfCite, so we will change our framing and treat its scores mainly for reference. We follow your advice to adopt NLI rewards from Huang et al. 2024 [1] as baseline. For fair comparison, we reuse our SelfCite SimPO training pipeline (initializing from LongCite-8B + trained with LongCite-45k data), but only change the reward function to be the NLI-based citation recall/precision proposed in Huang et al. 2024 [1]. We ignore the correctness reward in [1] as we don’t have ground truth answers from LongCite-45k.
We compare this method (SimPO w/ NLI Rewards) with ours (SimPO w/ SelfCite) on both LongBench-Cite (table below) and ALCE (table in the next section). Both results show that SimPO w/ NLI Rewards improves citation quality over LongCite (except on MultifieldQA & HotpotQA) but is still consistently outperformed by SelfCite, further verifying SelfCite's effectiveness. We will include this baseline in the final version of our paper.
|Metric: Citation F1|Longbench-Chat|MultifieldQA|HotpotQA|Dureader|GovReport|Avg|
|-|-|-|-|-|-|-|
|LongCite-8B|66.6|79.9|64.1|73.7|84.5|73.8|
|+ SimPO w/ NLI Rewards|69.8|77.4|63.2|77.2|87.5|75.0|
|+ SimPO w/ SelfCite|69.1|81.0|71.5|78.9|89.1|77.9|
## Evaluation on Chunk-level Citation Benchmark ALCE (Also for **Reviewer Cy66**)
> …the proposed method is not limited to sentence-level citation by design and can be adapted to chunk-level citation. Therefore, it would be better to also include experiments on datasets such as ALCE [0].
We follow your advice to test our models on ALCE and show the results in the table below. We found that our baseline LongCite-8B already achieves much better citation recall/precision than the prompting method of Gao et al. (2023). The baseline “SimPO w/ NLI Rewards” (using the rewards from Huang et al., (2024) above) performs slightly better than LongCite-8B. Our method, “SimPO w/ SelfCite”, further brings substantial improvements over both baselines.
The bottom row is the best result from the supervised method of Huang et al. (2024). Its setting differs from the other rows and it was trained on in-distribution data, so its numbers are not directly comparable with the other rows; we include them only for reference. Specifically, the differences are:
1. They train the models only on the “in-distribution” training sets of QA datasets in ALCE, with the exact same chunk-level setting of ALCE, while SelfCite was trained on “out-of-distribution” LongCite-45k data with sentence-level citations.
2. They directly use the same NLI evaluator used in ALCE benchmark (`google/t5_xxl_true_nli_mixture`) to provide rewards for citation recall/precision, essentially optimizing the benchmark scores of ALCE directly.
3. They also do distillation from ChatGPT.
Despite this being a **cross-domain** & **cross-setting** transfer setting, SelfCite still achieves performance much better than the baselines (LongCite-8B & SimPO w/ NLI Rewards), showing its effectiveness. We will include this result in our paper.
||ASQA|||ELI5|||
|-|-|-|-|-|-|-|
||EM Rec.|Cite Rec.|Cite Prec.|Correct|Cite Rec.|Cite Prec.|
|**Gao et al. 2023**|
|Llama-2-13B-chat|34.66|37.48|39.62|12.77|17.13|17.05|
|Llama-3.1-8B-Instruct|42.68|50.64|53.08|13.63|34.66 |32.08|
|**Finetuned on LongCite-45k**|
|LongCite-8B|42.11|62.27|57.00|15.37|30.54|29.15|
|+ SimPO w/ NLI Rewards|41.20|65.65|60.20|15.30|33.06|31.05|
|+ SimPO w/ SelfCite|42.57|**71.68**|**62.05**|15.17|**37.09**|**35.62**|
|**Finetuned on ALCE train set**|
|Huang et al. 2024|40.05|77.83|76.33|11.54|60.86 |60.23|
## How does SelfCite handle unfaithful answers?
> are there cases where the generated answer is not faithful (and thus can't be supported by the context) and how does SelfCite deal with that?
We did sometimes (but not often) find that the answer can be a slight misunderstanding of the cited information, and SelfCite will still cite such text that it is based on. Stepping back, this is a common theme for all methods that post-hoc generate citations, where the prevailing philosophy is that “citations” are for the traceability and verifiability of answers, which enable a user to double-check the answer correctness easily, in case the answer is wrong. We will include this discussion in our final version paper. | Summary: The paper presents SelfCite, a self-supervised method for improving citation accuracy in Large Language Models (LLMs). The key innovation lies in using context ablation to compute a self-rewarding signal based on necessity and sufficiency scores, which are then used to enhance citation quality through best-of-N sampling and preference optimization (SimPO). The method achieves up to 5.3 F1 improvement on the LongBench-Cite benchmark without requiring human annotations.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Strength:
1. The paper introduces a novel self-supervised approach for citation alignment in LLMs, eliminating the need for human annotations. The combination of necessity and sufficiency scores to derive a reward function is well-motivated and provides a principled way to improve citation quality.
2. The method is designed to be lightweight, leveraging a model’s own probability estimates rather than requiring expensive external annotations. This makes it applicable to large-scale citation tasks in real-world settings, such as research assistants or fact-checking systems.
3. Unlike previous methods that rely on human annotations or costly API calls, SelfCite autonomously improves citation quality using a reward function derived from context ablation, making it highly scalable and cost-efficient.
Weakness:
1. The paper should more explicitly differentiate SelfCite from previous work, especially in how it improves over ContextCite and other contributive context attribution methods. A comparison table summarizing key differences could be helpful.
2. The necessity and sufficiency scores are well-motivated, but additional theoretical justification or a toy example demonstrating their individual impact could strengthen the argument.
3. Since best-of-N sampling increases inference-time costs, a discussion on its efficiency trade-offs and potential ways to reduce overhead (e.g., pruning low-quality candidates early) would be beneficial.
4. The effect of hyperparameters like N in best-of-N sampling and the choice of probability thresholds for necessity/sufficiency scores should be explored more systematically.
Theoretical Claims: Same strengths and weaknesses as listed under "Methods And Evaluation Criteria" above.
Experimental Designs Or Analyses: Same strengths and weaknesses as listed above.
Supplementary Material: Same strengths and weaknesses as listed above.
Relation To Broader Scientific Literature: Same strengths and weaknesses as listed above.
Essential References Not Discussed: Same strengths and weaknesses as listed above.
Other Strengths And Weaknesses: Same strengths and weaknesses as listed above.
Other Comments Or Suggestions: Same strengths and weaknesses as listed above.
Questions For Authors: Same strengths and weaknesses as listed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer ci8i for the constructive comments!
## A Comparison Table (Also for **Reviewer Cy66**)
> The paper should more explicitly differentiate SelfCite from previous work… A comparison table summarizing key differences could be helpful.
Following your suggestion, we made a table highlighting the key differences from previous work. We will include it in the final version of our paper.
|Method|Sentence-level citations?|One pass generation?|Preference optimization?|Handle 128K long-context?|External supervision?|
|-|-|-|-|-|-|
|ACLE|❌(chunk-level)|✅|❌(prompting)|❌(8K)|2-shot prompting|
|Huang et al. 2024|❌(chunk-level)|✅|✅|❌(8K)|NLI + ground truth|
|ContextCite|✅|❌(at least 32 calls)|❌(not generative)|✅|N/A|
|LongCite|✅|✅|❌(SFT only)|✅|SFT data|
|SelfCite (Ours)|✅|✅|✅|✅|N/A|
## A Toy Example of Necessity and Sufficiency Scores
> The necessity and sufficiency scores are well-motivated, but additional theoretical justification or a toy example demonstrating their individual impact could strengthen the argument.
The necessity and sufficiency scores are designed to match a common human preference: a citation has to be both necessary and sufficient, which also corresponds to the metrics of citation precision (necessity) and recall (sufficiency) commonly used in evaluation benchmarks. Here we follow your advice and show a simple toy example demonstrating their individual impacts:
**Document:**
[1] Alice traveled to France in 2020.
[2] Bob visited the famous National Museum in Tokyo, Japan in 2019.
[3] Chloe visited the Louvre Museum in Paris, France in 2018.
…
**Query:**
"Which famous museum could Alice have visited?"
**Response:**
"Alice could have visited the Louvre Museum."
**Citation Candidates:**
- **[1,2] (Incorrect)**:
- *Necessity*: Probability drops (since removing [1] removes the fact that Alice traveled to France). (✅ high necessity due to [1])
- *Sufficiency*: Probability doesn’t hold. [1,2] alone cannot fully support “visited the Louvre Museum” since [2] is irrelevant and [3] is missing. (❌ low sufficiency)
- **[1,3] (Correct)**:
- *Necessity*: Probability drops more; removing [1,3] loses essential details (“Alice traveled to France,” “Chloe visited the Louvre”). (✅ high necessity)
- *Sufficiency*: Probability holds. [1,3] fully supports the response. (✅ high sufficiency)
This toy example clearly shows the individual contributions of necessity and sufficiency scores.
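To make the individual impacts concrete, here is a minimal sketch of how such a reward could be computed from context ablation. The `lp` function and the numbers in `table` are purely illustrative stand-ins mirroring the toy example, not actual model probabilities:

```python
# Toy computation of the necessity + sufficiency reward via context ablation.
# `lp(context, response)` is a hypothetical stand-in for a model's log-probability;
# the values in `table` are made up purely to mirror the toy example above.

def citation_reward(lp, full_ctx, cited, response):
    ablated = [s for s in full_ctx if s not in cited]  # context with cited sentences removed
    p_full = lp(full_ctx, response)
    prob_drop = p_full - lp(ablated, response)         # necessity: a big drop is good
    prob_hold = lp(cited, response) - p_full           # sufficiency: a small loss is good
    return prob_drop + prob_hold

table = {
    "1,2,3": -1.0,  # full context
    "3": -4.0,      # [1,2] removed
    "2": -9.0,      # [1,3] removed
    "1,2": -6.0,    # only [1,2] kept: insufficient
    "1,3": -1.2,    # only [1,3] kept: sufficient
}
lp = lambda ctx, resp: table[",".join(ctx)]

full = ["1", "2", "3"]
r_12 = citation_reward(lp, full, ["1", "2"], "Alice could have visited the Louvre Museum.")
r_13 = citation_reward(lp, full, ["1", "3"], "Alice could have visited the Louvre Museum.")
assert r_13 > r_12  # the correct citation [1,3] receives the higher reward
```

With these illustrative numbers, [1,2] scores a large necessity term but loses heavily on sufficiency, while [1,3] scores well on both, so the combined reward ranks the correct citation first.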
## Discussion on Efficiency
> Since best-of-N sampling increases inference-time costs, a discussion on its efficiency trade-offs and potential ways to reduce overhead (e.g., pruning low-quality candidates early) would be beneficial.
We calculate the latency per example on the LongBench-Cite dataset. On average, direct decoding from the LongCite-8B and SelfCite SimPO models has similar latency. When using SelfCite BoN, the sampling and reranking steps in total take roughly 7× longer than direct decoding. All experiments are done on 8×A100 GPUs with batch size 1 and model parallelism.
|Method|Avg latency (s)|
|-|-|
|LongCite-8B|24.3|
|SelfCite BoN sampling|149.0|
|SelfCite BoN reranking|34.0|
|SelfCite SimPO model|26.2|
Also, because we only sample the citation sequence, not the whole response, the number of generated tokens is very limited, usually 5–10 tokens. Pruning low-quality candidates early may not help here, as such strategies mainly save time when generating long responses.
### Latency of BoN is not a major concern
In fact, we are not concerned about the longer latency or extra inference cost of BoN, because the SelfCite SimPO model achieves the same performance as BoN in one-pass generation, without any additional inference cost. Users who need the best efficiency can simply use our SimPO model directly, rather than trying to optimize BoN.
## Exploring Hyperparameters
> The effect of hyperparameters like N in best-of-N sampling and the choice of probability thresholds for necessity/sufficiency scores should be explored more systematically.
There are no “probability thresholds” for necessity/sufficiency scores in SelfCite. We use the raw probability changes (probability drop and probability hold) during context ablation directly as the reward. There is no need to tune any thresholds in our reward design.
For N in best-of-N sampling, as we mentioned in Line 217 (left column), after deduplicating repeated citation candidates, on average only 4.8 candidates (std=3.2) remain per statement. This is because we only sample within the citation sequences and keep the statements in the response unchanged. When generating citations, usually only a limited number of relevant sentences can support each statement, resulting in a limited set of possible citations. Given this low diversity of citation candidates, increasing N beyond 10 would have very limited impact on the BoN results.
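A minimal sketch of best-of-N citation reranking with deduplication. The sampler and the reward values below are hypothetical stand-ins for sampling citation sequences from the model and scoring them with the context-ablation reward:

```python
from itertools import cycle

# Best-of-N: sample N citation sequences, deduplicate them, keep the highest-reward one.
def best_of_n(sample_citation, reward, n=10):
    candidates = {sample_citation() for _ in range(n)}  # set() deduplicates repeats
    return max(candidates, key=reward)

pool = [("1", "3"), ("1", "2"), ("2",), ("1", "3")]  # low diversity, as noted above
sampler = cycle(pool).__next__                       # deterministic stand-in sampler
rewards = {("1", "3"): 7.8, ("1", "2"): -2.0, ("2",): -5.0}
best = best_of_n(sampler, rewards.get, n=10)
assert best == ("1", "3")
```

Because the statements are fixed and only the short citation sequences are sampled, the deduplicated candidate set stays small, which is why large N yields diminishing returns.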
Toward Efficient Kernel-Based Solvers for Nonlinear PDEs | Accept (poster) | Summary: This paper presents a fair contribution to kernel-based PDE solvers for nonlinear PDEs, improving upon prior methods by eliminating the need for differential operator-embedded kernels. The proposed algorithm enhances computational efficiency by leveraging Kronecker product properties and avoiding complex Gram matrices. The paper also provides convergence proofs and error rate analysis under regularity assumptions. The proposed method is evaluated on Burgers’, nonlinear elliptic, Eikonal, and Allen-Cahn equations, showing its comparable accuracy and improved scalability compared to the baselines.
## update after rebuttal
Thank you for addressing the concerns. I am keeping my original score.
Claims And Evidence: Claims are largely supported by theoretical derivations and experiments.
Methods And Evaluation Criteria: - Strengths: The methods and evaluation criteria are in general well-aligned with PDE-solving tasks. The benchmarks involve multiple nonlinear PDE formulations, and the baselines include DAKS, PINNs, and legacy finite difference methods too. The paper also offers scalability analysis, which is appreciated.
- Weakness: The method assumes a structured grid, which limits its generalization to unstructured meshes and other complex discretization routines.
Theoretical Claims: This paper provides convergence analysis, proving that the method maintains error bounds similar to prior kernel PDE solvers despite using a smaller model space. Other claims are also briefly reviewed and look good, but not checked in detail.
Experimental Designs Or Analyses: - Strengths: The experimental setup is in general well-designed. It covers multiple PDE formulations, varying levels of difficulty (small vs. large collocation points), and different kernel settings.
- Weaknesses: The hyperparameter selection process (e.g., kernel length scales) is not discussed in detail. I'm also curious how the runtime of the Kronecker-based method compares with the naive full-matrix method.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper is built upon kernel-based PDE solvers and relates to Gaussian Process models for PDE solving. The Kronecker product approach also aligns with structured kernel methods. There is sufficient technical contribution.
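As an aside on the structured-kernel connection: the identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$ is what makes Kronecker-factored Gram matrices cheap to apply. A generic NumPy illustration of this "vec trick" (not the paper's actual implementation):

```python
import numpy as np

# Illustration of the Kronecker "vec trick": (A ⊗ B) vec(X) = vec(B X Aᵀ),
# which applies a Kronecker-structured matrix without ever forming it.
rng = np.random.default_rng(0)
m, n = 40, 30
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))  # vec() stacks columns, so X is n x m

naive = np.kron(A, B) @ X.flatten(order="F")  # forms the full (mn x mn) matrix
fast = (B @ X @ A.T).flatten(order="F")       # same result, no large matrix
assert np.allclose(naive, fast)
```

The naive product costs O(m²n²) in time and memory, while the factored form costs only O(mn(m+n)), which is the kind of saving that makes dense collocation grids feasible.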
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: See the "Weaknesses" items above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable and constructive comments.
>The method assumes a structured grid, which limits its generalization to unstructured meshes and other complex discretization routines.
R1: We appreciate your insightful comments. Indeed, we agree that our method relies on a structured grid and does not directly apply to more complex discretizations. A simple yet effective workaround — which we have validated (see Section 6 and Appendix Section C.3) — is to employ a virtual grid that covers the irregular domain.
In future work, we plan to explore two directions to better generalize our method. First, we will investigate domain decomposition and hierarchical grid construction to adaptively adjust the grid resolution across local regions while preserving computational efficiency. Second, we aim to learn a mapping from unstructured mesh points to a latent grid, where our efficient kernel solvers can be applied. This approach is inspired by recent work such as GeoFNO[1] in the neural operator literature.
We will include this discussion in the paper.
[1] Li, Z., Huang, D. Z., Liu, B., & Anandkumar, A. (2023). Fourier neural operator with learned deformations for PDEs on general geometries. Journal of Machine Learning Research, 24(388), 1-26.
>The hyperparameter selection process (e.g., kernel length scales) is not discussed in detail.
R2: Thank you for your great suggestion. Given the wide range of hyperparameters --- including $\alpha$ and $\beta$ (see our response R3 to Reviewer fQAE), as well as kernel length-scales and nugget values (see Lines 284–297 right column) --- performing a full grid search would be prohibitively expensive. Therefore, we adopt a hybrid strategy. We begin with a random search to identify a promising set of hyperparameters. Then, we perform a grid search over $\alpha$ and $\beta$, keeping the other parameters fixed. Once $\alpha$ and $\beta$ are selected, we fix them and conduct a grid search over the remaining hyperparameters, including the nugget and length-scales. We will include a detailed description of this procedure in the paper.
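The staged procedure described above can be sketched as follows; the hyperparameter names, ranges, and objective are illustrative assumptions, not the authors' actual search space:

```python
import itertools
import random

# Hybrid search: random search first, then a grid over (alpha, beta) with the rest
# fixed, then a grid over the remaining hyperparameters with (alpha, beta) fixed.
def staged_search(objective, space, stage1_keys=("alpha", "beta"), n_random=20, seed=0):
    rng = random.Random(seed)
    # Stage 0: random search over the full space for a promising starting point.
    best = min(
        ({k: rng.choice(v) for k, v in space.items()} for _ in range(n_random)),
        key=objective,
    )
    # Stage 1: grid search over (alpha, beta), other hyperparameters fixed.
    for vals in itertools.product(*(space[k] for k in stage1_keys)):
        cand = {**best, **dict(zip(stage1_keys, vals))}
        if objective(cand) < objective(best):
            best = cand
    # Stage 2: grid search over the rest (e.g., length-scale, nugget), (alpha, beta) fixed.
    rest = [k for k in space if k not in stage1_keys]
    for vals in itertools.product(*(space[k] for k in rest)):
        cand = {**best, **dict(zip(rest, vals))}
        if objective(cand) < objective(best):
            best = cand
    return best

space = {"alpha": [0.1, 1.0], "beta": [0.1, 1.0], "lengthscale": [0.5, 1.0], "nugget": [1e-6, 1e-3]}
obj = lambda h: (h["alpha"] - 1.0) ** 2 + (h["beta"] - 0.1) ** 2 + h["lengthscale"] + h["nugget"]
best = staged_search(obj, space)
assert best["alpha"] == 1.0 and best["beta"] == 0.1
```

The point of the staging is cost: two small grids plus a random warm start replace one full grid over the product of all hyperparameter ranges.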
>I'm also curious how the runtime of the Kronecker based method compares with the naive full matrix method.
R3: Great question. Below, we provide the runtime of our model using the naive full matrix computation (i.e., without exploiting the Kronecker product structure). As shown, the per-iteration runtime with naive matrix operations is consistently around **100× slower** than our method, which leverages Kronecker product properties. Furthermore, when the number of collocation points increases to 22,500 for the Allen–Cahn equation and 43,200 for the Burgers' equation, the naive approach exceeds available memory, resulting in out-of-memory (OOM) errors — rendering it infeasible to run. We will include these results in our paper.
| Allen-Cahn ($a=15$) | 2400 | 4800 | 6400 | 8100 | 22500 |
|-------------------------------------------|------------|------------|-----------|---------|-------|
| **Naive computation** (per-iter) | 1.1E-02 | 4.3E-02 | 7.2E-02 | 1.1E-01 | OOM |
| SKS (per-iter) | 3.6E-4 | 9.1E-4 | 1.2E-3 | 1.8E-3 | 5.9E-3 |
| DAKS (per-iter) | 2.1 | 10.5 | OOM | OOM | OOM |
| PINN (per-iter) | 5.6E-2 | 1E-1 | 1.3E-1 | 1.5E-1 | 4.3E-1 |
| Burgers' $\nu=0.001$ | 2400 | 4800 | 43200 |
|------------------------------------------|------------|------------|-------------|
| **Naive computation** (per-iter) | 1.4E-02 | 5.4E-02 | OOM |
| SKS (per-iter) | 4.6E-04 | 9.8E-04 | 6.8E-03 |
| DAKS (per-iter) | 7.43 | 38.5 | OOM |
| PINN (per-iter) | 2.7E-01 | 5.2E-01 | 4.1E-01 | | Summary: This paper introduces a novel kernel learning framework for efficiently solving nonlinear partial differential equations (PDEs). Unlike existing methods that embed differential operators within kernels, this approach eliminates these operators from the kernel, using standard kernel interpolation to model the solution. By differentiating the interpolant, the method avoids the need for complex Gram matrix construction, which simplifies implementation and enables efficient computation. The framework leverages Kronecker product structures for scalable computation, allowing it to handle large numbers of collocation points. The authors provide a rigorous convergence analysis and demonstrate the method's efficacy on several benchmark PDEs, showing competitive performance and scalability, particularly in challenging scenarios requiring dense grids.
Claims And Evidence: The main claim is that the newly proposed method has better computation efficiency and scalability, which are well supported by the experiments.
Methods And Evaluation Criteria: The kernel method and PDE solution accuracy criteria are suitable.
Theoretical Claims: I roughly checked the correctness of theoretical claims.
Experimental Designs Or Analyses: The description and analyses of experiments are valid.
Supplementary Material: I reviewed section C about additional experiment results.
Relation To Broader Scientific Literature: Kernel Method, Numerical method of PDEs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The main strength of the proposed method is the acceleration of kernel-based PDE solvers.
Other Comments Or Suggestions: Readability can be improved by adding: 1)intuitive and informal description of math derivations in Sections 2 and 3, 2) bullets in Section 5.
Questions For Authors: 1. Can you provide more details on the virtual grid on an irregular domain, which is important for the application of your method?
2. Can you provide the running time comparison with the finite difference method?
3. In future work, can the proposed method be extended to the neural operator setting for further acceleration?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable and constructive comments.
>C1: Can you provide more details on the virtual grid on an irregular domain, which is important for the application of your method?
R1: Great question. Our virtual grid is chosen as the smallest rectangular grid that fully covers the irregular domain. Specifically, for the nonlinear elliptic PDE (see Appendix C.3 and Figure 4), the domain is a circle inscribed within $[0, 1]\times [0, 1]$, so we set the virtual grid on $[0, 1]\times [0, 1]$ with a grid size of **$50 \times 50$**. For the Allen-Cahn equation (see Appendix Figure 4), the domain is a triangle with vertices at (0, 0), (1, 0), and (0.5, 1). To fully cover this domain, we again place the virtual grid on $[0, 1] \times [0, 1]$, using a grid size of $50 \times 50$. We will include these details in our paper.
> C2: Can you provide the running time comparison with the finite difference method?
R2: Great suggestion. Below, we provide the time comparison results in seconds (for the Allen-Cahn equation with $a=15$).
| | 2400 | 4800 | 6400 | 8100 | 22500 |
|-----------------------------|------------|------------|------------|---------|---------|
| **FD (per-iter)** | 1.6E-02 | 1.4E-02 | 1.4E-02 | 1.4E-02 | 1.5E-02 |
| SKS (per-iter) | 3.6E-4 | 9.1E-4 | 1.2E-3 | 1.8E-3 | 5.9E-3 |
| DAKS (per-iter) | 2.1 | 10.5 | N/A | N/A | N/A |
| PINN (per-iter) | 5.6E-2 | 1E-1 | 1.3E-1 | 1.5E-1 | 4.3E-1 |
| | | | | | |
| **FD (total)** | 0.13 | 0.14 | 0.16 | 0.18 | 1.15 |
| SKS (total) | 27.1 | 99.56 | 116.8 | 132.57 | 474.34 |
| DAKS (total) | 16.44 | 84.18 | N/A | N/A | N/A |
| PINN (total) | 2821 | 5112 | 6287 | 7614 | 21375 |
As we can see, the finite difference (FD) method is computationally much more costly per iteration than our method (SKS). This is likely due to the expense of computing the inverse Jacobian during the root-finding procedure. However, FD typically converges within just a few dozen iterations — substantially faster than the stochastic optimization used in both SKS and PINN — leading to a much lower total runtime. We will include this comparison in the paper.
> C3: In future work, can the proposed method be extended to the neural operator setting for further acceleration?
R3: We appreciate the reviewer for bringing up this excellent idea — we completely agree. In fact, we have already begun exploring this direction in our ongoing work. One line of effort involves replacing the trunk network in the Deep Operator Network (DeepONet) with Gaussian process bases. This enables us to adopt a similar computational strategy to accelerate both training and prediction, while also providing a natural framework for uncertainty quantification. We are also extending our approach to deep kernel-based operator learning, where our method can further accelerate function transformations in the latent channel space. We look forward to continuing our exploration in this promising direction. | Summary: The paper proposes a new twist to a kernel-based solver for PDEs. It builds on previous work by relaxing the constraints associated to the PDE to facilitate optimization. By placing the collocation points on a grid and using a decomposable kernel, the inversion of a large matrix is broken down into many inversions of small matrices, which improves the performance. A convergence analysis is provided, showing that this solver enjoys similar properties as previous work i.e. the Sobolev norm of the residuals goes to zeros as the grid uniformly covers the input space. Numerical experiments highlight the benefits of the approach and compare to competitors (PINNs and the former kernel-based solver).
## update after rebuttal
The authors clarified how the constraint was handled using Lagrangian duality. This, together with the rebuttals to the other reviews, convinced me to increase my score.
Claims And Evidence: Yes, the claims are valid and supported by evidence, both theoretical and empirical. The performances are not uniformly superior to competitors but I do not think it is an issue. The wording in the conclusion ("encouraging performances") might be closer to reality than that in the abstract ("demonstrate the advantage").
Methods And Evaluation Criteria: Yes, the methods make sense. They seem to be standard in this area of research and have been investigated in other papers.
Theoretical Claims: To the best of my knowedge, the theoretical claims are valid.
Experimental Designs Or Analyses: I have no issue with the experimental design.
Supplementary Material: I skimmed through it but did not read it in detail.
Relation To Broader Scientific Literature: The paper proposes a novel way to encode the constraints for solving PDEs, improving previous work. The convergence analysis is similar to what was done for previous work as well. Overall, the paper is very close to their main source of inspiration.
Essential References Not Discussed: I would have liked to see discussed [Learning Partial Differential Equations in Reproducing Kernel Hilbert Spaces, George Stepaniants, JMLR 2023] and references therein. Overall I think that the overview in the introduction lacks references about related approaches beyond the scope of [Chen et al. 2021].
Other Strengths And Weaknesses: +: The paper is relevant to the ICML community.
-: The introduction lacks references. Only the directly relevant work is cited (2 papers), which makes it hard to situate the paper at first.
-: There are no standard deviations on the performance metrics
Other Comments Or Suggestions: I think that there is a problem with the formulation of (7). As it is, $\epsilon$ has no impact on the problem, thus I hardly see how it can be equivalent to (6). The regularization term should grow as $(P(u)(x) - f(x))^2$ moves away from $[0, \epsilon]$. A typical way to do this is to square this quantity again.
Questions For Authors: The proposed formulation for handling the constraints relies on two additional regularization parameters. While you demonstrate the equivalence between formulations (6) and (7), this only holds for the perfect regularization strength, which we do not know. Could you include a discussion about the tuning of these parameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your valuable and insightful comments.
>The introduction lacks references. Only the direct relevant work is cited (2 papers)
R1: Thank you for the helpful suggestion. We will include additional references in the introduction to provide broader context and better situate our work within the existing literature from the outset.
>no standard deviations
R2: Great comment. In our experiments, we observed that despite using stochastic optimization (ADAM), our method (SKS) consistently converges to the same solution, and so does PINN. We ran multiple trials on each PDE and found that the standard deviation of the error is negligible. For example, below we show the standard deviation of the $L^2$ error for our method when solving Burgers' equation ($\nu = 0.01$) and the nonlinear elliptic equation. Given the extremely low variance, we chose to omit standard deviation values from the main results (note that the finite difference method is deterministic and does not exhibit variance). We will clarify this point in our paper.
Burgers($\nu = 0.01$):
| 600 | 1200 | 2400 | 4800 |
|---------------------|----------------------|----------------------|-----------------|
| 1.44E-02 $\pm$ 1.52E-07 | 5.40E-03 $\pm$ 1.39E-07 | 1.12E-03 $\pm$ 7.76E-08 | 3.21E-04 $\pm$ 0.0 |
Nonlinear elliptic:
| 300 | 600 | 1200 | 2400 |
|-----------------|-----------------|-----------------|-----------------|
| 1.26E-02 $\pm$ 0.0 | 6.93E-05 $\pm$ 0.0 | 6.80E-06 $\pm$ 0.0 | 1.83E-06 $\pm$ 0.0 |
>I think that there is a problem with the formulation of (7). As it is, $\epsilon$ has no impact on the problem thus I hardly see how it can be equivalent to (6)... A typical way to do this is to square this quantity again.
R3: Thank you for your insightful question. Here is our clarification.
First, **$\epsilon$ plays an important theoretical role** (though in practice, it can often be set to zero). Our convergence proof and rate estimate (see Lemma 4.2 and Appendix Section A) are established by systematically varying $\epsilon$. Specifically, we set $\epsilon = C_0 h^{2\tau}$ and vary $\epsilon$ by adjusting the fill distance $h$. Each choice of $\epsilon$ defines a distinct instance of problem (6), with its own solution. We prove that as $\epsilon \to 0$ (i.e., $h \to 0$), the solution of (6) converges to the ground-truth solution of the PDE (see Appendix Section A for details).
Second, to show that solving (7) (with appropriately chosen $\alpha$ and $\beta$) is equivalent to solving (6), we use the **Lagrangian formulation** of (6). It is a standard result that constrained optimization problems can be reformulated as **mini-max problems over the Lagrangian**. The Lagrangian is a *linear* combination of the objective and the constraints — it does **not** involve squaring the constraints. From this perspective, it is straightforward to show that when $\alpha$ and $\beta$ are selected as the optimal dual variables in the mini-max problem, solving (7) yields the optimal $u$, and thus is equivalent to solving (6). This equivalence holds for **any** $\epsilon$, not just in the limit as $\epsilon \to 0$. The full proof is provided in Appendix Section B.
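Schematically, the argument can be written with generic placeholders (an illustrative sketch, not the paper's exact notation: $J(u)$ stands for the objective of (6) and $r_i(u) \le \epsilon$ for its relaxed constraints):

```latex
% Schematic Lagrangian duality argument with placeholder objective J and residuals r_i
\min_{u}\; J(u) \quad \text{s.t.}\quad r_i(u) \le \epsilon \;\; \forall i
\quad\Longleftrightarrow\quad
\min_{u}\;\max_{\alpha_i \ge 0}\; J(u) + \sum_i \alpha_i \bigl(r_i(u) - \epsilon\bigr).
```

Fixing the multipliers $\alpha_i$ at their optimal dual values, the inner problem over $u$ becomes $\min_u J(u) + \sum_i \alpha_i\, r_i(u)$ (the $-\alpha_i \epsilon$ terms are constants in $u$), i.e., a regularized problem of the penalized form, linear in the constraints with no additional squaring.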
Finally, we agree that an alternative approach, minimizing the objective plus the squared constraints (i.e., a penalty method), is also a good and viable idea. However, this method is primarily justified for equality constraints (i.e., $\epsilon = 0$); while it is possible to design penalty terms for inequality constraints, they typically introduce non-differentiable terms. Moreover, equivalence to the original constrained problem is only guaranteed as **the penalty weights tend to infinity**. Therefore, we believe our formulation in (7) offers stronger theoretical guarantees and greater practical flexibility when approximating solutions of (6). We will include a more detailed discussion of this alternative approach in our paper.
>Could you include a discussion about the tuning of two regularization parameters (7)?
R4: Thank you for the great suggestion — we completely agree. In our experiments, we selected $\alpha$ and $\beta$ in Equation (7) from a wide range:
$[10^{-2}, 10^{-1}, 1, 10, 10^2, 10^3, 10^4,10^5, 10^6, 10^7, 10^8, 10^{10}, 10^{12}, 10^{14}, 10^{15}, 10^{20}]$,
jointly with other hyperparameters, including the kernel length scales and nugget terms (see their ranges in Lines 284-297, right column). To efficiently tune hyperparameters, we first performed a random search to identify a promising group of hyperparameters. We then fixed all other hyperparameters and conducted a grid search over $\alpha$ and $\beta$. Finally, we fixed $\alpha$ and $\beta$ and performed a grid search over the remaining hyperparameters. We will include this discussion in the paper.
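For concreteness, the three-stage protocol described above could be sketched as follows (illustrative only: `evaluate`, the grid names, and `n_random` are hypothetical stand-ins, not the authors' actual code):

```python
import itertools
import random

def tune(evaluate, alpha_grid, beta_grid, other_grids, n_random=20, seed=0):
    """Random search over all hyperparameters, then alternating grid
    searches over (alpha, beta) and the remaining hyperparameters.
    `evaluate` is a stand-in for training the solver and measuring error."""
    rng = random.Random(seed)
    grids = {"alpha": alpha_grid, "beta": beta_grid, **other_grids}
    # Stage 1: random search to find a promising starting configuration.
    candidates = [{k: rng.choice(v) for k, v in grids.items()} for _ in range(n_random)]
    best = min(candidates, key=evaluate)
    # Stage 2: grid search over (alpha, beta) with the rest fixed.
    for a, b in itertools.product(alpha_grid, beta_grid):
        cand = {**best, "alpha": a, "beta": b}
        if evaluate(cand) < evaluate(best):
            best = cand
    # Stage 3: grid search over the remaining hyperparameters.
    keys = list(other_grids)
    for combo in itertools.product(*(other_grids[k] for k in keys)):
        cand = {**best, **dict(zip(keys, combo))}
        if evaluate(cand) < evaluate(best):
            best = cand
    return best
```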
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response.
Concerning the formulation of (7): thanks for the clear explanation, my comment was mistaken.
After reading the other reviews and their rebuttals, I do not think that there are major issues with the paper. It is sound work, and as a reader I would be happy to see it among the ICML papers this year. I moved my score up a bit to reflect that.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We appreciate your support and positive feedback! | Summary: The authors propose an assymetric RBF collocation method for solving general PDEs from a Gaussian process/RKHS point of view. They parametrize the solution $u(x;\eta)$ as a (Gaussian RBF) kernel interpolant given function values on collocation points, then differentiate this representation to obtain derivatives values at collocation points, which they then optimize to approximately satisfy the PDE on collocation points using a least squares formulation.
They then take advantage of the fact that, when the kernel $k$ factorizes across the dimensions of the input space, both $\eta \to u(x;\eta)$ and $\eta \to Lu(x;\eta)$ admit efficient evaluation formulas via Kronecker-factored matrix-vector products.
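The Kronecker-factored matrix-vector product referred to here is the standard "vec trick"; a minimal sketch of the generic linear-algebra identity (not the paper's implementation):

```python
import numpy as np

def kron_mv(K1, K2, x):
    """Compute (K1 kron K2) @ x without forming the Kronecker product.

    With a kernel that factorizes across two input dimensions, the Gram
    matrix on an n1 x n2 grid is K1 kron K2, and the identity
    (K1 kron K2) vec(X) = vec(K1 @ X @ K2.T)  (row-major vec)
    reduces an O((n1*n2)^2) product to two small matrix multiplications.
    """
    n1, n2 = K1.shape[0], K2.shape[0]
    X = x.reshape(n1, n2)  # row-major un-vectorization
    return (K1 @ X @ K2.T).reshape(-1)
```

The same trick extends to more than two factor dimensions by applying one small multiplication per axis, which is what makes grid collocation with decomposable kernels scale.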
With the parametrization and fast evaluation in hand, their algorithm consists of minimizing a regularized squared residual of PDE mismatch on interior collocation points and boundary mismatch using stochastic optimization (ADAM).
Claims And Evidence: The claims made in the paper seem well substantiated by evidence. I am only unsure as to why SKS would outperform DAKS: the theory for DAKS seems stronger, but a good implementation is difficult, and especially with the RBF kernel, matrices can get very ill-conditioned, making it hard to disambiguate discretization error from numerical roundoff error.
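This conditioning concern is easy to reproduce in a toy example (illustrative only, not taken from the paper): the condition number of a Gaussian RBF Gram matrix on equispaced points blows up as the lengthscale grows relative to the point spacing.

```python
import numpy as np

def rbf_cond(n=20, lengthscale=1.0):
    """Condition number of a Gaussian RBF Gram matrix on n equispaced
    points in [0, 1]. Large lengthscales make the matrix nearly rank-one
    and hence severely ill-conditioned."""
    x = np.linspace(0.0, 1.0, n)[:, None]
    K = np.exp(-((x - x.T) ** 2) / (2.0 * lengthscale ** 2))
    return np.linalg.cond(K)
```

With `n = 20`, a lengthscale on the order of the grid spacing gives a modest condition number, while a lengthscale of 2.0 pushes it far beyond 1e6, at which point roundoff can dominate discretization error.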
Methods And Evaluation Criteria: The problems and evaluation criteria seem good. I am a little bit skeptical of ADAM being the best solver for the problem, compared to a quasi-Newton or Newton-Krylov method, but their experiments corroborate this choice.
Theoretical Claims: The proofs look sound. I read the first part of the proof of Lemma 4.2 carefully, and it looks all correct except for a minor typo in equation (22), which seems like it should be $u_m^*$ on the left.
Experimental Designs Or Analyses: The experiments seem valid
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The main contributions are as follows:
1. Convergence analysis of least squares methods for asymmetric collocation: a bound on the attainable squared residual norm, and convergence rates of such least squares solutions to a true solution of the PDE.
2. Taking advantage of the Kronecker factored structure of the kernel matrices when the kernels factorize to obtain fast evaluation formulas and efficient optimization
3. Interpretation of RBF collocation methods as optimization over function values that are interpolated rather than coefficients of an expansion.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: The strengths include a nice analysis of feasibility for the least squares problem, and a simple algorithm for general PDEs.
The sensitivity to hyperparameters seems to be a weakness (lengthscale and nugget).
Other Comments Or Suggestions: The term "Gaussian-Newton" seems odd to refer to "Gauss-Newton" methods.
The implementation on irregular domains is a bit unclear. Throughout the rest of the paper, it is assumed that the basis points are the same as the collocation points, but this seems hard to do with irregular domains (especially for the boundary). This should be clarified.
The authors should include an error plot as a function of the mesh norm $h$ to show the order of convergence of the algorithm (using Gaussian RBF should theoretically give spectral convergence?).
The authors should clarify assumption C3 in relation to their choice of Gaussian RBF kernel. Is it true that the RKHS associated to a Gaussian RBF kernel can be continuously embedded in $H^k$ for every $k$?
Questions For Authors: What was the batch size and hyperparameters used in the stochastic optimization?
What did the results with attempting to apply LBFGS to improve the solutions look like? In the PINN literature, it seems that it often significantly improves the solution after running ADAM to warm up.
Did the gradient norms of the objective go to zero at the end (did you reach a minimum, or stall out due to ill conditioning)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | |
Lightspeed Geometric Dataset Distance via Sliced Optimal Transport | Accept (poster) | Summary: The paper proposes a sliced optimal transport-based dataset distance measure that uses moment transformation projection. The method improves upon a previously proposed optimal transport based dataset distance measure and makes the dataset distance computation more efficient. The paper presents a theoretical analysis of this distance and discusses various properties of the distance measure along with the computational complexity. The paper then presents evaluation of this distance on tasks such as transfer learning and data augmentation.
## Update after the rebuttal
I thank the authors for providing detailed answers to my questions. I have updated my rating of the work (2 --> 3).
Claims And Evidence: The theoretical claims are well supported and convincing while the empirical evaluation seems rather limited.
Methods And Evaluation Criteria: The evaluation methodology makes sense but is rather small scale, making it unclear whether the approach is practically effective.
Theoretical Claims: The claims and proofs look good but I did not check the details of the proofs though.
Experimental Designs Or Analyses: The applications of transfer learning and data augmentation are relevant.
Supplementary Material: Looked at all aspects of the supp material.
Relation To Broader Scientific Literature: The paper builds on the paper titled "Geometric dataset distances via OT (OTDD)" which had initially proposed a measure to calculate dataset distances via OT. The paper proposes a sliced OT based method which improves the efficiency of the previous approach. The paper follows the same experimental protocol used in the OTDD paper and shows that their distance measure is correlated with OTDD while being more efficient.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. The problem of measuring dataset distance is very relevant.
2. The proposal to use OT-based method and improving the efficiency over existing OTDD work is a good contribution.
3. Theoretical analysis of the proposed metric is sound.
Weaknesses:
1. The empirical evaluation of the method is very basic and follows the OTDD paper very closely, with no new experiments or datasets to demonstrate the effectiveness of s-OTDD beyond what had already been shown. Since efficiency is the main contribution, the work should include at least one large-scale experiment on an application where OTDD is not applicable or very expensive and s-OTDD handles it with ease.
2. The datasets used in the work to compute the distance tend to be very simplistic such as *NIST-type datasets.
3. The reason to use Pearson Correlation between OTDD and s-OTDD for evaluation is not clear.
Other Comments Or Suggestions: 1. A large-scale evaluation on high dimensional datasets is required to demonstrate the effectiveness of the method over OTDD.
2. Answers to below questions will be helpful for understanding the utility of the approach beyond just being faster than OTDD.
Questions For Authors: 1. Why is OTDD a natural choice for dataset distance? Why should a new dataset distance try to be correlated with OTDD? Why should the correlation be the Pearson correlation (wouldn't a rank correlation be a better way of measuring the effectiveness of the new metric)?
2. The number of random projections that leads to the best results seems to be of the same order as the number of samples in the dataset. What are the implications of this for larger datasets? The time complexity of the method increases to $O(n^2\log n)$ and the space complexity to $O(n(n+d))$. Can you comment on this and give a guideline for how many projections one should use for the best results?
3. The results of Sec 4.2 and 4.3 show that s-OTDD achieves similar correlation with accuracy for transfer learning/data augmentation problems compared to OTDD. But since s-OTDD can work with more data than OTDD, why is it not more predictive of the transfer learning/data augmentation performance?
4. Is s-OTDD predictive of metrics other than classification accuracy?
5. What types of datasets/tasks (beyond single label classification datasets) can s-OTDD be used to measure distances for?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the time and constructive feedback.
**Q11**. Please provide large-scale experiments where OTDD is too expensive, and s-OTDD excels.
**A11.** In the paper, we showed that OTDD cannot be used when the dataset size exceeds 30,000 samples on MNIST and CIFAR-10 (Figure 2). In addition, we added a new experiment on the large-scale Tiny-ImageNet dataset, where the OTDD baseline is not feasible to compute, in **A14**.
**Q12.** The datasets used in the work to compute the distance tend to be very simplistic, such as *NIST-type datasets*.
**A12.** We did conduct text classification with various datasets such as AG News, DBPedia, Yelp Reviews, Amazon Reviews, and Yahoo Answers (Section 4.2). In addition, we conducted experiments on Tiny ImageNet and CIFAR-10 (32×32 resolution). We also refer the reviewer to **A14**, where we added a new experiment on Tiny-ImageNet of size 224×224.
**Q13.** The reason to use Pearson Correlation between OTDD and s-OTDD for evaluation is not clear.
**A13.** s-OTDD can be seen as an alternative solution to OTDD when dealing with large-scale datasets. Therefore, measuring correlation with OTDD strengthens the claim that s-OTDD can replace OTDD while being more efficient.
**Q14.** A large-scale evaluation on high dimensional datasets is required to demonstrate the effectiveness of the method over OTDD.
**A14.** We have added a large-scale transfer learning experiment on the Tiny-ImageNet dataset (224x224 resolution), where computing OTDD is infeasible. To compute dataset distances, we sampled 5,000 examples from each sub-dataset. The result of the experiment is visible in Figure 16 at this link [https://imgur.com/a/hHvg2T2](https://imgur.com/a/hHvg2T2). In the figure, we see that s-OTDD has a relatively good correlation with the classification accuracy.
**Q15.** Why is OTDD a natural choice for dataset distance? Why should a new dataset distance try to be correlated with OTDD? Why should the correlation be the Pearson correlation (wouldn't a rank correlation ...)?
**A15.** OTDD is a good dataset distance since it is model-agnostic, does not involve training, can compare datasets even if their label sets are completely disjoint. Similar to OTDD, s-OTDD is also model-agnostic (requires no modeling on data), does not need to estimate any parameters, and can handle label sets that are completely disjoint. Since s-OTDD is proposed as an alternative solution to OTDD, it needs to have a good correlation with OTDD.
Following the reviewer's suggestion, we have added new figures with both the Pearson correlation (denoted as $r$) and Spearman's rank correlation (denoted as $\rho$) in Figures 12-14 at this link [https://imgur.com/a/hHvg2T2](https://imgur.com/a/hHvg2T2). We observe that the values of Spearman's rank correlation and the Pearson correlation are very similar in the majority of cases for s-OTDD.
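As a side note on the two statistics (generic definitions, not tied to either paper's code): Spearman's rank correlation is simply the Pearson correlation computed on ranks, which is why the two agree whenever the relationship is close to linear.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks
    (simple version that assumes no ties)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))
```

For a monotone but nonlinear relationship, Spearman's rho stays at 1 while Pearson's r drops below 1, so reporting both distinguishes linear agreement from mere rank agreement.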
**Q16**. The number of random projections ... What are the implications of it for larger datasets? The time complexity for the method increases to $O(n^2 \log n)$ and space complexity $O(n(n + d))$... how many projections should one be looking at for the best results?
**A16.** In practice, $L$ should be large enough with respect to the number of dimensions $d$, not the dataset size $n$. In particular, the authors in [4] empirically show that we need $L\approx 1.22 \sqrt{d}$. In Figure 2, we showed that the computation gap between $L=1000$ and $L=10000$ is not large. Moreover, the computations for the $L$ projections are independent; hence, we can utilize parallel computing to compute s-OTDD. Therefore, having a large $L$ is not a problem. In addition, we can reduce $L$ by using more advanced sampling techniques [5].
[4] Sliced Wasserstein Autoencoders, Kolouri et al.
[5] Quasi-Monte Carlo for 3D Sliced Wasserstein, Nguyen et al.
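For readers unfamiliar with how the random projections enter the computation, here is a generic Monte Carlo sliced Wasserstein sketch between two equal-size samples (plain sliced $W_2$ on features only, not the paper's s-OTDD; the parameter names are ours):

```python
import numpy as np

def sliced_w2(X, Y, L=128, seed=0):
    """Monte Carlo sliced 2-Wasserstein distance between two samples of
    equal size n with d features. Each of the L random unit directions
    gives a 1D OT problem, solved exactly by sorting (O(L * n log n))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(d, L))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)  # unit directions
    px = np.sort(X @ theta, axis=0)  # sorted 1D projections, one column per slice
    py = np.sort(Y @ theta, axis=0)
    return float(np.sqrt(np.mean((px - py) ** 2)))
```

Since the $L$ slices are independent, they parallelize trivially, which is the point made above; increasing $L$ only reduces the Monte Carlo variance of the estimate.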
**Q17**. ... s-OTDD can work with more data than OTDD, why is it not more predictive ... ?
**A17.** In the paper, we actually compare s-OTDD and OTDD with the same number of data samples. We want to convey that s-OTDD is much faster than OTDD while being comparable in performance. In Figure 2 and in **A14**, we showed that OTDD cannot be used on large or high-dimensional datasets.
**Q18.** Is s-OTDD predictive of metrics other than classification accuracy?
**A18.** We can compute the correlation between s-OTDD and any function of two datasets. We added new experiments to show that s-OTDD also correlates well with other performance metrics such as Precision, Recall, and F1 Score in Figure 15 [https://imgur.com/a/hHvg2T2](https://imgur.com/a/hHvg2T2).
**Q19.** What types of datasets/tasks (...) can s-OTDD be used to measure distances for?
**A19.** s-OTDD can also be interpreted as a distance between distributions over distributions. Therefore, s-OTDD can also be adapted to compare distributions over point clouds, distributions over histograms, distributions over 3D shapes, multi-label datasets, and so on.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses to my questions. I have some follow up questions
1. The authors mention that OTDD cannot be computed on more than 30,000 samples in A11, but the experiment in A14 where they demonstrate the advantage of s-OTDD uses a subset of 5000 samples. Why was a subset necessary if s-OTDD can handle all the samples? What happens to the correlation if more samples are used?
2. What do the points in Fig 16 refer to? What is the computational time (wall clock time) required to compute s-OTDD on this dataset? How does it compare to the *NIST datasets in Sec 4.2, and to OTDD?
3. In A17, does adding more data improve the correlation of s-OTDD to the metrics?
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the additional questions. We would like to answer them as follows:
**Q1.** Authors mention that OTDD cannot be computed on more that 30,000 samples in A11 but for the experiment in A14 where they demonstrate the advantage of s-OTDD uses a subset of 5000 samples. Why was a subset necessary if s-OTDD can handle all the samples? What happens to the correlation if more samples are used?
**A1.** It is worth noting that the computational time and memory also depend on the number of dimensions; therefore, OTDD cannot be computed even for this case of 5000 samples in high dimension (3x224x224). To calculate the correlation, we need a random generating distribution from which to generate datasets, so we use bootstrap sampling to obtain such a generating distribution from a given fixed dataset. From the bootstrap distribution, we create random subsets (smaller datasets) to calculate multiple pairs of the distance and the function of interest of two datasets, from which we form the correlation. The reason we used 5000 samples was to keep the computational time reasonable, given the time constraint we had during the rebuttal. By increasing the bootstrap size, the variance of the estimates (e.g., distances, functions of interest, and correlation) is reduced, due to the reduced variability between random subsets (the variation of the bootstrap distribution). We refer the reader to Figure 9 in the original OTDD paper [1] for an empirical study of this behaviour. As the bootstrap size increases, the estimates of the correlation and other functions of two datasets converge to a population value.
[1] Geometric Dataset Distances via Optimal Transport, Alvarez-Melis et al.
**Q2.** What do the points on Fig 16 refer to? What is the computational time (wall clock time) required to compute s-OTDD on this dataset. How does it compare to *NIST datasets in Sec 4.2 vs OTDD.
**A2.** In Figure 16, the x-axis represents values of the s-OTDD distance, and the y-axis represents the accuracy gain. For each random subset, we have one point in the figure. For this setup with 5000 samples of size 3x224x224, computing s-OTDD takes about 3450.73 s, compared to about 40 s for MNIST with 60000 samples of size 28x28. For OTDD, we cannot compute it in this high-dimensional setting due to an out-of-memory problem. We refer the reviewer to Figure 2 for a relative comparison of computational time when varying the dataset size on MNIST.
**Q3.** In A17, does adding more data improve the correlation of s-OTDD to the metrics?
**A3.** We recall that correlation can only be used to compare dataset distances with the same bootstrap size (random subset size). Varying the bootstrap size changes the dataset-generating distributions, which in turn alters the distributions of dataset distances and functions of interest. As a result, the corresponding correlations belong to different dataset distributions and are not directly comparable. As discussed, we only know that increasing the bootstrap size causes the correlation to converge to some population value. Therefore, the notion of "improvement" when adding more data is not well-defined.
The correlation computation in the paper is intended solely for relative comparison among s-OTDD, OTDD, CHSW, and WTE with datasets of the **same size**. In practice, computing correlation and creating sub-datasets is unnecessary. Our key claim is that s-OTDD is scalable and can replace OTDD, as it exhibits similar correlations to OTDD and other metrics when the dataset-generating distributions (obtained via bootstrapping) remain the same.
We would like to thank the reviewer again for letting us explain more about the details. Please feel free to let us know if you still have any concerns.
Best regards,
Authors, | Summary: The authors propose a similarity measure for supervised tasks based on sliced optimal transport. The data with labels are mapped to 1D slices via a feature projection map for features and the induced moment transform projection for label distributions. Then the dataset distance is defined using the 1D optimal transport distances. The proposed distance is shown to be a proper metric. Numerically, it is effective and efficient in predicting task transferability over multiple image and text datasets.
Claims And Evidence: 1. Proposition 1 gives two conditions for the MTP to be injective, and the metric properties of s-OTDD are based on the injectivity of MTP. Does the MTP used in the experiments satisfy the conditions? Please justify the numerical implementation of MTP, including the assumption of infinite number of moments, the use of $\sigma(\Lambda^k)$ and how the metric properties are affected.
2. In Corollary 1, if both feature projection and MTP are injective, why the data point projection, as the sum of them, is also injective? Could the authors explain why the data point projection is a composition of Radon transform projection and the MTP?
Methods And Evaluation Criteria: Experiments are supportive and sufficient. However, can you explain more about the feature projection choices for each of the datasets?
Theoretical Claims: The injectivity conditions of MTP, metric properties of s-OTDD and the numerical approximation bounds are checked and correct.
Experimental Designs Or Analyses: Experiments are valid and sound.
Supplementary Material: Yes, both the proofs and the additional correlation experiments.
Relation To Broader Scientific Literature: A fast dataset distance can be used in tasks that involve comparisons of datasets, such as transfer learning and data augmentation.
Essential References Not Discussed: Literature was well reviewed.
Other Strengths And Weaknesses: The proposed method using MTP to project labels onto 1D slices is novel and interesting. The computation speed gain is significant.
Other Comments Or Suggestions: Typos:
1. $\Lambda\subset \mathbb{N}$ instead of $\Lambda\in \mathbb{N}$
2. In Preliminaries, at line 133, $n$ should be subscript.
3. In Algorithm 1, at line 683, $i=1'$ should be $i'=1$
Questions For Authors: 1. In Definition 3, do feature projection and MTP have to share the same $\psi^{(1)}$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the time and constructive feedback of the reviewer. We would like to extend the discussion with the reviewer as follows:
**Q6**. Proposition 1 gives two conditions for the MTP to be injective, and the metric properties of s-OTDD are based on the injectivity of MTP. Does the MTP used in the experiments satisfy the conditions? Please justify the numerical implementation of MTP, including the assumption of infinite number of moments, the use of $\sigma(\Lambda^k)$ and how the metric properties are affected.
**A6**. Checking the existence of all moments of a high-dimensional distribution is impractical due to its high computational cost. Moreover, we only observe random samples from the unknown distribution, which also makes the estimation of the moments random. Therefore, in the experiments, we implicitly assume that the underlying distributions of the datasets have moments of all orders. This assumption is motivated by the fact that one-dimensional projections become approximately Gaussian in high dimensions [1,2], and a Gaussian distribution has moments of all orders. We then use a zero-truncated Poisson distribution (as reported at lines 309-310) for the moment orders, since it is a distribution over the countably infinite natural numbers. If the underlying distributions of the datasets do have all moments, the MTP is injective and s-OTDD is a metric. We would like to recall that s-OTDD is still a pseudo-distance without injectivity of the MTP, i.e., s-OTDD satisfies the triangle inequality, symmetry, non-negativity, and one direction of identity (it is 0 when two datasets are the same). Therefore, s-OTDD remains a meaningful discrepancy for datasets, as demonstrated through our experiments.
[1] Fast approximation of the sliced-Wasserstein distance using concentration of random projections, NeurIPS 2021, Nadjahi et al.
[2] Asymptotics of Graphical Projection Pursuit, The Annals of Statistics, 1984 Diaconis et al.
**Q7.** In Definition 3, do feature projection and MTP have to share the same $\psi^{(1)}$?
**A7.** It is actually a typo in our Eq. (9). The summation in the second term should involve $\psi^{(i+1)}$. We have fixed this typo in the revision. In particular, we have the following revised definition of the data point projection
$\mathcal{DP}^k_{\psi,\theta,\lambda,\phi}(x,q_y) = \psi^{(1)} \mathcal{FP}_\theta(x) + \sum_{i=1}^k\psi^{(i+1)} \mathcal{MTP}_{\lambda^{(i)},\phi}(q_y)$,
where $\psi=(\psi^{(1)},\psi^{(2)},\ldots,\psi^{(k+1)}) \in \mathbb{S}^k, \theta \in \Theta, \lambda =(\lambda^{(1)},\ldots,\lambda^{(k)}) \in \Lambda^{k}, \phi \in \Phi$.
**Q8**. In Corollary 1, if both feature projection and MTP are injective, why the data point projection, as the sum of them, is also injective? Could the authors explain why the data point projection is a composition of Radon transform projection and the MTP?
**A8**. Thank you for your questions. From the revised definition in **A7**, let $\mathcal{FP}_\theta(x)=t_1$ and $\mathcal{MTP}_{\lambda^{(i-1)},\phi}(q_y)=t_i$ for $i=2,\ldots,k+1$; we can then rewrite the data point projection as:
$\mathcal{DP}^k_{\psi,\theta,\lambda,\phi}(x,q_y) = \psi^{(1)} t_1 + \sum_{i=2}^{k+1}\psi^{(i)} t_i = \psi^\top t = \mathcal{R}_\psi(t)$,
where $\psi=(\psi^{(1)},\ldots,\psi^{(k+1)})$, $t=(t_1,\ldots,t_{k+1})$, and $\mathcal{R}_\psi(t)$ is the Radon transform of $t$ with projection parameter $\psi$ as defined at line 162. Therefore, the data point projection is nothing but the Radon transform of the stacked output of the feature projection and the moment transform projections. Since the composition of injective functions is also injective, the data point projection is injective given the injectivity of the feature projection and the moment transform projections.
**Q9**. Can you explain more about the feature projection choices for each of the dataset?
**A9.** As written at line 194, we choose the feature projection based on prior knowledge about the feature space. For example, if we believe the feature space is Euclidean, we can use the Radon projection (linear projection). If we work with a manifold, we can use the geodesic projection [3]. If we work with images, we can use the convolution projection [4]. In our experiments, for the *NIST datasets and the text classification datasets, we apply the linear projection by default. For image data, we use the convolutional projection. We conducted a new comparison between linear and convolutional projections, available at [https://imgur.com/a/1NVv5AC](https://imgur.com/a/1NVv5AC). The results show that convolution-based projections not only require fewer projections but also tend to exhibit a stronger positive correlation.
[3] Sliced-Wasserstein Distances and Flows on Cartan-Hadamard Manifolds, JMLR, 2025, Bonet el al.
[4] Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution, NeurIPS, 2022, Nguyen et al.
**Q10.** Typos
**A10.** We have fixed them in the revision.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. All my concerns were addressed. I will adjust the score. It would also be great if the authors could add the clarification about the injectivity of MTP in practice and the s-OTDD being a pseudo-metric in the paper.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for increasing the score to 4. We will include all the discussion to the revision of the paper. Please let us know if you still have other questions.
Best regards,
Authors, | Summary: This paper is a straightforward application of sliced OT on OT-based dataset distances. The main novelty is Moment Transform Projection by which the authors could project dataset labels to scalars, enabling the use of Radon transformation and hence sliced Wasserstein distances. Authors then follow standard procedures to prove the metric properties and approximation error of the sliced OT dataset distance. Empirical results from various experiments show that the sliced OT dataset distance is both efficient and effective.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked all the proofs. The proofs in A.1 jump too much between lines. Please expand them and add intermediate steps.
Experimental Designs Or Analyses: Results from empirical experiments are convincing in verifying the claim that the proposed sliced OTDD is much more efficient than traditional OTDD, and the approximation error is acceptable.
Supplementary Material: I have checked the entire supplementary material.
Relation To Broader Scientific Literature: The paper is a sweet overlap between sliced OT and OT-based dataset distance. It's a natural application of sliced OT to another one of OT's applications. The main contribution of the paper is not to identify the overlap or the technical difficulty, but to lay the necessary theoretical foundation in a timely manner so that on one hand researchers could advance their work on top of this work and on the other hand people can use OT dataset distance in practice.
Sliced OT is by now a somewhat well-studied subject after Bonneel et al., 2015, Nietert et al., 2022, and others. And as the authors listed, Alvarez-Melis, D. and Fusi, N. 2020 is a milestone for OT-based dataset distance. It totally makes sense to fill the gap between the two subjects.
Essential References Not Discussed: References are sufficient. I'd add work by Ho et al., who contributed some theoretical findings around sliced OT, or Kolouri et al. as an example of the applications of sliced OT, but that's not necessary.
Other Strengths And Weaknesses: A weakness of the proposed sliced OTDD is the extra hyper-parameter — the number of moments $\lambda$ as in (7). In the code, it’s set to 5. I didn’t find it discussed in the paper except at line 246, where the authors only briefly mention the principle of choosing $\lambda$. Its impact on the projection, and hence on the distances, is unknown as well.
Other Comments Or Suggestions: None.
Questions For Authors: The $\lambda$-th scaled moment as defined in (7) is not how I usually "scale" a moment. What's the rationale behind a constant scaler?
Please discuss the impact of $\lambda$ and provide more details for A.1.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for the review and feedback. We would like to answer questions from the reviewer as follows:
**Q3.** A weakness of the proposed sliced OTDD is the extra hyper-parameter - the number of moment $\lambda$ as in (7). In the code, it’s set to 5. I didn’t find it discussed in the paper except in [246] where authors only briefly mentioned the principle of choosing $\lambda$. Its impact on the projection and hence the distances is unknown as well.
**A3.** Thank you for your insightful comments. We would like to recall that the number of moments is denoted as $k$ and we did report the choice of $k=5$ at line 307-308 in the paper. For the parameter $\lambda$, it is the order of a moment. We would like to extend the discussion on both $k$ and $\lambda$ as follows:
For $k$, it plays the role of selecting the number of moments for the label projection. In particular, the higher value of $k$, the more moments information of the label distribution is gathered into the data point projection. For $\lambda$, it plays the role of choosing information of distributions to extract. For example, $\lambda=1$ leads to the information of centerness while $\lambda=2$ leads to the information of spread. In the paper, $\lambda_i$ for $i=1,\ldots,k$ are random which followed the zero-truncated Poisson distribution with the rate parameter equals $i$ i.e., the mean $\mathbb{E}[\lambda_i]= (ie^i)/(e^i-1) \approx i$. It means that we try to capture (in high probability) the first $k$ moments of the label distribution. In the case, where we know the true number of moments of the label distribution $\lambda^\star$, we can set $k=\lambda^*$ to capture all information of the label distribution. Nevertheless, checking existence of moments is expensive since we only access to samples from the dataset generating distribution and the size of datasets is also large. Therefore, $k$ becomes a hyperparameter. However, choosing $k$ is not a hard problem since we know that we want to have as big $k$ as possible. In practice, we can start with a big $k$ and check if the value of s-OTDD exists (not overflow). If s-OTDD does not exist, you can reduce the value of $k$ via a binary search rule to have a choice of $k$. To avoid such searching algorithm, we recommend to choose $k=5$ due to the concentration of random projection. In particular, we know that one-dimensional projection becomes a Gaussian distribution in high-dimension [1,2]. Therefore, two moments are enough to capture the information of a Gaussian distribution. Since we have finite dimension, using extra 3 moments (5 in total) must be enough.
To verify the above hypothesis, we have conducted additional ablation studies on *NIST Adaptation, varying $k\in \\{1,2,3,4,6\\}$; the results are visualized at [https://imgur.com/a/NCcqYgo](https://imgur.com/a/NCcqYgo). We see that increasing $k$ leads to better correlation with the performance gap. Nevertheless, we found that beyond $k=3$, the correlation does not increase as fast as when $k<3$. Also, we found that when $k$ is set too large, s-OTDD is not computable due to a numerical issue, i.e., overflow, which might be due to the non-existence of the higher moments. We will add this discussion and the additional experiments to the revision.
[1] Fast approximation of the sliced-Wasserstein distance using concentration of random projections, NeurIPS 2021, Nadjahi et al.
[2] Asymptotics of Graphical Projection Pursuit, The Annals of Statistics, 1984, Diaconis et al.
**Q4.**. The $\lambda$-th scaled moment as defined in (7) is not how I usually "scale" a moment. What’s the rationale behind a constant scaler? Please discuss the impact of $\lambda$.
**A4.** The reason we scale the $\lambda$-th moment by $\lambda!$ is to make sure the output of the data point projection is not biased toward high-order moments. In particular, in the definition of the data point projection (Definition 3), the data point projection is a weighted average of the feature projection and multiple moment transform projections with different values of $\lambda$. If we did not scale the moments, the value of the data point projection would be dominated by the moment transform projection with the highest value of $\lambda$. However, we know that each moment captures different information about the distribution, e.g., the first moment captures the center and the second moment captures the spread. Therefore, scaling the moment values makes the contributions of all moments "approximately" equal. The reason we say "approximately" equal is that we actually give priority to low-order moments, since the factorial normalizing constant grows faster in the limit than the exponential function. We will include this discussion in the revision of the paper.
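A quick numerical illustration of this point, as a toy sketch under the assumption that the $\lambda$-th scaled moment is the plain $\lambda$-th moment divided by $\lambda!$: the raw moments of a projected distribution grow rapidly with the order and would dominate a weighted average, whereas the $\lambda!$-scaled moments stay on a comparable scale.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
# samples from a hypothetical one-dimensional projected label distribution
samples = rng.normal(loc=1.0, scale=2.0, size=100_000)

orders = range(1, 7)
raw = [np.mean(samples ** lam) for lam in orders]                  # plain moments
scaled = [m / math.factorial(lam) for lam, m in zip(orders, raw)]  # lambda!-scaled
```

For this distribution the 6th raw moment is on the order of a thousand while the 1st is about 1, so an unscaled weighted average would be driven almost entirely by the highest order; after dividing by $\lambda!$, all six terms are of comparable magnitude.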
**Q5.** provide more details for A.1.
**A5.** Thank you for your comments. We will expand the proofs by adding intermediate steps. | Summary: This paper tackles the Dataset Distance problem with a proposed sliced optimal transport dataset distance (s-OTDD) method.
The core module is called Moment Transform Projection (MTP), mapping a label (represented as a distribution over features) to a real number. Then, s-OTDD is defined as the expected Wasserstein distance between the projected distributions and is calculated by leveraging the closed-form one-dimensional optimal transport, i.e., the sliced Wasserstein distance.
With random projections, s-OTDD attains (near-)linear computational complexity for the Dataset Distance problem.
Experiments are conducted on various benchmarks, and the main comparison method is OTDD (exact). OTDD can be regarded as the gold standard or upper bound for the s-OTDD algorithm. The results show good correlation between OTDD and s-OTDD, while the computation time is orders of magnitude faster.
Moreover, several theoretical results are given: injective property, s-OTDD is a valid metric, and the approximation error.
## update after rebuttal
I appreciate the authors' response. All my questions and concerns are addressed by the rebuttal.
I am happy to keep my score as 4: Accept.
Claims And Evidence: The s-OTDD is a fast and effective approximation of the OTDD dataset distance method. This is verified by theoretical proofs (valid metric and approximation error), as well as experimental comparisons on various benchmark datasets (correlation w.r.t. OTDD (exact)).
s-OTDD is a valid distance, which is proved in Proposition 2.
Methods And Evaluation Criteria: Yes.
The proposed method mainly leverages the one-dimensional closed-form property of the optimal transport, which also translates to the sliced-Wasserstein distance.
Based on this, the proposed method designed Moment Transform Projection to ensure the injective feature projection. Then it applies data point projection.
Finally, the Sliced Optimal Transport Dataset Distance is approximated via random projections over four parameters ($\psi, \theta, \lambda, \phi$) sampled from the corresponding spaces.
Evaluation metrics mainly include (1) distance correlation w.r.t. OTDD and other dataset distances (2) processing time w.r.t. dataset size. Overall, these metrics can capture the effectiveness and efficiency of s-OTDD.
Theoretical Claims: Proposition 1. Existence of projected scaled moments. The proof is given in Appendix A.1
Proposition 2. s-OTDD is a valid metric. The proof is given in Appendix A.2
Proposition 3. The approximation error. The proof is given in Appendix A.3
I went through all the proofs and did not find theoretical issues. Because this is an emergency review, I might have missed some details due to limited review time.
Experimental Designs Or Analyses: Overall, the experiments are sound and effective.
(1) Various benchmarks and tasks are conducted, e.g. image classification and text classification on MNIST, CIFAR, TinyImageNet, AG News, DBPedia, Yelp Reviews, Amazon Reviews, and Yahoo Answers.
(2) The important baseline and standard distance OTDD(exact), as well as a bunch of other approximation distances are compared.
(3) Running time comparison is given w.r.t dataset size.
(4) Parameter analysis is given w.r.t. the number of projections.
Supplementary Material: Yes.
I went through the proof A1-A3.
Other parts of Supp. Materials are mainly additional experimental results.
Relation To Broader Scientific Literature: This paper mainly provides an effective and much faster dataset distance, which improves the OTDD (Alvarez-Melis & Fusi, 2020).
Through theoretical proof and experimental comparisons, the proposed distance s-OTDD approximates OTDD and also is much faster in speed w.r.t dataset size.
Essential References Not Discussed: To my knowledge, I do not know other relevant references.
Other Strengths And Weaknesses: The main Strengths and Weaknesses have already listed in the above sections.
Other Comments Or Suggestions: Typos.
In the line below Eq. (8), the empirical distribution is missing its variable.
Questions For Authors: In Eq. (9), is the $\psi^{(1)}$ in the first term the same as the $\psi^{(k)}$ with $k=1$ in the second term?
Ethical Review Concerns: No ethical concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for the time and the feedback. We extend the discussion as follows:
**Q1.** The line below Eq. (8), empirical distribution is missing the variable.
**A1.** Thank you for pointing out the typo. We have fixed the typo in the revision.
**Q2.** In Eq. (9), is the first $\psi^{(1)}$ in the first term, the same as the $\psi^{(k)}$ with $k = 1$ in the second term?
**A2.** It is actually a typo in our Eq. (9). The summation in the second term should involve $\psi^{(i+1)}$. We have fixed the typo in the revision. In particular, we have the following revised definition of the data point projection:
$\mathcal{DP}^k_{\psi,\theta,\lambda,\phi}(x,q_y) = \psi^{(1)} \mathcal{FP}_\theta(x) $
$+\sum_{i=1}^k \psi^{(i+1)} \mathcal{MTP}_{\lambda^{(i)},\phi}(q_y)$
where $\psi=(\psi^{(1)},\psi^{(2)},\ldots,\psi^{(k+1)}) \in \mathbb{S}^k, \theta \in \Theta, \lambda =(\lambda^{(1)},\ldots,\lambda^{(k)}) \in \Lambda^{k}, \phi \in \Phi$.
We would like to elaborate more on Corollary 1 with the revised definition. Letting $\mathcal{FP}_\theta(x)=t_1$
and $\mathcal{MTP}_{\lambda^{(i)},\phi}(q_y)=t_{i+1}$ for $i=1,\ldots,k$, we can rewrite the data point projection as:
$\mathcal{DP}^k_{\psi,\theta,\lambda,\phi}(x,q_y) = \psi^{(1)} t_1+ \sum_{i=2}^{k+1}\psi^{(i)} t_i = \psi^\top t = \mathcal{R}_\psi(t)$,
where $\psi=(\psi^{(1)},\ldots,\psi^{(k+1)})$, $t=(t_1,\ldots,t_{k+1})$, and $\mathcal{R}_\psi(t)$ is the Radon Transform of $t$ with projection parameter $\psi$ as defined at line 162. Therefore, the data point projection is simply the Radon Transform of the stacked output of the feature projection and the moment transform projections. Since the composition of injective functions is also injective, the data point projection is injective given the injectivity of the feature projection and the moment transform projections.
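As a sanity check on the revised definition, here is a small NumPy sketch (illustrative only, with randomly drawn stand-ins for the projection outputs) confirming that the data point projection equals the Radon (inner-product) projection of the stacked vector $t$:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3

# hypothetical scalar outputs: t[0] from the feature projection FP, and
# t[1..k] from the k moment transform projections MTP
t = rng.normal(size=k + 1)

psi = rng.normal(size=k + 1)
psi /= np.linalg.norm(psi)          # a direction on the sphere S^k

# revised data point projection: psi^(1) * FP + sum_i psi^(i+1) * MTP_i
dp = psi[0] * t[0] + sum(psi[i + 1] * t[i + 1] for i in range(k))

radon = psi @ t                     # Radon transform of the stacked vector t
```

The two quantities agree up to floating-point rounding, matching the claim that the data point projection is the Radon Transform of the stacked projection outputs.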
Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games | Accept (poster) | Summary: This submission studies multiplayer (n-player where n >= 3) symmetric zero-sum games (such as Mahjong and Poker). Unlike two-player zero-sum games, equilibria in multiplayer games are neither unique nor non-exploitable, which poses a challenge when competing against opponents who play strategies from different equilibria or non-equilibria strategies. Motivated by these observations, the authors propose a new learning objective: ensuring that each player secures at least C/n utility in a n-player symmetric game with total payoff C.
The authors derive conditions under which obtaining this equal share is tractable, and design learning algorithms to approximate this objective which are inspired by no-regret learning. The authors also derive complementary lower bounds which largely match their upper bounds.
They empirically evaluate their algorithms in two symmetric zero-sum games: majority vote and the switch dominance game. They find that self-play-based methods often fail to guarantee an equal share and are often outperformed by the authors’ algorithms.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes - Appendix C.
Relation To Broader Scientific Literature: This work falls under the category of learning in games. Specifically, they propose novel objectives for learning in multiplayer symmetric games.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The downsides pointed out by the authors about multiplayer equilibria are well-known “pain points” in algorithmic game theory, and I appreciate the authors’ attempt to propose a new objective. This new objective is very natural in symmetric games. The authors provide nearly-matching upper- and lower-bounds on learning algorithms for accomplishing this objective, which, when combined, give a clean solution to this problem.
Weaknesses:
One weakness of this submission is that the proposed algorithms are not terribly novel. When playing against stationary opponents, the authors use the well-known Hedge algorithm to obtain an approximately equal share. Against adapting opponents, they adapt the Strongly Adaptive Online Learning algorithm of Daniely et al., 2015 to achieve an approximately equal share. With that being said, there is still value in showing that existing techniques can accomplish this new objective, and so I do not view this as a major weakness.
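For readers unfamiliar with Hedge, a minimal sketch of the exponential-weights update in the reward (utility) convention — an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def hedge(utilities, eta):
    """Run Hedge (exponential weights) over a given utility sequence.
    utilities[t, a] is the utility of action a at round t; returns the
    learner's cumulative expected utility."""
    T, n = utilities.shape
    w = np.zeros(n)                      # cumulative log-weights
    total = 0.0
    for t in range(T):
        p = np.exp(w - w.max())          # softmax, numerically stabilized
        p /= p.sum()
        total += p @ utilities[t]        # expected utility this round
        w += eta * utilities[t]          # reward-based weight update
    return total

# against a stationary opponent, the utility vector is fixed each round:
# action 1 always pays 1, action 0 always pays 0
U = np.tile(np.array([0.0, 1.0]), (100, 1))
total = hedge(U, eta=1.0)
```

Against a fixed opponent meta-strategy the per-round utility vector is stationary, so Hedge's cumulative utility approaches that of the best fixed response, which is the mechanism behind the fixed-opponent equal-share guarantee.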
Other Comments Or Suggestions: n/a
Questions For Authors: Do you think any of your ideas could generalize to multiplayer games which are not symmetric? Or are completely new solution concepts needed?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your inspiring comments. We address your concerns as follows.
**Q1: Novelty and contributions**
**A1:**
Please refer to **A1** to **Reviewer Ada2** for clarification on the novelty and technical contribution of our work.
**Q2: Extension to asymmetric games**
**A2:** As mentioned on page 4, L184-186 (left column), we can effectively transform all asymmetric games into symmetric ones by introducing an initial stage where each player’s position is randomly assigned. This randomization process effectively symmetrizes the game, as each player has an equal probability of assuming any role, ensuring that no player has a consistent positional advantage. Therefore, under this transformed setting, our analysis remains applicable.
To further clarify the mathematical transformation, we enlarge the action space to incorporate both the player's position and their actions. Specifically, for each player $i$, we define $(P_i, a_i)$ where $P_i$ represents the player's position or role in the game, and $a_i$ denotes their action. We then define a symmetric utility function $U^{sym}_i$ as follows:
$U^{sym}_i ((P_1, a_1), \ldots, (P_n, a_n))$
$:=U_{\sigma^{-1}(i)}^{asym} (a_{\sigma^{-1}(1)}, \ldots, a_{\sigma^{-1}(n)}),$
where $\sigma$ is a permutation that reorders player positions such that $\sigma(P_1, \ldots, P_n) = (1, \ldots, n)$.
The action space is defined as $\bigcup_{j=1}^n (P_j \times \mathcal{A}_j)$, where $\mathcal{A}_j$ denotes the set of actions available to players when they are in position or role $P_j$.
This transformation aligns the players' positions with a standardized symmetric framework, enabling our analysis to apply to games that are asymmetric by nature but can be symmetrized under random initial positioning. | Summary: The paper proposes a new solution concept for constant-sum, multi-player, symmetric games, which is referred to as equal share. What this means is that each player secures the same utility. The paper observes that usual equilibrium concepts do not necessarily satisfy equal share, and it then proceeds by identifying necessary and sufficient conditions under which equal shares can be attained--namely, players adopting symmetric dynamics and limited opponent adaptivity. Under those two conditions, they provide algorithms that indeed attain equal share. Finally, experimental results are provided to support some of the theoretical claims.
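The randomization argument can be illustrated with a toy two-position example (hypothetical payoffs; `symmetrized_value` is an illustrative name, not from the paper):

```python
import itertools
import numpy as np

# Hypothetical asymmetric 2-position game with 2 actions per position.
# U_asym[p, a1, a2] is the payoff of the player in position p+1 when
# position 1 plays a1 and position 2 plays a2.
U_asym = np.array([[[3.0, 0.0], [5.0, 1.0]],    # position-1 payoffs
                   [[1.0, 4.0], [0.0, 2.0]]])   # position-2 payoffs

def symmetrized_value(strategy):
    """Expected payoff of one player when positions are assigned uniformly
    at random and both players use the same position-contingent strategy;
    strategy[p] is a distribution over actions for position p+1."""
    total = 0.0
    for a1, a2 in itertools.product(range(2), range(2)):
        prob = strategy[0][a1] * strategy[1][a2]
        # the player lands in either position with probability 1/2
        total += prob * 0.5 * (U_asym[0, a1, a2] + U_asym[1, a1, a2])
    return total

v_uniform = symmetrized_value([[0.5, 0.5], [0.5, 0.5]])
```

Because every player's expected payoff is the same function of the shared position-contingent strategy, the position-randomized game is symmetric by construction, and no player retains a positional advantage.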
Claims And Evidence: Overall, the claims made in the paper are well-grounded, although certain points are very much debatable. In particular, I want to mention two claims made in Section 4.1 that serve to justify the main assumptions of the paper. The first is that "Condition 1 is further implicitly adopted by most prior state-of-the-art AI agents for multiplayer games." It is true that self-play algorithms satisfy the symmetry condition postulated in the paper, but I see no reason whatsoever why this is a reasonable assumption when interacting with opponents in a realistic game (in the self-play setting one, of course, controls the algorithms employed by all players, but this is not so in practice).
The second claim is that "it is often safe to assume that the population meta-strategy will not quickly adapt to one particular player’s strategy." As far as I can see, no evidence is given to justify this claim. In many ways, limited opponent adaptivity trivializes the problem and goes against basic game-theoretic principles.
Methods And Evaluation Criteria: The evaluation in Section 6 supports some of the claims of the paper, but it does not go far enough. The benchmarks tested have been cherry-picked to serve the narrative of the paper, and they are not comprehensive enough. The paper would benefit from providing a more complete evaluation based on more benchmarks. The main question is why existing self-play algorithms perform so well in practice despite such clear deficiencies. Is it the case that the games tested by the authors do not reflect practical examples? It would be interesting if the authors could provide a more general characterization of games with practical relevance where similar phenomena are present.
Theoretical Claims: All theoretical claims appear to be sound; I did not find any notable issue.
Experimental Designs Or Analyses: The experimental evaluation is sound in the methodology of the games tested, but, as I pointed out above, it is not comprehensive enough. It's hard to draw concrete conclusions from a very limited benchmark evaluation.
Supplementary Material: I checked the proofs in the supplementary material, and I did not find any notable issues. The supplementary material is overall polished.
Relation To Broader Scientific Literature: The paper tries to address a thorny issue in multi-agent learning concerning the deficiencies of traditional equilibrium concepts. These issues are well understood, and so the observations made in Section 3 are fairly immediate from existing results. The paper departs from most of the line of work in game theory by going beyond equilibrium, and is mostly related to opponent modeling.
Essential References Not Discussed: One paper that could be discussed is "Game Theory-Based Opponent Modeling in Large Imperfect-Information Games," and more broadly I believe that the authors could elaborate more on how their results relate to opponent modeling.
Other Strengths And Weaknesses: On the positive side, the paper attempts to address an important problem in multiagent learning. The proposed solution concept is fairly natural, and as far as I know has not been analyzed before. The paper fully characterizes conditions under which guaranteeing equal share is possible. I believe that there are certain settings in which the results provided by the paper will be relevant.
On the negative side, the theoretical results are not surprising, and mostly follow directly from existing results, although there is value in stating and formalizing such results. Furthermore, as I said above, the assumption that opponents are not fully adaptive is a very strong assumption, which really goes against basic game-theoretic principles. I understand that making progress on this problem necessitates departing from the existing normative assumptions, but the paper hasn't made an entirely convincing case.
Other Comments Or Suggestions: As I pointed out earlier, I believe that the paper would benefit considerably by a more comprehensive experimental evaluation.
Questions For Authors: I have no further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for reading our paper and for your insightful comments.
**Q1: The theoretical results are not surprising**
**A1: Novelty.** We remark that, **compared to developing complex algorithmic techniques, identifying and formulating the right question to solve is equally—if not more—important.** This paper precisely addresses the latter: propose novel, practical, and important formulations, study basic properties, and bring the right tools to give principled solutions in a variety of settings.
In multiplayer games, much of the existing research has focused on achieving equilibrium-based outcomes. While extensively studied, equilibrium-based approaches have well-known limitations and are often unsuitable for developing AI agents in multiplayer settings. This concern was explicitly raised in prior work on real-world multiplayer game systems (e.g., Brown & Sandholm, 2019); however, such works do not resolve the issue or propose alternative objectives. Our paper is the first to formulate the problem as achieving equal share, and to clarify that the objective is theoretically well-behaved only under the condition that all opponents deploy identical strategies. While this condition seems strong, we justify its practical relevance and note that it aligns with the settings implicitly enabled in most prior state-of-the-art empirical work. **We emphasize that these findings, though not technically challenging, are very fundamental and entirely novel—they have not been previously discovered or formally articulated in the literature.**
**Technical Contribution on Efficient Algorithms.**
In addition to our contribution to identifying the correct solution concept, this paper also provides a comprehensive set of algorithmic results for achieving equal share in various settings: (1) fixed opponents; (2) slowly changing opponents; (3) opponents that adapt at intermediate rates; (4) matching lower bounds.
For (1) and (2), we leverage established techniques from the no-regret and no-dynamic-regret learning literature, achieving equal shares with provable guarantees. Our contribution here mostly lies in identifying and bringing in the right tools and adapting them to the right setup of multiplayer games, where, to the best of our knowledge, prior state-of-the-art AI systems for multiplayer games (for Poker, etc.) have not utilized techniques such as no-dynamic-regret learning.
For (3), we believe our discovery here is completely new: simple algorithms like behavior cloning can sometimes even outperform sophisticated no-dynamic-regret algorithms. Such a result is only possible by leveraging the symmetric structure of the multiplayer game, beyond treating it as one player versus an adversarial group of opponents.
For (4), while similar lower bounds have been shown in more general games, they do not apply to our settings as the hard instances constructed in prior lower bounds are not symmetric games. In this paper, we carefully construct new hard instances showing that the algorithmic results we proved in (2) and (3) are near-optimal.
**Q2: Two claims made in Section 4**
**A2:**
**About Condition 1**: In Section 4.1, we introduce the concept of population meta-strategy and show that within a large player base, even opponents with different strategies can be viewed as adopting the same population meta-strategy. As mentioned in Section 4.1, various forms of games, such as card games, board games, and online video games, all exemplify the concept of population games via matchmaking in a large population of players.
**About Condition 2**: Our paper studies both scenarios where the opponents' policy is fixed (see Section 5.1) and where it is slowly adapting (see Section 5.2). While capable of updating their policies, the opponents do not run algorithms specifically targeting our AI agent, as it is merely one player within a vast pool of participants. Moreover, Proposition 4.2 points out that attaining equal shares is impossible if opponents can change their meta-strategy arbitrarily fast across different rounds, supporting Condition 2.
**Q3: About experiment**
**A3:**
Please refer to **A1** to **Reviewer Ada2** for details on our experiment design. Our experimental cases are adversarially constructed, based on the common characteristic of many multiplayer games: the existence of multiple NE. This can cause self-play variants to converge to a bad NE, resulting in negative payoffs against carefully chosen opponent policies. Since many real-life games have multiple NEs, the worst-case results of our experiments are relevant to practical games.
While self-play algorithms have achieved impressive results in games like multiplayer poker, these successes are often context-specific and rely on extensive engineering and human input. We agree that understanding why these algorithms succeed and whether they can generalize across different game settings is an important future question, though it is beyond the scope of this paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I certainly agree with the authors that identifying the right questions is essential. My main concern is about the proposed solution concept. It can only be attained under very strong assumptions that in some sense trivialize the problem. As I said in my evaluation, I understand that some concessions have to be made to make progress in this problem, but I am not entirely convinced that the paper makes reasonable concessions. That being said, I appreciate the novelty of the paper and the fact that it tries to approach a fundamental problem from a new angle, so I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our contribution and raising the score! We would like to respectfully clarify our position regarding the claim that our results rely on “very strong assumptions that in some sense trivialize the problem.” We disagree with this characterization.
Our paper makes two assumptions, both of which we have argued in Section 4 are, in some sense, necessary for achieving an equal share in multiplayer games:
(1) All opponents deploy identical mixed strategies (referred to as the meta-strategy);
(2) The meta-strategy evolves slowly over time.
Assumption (1) approximately holds when opponents are randomly drawn from a large pool of players, as shown in Proposition 4.3. Assumption (2) is naturally satisfied in many real-world settings, such as online games where population-level strategies evolve gradually. These assumptions directly apply to games, such as Poker, Mahjong, and others, in casino or online platforms with a large player base. As such, they do not trivialize the problem; rather, they reflect realistic features of the environments we aim to model.
Moreover, many successful AI systems for multiplayer games---including those for Poker and Mahjong---implicitly rely on both assumptions, even if not explicitly stated. For instance, self-play algorithms often use a shared neural network to sample actions for all opponents, which effectively assumes (1). Similarly, many works aim to build strong agents against a fixed or slowly evolving meta-human policy, implicitly assuming (2). Therefore, we believe our assumptions are not strong, but instead essential for enabling principled learning in these settings. | Summary: The authors present an algorithms for learning in symmetric constant-sum multiplayer games, where the solution concept is equal allocation of social welfare among the players. They show that standard no-regret learning algorithms in a self-play setup cannot achieve "equal share". They demonstrate necessary conditions for a game and algorithmic assumptions necessary to allow an equal share strategy to be computable, and then adapt algorithms from the online learning literature to compute equal share strategy under different assumptions on the opponent.
Claims And Evidence: Yes, the claims are supported, and theoretical and empirical evidence are presented to support the claims.
Methods And Evaluation Criteria: Yes. they do.
Theoretical Claims: Yes, I checked the correctness of all proofs.
Experimental Designs Or Analyses: The experimental analysis is a bit confusing. The comparison is done against a set of fixed meta-strategies, but then the self-play algorithm is run, trying to compute optimal strategies for each player in a no-regret fashion. It is not clear to me what the experiments aim to illustrate.
Supplementary Material: Yes, I have reviewed all of the supplementary material.
Relation To Broader Scientific Literature: The authors do a good job of contextualizing their work in the broader line of work on this topic. In general, computing NE in general multiplayer games is hard. The authors choose to focus on the case of zero-sum symmetric multiplayer games and come up with a tractable solution concept for this reason.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: While the authors propose an interesting theoretical framework, the novelty of the results is limited. The sufficient conditions are immediate observations that can be made. The extension of no-static-regret and no-dynamic-regret algorithms to the equal-share setting, while interesting, is fairly straightforward.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can you explain the experiments in more detail? What are they aiming to illustrate?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for reading our paper and for your insightful comments.
**Q1: Clarification of the experiment**
**A1:** Our experiment aims to demonstrate that although self-play-based meta-algorithms have been effective in many real-world multiplayer games, they can still fail to secure an equal share in worst-case scenarios. Specifically, we show that the meta-algorithms under consideration — **SP_scratch**, **SP_BC**, and **SP_BC_reg** — which are shown to converge to NE, do not guarantee equal outcomes in our constructed cases.
Consistent with the theoretical analysis in Section 5.1, our experimental setup assumes that all opponents play a fixed, non-adaptive strategy across all rounds. This setup represents a fundamental and arguably the simplest scenario in which a well-designed algorithm should perform reliably, especially given that real-world opponents can be adaptive or even adversarial.
For self-play-based algorithms, we first run each method until convergence and then evaluate the learned policy against the fixed opponent strategy. While SP_scratch has no access to the opponent’s strategy, we include SP_BC and SP_BC_reg as baselines for fair comparison. These methods incorporate the opponent’s strategy as either initialization or regularization, following design principles from recent self-play variants that achieve state-of-the-art performance in multi-player games. Finally, as discussed in Section 3, self-play from scratch without any opponent knowledge can fail even in simple settings, such as the majority-vote game.
**Q2: The novelty of the results is limited**
**A2: Novelty.** We would like to point out that, **compared to developing complex algorithmic techniques, identifying and formulating the right question to solve is equally—if not more—important.** This paper precisely addresses the latter: proposing novel, practical, and important formulations, studying their basic properties, and bringing the right tools to give principled solutions in a variety of settings.
In multiplayer games, much of the existing research has focused on achieving equilibrium-based outcomes. While extensively studied, equilibrium-based approaches have well-known limitations and are often unsuitable for developing AI agents in multiplayer settings. This concern was explicitly raised in prior work on real-world multiplayer game systems (e.g., Brown & Sandholm, 2019); however, such works do not resolve the issue or propose alternative objectives. Our paper is the first to formulate the problem as achieving equal share, and to clarify that the objective is theoretically well-behaved only under the condition that all opponents deploy identical strategies. While this condition seems strong, we justify its practical relevance and note that it aligns with the settings implicitly enabled in most prior state-of-the-art empirical work. **We emphasize that these findings, though not technically challenging, are very fundamental and entirely novel—they have not been previously discovered or formally articulated in the literature.**
**Technical Contribution on Efficient Algorithms.**
In addition to our contribution to identifying the correct solution concept, this paper also provides a comprehensive set of algorithmic results for achieving equal share in various settings: (1) fixed opponents; (2) slowly changing opponents; (3) opponents that adapt at intermediate rates; (4) matching lower bounds.
For (1) and (2), we leverage established techniques from the no-regret and no-dynamic-regret learning literature, achieving equal shares with provable guarantees. Our contribution here mostly lies in identifying and bringing the right tools and adapting them to solve the right setup of the multiplayer games, where, to the best of our knowledge, prior state-of-the-art AI systems on multiplayer games (for Poker, etc) have not utilized techniques such as no-dynamic-regret.
For (3), we believe our discovery here is completely new, that simple algorithms like behavior cloning can sometimes even outperform sophisticated no-dynamic regret algorithms. Such a result is only possible by leveraging the symmetric structure of the multiplayer game beyond treating it as one versus an adversarial group of opponents.
For (4), while similar lower bounds have been shown in more general games, they do not apply to our settings as the hard instances constructed in prior lower bounds are not symmetric games. In this paper, we carefully construct new hard instances showing that the algorithmic results we proved in (2) and (3) are near-optimal. | Summary: This paper proposes a novel Monte Carlo Tree Search–inspired algorithm for multi-agent, simultaneous-move games under imperfect information.
Claims And Evidence: The proposed NN-CCE algorithm achieves performance that is superior or competitive with some multi-agent reinforcement learning algorithms (e.g., MAPPO, MADDPG) across cooperative, competitive, and mixed tasks.
Methods And Evaluation Criteria: The authors benchmark performance mainly via win rate in competitive scenarios or average reward/success rate in cooperative tasks.
Theoretical Claims: The key theoretical claim is that following a no-regret learning procedure (like EXP-IX) in repeated states leads to approximate CCE in the time-averaged strategy profile. This is a standard result from the game theory literature on no-regret dynamics.
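For readers less familiar with EXP-IX (Neu, 2015), a minimal single-learner sketch of the update the review refers to may help; this is our illustration under simplified assumptions (one learner, a fixed loss matrix), not the paper's implementation:

```python
import numpy as np

def exp_ix(losses, eta=0.1, gamma=0.05, seed=0):
    """EXP-IX: exponential weights with implicit-exploration loss estimates.

    The observed loss is importance-weighted by (p[a] + gamma) rather than
    p[a]; the small bias this introduces keeps the estimator's variance
    controlled (Neu, 2015). Returns the final action distribution.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    log_w = np.zeros(K)                       # log-weights for numerical stability
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                          # current mixed strategy
        a = rng.choice(K, p=p)                # play one action, observe its loss
        est = losses[t, a] / (p[a] + gamma)   # implicit-exploration estimate
        log_w[a] -= eta * est                 # exponential-weights update
    p = np.exp(log_w - log_w.max())
    return p / p.sum()

# Toy check: action 1 always has loss 0, action 0 always loss 1,
# so the learner should concentrate its play on action 1.
losses = np.column_stack([np.ones(500), np.zeros(500)])
p = exp_ix(losses)
```

In the paper's setting, one such learner is run per visited state for each agent, and the time-averaged joint play of the no-regret learners forms the approximate CCE.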
Experimental Designs Or Analyses: The authors evaluate their algorithm on 17 tasks, spanning different complexities and team-based structures (2-player zero-sum, 2-team with multiple agents, purely cooperative).
They compare to diverse baselines: MAPPO, MADDPG, s-MCTS, DORA, plus references to prior results for variants of PSRO or CFR on smaller tasks.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper is well positioned within the literature on multi-agent reinforcement learning (MADDPG, MAPPO) and game-theoretic equilibrium approximation (PSRO, CFR, DORA).
Essential References Not Discussed: No glaringly missing citations.
Other Strengths And Weaknesses: Strengths:
The authors introduce a method that abandons deeper search in favor of single-step lookahead, using an online no-regret learner (EXP-IX) at each state to approximate a CCE distribution over actions.
They decouple the “data collection” (policy rollout) from the “equilibrium approximation” (no-regret workers) and from the “value/policy network training”.
Weaknesses:
By restricting the MCTS procedure to depth 1, the algorithm might miss deeper strategic foresight. While the neural value function can somewhat compensate, there may be domains where deeper lookahead is crucial.
Other Comments Or Suggestions: A brief discussion contrasting Coarse Correlated Equilibria with Correlated Equilibria and (in zero-sum) Nash Equilibria would be beneficial for readers less familiar with these solution concepts. The paper touches on it, but a succinct subsection might be useful.
Questions For Authors: How does your method adapt if the game includes partial observability of the environment state itself (not just hidden opponent actions)? Do you foresee major changes to the design of the no-regret worker or the neural network architecture in more general imperfect-information settings?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our submission. Upon reviewing your comments, we believe there may have been a misunderstanding, as the points raised do not seem to be relevant to our paper.
We appreciate the time and effort you have put into your review and are happy to provide further clarification or address any specific questions if needed. | null | null | null | null | null | null |
Improved Coresets for Vertical Federated Learning: Regularized Linear and Logistic Regressions | Accept (poster) | Summary: Coresets serve as a compact summary of training data, offering an efficient method for reducing data processing and storage complexity during training. In the context of vertical federated learning (VFL), where different clients hold distinct data features, coresets help reduce communication complexity.
This work introduces a coreset construction method for regularized logistic regression in both centralized and VFL settings. Additionally, the authors improve coreset size efficiency for regularized linear regression in VFL, eliminating its dependence on certain data properties inherent to the VFL framework. These improvements stem from novel coreset construction algorithms that account for reduced model complexity due to regularization.
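For completeness, the standard $\varepsilon$-coreset guarantee that such constructions target (our addition, stated in the usual form rather than quoted from the paper) is: a weighted subset $(C, w)$ is an $\varepsilon$-coreset for a loss $f$ over a dataset $Z$ if, for every model $\theta$,

```latex
\Bigl| \sum_{i \in C} w_i \, f(\theta; z_i) \;-\; \sum_{z \in Z} f(\theta; z) \Bigr|
\;\le\; \varepsilon \sum_{z \in Z} f(\theta; z),
```

so training on the coreset preserves every model's loss up to a $(1 \pm \varepsilon)$ factor.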
Extensive empirical evaluations support the theoretical findings, demonstrating the effectiveness of the proposed coresets. The performance is further validated by comparing models trained on full datasets versus those trained on the coresets, showcasing their practical utility.
Claims And Evidence: Almost all claims in the paper are well-supported by **proofs or references**, ensuring a strong theoretical foundation.
However, certain statements, while **technically correct**, may lack sufficient clarity for a **non-expert audience**. For example, the statement:
*"This is the most computationally expensive operation, which takes \(O(nd^2_j)\) time."* (Lines 355–357, right side, Page 7)
Although this statement provides the computational complexity, it does not explain **why** this operation is the most expensive or how the complexity arises from the underlying mathematical formulation. A brief explanation of the **source of this complexity**, its practical implications, and how it compares to other computational steps in the algorithm would enhance readability and understanding.
Providing additional context in such cases would make the paper more accessible to a broader audience, including those less familiar with the specific computational details.
Methods And Evaluation Criteria: In this paper, the authors provide theoretical guarantees alongside experimental results for regularized logistic and linear regression. This approach is well-justified, as it ensures alignment between theoretical insights and practical validation.
Theoretical Claims: All theoretical claims are well-formulated, and the definitions, lemmas, and theorems are clearly stated. Additionally, the other theoretical derivations are presented in a clear manner. Unfortunately, as I am not an expert in this field, I cannot verify the proofs. However, at a high level, the statements appear to be sound and make sense.
Experimental Designs Or Analyses: The authors conducted experiments on both regularized logistic regression (VRLog) and regularized linear regression (VRLR) using three datasets: Credit Card (classification), Financial (regression), and Blog Feedback (regression). Each dataset was partitioned into training and testing sets, with the training data further distributed among three clients.
Maintaining the VFL sampling technique from Algorithm 1, the authors compared their sampling method with various other techniques. Once a sample was drawn using one of the sampling methods, they trained a model with a regularization parameter. For VRLog, they evaluated training loss, test accuracy, model closeness, and training time. For VRLR, they reported test RMSE and model closeness.
For each sampling method and sample size, the authors repeated the experiment 10 times and reported the median values of the results.
While I appreciate the experimental results presented on these datasets, it would be beneficial to explore additional datasets to further strengthen the evaluation and generalizability of the proposed methods. A more diverse selection of datasets could provide deeper insights into the effectiveness and limitations of the approach across different data distributions and problem settings.
The inclusion of logistic regression and linear regression experiments is particularly valuable, as they align well with the theoretical framework and provide a solid foundation for validating the theoretical findings. However, extending the experiments to include deep learning tasks could offer additional practical insights. Given the increasing importance of deep learning in real-world applications, evaluating the proposed methods on more complex models could help assess their scalability, robustness, and applicability beyond the theoretical setting.
Supplementary Material: I briefly reviewed the supplementary material, including the proofs and appendix. However, as I am not an expert in vertical federated learning, I cannot guarantee the correctness of the provided results.
Relation To Broader Scientific Literature: This work is related to both vertical federated learning and the broader federated learning literature. However, as I am not an expert in this specific area, I am unable to provide a more detailed assessment of the underlying ideas.
Essential References Not Discussed: The paper discusses federated learning methods designed to address client heterogeneity, such as SCAFFOLD. However, it would also be valuable to mention other relevant approaches, including ProxSkip, FedLin, and DANE, which have been proposed to mitigate the effects of heterogeneity and improve convergence in federated learning settings. A more comprehensive discussion of these methods could provide a broader perspective on existing solutions and how they relate to the proposed approach.
Mishchenko, Konstantin, et al. "Proxskip: Yes! local gradient steps provably lead to communication acceleration! finally!." International Conference on Machine Learning. PMLR, 2022.
Mitra, Aritra, et al. "Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients." Advances in Neural Information Processing Systems 34 (2021): 14606-14619.
Jiang, Xiaowen, Anton Rodomanov, and Sebastian U. Stich. "Federated Optimization with Doubly Regularized Drift Correction." International Conference on Machine Learning. PMLR, 2024.
Other Strengths And Weaknesses: The paper is generally well-written and easy to follow. However, for non-experts, the theoretical section may be challenging to comprehend, even at a high level. Providing additional explanations or intuitive insights could enhance accessibility for a broader audience.
Other Comments Or Suggestions: Please review and address the issues raised in the previous sections.
Questions For Authors: Would it be possible to empirically evaluate the proposed methods on deep learning tasks to assess their practical effectiveness?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking time to read our submission and providing useful remarks. Below we address your concerns
> However, certain statements, while technically correct, may lack sufficient clarity for a non-expert audience. For example, the statement: "This is the most computationally expensive operation, which takes $(O(nd^2_j))$ time." (Lines 355–357, right side, Page 7). Although this statement provides the computational complexity, it does not explain why this operation is the most expensive or how the complexity arises from the underlying mathematical formulation. A brief explanation of the source of this complexity, its practical implications, and how it compares to other computational steps in the algorithm would enhance readability and understanding.
The paper mentions that the running time of computing $g^{(j)}$ in line 6 of Algorithm 4 is $O(nd_j^{2})$ for client $j \in [T]$ holding the dataset $Z^{(j)}$. This is because, for $p=2$, which is the case for VRLR, the matrix $W$ is always the identity $I_{n}$, and hence $g^{(j)}$ can be computed using the orthonormal column basis of $\hat{Z}^{(j)}$, since $g_i^{(j)} = (x_i^{(j)})^{T}((X^{(j)})^{T}X^{(j)}+\lambda I_{d_j})^{-1}(x_i^{(j)})= (x_i^{(j)})^{T}((\hat{Z}^{(j)})^{T}\hat{Z}^{(j)})^{-1}(x_i^{(j)})= ||u_i^{(j)}||^2$, where $u_i^{(j)}$ is the $i^{th}$ row of the orthonormal column basis of $\hat{Z}^{(j)}$. Recall that in Huang et al., 2022 the score for the same point is $(x_i^{(j)})^{T}((X^{(j)})^{T}X^{(j)})^{-1}(x_i^{(j)})$, which is greater than our score because $((X^{(j)})^{T}X^{(j)}) \prec ((X^{(j)})^{T}X^{(j)}+\lambda I_{d_j})$. Hence the total sensitivity is smaller, and thereby our coreset size is smaller than that of Huang et al., 2022 for the VRLR problem.
One can compute the SVD of $\hat{Z}^{(j)}$ to obtain its orthonormal column basis. This takes $O(nd_{j}^{2})$ time. Notice that these scores are part of the input to the coreset construction Algorithm 1, which runs in less than $O(nd_{j}^{2})$ time. Once the scores have been computed, the rest of the algorithm also takes less than $O(nd_{j}^{2})$ time: line 1 takes $O(n)$ time to compute the sum of the sensitivity scores; line 2 takes $O(T)$ time at the server; in line 3, every client selects $\lceil m/T\rceil$ indices from $[n]$ according to the defined sampling probability; finally, the selected indices are shared with the server, which assigns global weights to them, again taking $O(n)$ time.
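As an illustration only (the variable names and toy data are ours, not from the paper), the two steps just described, computing the regularized scores from an orthonormal basis of the augmented matrix and then importance-sampling a weighted coreset, can be sketched in numpy as follows:

```python
import numpy as np

def ridge_scores(X, lam):
    """Regularized sensitivity scores x_i^T (X^T X + lam I)^{-1} x_i,
    computed as squared row norms of an orthonormal column basis of the
    augmented matrix Z_hat = [X; sqrt(lam) I]; the factorization step
    dominates the cost at O(n d^2)."""
    n, d = X.shape
    Z_hat = np.vstack([X, np.sqrt(lam) * np.eye(d)])
    Q, _ = np.linalg.qr(Z_hat)            # orthonormal column basis of Z_hat
    return np.sum(Q[:n] ** 2, axis=1)     # rows corresponding to the data points

def sample_coreset(scores, m, seed=0):
    """Sample m indices with probability p_i = s_i / sum(s); the standard
    importance weight w_i = 1 / (m p_i) makes the weighted loss unbiased."""
    rng = np.random.default_rng(seed)
    p = scores / scores.sum()
    idx = rng.choice(len(scores), size=m, p=p)
    return idx, 1.0 / (m * p[idx])

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
s = ridge_scores(X, lam=10.0)    # pointwise below the lam = 0 leverage scores
idx, w = sample_coreset(s, m=50)
```

With `lam = 0` the same routine returns the ordinary leverage scores, which is exactly why the regularized scores, and hence the total sensitivity, can only be smaller.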
> While I appreciate the experimental results presented on these datasets, it would be beneficial to explore additional datasets to further strengthen the evaluation and generalizability of the proposed methods. A more diverse selection of datasets could provide deeper insights into the effectiveness and limitations of the approach across different data distributions and problem settings.
Thanks for your feedback. The guarantees that we provide here, as pointed out above, provably reduces the coreset size. The datasets considered are standard for this setting.
> However, extending the experiments to include deep learning tasks could offer additional practical insights. Given the increasing importance of deep learning in real-world applications, evaluating the proposed methods on more complex models could help assess their scalability, robustness, and applicability beyond the theoretical setting.
Since our theoretical guarantees are only specific to regularized linear and logistic regression hence, the experimental evaluations were only conducted in that domain. We did not conduct any experiments on any deeper architecture as constructing the coresets for a regularized objective would require a potentially different approach.
> The paper discusses federated learning methods designed to address client heterogeneity, such as SCAFFOLD. However, it would also be valuable to mention other relevant approaches, including ProxSkip, FedLin, and DANE, which have been proposed to mitigate ... A more comprehensive discussion of these methods could provide a broader perspective on existing solutions and how they relate to the proposed approach.
Indeed the literature on federated learning is now mature with methods to mitigate heterogeneity in horizontal FL (the ones mentioned by you) to federated prototype learning (FedProto by Tan et al. AAAI 22), to federated continual learning (Wang et al. CVPR 2024) to so many other settings and problems. It is almost impractical to include them in the related work of any non-survey paper. We just touched upon the aspect of heterogeneity that lies at the core of Federated ML. We included the most relevant related works given our intent and purpose of reducing the communication complexity in Vertical FL via an improved coreset construction method. We will further include more references in the final version, where additional page space will be available.
We thank you again for your time and we will be happy to provide further clarifications if any.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their responses!
I appreciate the clarifications and look forward to the promised adjustments being incorporated. Given the quality of the work, I will maintain my initial positive score.
Best regards,
Reviewer
---
Reply to Comment 1.1.1:
Comment: Thank you for your acknowledgment. We will definitely include the promised texts in the CR version.
Best regards,
Authors of the submission. | Summary: The paper introduces a coreset construction algorithms for Vertical Federated Learning (VFL), focusing on regularized logistic and linear regression (ridge regression). The authors present algorithms to efficiently construct coresets that significantly reduce communication complexity in VFL, essential due to clients possessing different subsets of the feature space. Their primary contributions include:
1. A novel algorithm for regularized logistic regression coreset construction in centralized and VFL settings, employing ℓ₁ Lewis weights.
2. Improved coreset construction for vertical regularized linear regression (ridge regression), reducing coreset sizes while eliminating dependency on a specific data-related property previously considered necessary.
3. A detailed analysis demonstrating that increasing regularization parameters reduces model complexity and consequently reduces the required coreset size.
4. Empirical validation that demonstrates superior performance of their coresets compared to existing methods, both in terms of accuracy and computational efficiency.
The experiments highlight significant speed-ups (up to 100x) and comparable accuracy to models trained on complete datasets, validating their theoretical claims.
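Since the logistic-regression construction is built on ℓ₁ Lewis weights, a hedged sketch of the standard fixed-point iteration for computing them (Cohen & Peng, 2015) may be useful; the code below is our illustration, not the authors' algorithm:

```python
import numpy as np

def l1_lewis_weights(A, n_iter=100):
    """Standard fixed-point iteration for l1 Lewis weights:
        w_i <- sqrt( a_i^T (A^T diag(1/w) A)^{-1} a_i ),
    which contracts for l_p with p < 4 (Cohen & Peng, 2015). For a
    full-rank A, the fixed-point weights sum to d."""
    n, d = A.shape
    w = np.full(n, d / n)              # any positive initialization works
    for _ in range(n_iter):
        M = A.T @ (A / w[:, None])     # A^T diag(1/w) A
        Minv = np.linalg.inv(M)
        w = np.sqrt(np.einsum("ij,jk,ik->i", A, Minv, A))
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 4))
w = l1_lewis_weights(A)                # converges geometrically; sum(w) = d = 4
```

Sampling rows proportionally to these weights (with the usual inverse-probability reweighting) is the ℓ₁ analogue of leverage-score sampling used for the logistic coreset.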
Claims And Evidence: The paper clearly demonstrates:
1. The theoretical guarantees of coresets constructed with their proposed algorithms.
2. Reduction in coreset size directly correlated with increasing regularization parameter λ.
3. Empirical evidence across multiple datasets validates the theoretical results and effectiveness compared to competing methods.
Methods And Evaluation Criteria: The proposed methods (coreset construction algorithms using Lewis weights and sensitivity scores) are well-suited to the stated problems (regularized logistic and linear regression) within the VFL framework. The evaluation criteria, including comparison against multiple benchmarks (e.g., uniform sampling, leverage scores, previous state-of-the-art methods), are appropriate and standard for the problem domain.
Theoretical Claims: The paper provides several theoretical results, including sensitivity score bounds (Lemma 1, Lemma 7), Lewis weights properties (Theorem 2, Lemma 3, Lemma 4), and coreset size bounds (Theorems 5 and 6).
Did not thoroughly check the proofs.
Experimental Designs Or Analyses: Authors validate their coresets across multiple datasets (Credit Card, Financial, Blog Feedback), comparing their performance on key metrics (training loss, test accuracy, RMSE, model closeness, training time) against other baseline methods.
Supplementary Material: Yes, supplementary materials (appendix) containing detailed proofs of lemmas, theorems, and algorithmic details. These provide clarity, detailed mathematical derivations, and additional experiments supporting the paper's theoretical and practical contributions.
I do not thoroughly review proofs.
Relation To Broader Scientific Literature: This paper establishes itself within the existing body of work on coresets and vertical federated learning, notably building upon prior literature such as Huang et al. (2022) and leveraging important foundational concepts from Lewis-weight and sensitivity sampling.
Essential References Not Discussed: The paper has comprehensively cited the core relevant literature and key foundational works.
Other Strengths And Weaknesses: Strengths:
------------
1. Strong theoretical contributions and well-motivated problem.
2. Relevant algorithmic contributions for the vertical federated learning setup.
Weaknesses:
----------------
1. Limited experimental validation on diverse or large-scale real-world datasets.
2. A potential computational bottleneck (e.g., calculation of Lewis weights) may restrict the practicality of the proposed algorithms in certain very large-scale federated learning scenarios.
Other Comments Or Suggestions: Careful proofreading is suggested for typographical and minor grammatical mistakes in the introduction and methods sections.
Questions For Authors: 1. In your experiments, how did you select the regularization parameters (λ)? Did you use standard cross-validation or a more specialized method tailored to VFL settings?
2. Why have you used accuracy instead of F1-Score?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are thankful to the reviewer for taking the time to read our submission and providing a detailed review.
> Limited experimental validation on diverse or large-scale real-world datasets.
In the literature, it is standard to perform experiments on publicly available datasets to support the theoretical claims of federated learning algorithms. Large-scale experiments are useful in their merit to validate the distributed (multi-node-multi-GPU) machine learning algorithms, where we need to capture system-related aspects. Moreover, for a **vertical** federated learning algorithm, an experimental validation over a dataset with 280 features (blog feedback) or, for that matter, one with 500,000 samples (year-prediction) is reasonably large enough to capture the core claims of the algorithm.
> A potential computational bottleneck (e.g., calculation of Lewis weights) may restrict the practicality of the proposed algorithms in certain very large-scale federated learning scenarios.
This concern may not be completely well founded, as Lewis weights can be approximated using online row sampling (Cohen and Peng, 2015) in $\tilde{O}(nd^{2})$ time, where $n$ is the number of points in $\mathbb{R}^{d}$. This is only larger by a $\log(n)$ factor than the running time for VRLR. For this reason, Corollary 1 states the running time of our algorithm as $\tilde{O}(nd^{2})$. We will add a remark after this corollary to clarify this.
> In your experiments, how did you select the regularization parameters ($\lambda$)? Did you use standard cross-validation or a more specialized method tailored to VFL settings?
Though cross-validation is used for selecting $\lambda$ in centralized settings, it is not a standard approach for hyperparameter selection in federated learning, where it is unclear where -- on a client or the server -- to cross-validate. Please note that it is actually a critical decision problem for a client to even accept a federated model that it contributed to training. We did a general grid search for $\lambda$. See "Federated hyperparameter tuning: Challenges, baselines, and connections to weight-sharing, Khodak et al., NeurIPS 2021". We will specify this in the final version.
> Why have you used accuracy instead of F1-Score?
In our experiments, we showcased losses (training/tests) because our theorem ensured theoretical guarantees on these parameters. Apart from these, we have compared the performance in terms of balanced accuracy, model parameters, and improvement in the training time. Following your suggestion, we did perform experiments to compare the performance on F1-Score. Even on that metric, the proposed algorithm outperforms the competitors. See the table below. We will include more extensive results in the supplementary material in the camera ready version of the paper.
## 1. Credit Card Dataset
### Sample Size: 500
| Method | Train F1 | Test F1 |
| -------- | -------- | ------- |
| Uniform | 0.8192 | 0.8185 |
| HLSZ | 0.8704 | 0.8712 |
| Lewis | 0.9220 | 0.9230 |
| AugLewis | **0.9330** | **0.9343** |
### Sample Size: 2500
| Method | Train F1 | Test F1 |
| -------- | -------- | ------- |
| Uniform | 0.8723 | 0.8731 |
| HLSZ | 0.9071 | 0.9078 |
| Lewis | 0.9304 | 0.9315 |
| AugLewis | **0.9319** | **0.9331** |
## 2. KDD CUP Dataset
### Sample Size: 50
| Method | Train F1 | Test F1 |
| -------- | -------- | ------- |
| Uniform | 0.3419 | 0.3412 |
| HLSZ | 0.4717 | 0.4702 |
| Lewis | 0.8715 | 0.8688 |
| AugLewis | **0.8801** | **0.8772** |
### Sample Size: 2500
| Method | Train F1 | Test F1 |
| -------- | -------- | ------- |
| Uniform | 0.9184 | 0.9155 |
| HLSZ | 0.9685 | 0.9659 |
| Lewis | 0.9712 | 0.9684 |
| AugLewis | **0.9713** | **0.9685** |
We thank you again for your time and we will be happy to provide further clarifications if any. | Summary: This paper studies regularized linear regression and regularized logistic regressions in the vertical federated learning (VFL) setting, where clients store different data features. The goal is to reduce communication complexity. The paper introduces coreset algorithms for these two problems and achieves improved coreset size.
Claims And Evidence: No
For example, this paper claims that their coreset size improves upon that of [Huang et al., 2022] for ridge linear regression. However, they do not provide an explicit comparison between their coreset size provided in Theorem 6 and that of [Huang et al., 2022]. It is unclear why their coreset size is always smaller.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes.
Check the proof of Lemma 1, which is correct.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, Section A
Relation To Broader Scientific Literature: Yes, it relates to the broader area of federated learning and data compression.
Essential References Not Discussed: Yes
The paper heavily uses Lewis weights to compute the sensitivities, which have been studied extensively. However, they do not clearly introduce or compare with the use of Lewis weights in the literature. I list some potential papers below:
- William B. Johnson and Gideon Schechtman. Finite dimensional subspaces of $\ell_p$. 2001.
- Varadarajan, Kasturi R. and Xin Xiao. On the Sensitivity of Shape Fitting Problems. 2012.
- Jambulapati, Arun, James R. Lee, Y. Liu and Aaron Sidford. Sparsifying Sums of Norms. 2023.
Other Strengths And Weaknesses: Weaknesses:
The writing is not well structured.
- The introduction section introduces several dense math notations but does not provide the motivations for the problem.
- The novelty of this paper compared to the literature has not been clearly illustrated. For instance, Algorithm 1 looks like Algorithm 1 in [Huang et al., 2022], while there is no discussion of the difference.
Other Comments Or Suggestions: - It is strange to heavily emphasize that the bound for the total sensitivity is tight. If the coreset size is tight, that is interesting; for the sensitivity, a remark may be enough.
Questions For Authors: - Can you provide a concrete dataset example and compare the explicit coreset sizes between [Huang et al., 2022] and your result? What is the exact size improvement?
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank reviewer Br26 for the time taken to read our submission and the provided feedback. We address the concerns.
> The introduction section introduces several dense math notations but does not provide the motivations for the problem.
Regularized linear regression and regularized logistic regression are standard basic problems in the ML community. Federated learning is now well recognized for its ability to address data privacy, security, and scalability. We stated the motivation of the problem in lines 51-54 (right); we will highlight that in the final version where an additional page will be available.
>The novelty of this paper compared to the literature has not been clearly illustrated. For instance, Algo 1 looks like Algo 1 in [Huang et al., 2022], while there is no discussion of the difference.
In [Huang et al., 2022] the coreset size for VRLR is a function of the rank of the input data. In comparison, our coreset size depends on the statistical dimension of the data, which is strictly smaller than the rank for any regularization parameter $\lambda>0$. Algorithm 1 in the paper is included only for completeness and is not part of the contribution claims. The main novelties of the paper are highlighted in the contribution claims: Algorithms 2 and 4 are our contributions, whereas Algorithm 3 is included for completeness and stated so. We discuss the improvement of coreset size in VRLR in the rebuttal to reviewer YB3Z.
>It is unclear why their coreset size is always smaller.
> Can you provide a concrete dataset example and compare the explicit coreset sizes between [Huang et al., 2022] and your result? What is the exact size improvement?
The coreset size depends on factors such as the total sensitivity, the approximation error $\epsilon$, the failure probability, and the pseudo-dimension of the problem. The most standard experimental evaluation is to fix the coreset size and measure the resulting $\epsilon$, loss, or accuracy, which tend to improve as the coreset size increases. Comparing exact sizes between various sampling methods with one of the above parameters fixed is impractical. Notice that in plot 2, for a fixed coreset size, our algorithm has smaller test and train RMSE and also a smaller $\epsilon$ compared to the sampling methods from Huang et al. This clearly shows that with a fixed sample size, our coresets (LEV) perform better than the existing methods. We have conducted further experiments on other real datasets showing similar improvements, included in the appendix.
The exact difference between the coreset size can only be described theoretically. Let us exemplify this. For simplicity assume the number of clients to be 1, which can be easily extended to a setup with multiple clients. Let $A$ be a dataset with all $n$ points in $\mathbb{R}^{d}$ such that $n/d = c$ where $c$ is a positive integer. Again, for simplicity, take its response vector $b$ to be the zero vector. Let $A = \begin{bmatrix} I \\\ \vdots \\\ I \end{bmatrix}$ where $I$ is just identity matrix in $\mathbb{R}^{d}$. In Huang et al., the sensitivity score for every point is $1/c$. Hence, the total sensitivity for $n$ points is $n/c = d$. Notice that it is irrespective of the fact if $\lambda$ is 0 or a positive scalar. However, in section 4 [line 185 right] we have clearly motivated why a smaller coreset size is expected for the case when $\lambda > 0$. So, in such a case, our sensitivity scores are $1/(c+\lambda)$. Hence, the total sensitivity score is $n/(c+\lambda) < n/c = d$. In fact, for higher values of $\lambda$, the total sensitivity score could be significantly smaller. So, theoretically, the improvement in the coreset size is at least by a factor of $c/(c+\lambda)$. We will further underscore this example in the final version.
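For concreteness, the computation in this example can be checked numerically. The sketch below is an illustrative aside (with arbitrary values $c=4$, $d=3$, $\lambda=2$, not code from the submission): it evaluates the row-wise quadratic-form scores for the stacked-identity matrix and confirms the claimed totals $n/c = d$ and $n/(c+\lambda)$.

```python
import numpy as np

# Hypothetical illustration of the example above: A stacks c copies of the
# d-dimensional identity, so n = c * d rows, and the response vector is zero.
c, d, lam = 4, 3, 2.0
A = np.vstack([np.eye(d) for _ in range(c)])  # shape (c*d, d)

# Unregularized score of row i: a_i^T (A^T A)^{-1} a_i.
# Here A^T A = c * I, so each score is 1/c.
G = A.T @ A
scores_plain = np.einsum("ij,jk,ik->i", A, np.linalg.inv(G), A)

# Regularized score with ridge parameter lambda:
# a_i^T (A^T A + lambda * I)^{-1} a_i = 1/(c + lambda) for this A.
scores_ridge = np.einsum("ij,jk,ik->i", A, np.linalg.inv(G + lam * np.eye(d)), A)

print(scores_plain.sum())  # total sensitivity n/c = d = 3.0
print(scores_ridge.sum())  # n/(c + lambda) = 12/6 = 2.0, strictly below d
```

The larger $\lambda$ is, the smaller the regularized total becomes, matching the claimed improvement factor of $c/(c+\lambda)$.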
> It is strange to heavily emphasize that the bound for the total sensitivity is tight. If the coreset size is tight, it is interesting. For the sensitivity, a remark may be enough.
The coreset size is proportional to the total sensitivity (also mentioned above) which is the summation of individual sensitivity scores. Having a tighter sensitivity score gives a tighter total sensitivity, further resulting in a coreset with a tighter size. We will add a remark for this.
> The paper heavily uses Lewis weights to compute the sensitivities, which have been studied extensively. However, they do not introduce or compare with the use of Lewis weights in the literature clearly.
While there are various methods for computing Lewis weights, for our purpose it suffices to compute them with any of the known methods. We have used the Lewis weights computation from Cohen and Peng, "Lp Row Sampling by Lewis Weights" (2015), and presented it as Algorithm 3 for completeness. We appreciate the suggestions; we will include the appropriate citations in the camera-ready version.
We hope we have addressed your concerns and will be happy to provide further clarifications if needed.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It addresses my concern about the size comparison with Huang et al. 2022. I increased my score.
However, I still don't like the writing style of the paper, which is summarized below.
- Though the authors claim that there is a short paragraph for motivation, it is still unclear. There is no reference supporting the importance of VFL or VFL coreset.
- Though the main contributions do not include Algorithm 1 and the way of use for Lewis weight method. It is important to clarify that these two are not new and are from the literature. A detailed discussion is necessary to avoid overclaiming the contribution.
Given that, I think the paper can be improved significantly in another round.
---
Reply to Comment 1.1.1:
Comment: We thank reviewer Br26 for acknowledging our rebuttal and positively changing the score. Below, we further address the concerns raised.
>Though the authors claim that there is a short paragraph for motivation, it is still unclear.
We are happy to talk about this more. As stated previously, the motivation for the problem was explicitly made clear, not only in a short paragraph: "lines 51-54 (right)," but even in an extended and natural way throughout the paper, as shown below.
* At the beginning of the abstract, we wrote that **Coreset, as a summary of training data, offers an efficient approach for reducing data processing and storage complexity during training. In the emerging vertical federated learning (VFL) setting, where scattered clients store different data features, it directly reduces communication complexity. In this work, we introduce coresets construction for regularized logistic regression both in centralized and VFL settings.**
* We indeed consciously kept the flow as organic as possible.
- After the abstract, we formally defined the problems VRLog and VRLR in definitions 1 and 2, respectively, in the Introduction section's first paragraphs.
- Having done that, we emphasized the usefulness of coreset in lines 51-54 right and 55-56 left.
- Following this, we formally defined the required guarantees from coresets in definitions 3 and 4. By this point, we are done discussing the usefulness of coresets for our problem.
- Next, in section 2.3 under VFL (Line 134-144 right), we explain how a model gets trained in this setup.
- Afterward, in lines 145-154 right, we emphasize that communication complexity is a core challenge for this problem, which relates to definitions 1 and 2. We formalize the results in corollary 1 and theorem 6 through coreset.
- We believe that by this point, a reader will have sufficient exposure to the problem, its background, and motivation.
* More pointedly, please notice that we have precisely defined the problems to be solved, with their motivation, in paragraphs P1 and P2, specifying the challenges that one needs to address while constructing coresets for our problems VRLR and VRLog in the VFL setting (see lines 169-190, right).
>There is no reference supporting the importance of VFL or VFL coreset.
We are unsure what "reference supporting the importance of VFL or VFL coreset" means. We have sufficiently discussed VFL. We cited the relevant works, including a recent survey (Liu et al. 2024). In lines 175-198 left, we point to the literature where coresets were used in the federated learning setup. The main related work to our submission is Huang et al. 2022, which we evidently improve upon. We would be thankful if the reviewer could point us to more relevant papers.
>Though the main contributions do not include Algo 1 and the way of use for Lewis weight method. It is important to clarify that these two are not new and are from the literature. A detailed discussion is necessary to avoid overclaiming the contribution.
Very humbly, we disagree with the reading that we are claiming or overclaiming anything related to Lewis weights or Algorithm 1. We have written clearly, right above Algorithm 3 (line 301 right), that the Lewis weight computation is included for completeness. As committed earlier in our rebuttal, we will write the same for Algorithm 1 in the final version. We humbly state that such a statement is self-contained, and a further discussion might appear redundant.
>However, I still don't like the writing style of the paper
In general, other reviewers found the paper easy to follow, so we would appreciate concrete pointers toward a better writing style. It is standard to rigorously introduce the problem at the beginning, and our paper's structure aligns with this. For examples, see these papers:
- Amsel et al. Nearly optimal approximation of matrix functions by the Lanczos method. NeurIPS 24
- Musco et al. Randomized block krylov methods for stronger and faster approximate singular value decomposition. NeurIPS 15
- Kacham et al. Sketching algorithms and lower bounds for ridge regression. ICML 22
We will consider writing an intuitive introduction before the formal description. However, this may not structurally change the paper. We will be thankful for a consensus suggestion by all the esteemed reviewers.
In summary, we humbly state that the camera-ready version of a conference paper has one extra page to expand on the submitted text, including addressing the reviewers' queries and the commitments made by the authors, which we will do. The submission includes the main contributions under limited space. We tried to clearly describe the core lemmas, theorems, and experiments in detail for the benefit of the readers, and kept the flow of introduction of concepts natural.
As there are no further queries, we humbly request the reviewer to consider reevaluating our submission in light of the above discussion during the AC-Reviewer discussion period. | null | null | null | null | null | null | null | null |
Adaptive Partitioning Schemes for Optimistic Optimization | Accept (poster) | Summary: This paper proposes an adaptive partitioning scheme that divides the search domain into a hierarchical subspace tree to reduce the search space and enhance optimization performance. The subspace matrix is learned and updated as a hidden layer of a neural network surrogate model. Experimental results on several synthetic problems and LLM quantization problems show that the proposed method surpasses the compared baselines.
## update after rebuttal
I keep my score. My final assessment of this paper is 2.
Claims And Evidence: The authors demonstrate the effectiveness of the proposed methods through theoretical analysis and comparison experiments on synthetic and LLM quantization problems.
Methods And Evaluation Criteria: The overall method is to learn a subspace partitioning matrix by fitting a neural network surrogate model and to sample solutions from the partitioned subspace that contains the optimal solution. The authors prove the correctness of the proposed method through theoretical analysis.
Theoretical Claims: The authors provide the detailed proof of the proposed theorems and lemmas in the paper and Appendix.
Experimental Designs Or Analyses: 1) For each problem instance, the surrogate network should be trained from scratch to obtain the partitioning matrix; however, I did not find the surrogate training settings for the synthetic problems. The time efficiency of the surrogate training on synthetic problems is also not reported.
2) The number of layers M for the LLMs in the experiments is not reported.
3) The test synthetic problems included in the experiments are relatively simple, which are mostly unimodal or low conditioning.
Supplementary Material: Appendix (Section) 7 lists the notations. Appendix (Sections) 8-9 and 11 present the detailed proofs of the theorems and lemmas proposed in the paper, while Appendix (Section) 10 shows the numerical experiment validating the effectiveness of the lookahead direction selection. Appendix (Section) 11 reports the additional experimental settings and results on the synthetic problems and LLM quantization problems.
Relation To Broader Scientific Literature: 1) Black-box Optimization: The proposed methods are developed to solving black-box optimization problems.
2) Surrogate Model: The partitioning matrix is learned by fitting a surrogate model.
3) Large Language Model: In the experiment, the proposed method is applied to solve the LLM quantization problems.
Essential References Not Discussed: I didn’t see any essential references not discussed.
Other Strengths And Weaknesses: Weakness: the tested synthetic problems are relatively simple, being mostly unimodal or of low conditioning. For simple or unimodal problems, the surrogate model can be fitted more quickly, and the subspace containing the optimum can also be recognized more easily. Testing the method on multi-modal and highly conditioned problems, such as the COCO BBOB or CEC series benchmarks, would make the experiments more convincing.
Other Comments Or Suggestions: None.
Questions For Authors: 1) In Appendix 8, Proposition 8.1, what is the reason of using the top m right singular vectors of the p vectors in the learned weight matrix of the hidden layer as the partitioning scheme instead of setting p=m?
2) The rows in the estimated A may not be orthonormal, will it affect the optimization performance?
3) How will the values of the number of dimensions m and integer c in Algorithm 2 affect the performance? Or how to choose a proper value for m and integer c when optimizing an unseen problem?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We respond to the reviewer's questions below.
> For each problem instance, the surrogate network should be trained from scratch to obtain the partitioning matrix
We trained the neural network using the Ray package for hyperparameter tuning. The search space included hidden layer sizes (500, 1000, 2000, 3000), learning rates (log-uniform from $1 \times 10^{-4}$ to $1 \times 10^{-1}$), weight decay (log-uniform from $1 \times 10^{-2}$ to $1 \times 10^{-1}$), and learning rate step decay with gamma values (uniform from 0.9 to 0.99) and step sizes (500, 1000, 2000). Early stopping was used to prevent overfitting. We will add these details in our revised draft.
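The search space described above could be expressed as a Ray Tune configuration roughly as follows (an illustrative sketch only; the authors' actual tuning script, trainable, and parameter names may differ):

```python
from ray import tune

# Hypothetical search-space definition mirroring the ranges stated above.
search_space = {
    "hidden_size": tune.choice([500, 1000, 2000, 3000]),
    "lr": tune.loguniform(1e-4, 1e-1),            # learning rate
    "weight_decay": tune.loguniform(1e-2, 1e-1),
    "gamma": tune.uniform(0.9, 0.99),             # LR step-decay factor
    "step_size": tune.choice([500, 1000, 2000]),  # LR decay interval
}
```

This dictionary would be passed to `tune.Tuner` together with the surrogate-training function and an early-stopping scheduler.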
> The number of layers M for the LLMs in the experiments is not reported.
The number of layers $M$ in our experiment is 24. We will clarify this in the revised draft.
> The test synthetic problems included in the experiments are relatively simple, which are mostly unimodal or low conditioning.
> Weakness: the test synthetic problems are relatively simple. The tested problems are mostly unimodal or low conditioning.
We thank the reviewer for the question! Our rationale for choosing these test functions follows [1]: we chose one test from each category to cover different types of optimization challenges. The Sphere function belongs to the "separable functions" class, the Ellipsoid function to the "functions with low or moderate conditioning" class, the Branin function to the "highly non-linear" class, Rastrigin to the "multi-modal functions with adequate global structure" class, and the Sum of Different Powers function to the "unimodal functions with high conditioning" class. Also, RESOO [2] was evaluated using the Branin and Rosenbrock functions, while HesBO [3] was tested on the Branin, Hartmann-6, Rosenbrock, and Styblinski-Tang functions. Therefore, we included the results of all the baselines on these functions.
[1] Nikolaus Hansen, Steffen Finck, Raymond Ros, Anne Auger. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. [Research Report](https://inria.hal.science/inria-00362633v2) RR-6829, INRIA. 2009.
[2] Qian, Hong, and Yang Yu. "Scaling simultaneous optimistic optimization for high-dimensional non-convex functions with low effective dimensions." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 30. No. 1. 2016.
[3] Nayebi, A., Munteanu, A. &; Poloczek, M.. (2019). ["A Framework for Bayesian Optimization in Embedded Subspaces."](https://proceedings.mlr.press/v97/nayebi19a.html) Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4752-4761.
> In Appendix 8, Proposition 8.1, what is the reason of using the top m right singular vectors of the p vectors in the learned weight matrix
Proposition 8.1 is valid for any $p$, and we could set $p = m$.
> The rows in the estimated A may not be orthonormal, will it affect the optimization performance?
A is estimated from the SVD of neural network weights, hence the rows of estimated A are orthonormal.
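This property can be illustrated with a small numerical sketch (a hypothetical example with arbitrary sizes, not the submission's code): the right singular vectors returned by the SVD are row-orthonormal, so taking the top $m$ rows of $V^T$ as the estimated $\hat{\mathbf{A}}$ preserves orthonormality.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))  # stand-in for learned hidden-layer weights

# Vt from the SVD has orthonormal rows; the top-m rows form the estimate of A.
_, _, Vt = np.linalg.svd(W)
m = 2
A_hat = Vt[:m]

print(np.allclose(A_hat @ A_hat.T, np.eye(m)))  # True: rows are orthonormal
```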
> How will the values of the number of dimensions m and integer c in Algorithm 2 affect the performance? Or how to choose a proper value for m and integer c
In practice, we can choose the value of $m$ to explain a desired percentage (such as 95\%) of the total variation in the SVD step calculating $\hat{\mathbf{A}}$. $c$ is a hyperparameter, chosen as 5 in our experiments. A larger $c$ means more samples are used for optimization and fewer for surrogate training; a smaller $c$ means more samples for surrogate training and fewer for optimization. For an unseen problem, we would perform hyperparameter tuning to find the optimal value of $c$. | Summary: The paper proposes an adaptive partitioning scheme for optimistic optimization that extends existing gradient-free algorithms such as SequOOL. The authors consider both a two-stage and an interleaved algorithm. In the context of multi-index functions (defined on an n-dimensional subspace within m dimensions), they prove an improved simple regret bound for their method compared to SequOOL and another baseline. The algorithms are evaluated empirically on a set of synthetic functions as well as an LLM quantization task.
### Update after rebuttal
My overall assessment of the paper remains unchanged.
Claims And Evidence: * The theoretical results are supported by a large set of technical results and proofs (which I have no particular reason to doubt but I also did not double check).
* The empirical results are less convincing: the performance of the proposed methods is mixed, and the authors do not explore in much depth why.
Methods And Evaluation Criteria: * The benchmark setup appears generally reasonable, modulo the comments below.
* I would have also liked to see SAASBO (https://proceedings.mlr.press/v161/eriksson21a.html) and TuRBO (https://proceedings.neurips.cc/paper/2019/hash/6c990b7aca7bc7058f5e98ea909e924b-Abstract.html) as additional baselines, but I don't think this is critical.
Theoretical Claims: * No. This is a highly technical paper in an area that I'm not very familiar with so this was not feasible.
Experimental Designs Or Analyses: * It is not clear what settings were chosen for the baseline methods. Without this (e.g. the dimensionality of the embedding for REMBO and HESBO), it's very hard to interpret the results of the experiments.
* The evaluation of the random embedding Bayesian Optimization (REMBO, HESBO) on the low-dimensional problems in the appendix appears rather odd. It doesn't make a lot of sense to generate some lower-dimensional embedding in a 5-dimensional space if the evaluation budget is 2000. The proper comparison here would be to just use standard Bayesian Optimization.
* A more clear ablation of the performance as the ambient dimension increases would have been useful to understand the behavior.
* Why is Algorithm 2 not evaluated on the test functions?
* The results in Figure 2 are quite hard to read. I recommend focusing on showing traces from a smaller number of functions and relegate the rest into the appendix. You can also aggregate the results across functions in a more compact format to keep the key message in the paper.
* Some of the discussions appear to focus on extremely small differences (e.g. on the order of 10^-10 - 10^-12 in Section 10). How relevant are these differences? Can we actually conclude anything from them, given the variance in the results?
Supplementary Material: I lightly reviewed sections 7-10, 12
Relation To Broader Scientific Literature: The primary contribution of the paper is to propose an adaptive partitioning scheme and proving improved regret bounds for this compared to non-adaptive approaches. This is significant in the sense that - if I parse the paper correctly - this is the first work that considers an adaptive partitioning approach of this kind in the context of optimistic optimization.
Essential References Not Discussed: I do not have enough familiarity with the specific literature to assess this.
Other Strengths And Weaknesses: * The authors never describe the SequOOL algorithm in detail. Given that this is not just a comparison baseline but an integral part of the proposed method, this makes it hard for a reader not already familiar with SequOOL to understand the contribution.
* Apart from that, despite being quite technical, the paper tries to make itself at least somewhat accessible to the non-expert. For instance, I liked the intuition provided in the paragraph right after Definition 2.7 (though I believe there is a typo: eta -> nu).
* Overall, the empirical results are rather disappointing:
* Algorithm 1 does well on a couple of the examples (however, the authors don't provide much intuition for why that is the case) but is otherwise worse than many baselines, especially in the small-sample regime (anytime performance matters!).
* Algorithm 2 does not appear to be evaluated on the synthetic functions (why?).
* For the LLM example, while a nice and practical real-world example, it is not clear how meaningful the improvement is compared to the baselines, and the results are from a single replicate - I understand that compute capacity here is a limitation, but it's still just not clear how meaningful the result is.
Other Comments Or Suggestions: I find the significance and quality of this paper quite hard to judge as it's outside my expertise, so this is a low-confidence vote.
Questions For Authors: * For the experiments, it appears that you generate a single A once per function and then run multiple optimization runs, rather than each replicate being over a new embedded subspace (i.e. a new A), is that a correct understanding? This seems like it could cause some bias in the evaluation. It may be good to re-randomize A, that would make the comparison against the non-randomized algorithms more interesting as well.
* You mention that one possible direction is to "extend your approach to the case when the function evaluations are noisy" - I'm curious how that would work; one of the core assumptions of the index function setting is that the function has a narrow ridge structure. If evaluations are noisy, then it appears that the direction of this structure would be quite hard to estimate (in other words, the learned partitioning schemes would not align with it). How would the theoretical results you obtained translate to a noisy setting.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We respond to the reviewer's questions below.
> I would have also liked to see SAASBO and TuRBO.
Thank you for sharing these two relevant baselines. We previously tried running the TuRBO method, but the computation time became very large after 100 evaluations, hence we did not include it in our results. We will try to run SAASBO for 2000 evaluations and add it to our revised draft.
> It is not clear what the settings were that were chosen for the baseline methods (e.g. dimensionality of the embedding dimension for REMBO and HESBO).
Thank you for pointing this out. The dimensionality of the embedding is chosen to be equal to the value $m$. We will add these details to our revised draft.
> The evaluation of the random embedding Bayesian Optimization (REMBO, HESBO) on the low-dimensional problems in the appendix appears rather odd.
We thank the reviewer for the suggestion! We will include standard Bayesian optimization in the updated version.
> Why is Algorithm 2 not evaluated on the test functions?
The experiments in Figure 2 were on synthetic multi-index functions with a low-dimensional structure. We ran variant 1 to verify and demonstrate the benefit of learning the low-dimensional subspace. However, as per your comment, we could also run Algorithm 2 on these multi-index functions. We will run it and add those results to our revised draft.
> The results in Figure 2 are quite hard to read.
Thank you for the suggestion. We will incorporate this in our revised draft.
> Some of the discussions appear to focus on extremely small differences (e.g. on the order of 10^-10 - 10^-12 in Section 10).
Thank you for your feedback. Section 10 presents an illustrative experiment to motivate lookahead direction selection. The function $f(x_1, x_2) = 1 - |x_1| - x_2^2$ is evaluated with different parameterized choices of $A$, without randomness. Thus, variance is not a concern, and the goal is to highlight potential benefits of our proposed strategy.
> Algorithm 1 does well on a couple of the examples (however, the authors don't provide much intuition for why htat is the case)
Overall, Algorithm 1 is expected to perform well when a low-dimensional ridge structure is present in the objective function, because our approach learns the low-dimensional structure and performs optimization on the reduced search space. The effect of the reduced search space can be seen in our regret bounds and is demonstrated in our experiments. Algorithm 1 is a two-stage algorithm whose first stage learns an adaptive partitioning scheme and whose second stage uses it for optimization. In Figure 2, we used 650 samples for the first (learning) stage; consequently, the anytime regret up to 650 samples is high. While standard approaches like the doubling trick can convert a budgeted algorithm into an anytime algorithm, they are not the most effective, and we will develop an effective anytime algorithm in future work.
> For the LLM example, while a nice and practical real-world example, it is not clear how meaningful the improvement is compared to the baselines
Thank you for your feedback. Our response is as follows:
1. Our approach to LLM quantization enables a more faithful implementation of AWQ, leading to improved quantized model accuracy.
2. AWQ is a widely used quantization method, as evidenced by several publicly available models:
- [Huginn-13B-v4-AWQ](https://huggingface.co/TheBloke/Huginn-13B-v4-AWQ?utm_source=chatgpt.com)
- [Capybara-Tess-Yi-34B-200K-DARE-Ties-AWQ](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-AWQ?utm_source=chatgpt.com)
- [ChatQA-1.5-8B-AWQ](https://huggingface.co/bartowski/ChatQA-1.5-8B-AWQ?utm_source=chatgpt.com)
3. Quantization is a one-time process, and prioritizing accuracy at the cost of compute is justified to ensure a high-quality model.
> For the experiments, it appears that you generate a single A once per function and then run multiple optimization runs
We do reinitialize the subspace by generating a new $A$ after each trial to ensure unbiased evaluation. We will clarify this in the revised draft.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional comments on my review.
> For the LLM example, [...] Thank you for your feedback. Our response is as follows:
I'm not questioning the importance and relevance of quantization or that AWQ is widely used - my question is about whether the improvements demonstrated in Table 2 are meaningful / reproducible. Calibration does not seem to improve relative to SequOOL, and what's the incremental value of decreasing PPL from 16.83 to 16.68? Also, what variance would we expect from re-running this? Are the results statistically significant in any way?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising this question. To better illustrate the contribution of our results, we provide the following analysis:
- **Improvement Relative to the Unquantized Model:** The unquantized model has a perplexity of 14.47, and since quantization increases the perplexity, we can think of that value as the minimum possible perplexity of any quantized model. The baseline (AWQ) quantized model achieves a perplexity of 16.92, giving a perplexity difference of 16.92 - 14.47 = 2.45. In contrast, Algorithm 2 achieves a perplexity of 16.68, giving a perplexity difference of 16.68 - 14.47 = 2.21. The relative improvement in perplexity loss is thus (2.45 - 2.21) / 2.45 $\approx$ 10\%.
- **Objective Function Comparison with SequOOL:** When we apply **SequOOL** to our proposed joint optimization objective (in $3M$ dimensions), it achieves a perplexity of **16.83**. Since SequOOL is deterministic, this demonstrates that our new formulation of the quantization problem is itself beneficial and the improvement is not due to random chance.
- **Empirical Evidence of Robustness:** To assess statistical robustness under compute constraints, we performed partial quantization experiments. Specifically, we quantized:
1. First 3 and 4 layers
2. Last 3 and 4 layers
In each case, we observed consistent reductions in perplexity, indicating that the improvements from Algorithm 2 are not due to chance, but are reliably obtained across different regions of the model. However, we plan to run more independent trials (5 -10) and report the results in our revised draft. | Summary: The authors propose two different versions of learning partitioning ideas for Optimistic Optimization algorithms for black-box optimization. The first, uses a two step approach that first learns the partitioning and then optimizes. The second updates the partitioning while optimizing. The authors support their claims with theoretical and empirical evidence.
Claims And Evidence: Claim 1: We demonstrate the benefit of using a learned partitioning scheme for existing derivative-free optimization algorithms such as SequOOL.
On standard toy optimization problems, the learned partitioning shows improvement over SequOOL. This advantage does not show on the real-world problem of LLM quantization, where Variant 1 performs worse. Variant 2 shows small improvements (it is not clear whether these are significant, as the other methods don't provide std), but also uses 20% more compute budget. I consider this a mixed data point with a clear need for improvement in the latter experiment.
Claim 2: When the function is a low-dimensional multi-index function we theoretically prove improved regret bounds shown in Table 1.
I'm no expert, but looks ok to me.
Claim 3: Empirically, we demonstrate the improvement in optimization error for several benchmark functions including Rastrigin (multi-modal), Branin (multiple minima), and Sharp Ridge (non-differentiable).
This is a strictly stronger claim than Claim 1 and thus those remarks apply here as well. Additionally, we see that RESOO shows similar performance on average compared to the proposed method. Thus, this claim has no strong support.
Claim 4: We pose the quantization of Large Language Model (LLM) as a high-dimensional black-box optimization problem and obtain an improved perplexity value.
The authors show an improvement, but also use 1/3 more search budget. Thus, this is not a fair comparison. The improvements seem rather small as well.
Methods And Evaluation Criteria: The authors choose high-dimensional problems which makes sense for their problem. Most benchmarks are "toy" functions. I'd like them to tackle real-world problems instead. NAS benchmarks could be one example of high-dimensional benchmarks with stronger connection to practical relevance.
Allowing different compute time for the result in Table 2 makes little sense since we are no longer able to compare the performance of different search methods.
Theoretical Claims: I tried to follow the authors' reasoning, but I am a more applied researcher. I leave the evaluation of this part to my fellow reviewers.
Experimental Designs Or Analyses: Yes. Covered in "Methods And Evaluation Criteria"
Supplementary Material: The appendix is very long and heavy in theory. I've only looked at section 12 in detail.
Relation To Broader Scientific Literature: In my opinion the authors describe it very well. They could have covered some other learning-related work as well. I leave pointers below.
Essential References Not Discussed: This might also be interesting work for the authors. Instead of using Bayesian optimization on a random lower-dimensional space, there is work on learning this space:
Wenlong Lyu, Shoubo Hu, Jie Chuai, Zhitang Chen: Efficient Bayesian Optimization with Deep Kernel Learning and Transformer Pre-trained on Multiple Heterogeneous Datasets
Martin Wistuba, Josif Grabocka: Few-Shot Bayesian Optimization with Deep Kernel Surrogates
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: At times I found it hard to identify which algorithms were the ones presented by the authors. Maybe give them names instead of referring to Alg 1/2?
Questions For Authors: Where are Algorithm 2 results for Figure 2?
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:
Rebuttal: We respond to the reviewer's questions below.
> Claim 1: We demonstrate the benefit of using a learned partitioning scheme & Claim 4: We pose the quantization of Large Language Model (LLM) as a high-dimensional black-box optimization problem
We thank the reviewers for recognizing our claims. Our approach enables a more faithful application of the AWQ quantization strategy: We quantize the entire LLM end-to-end while the AWQ algorithm had to perform an independent grid search over each successive layer. Since quantization is a "perform once, reuse many times" operation, and small improvements in the perplexity score are often critical, we believe our approach could be fruitfully applied on other larger models. We believe this to be one of our main contributions. Regarding the search budget values in the experiments, grid search had long stopped improving and hence we had paused its execution at 9 hours. We will match the compute time budget in the updated version.
Variant 1 of our algorithm exploits a low-dimensional ridge structure in the objective function. Since the LLM quantization problem did not have such a structure, variant 1 did not provide an advantage. However, as the experimental results show, Variant 2 can provide an advantage even when such a structure is not present.
> Claim 3: Empirically, we demonstrate the improvement in optimization error
The contribution statement is a general claim. Specifically, compared to RESOO, our method achieved zero regret on both the Rastrigin and Styblinski-Tang functions, whereas RESOO had a regret of $5.5 \times 10^{-3}$ on Rastrigin. For Styblinski-Tang, our method required approximately 900 evaluations to reach zero regret, while RESOO needed 2000.
> Where are Algorithm 2 results for Figure 2?
Thank you for pointing this out. The experiments in Figure 2 were on synthetic multi-index functions with a low-dimensional structure. We ran variant 1 to verify and demonstrate the benefit of learning the low-dimensional subspace. However, as per your comment, we could also run Algorithm 2 on these multi-index functions. We will run it and add those results to our revised draft. | null | null | null | null | null | null | null | null |
Approximate Forest Completion and Learning-Augmented Algorithms for Metric Minimum Spanning Trees

Paper Decision: Accept (poster)

Summary: The paper considers the Metric Forest Completion (MFC) problem.
Let $G$ be an edge-weighted complete graph whose weights satisfy the triangle inequality (it induces a metric space).
Given a forest $F \subseteq G$, the MFC asks for the set of edges with a minimum sum of weights that connects the components of $F$.
They prove that $\Omega(n^2)$ edge queries are required to optimally solve MFC and present a subquadratic algorithm with an approximation factor of $2.62$. In addition, they report an experimental study showing that their approach provides gains in scalability without significantly compromising the quality of the produced solutions (spanning trees).
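For concreteness, the pipeline discussed in this record (a k-center-style initial forest, then a completion step) can be sketched in plain Python. This is an illustrative toy under my own assumptions, not the paper's algorithm: the completion below naively builds an MST over the cluster centers, so the paper's 2.62 guarantee does not apply to it.

```python
import math

def dist(p, q):
    # Euclidean metric here; any metric satisfying the triangle inequality works
    return math.dist(p, q)

def greedy_k_centers(points, k):
    # Farthest-point heuristic, the classic 2-approximation for k-center
    centers = [0]
    while len(centers) < k:
        far = max(range(len(points)),
                  key=lambda i: min(dist(points[i], points[c]) for c in centers))
        centers.append(far)
    return centers

def mst_edges(indices, points):
    # Prim's algorithm restricted to the given vertex subset (quadratic in its size)
    if len(indices) <= 1:
        return []
    in_tree, rest, edges = {indices[0]}, set(indices[1:]), []
    while rest:
        u, v = min(((a, b) for a in in_tree for b in rest),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
        rest.remove(v)
    return edges

def approx_spanning_tree(points, k):
    centers = greedy_k_centers(points, k)
    # Assign each point to its nearest center; the clusters form the initial forest
    clusters = {c: [] for c in centers}
    for i, p in enumerate(points):
        clusters[min(centers, key=lambda c: dist(p, points[c]))].append(i)
    # Initial forest: an MST inside each cluster; naive completion: an MST over the centers
    edges = [e for idx in clusters.values() for e in mst_edges(idx, points)]
    edges += mst_edges(centers, points)
    return edges

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (20, 0), (21, 0)]
tree = approx_spanning_tree(points, k=3)
assert len(tree) == len(points) - 1  # a spanning tree on n points has n - 1 edges
```

Only the cross-cluster step avoids querying all pairwise distances; the within-cluster MSTs are quadratic in the cluster sizes, which is the trade-off the reviews below discuss for unbalanced clusters.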
Claims And Evidence: I think they are clear
Methods And Evaluation Criteria: Yes, they make sense
Theoretical Claims: I have not checked them.
Experimental Designs Or Analyses: Yes, I checked the experimental designs and analyses and found no major issues.
That said, I think the behavior of $t$ needs some additional explanation. The time complexity (line 315, left) increases with $t$, but in the experimental results (lines 317-319, right) the speed-up increases for larger $t$, which is not in line with the theoretical analysis.
I believe the reason is that the time complexity analysis does not take into account the construction of the initial forest, while the running times reported in Section 5 do. In any event, I think this should be clarified. Perhaps adding an extra parameter to the time complexity analysis to represent the time taken by the initial heuristic would be helpful.
Supplementary Material: I briefly looked at Appendices A, D and E to better understand the related work, the time complexity analysis, and some experiment details.
Relation To Broader Scientific Literature: Most of the algorithms available for building MSTs in metric spaces (lines 29-33 and 119-121, right) are only suitable for the Euclidean distance.
This work differs from previous work since it handles general metric spaces that include, for instance, edit distance.
For this more general setting, there are lower bounds (Indyk 99) which show that any algorithm with constant approximation for building an MST needs to query/know $n^2$ edges. To overcome this obstacle (prove theoretical bounds), the algorithm takes the approach of the (recent) field of learning-augmented algorithms, assuming the availability of a prediction/warm start, obtained via a heuristic or a prediction method. Under this assumption, they obtain an algorithm with an approximation factor that depends on how good the prediction is (parameter $\gamma$).
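A small, hedged illustration of the overlap idea discussed here: the fraction of initial-forest edges that appear in a computed MST. Note this edge-count proxy is my own simplification; the paper's formal $\gamma$-overlap may be defined differently (e.g. via edge weights).

```python
def kruskal_mst(n, weight):
    # Kruskal's algorithm on the complete graph over n vertices, with union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = set()
    for w, i, j in sorted((weight(i, j), i, j) for i in range(n) for j in range(i + 1, n)):
        ri, rj = find(i), find(j)
        if ri != rj:  # edge joins two components -> keep it
            parent[ri] = rj
            mst.add((i, j))
    return mst

# Toy instance: points on a line, metric = absolute difference
pts = [0.0, 1.0, 2.0, 10.0, 11.0]
mst = kruskal_mst(len(pts), lambda i, j: abs(pts[i] - pts[j]))

forest = {(0, 1), (1, 2), (3, 4)}  # a hypothetical initial forest with two components
overlap = len(forest & mst) / len(forest)
assert overlap == 1.0  # here every forest edge lies in the MST
```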
Essential References Not Discussed: I'm not aware of essential related work that was not cited in the paper.
Other Strengths And Weaknesses: My positive score is based on
Strengths
* The design of scalable algorithms for MSTs in metric spaces with provable guarantees is a relevant topic. As the authors mention, and I agree, for many tasks (some in the machine learning domain) the construction of an MST is an important step.
* The paper is, in general, very well written.
* The proposed framework is simple enough so that one would not have problems reproducing it.
* Both the improvement in terms of time complexity and the approximation bounds are interesting enough.
* The experiments were well designed
Weakness
* Lack of comparison with other available methods to build MST for metric spaces (even in Euclidean space). I think the paper can be accepted without that, but adding these comparisons would make the paper stronger and more informative regarding what should be used in practice.
Other Comments Or Suggestions: Some points:
* For Theorem 4.2, I understand you are assuming that $\gamma=1$. Is this correct? If so, you should state that.
* Line 329, right: I would avoid "in practice" since I do not see uniformly random data as a typical case that occurs in practical applications.
Questions For Authors: No relevant questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your feedback on our manuscript!
Thanks for your question about the parameter t, which we agree can be better clarified. As you noticed, the apparent discrepancy comes from whether or not one incorporates the time taken to form the initial forest. The theoretical analysis in lines 315-319 (left column) focuses on our algorithm for the MFC problem specifically, which assumes the initial forest is given. However, because computing initial forests is an important consideration in practice, our runtimes in the experiments include the time for computing an initial forest. We originally wanted to include a full runtime analysis that included the initial forest computation (using the k-centering approach), but were space constrained (hence the reference to Appendix D in line 319 left column). In the appendix, we also include runtimes in Table 3 for each step of the pipeline individually, but again for space constraints we were unable to include this in the main text. Again though, we agree that this could be confusing and we will update the text to better clarify this point.
Regarding algorithm comparisons, we agree that comparing our approach against other heuristics could be interesting. The main issue, as you highlight, is that existing methods are focused only on Euclidean metrics, and we are focused on algorithms that apply to general metric spaces and have theoretical guarantees. With this in mind, we felt that the best experiments section for this paper would be to provide an extensive comparison between our approach and the optimal quadratic algorithm for arbitrary metric spaces, across a wide range of datasets and metrics.
Regarding your question about Theorem 4.2, note that this is a result purely about an approximation for the MFC problem, not the MST problem on the original dataset. As such there is no defined $\gamma$ parameter here since it is a result independent of the overlap between the initial forest and some optimal MST. However, if we did assume that $\gamma = 1$ as you suggest, note that the optimal solution for MFC coincides with the optimal solution to the MST problem on the full dataset. Hence a corollary of this theorem is that if $\gamma = 1$ for the initial forest, we get a 2.62 approximation for the MST problem (not just the MFC problem).
Thanks for your point about the use of “in practice”. We agree and will update this.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers!
I'm keeping my positive score.

Summary: The paper presents a new framework for computing approximate minimum spanning trees (MSTs) in arbitrary metric spaces, improving the time complexity compared to traditional exact MST algorithms. The exact MST algorithms require $\tilde{O}(m)$ time complexity, where $m$ is the number of edges. For a complete graph over $n$ vertices, it requires $O(n^2)$ time.
The authors propose the Metric Forest Completion (MFC) problem, which assumes access to an initial forest obtained via heuristics or exact algorithms, and aims to find a set of edges to complete it into a spanning tree. They show that optimally solving MFC still requires $\Omega(n^2)$ queries in the worst case, but they provide a 2.62-approximation algorithm in subquadratic time as long as the number of components $t = o(n)$. Additionally, in the learning-augmented setting, they prove that if the initial forest has a sufficient overlap with an optimal MST, the approximation factor improves based on the overlap parameter $\gamma$. Experimental results on synthetic and real-world datasets demonstrate that the proposed approach finds near-optimal spanning trees significantly faster than exact methods.
Claims And Evidence: The claims are well-supported by theoretical proofs and empirical results.
Methods And Evaluation Criteria: Their methods make sense for the MST problem. Their evaluation criteria are comprehensive, including the approximation factor, runtime, and an upper bound on overlap parameter $\gamma$ given by the overlap with the MST solution over various synthetic datasets and real-world datasets.
Theoretical Claims: I checked the correctness of all proofs including Theorem 3.1, Theorems 4.2 and 4.3 and all relevant Lemmas.
Experimental Designs Or Analyses: Yes, the experimental designs and analysis are sound and valid.
Supplementary Material: Yes, I reviewed most of the supplementary material, including all proofs and the additional experiments.
Relation To Broader Scientific Literature: This paper is related to a broad scientific literature including MST, clustering, graph algorithms, learning augmented algorithms, etc. The MST algorithm is also widely used in many machine learning and graph algorithms. This paper has also a great impact on these research as well.
Essential References Not Discussed: No, as far as I know, the related works are discussed adequately in this paper.
Other Strengths And Weaknesses: Strengths:
This paper introduces a new framework for finding approximate MST for arbitrary metric space, making a meaningful contribution to scalable MST computation. The connection to learning-augmented algorithms is well-motivated. The experiments across multiple datasets show their method provides nearly-optimal MST (with approximation factor close to 1) with 30 to 400 times speed up compared to the traditional exact algorithms. Furthermore, the proposed framework is particularly relevant for large-scale clustering and similarity-based learning tasks.
Overall, I think the framework and analysis in the paper are quite interesting and the experiments are surprisingly good and show the efficiency and effectiveness of their methods.
Other Comments Or Suggestions: I don't have any further comments.
Questions For Authors: I only have some open-ended questions:
1. In your experiments, the datasets with cluster structures show smaller upper bound on the overlap parameter $\gamma$. In the approximation analysis, would it be possible to have some explicit parameters other than $\gamma$, like a separation parameter, to improve the approximation factor?
2. It is nice that the analysis even does not rely on any balanceness of the forest partition. Would it be possible to get a better approximation factor or trade-off depending on the balanceness of the partition?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for the feedback on our manuscript! We appreciate the open question you posed. In response to those questions:
1. This is an interesting question. As potential follow-up work, we have indeed wondered whether we could use assumptions or parameters relating to the structure of the initial forest to further improve our guarantees (e.g., parameters or structures that results from specifically using a k-centering approximation algorithm or another method to form the initial forest).
2. This is a very interesting question and we do have at least a partial answer with regard to the effect of balancedness on runtime. The answer is a little nuanced because it depends on whether you consider the method used to form the initial forest. If we focus solely on the MFC problem with a given forest, it turns out that unbalanced clusters lead to faster runtimes, even for optimally solving the MFC problem. Note that the $\Omega(n^2)$ lower bound on runtime we give in Theorem 3.1 is assuming a worst-case scenario of balanced clusters. It is an interesting open direction to explore more concrete results if we assume a certain level of balance or imbalance.
The answer changes slightly if you also consider the method used to construct the initial forest. In our experiments we computed the MSTs of our components optimally. Because of this, having unbalanced clusters means computing an optimal MST for a large component, which is computationally expensive and ends up outweighing the runtime benefit we get for solving the MFC step of the overall MST pipeline. Conversely, for balanced clusters the runtimes tend to be much better, and we still obtain good approximation guarantees. These tradeoffs are highlighted in Table 3 of our paper, where we include runtimes for the different steps of the full MFC framework (including forming the initial forest). These observations are all based on one approach for forming an initial forest (k-centering + optimally finding MSTs of components). Continuing to explore tradeoffs using this or other approaches for forming the initial forest is a very interesting open future direction.

Summary: The paper considers the problem of building a spanning tree of a set of points in a metric space. The goal is to have a tree that is a good approximation of the MST without querying all of the $\Theta(n^2)$ pairwise distances. The approach proposed is to have an algorithm that starts from an initial forest and then merges the trees, building an overall spanning tree. The proposed algorithm achieves a constant approximation if the initial forest has a good "overlap" with some optimal MST. In this sense the approach falls in the category of learning-augmented algorithms, i.e., if a good initial partial solution is provided by some prediction strategy, then the remaining steps have a good performance guarantee: one not achievable in general because of well-known hardness results implying $\Omega(n^2)$ queries for any constant approximation.
## Update after rebuttal
I kept my positive score; no further information from the rebuttal was needed.
Claims And Evidence: The theoretical part appear to be sound.
Methods And Evaluation Criteria: The experimental analysis of the practical efficacy of the approach is well structured. The authors design and uses different sets of tests with both synthetic (uniformly sampled) and data from real databases. The experiments address 4 questions (runtime, the initial overlap achievable in practice, theoretical bounds vs performance in practice, effect of different datasets and metrics on performance) and the results are convincing.
Theoretical Claims: I checked some of the lemmas' and theorems' proofs also in the appendix.
Experimental Designs Or Analyses: See above
Supplementary Material: Partially: appendix A, B, C
Relation To Broader Scientific Literature: The approximation guarantee in the framework of general metric spaces is, to my knowledge, new.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
Theoretical guarantees conditional to the goodness of the initial forest.
A significant set of experimental analyses addressing the individual parts of the approach and how internal (initial setting) and external (dataset structure) affect the performance.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for the review and helpful feedback on our manuscript!

Summary: This paper considers the problem of finding subquadratic algorithms for metric MST (minimum spanning tree in a metric space) by leveraging learning-augmented algorithms. The goal is to break through strong lower bounds on runtime: the authors motivate the present work with the result that in an arbitrary metric space, it is impossible to achieve *any* constant factor approximation without knowing $\Omega(n^2)$ edges. In practice, such algorithms are not scalable. To this end, they assume an algorithm has access to an initial forest (on the whole vertex set) that can be thought of as an approximation to a spanning tree (e.g., the output of the first few iterations of Kruskal's algorithm). Given this, the goal is to find an algorithm that solves the Metric Forest Completion problem (MFC), which is to find a minimum-weight set of edges to add to the forest to produce a full spanning tree. This is a connectivity augmentation problem that, to solve to optimality, again requires $\Omega(n^2)$ queries. However, the approach taken here is to instead look for an approximation algorithm that takes subquadratic time. It is important to note that the objective used for computing the approximation ratio is the weight of all edges in the final tree (that of the initial forest and the augmenting set); the authors note that, without counting the initial forest, one cannot achieve good approximations.
There are three main contributions:
- For initial forests with $o(n)$ components, a 2.62 approximation algorithm for MFC with query complexity $o(n^2)$.
- A notion of error (coined $\gamma$-overlap) between the initial forest and the optimal MST, and an analysis showing that the above algorithm's approximation for the MST problem (not the MFC problem) can be parameterized by $\gamma$: specifically, it achieves a $(2\gamma + 1)$-approximation. In particular, for perfect overlap ($\gamma = 1$, meaning all edges in the initial forest are in an optimal MST), this algorithm is a 3-approximation for MST.
- Experiments on real and synthetic datasets (with Euclidean and non-Euclidean metrics) showing that fast heuristics like the $k$-center algorithm can be used to generate an initial forest, and, for datasets with underlying structure, with small $\gamma$. They also show that the approximation ratio tends to be notably better than the theoretical bound.
## update after rebuttal:
The authors answered the questions well and I maintain my evaluation.
Claims And Evidence: Yes, theoretical claims are supported with proofs (deferred to Appendix), and experimental claims are supported with a detailed explanation of dataset construction and analysis.
Methods And Evaluation Criteria: Yes, in addition to proof-of-concept synthetic datasets, the authors use a number of real-world datasets that are used as benchmarks in the clustering literature, and standard measures of distance on that data.
Theoretical Claims: I did not check proofs, which are all deferred to the appendix.
Experimental Designs Or Analyses: I read through the experimental design and analyses included the appendix, and found the methods completely reasonable.
Supplementary Material: I read Appendix A on additional related work, and skimmed over Appendix E on additional empirical results.
Relation To Broader Scientific Literature: This paper contributes to the recently popular framework of algorithms with predictions -- a type of beyond-worst-case approach to algorithm design that seeks to improve algorithmic performance by augmenting algorithms with a (often machine-learned) "prediction" of some parameter of the input (in this case, the initial forest is a "prediction" of the MST). A number of prior works have considered, as the authors do here, this framework for improving *runtime* for popular algorithmic problems (e.g., binary search, min-cost perfect matching, maximum flow). The MST problem has also been considered in the predictions framework, but with different models for what information is learning-augmented, specifically, they all consider predictions on edge weight. The authors here are also distinct in their goal of obtaining subquadratic algorithms.
Essential References Not Discussed: I did not notice any important related works that are missing.
Other Strengths And Weaknesses: There are several interesting contributions here. One is the metric forest completion problem (MFC), which is a model admitting an informative analysis even when $\lambda$ is potentially large (the only assumption is that the initial forest has a sufficiently small number of components). Another is the learning-augmented framework. While previous papers have considered MSTs with predictions, the present framework is distinct, as it is practically and conceptually of interest to obtain subquadratic algorithms for this problem. What is further interesting and distinct here is that the warm-start initial forest is not necessarily some black-box, machined-learned prediction (although it could be), but can be generated from the data itself (e.g., by performing the $k$-center heuristic), and indeed the authors show that doing so is effective in practice. Finally, the authors devise a novel error metric ($\gamma$-overlap) for the "distance" between a spanning forest and an optimal MST that is amenable to analysis; this is interesting as there seemingly isn't a natural / obvious one here.
Other Comments Or Suggestions: While the paper is very well-written and has very nice figures, I think the related work in the appendix on MSTs with predictions is important context with which to situate the present contribution, and therefore should be promoted to the main body.
You may also consider including additional references of results on using the predictions framework to improve runtime.
Questions For Authors: Did you consider comparing the performance of your algorithm against heuristics that compute a full MST in subquadratic time (e.g., if there are any common ones used in practice to overcome the runtime bottleneck)?
Are there any analogues of your MFC problem for other combinatorial problems? Particularly, the idea of computing the approximation ratio with respect to the total cost rather than just the cost of the new edges $M$ (which as you point out is necessary in order to obtain meaningful results)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your feedback on our manuscript!
Regarding the section in the appendix on MSTs with predictions, we agree that this related work provides good context for our work and would be nice to include in the main text. Ultimately we moved this to the appendix because of space constraints. However, if the manuscript is accepted this will allow 1 additional page of space, which we would use to elaborate on related work in the main text.
We agree that comparing our approach against other heuristics would be interesting, but the main issue is that only existing heuristics that we are aware of apply only to Euclidean metrics. Our manuscript focuses on algorithms that work for arbitrary metric spaces, and hence we felt that the best experiment section for this paper would provide a concrete comparison between methods that come with theoretical guarantees for arbitrary metric spaces, which in this case amounts to comparing our new approach against the naive quadratic approach.
Finally, regarding your question about analogues of the MFC problem for other combinatorial problems: at present we are not aware of any, but we think this is a very interesting direction to consider in future work. Thanks for the suggestion!
Decision Making under the Exponential Family: Distributionally Robust Optimisation with Bayesian Ambiguity Sets

Paper Decision: Accept (spotlight poster)

Summary: The authors introduce a novel formulation for distributionally robust optimization based on Bayesian posterior updates. Their model leverages the KL divergence, and they propose two ambiguity-set designs with efficient sampling methods. These methods are computationally better than Bayesian DRO under mild conditions. Experimental results demonstrate their computational and statistical benefits.
Claims And Evidence: I had some questions related to expressions in Table 1:
1. The authors do not clarify how the problem dimension $D$ influences the number of variables. It would be helpful to reference this explicitly, at least in the Appendix.
2. Regarding the column "Linear f", it is unclear why $BAS_{PE}$ and $BDRO$ exhibit a dependency on $D^2$. Given that the number of decision variables appears to be only $O(D)$ (and $O(M_{\theta} D)$) after the transformation in Line 290 or Equation (40), further clarification is needed.
Methods And Evaluation Criteria: The proposed method and evaluation criteria appear well-justified for the application.
Theoretical Claims: I reviewed the proofs for all the main results (except concrete examples in Appendix A) and did not find major issues that worth mentioning.
Experimental Designs Or Analyses: I checked the validity of the experimental designs and noted several issues:
1. I appreciate the visualization of Figures 2 and 3, particularly regarding the mean-variance frontier. It would be insightful to analyze which $\epsilon$ values would be practical for real-world implementations.
2. In Figure 7 of Appendix, the authors present multiple $\epsilon$ values for each method across the whole time period. It would be beneficial to select the optimal $\epsilon$ via cross validation and compare their performance accordingly, as is common in DRO literature. Also, all different methods appear to collapse under $\epsilon = 1$, but this is not well visualized.
Supplementary Material: I briefly reviewed all proofs and experimental setups in the paper.
Relation To Broader Scientific Literature: The paper contributes a novel approach by incorporating Bayesian posterior updates into the KL-DRO framework.
Essential References Not Discussed: I do not find the authors missing any important literature.
Other Strengths And Weaknesses: Overall, the paper is well-written and clearly articulates its contributions. Below, I outline some concerns and areas for improvement:
1. **Statistical Benefits**: Besides computational advantages, the statistical benefits of DRO-BAS are not entirely clear. Can authors further discuss how DRO-BAS compares to Bayesian DRO in terms of statistical performance? Additionally, under what conditions should one prefer DRO-BAS over Bayesian DRO given the prior? Based on the paper’s description, the observed experimental benefits seem primarily due to improved computational approximation.
2. **Novel Considerations**: While I appreciate the authors' interpretation of these Bayesian ambiguity sets, the model design in Equation (18) and Theorem 3.6 appears to closely align with the PDRO method of Iyengar et al. (2023) in the KL-divergence case (via the standard KL reformulation technique of Hu and Hong (2013)). Both approaches define an ambiguity set centered at an estimated parametric distribution via the KL divergence. Although the interpretations differ, a discussion clarifying the novelty of the approach would be beneficial.
3. **Scalability for General Models**: While the proposed approach performs well for exponential and normal distributions, posterior updates for general large-scale models may still be computationally expensive. This scalability challenge is a well-known limitation of Bayesian DRO.
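For context on the Hu and Hong (2013) reformulation referenced in point 2, here is a self-contained Monte Carlo sketch of the generic KL-DRO dual. The grid over the dual variable and the sample size are arbitrary choices of mine, and this is the standard textbook dual, not the paper's BAS-specific formulation.

```python
import math
import random

def kl_dro_worst_case(losses, eps, alphas):
    # Hu & Hong (2013) dual:  sup_{KL(Q||P) <= eps} E_Q[loss]
    #   = min_{a > 0}  a * log E_P[exp(loss / a)] + a * eps
    best = float("inf")
    for a in alphas:
        m = max(losses)  # log-sum-exp shift for numerical stability
        log_mgf = m / a + math.log(sum(math.exp((l - m) / a) for l in losses) / len(losses))
        best = min(best, a * log_mgf + a * eps)
    return best

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(2000)]  # draws from the nominal P
losses = [max(0.0, 2.0 - s) for s in samples]            # a newsvendor-style shortfall loss
nominal = sum(losses) / len(losses)
robust = kl_dro_worst_case(losses, eps=0.1, alphas=[0.05 * k for k in range(1, 200)])
assert nominal <= robust  # the worst case upper-bounds the nominal expectation
```

By Jensen's inequality the dual value is always at least the nominal expectation, so the final assertion holds for any sample and any $\epsilon \geq 0$; in practice the one-dimensional minimisation over the dual variable would be done with a scalar solver rather than a grid.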
Other Comments Or Suggestions: 1. In the multi-product newsvendor, the objective should be written as $h \, 1^{\top} \max(0, x - \xi)$. The current formulation has a vector-valued output.
2. The texts in Figures 2 (especially) and 3 mix together across different methods. A clearer visualization strategy is to use an arrow to indicate the direction from 0.001 to 1 simply, instead of marking all the values there.
Questions For Authors: 1. Since the number of samples affects the performance of DRO-BAS, are there any Monte Carlo sampling guarantees, similar to those in Iyengar et al. (2023), that provide strong performance guarantees?
2. The choice of divergence functions appears flexible. For example, I do not see a fundamental issue when extending the design idea to the $\chi^2$-divergence. In such a case, Property 3.1 might only require the existence of a second moment. Could the authors elaborate on this or discuss the necessary modifications? Such extensions would enhance the practical applicability of the method, especially given concerns about the KL divergence's conservativeness and its limited use in practice.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for providing feedback that greatly improves the paper. Please see this anonymous link for Tables and Figures: https://github.com/ICML-anon-2025/paper-11717
1. **Table 1**: We have changed the table and caption to distinguish between variables and the input size of the dual (see link above). As you suggested, we will also add a brief discussion of Table 1 inside Sec. 3.5 with a more detailed explanation in the appendix.
2. **Practical choice of $\epsilon$ and cross-validation (CV)**:
We have now implemented a 10-fold CV procedure for choosing $\epsilon$ from a set $E$ for BAS-PE and BAS-PP. As suggested, we took Fig. 7 for the Normal DGP Newsvendor. We train our model only on the training folds, then after solving for each $\epsilon \in E$, we visualise the CV mean and variance on the validation folds across all replications in Fig. 9 of the link provided above. On Fig. 9, we have marked the $\epsilon$ that achieves 1) the smallest CV mean, 2) the smallest CV variance, 3) the smallest CV mean/standard-deviation tradeoff.
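The 10-fold CV procedure described above can be sketched as follows. `solve_dro` is a hypothetical placeholder for the actual DRO-BAS solver (here just an $\epsilon$-dependent demand quantile), so only the fold mechanics and the selection criterion are meant literally:

```python
import numpy as np

def solve_dro(train, eps):
    # Hypothetical stand-in for the DRO-BAS solver: larger eps gives a more
    # conservative (higher) order quantity via a higher demand quantile.
    return np.quantile(train, min(0.5 + eps, 0.99))

def newsvendor_cost(x, xi, h=1.0, b=2.0):
    # Overage plus underage cost for a single product (illustrative costs).
    return h * max(0.0, x - xi) + b * max(0.0, xi - x)

def cv_select_epsilon(data, eps_grid, n_folds=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    folds = np.array_split(idx, n_folds)
    scores = {}
    for eps in eps_grid:
        fold_costs = []
        for k in range(n_folds):
            val = data[folds[k]]
            train = data[np.concatenate([folds[j] for j in range(n_folds) if j != k])]
            x = solve_dro(train, eps)  # fit on the training folds only
            fold_costs.append(np.mean([newsvendor_cost(x, xi) for xi in val]))
        scores[eps] = (np.mean(fold_costs), np.var(fold_costs))
    # Criterion 1 from the rebuttal: the eps with the smallest CV mean.
    best = min(scores, key=lambda e: scores[e][0])
    return best, scores

data = np.random.default_rng(1).normal(10.0, 2.0, size=200)  # synthetic demands
eps_grid = [0.001, 0.01, 0.1, 1.0]
best_eps, scores = cv_select_epsilon(data, eps_grid)
```

Criteria 2 and 3 from the list above would simply replace the `min` key with the CV variance or a mean/standard-deviation trade-off.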
3. **Statistical benefits of DRO-BAS over BDRO**:
One key difference is that we have a single worst-case objective, leading to an interpretable closed-form worst-case distribution (Sec. 3.6), whereas BDRO does not (see response 2 to fJ7Y). At the limit of infinite observations, both DRO-BAS and BDRO converge to a KL-based ambiguity set based on the DGP. However, our analysis is straightforward and more intuitive compared to BDRO, which uses a much more complex argument (p.1286-1292) to prove where the expected worst-case concentrates.
In terms of statistical properties, we follow suggestions from the DRO literature (Shapiro et al. 2023, Gotoh et al. 2021) and focus on the mean-variance frontier, which is a type of statistical trade-off, showcasing our domination over BDRO. See also response 2 to V1H1 (on the statistical behaviour of the solution).
4. **Interpretation of BAS in comparison to PDRO**:
Although the dual formulations of BAS and PDRO with the KL are both model-based and look related, they are fundamentally different. Our ambiguity sets are a-posteriori informed, integrating prior beliefs and data evidence. Consequently, the resulting nominal distribution ($\mathbb{P}_n$ for BAS-PP and $\mathbb{P}\_{\hat{\eta}}$ for BAS-PE) contains all the information from our posterior beliefs, including their uncertainty quantification, unlike the point estimator approach of Iyengar et al., 2023. This allows us to propagate uncertainty from the posterior beliefs about the parameters to the ambiguity set. We will highlight this important distinction by expanding the lines 53-63 with the points above.
5. **Scalability**: If the posterior is not available in closed-form, then the sampling time will increase (similarly to all Bayesian DRO methods) unless one performs approximate inference such as Variational or Laplace approximations. In this paper, we provide efficient formulations for a big class of models - the exponential family (which goes beyond just the Normal and Exponential distributions). See also response 3 to oPX6. Moreover, working with the exponential family enables our tractable formulations of the DRO-BAS problem and leverages the conjugacy property, which greatly enhances the scalability of our method.
6. **Monte Carlo guarantees**: It would be valuable to theoretically examine the effect of the Monte Carlo sampling size to the out-of-sample cost, similar to our empirical analysis in Sec. 4. This is a non-trivial analysis as it requires establishing bounds on the worst-case KL objectives based on a nominal distribution and its empirical approximation. This is also likely the reason why Iyengar et al., 2023 only provide these types of guarantees for the Wasserstein and $\chi^2$-divergence, and why Shapiro et al., 2023 omit them for the KL-based BDRO. Such guarantees would be a notable contribution on their own as they would likely lead to similar results for other model and KL-based DRO methods (like PDRO and BDRO). We will include this as future work in Sec. 5.
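For concreteness, the Monte Carlo plug-in whose effect such guarantees would quantify can be sketched via the standard Hu and Hong (2013) KL dual, with the nominal expectation replaced by a sample average over cost samples (a generic sketch under assumed names, not the authors' implementation):

```python
import numpy as np

def mc_worst_case_kl(costs, eps, gammas):
    """Monte Carlo estimate of the worst-case KL risk via the dual
    min_{gamma>0} gamma*eps + gamma*log( (1/M) sum_i exp(costs_i / gamma) ),
    where `costs` are cost samples f(x, xi_i) under the nominal distribution
    and the inner minimisation is approximated by a grid search over gamma."""
    costs = np.asarray(costs)
    vals = [g * eps + g * np.log(np.mean(np.exp(costs / g))) for g in gammas]
    return min(vals)

rng = np.random.default_rng(0)
costs = rng.normal(1.0, 0.5, size=2000)  # synthetic cost samples
est = mc_worst_case_kl(costs, eps=0.1, gammas=np.geomspace(0.1, 50, 200))
# By Jensen's inequality the estimate upper-bounds the plain sample mean:
assert est >= costs.mean()
```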
7. **Choice of divergence**: Please see response 3 to fJ7Y for a discussion of other $\phi$-divergences and response 4 to oPX6 for the benefits of KL-based BAS. The advantages of the KL have also been widely discussed in the DRO literature (Hu et al. 2013, Shapiro et al., 2023) and the closely related DR Bayesian Optimisation literature (Husain et al., 2023). We see extending BAS to other distances or divergences as an interesting future research direction.
8. **Other comments**: We will correct the vector notation for $h$ in the newsvendor objective. Concerning visualisations in Fig. 2 & 3, we have kept the $\epsilon$ markers because we do not want to give a false impression that the line is continuous and the values of $\epsilon$ are equidistant, but we will work to make the figures more legible.
[1] Husain H. et al. Distributionally robust Bayesian optimization with ϕ-divergences. 2023. | Summary: This paper introduces KL-based DRO formulations with two kinds of Bayesian ambiguity sets, the posterior expectation and the posterior predictive. The authors show that both formulations can be recast into a direct minimization problem, with more efficient closed form worst-case risk solution for exponential family distributions. Empirical study also shows the effective of the proposed estimators.
## update after rebuttal
I hope the authors will add comparisons to BDRO, the standard ambiguity set, and $\phi$-divergences in the revision, as promised in their rebuttal. But in the current form, I still lean weakly towards acceptance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I have checked the proofs of Proposition 3.2 and 3.3, both of which look correct to me.
Experimental Designs Or Analyses: Yes. I have checked the portfolio experiments which are reasonable.
Supplementary Material: Yes, the technical proofs.
Relation To Broader Scientific Literature: The key contributions are related to the KL divergence-based distributionally robust optimization literature by approach it from a Bayesian perspective.
Essential References Not Discussed: Since the method is mainly based on KL-based DRO, the authors may want to discuss a seminal work on the more general $\chi^2$-divergence-based DRO.
[1] Duchi, J., & Namkoong, H. (2018). Variance-based Regularization with Convex Objectives. Journal of Machine Learning Research, 19, 1–55. http://jmlr.org/papers/v19/17-750.html.
Other Strengths And Weaknesses: **Strengths**
It is novel to propose a Bayesian DRO formulation to tackle the worst-case risk instead of the expected worst-case risk. The examples with conjugate exponential family provide insights into applicability of the proposed methods.
**Weaknesses**
The derived upper bounds in the theoretical analysis look qualitative rather than quantitative. The adopted KL divergence is restrictive, and its efficient dual formulation is well known, which makes most of the dual analysis look mathematically trivial. See my detailed comments below.
Other Comments Or Suggestions: - Line 035, right column, the data **is** noisy.
- Line 290, Eq. 9, 11, 12: missing periods.
- Line 201, Definition 3.4: the function $h$ is not defined or remarked upon before being used.
The authors should proofread the symbols, commas and periods used in the paper, especially in the context around math equations.
Questions For Authors: The paper is well-written and the idea is novel. I have a couple of minor concerns though.
1. What's the point of the upper bound in Eq. 13? It resembles Eq. 8, but is not quantitative since it depends on $f$ and $\theta$, which makes the generalization property unclear. Is it possible to provide a more quantitative bound, for example, by adopting Rademacher complexities? Otherwise, the theoretical analysis in the paper, except for the conjugate case, looks like trivial corollaries of results in Hu & Hong, 2013.
2. The optimal radius selection in Section 3.4 and its approximation with empirical data makes sense. However, there is still a gap between posterior distribution and the true distribution. The choice in line 265, right column does not take into account such gap. In other words, given a confidence level, say $\tau$, and number of samples $M$, how can I choose the radius that yields a non-asymptotic bound for Eq. 21 to hold with probability at least $1 - \tau$?
3. The theoretical guarantees and computational efficiency are somewhat tightly coupled with exponential family distributions. The authors admit this limitation in the Discussion as well. What properties should a family of distributions possess to go beyond the exponential family while retaining the advantages claimed in the paper?
4. Why do the authors choose KL divergence-based DRO instead of DRO based on Wasserstein distances or MMD?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for carefully considering our work and providing feedback that greatly improves the paper:
1. **Upper bound in Eq. 13**: There is potentially a misunderstanding here: our work does not rely on this bound and goes beyond this inequality to achieve the strong duality result (with equality) in Theorem 3.6. In detail:
- *"What's the point of the upper bound in Eq. 13?"*: It provides a weak duality result for general parametric Bayesian models without closed-form posteriors, e.g. Bayesian neural networks. The discussion after Proposition 3.3 explains the challenge associated with BAS-PE leading to the upper bound in Eq. 13. Exactly for this reason we study the widely used exponential family models in Sec. 3.3, allowing us to move from an upper bound (weak duality) to an equality (strong duality) in Theorem 3.6.
- *"It resembles Eq. 8, but is not quantitative since it depends on $f$ and $\theta$ which makes the generalization property unclear"*: The goal of the duality result is to reformulate the maximisation in Eq. 11 as a minimisation so that the objective can be jointly minimised in the Lagrange variable $\gamma$ and decision variable $x$. As a duality result, it necessarily relies on $f$ and $\theta$ as these are part of the worst-case risk (the same dependence is in Proposition 3.2, Theorem 3.6, and in the duality results of BDRO and other methods).
- *"Is it possible to provide a more quantitative bound, [...] Otherwise, the theoretical analysis in the paper, except for the conjugate, looks like trivial corollaries of results in Hu \& Hong, 2013."* The theoretical analysis in Sec. 3.3 is far from trivial and cannot be seen as corollaries of Hu et al. 2013 as their analysis is for divergences and not for expected divergences. The goal of Sec. 3.3 is to go beyond the weak duality bound in Eq. 13 and provide stronger results. In particular, it provides an important formulation of the expected KL divergence in Lemma 3.5, which is crucial to overcoming the difficulty of obtaining the convex conjugate of the expected KL. This leads to a strong duality result in Theorem 3.6 along with the results in Corollary 3.7, Eq. 20 and Sec. 3.6, which have not been derived before for an ambiguity set based on an expected divergence (rather than just a divergence). To avoid similar misunderstandings in the future, we will include the clarifications above before Proposition 3.3.
2. **Optimal radius selection**: Such a probabilistic guarantee would be very interesting but is challenging future work that we add to the discussion. It would require finite sample concentration results about the KL divergence with respect to the posterior distribution. This is challenging as evident from the lack of such results in the DRO/KLD literature. To provide an alternative method of selecting the tolerance level, we now also perform a cross-validation radius selection analysis (see response 2 to GxWu)
3. **Restriction to the exponential family**: As discussed in Sec. 5, this is an interesting topic for future work. The key property is that the family of distributions should have a closed-form expression for the expected KL divergence, such as we have in Lemma 3.5. Otherwise, one can apply Proposition 3.3 to obtain an upper bound on the worst-case risk. We will include this comment in the discussion at the beginning of Sec. 3.3. Thankfully, we have obtained such a result for the exponential family, which is widespread across probabilistic graphical models, as well as models in spatial statistics and time series analysis, which also rely heavily on the exponential family. Moreover, it plays a central role in Variational inference, where it is the de facto approximating family, making our work broadly applicable across Machine Learning domains.
4. **Choice of KL divergence (KLD) & missed paper in lit. review**: Firstly, the KLD is a natural choice as the Bayesian posterior itself targets the KLD minimiser between the model family and DGP. Hence, intuitively, the expected KLD and the KLD with respect to the posterior predictive become good measures of separation between our posterior beliefs and the DGP (see main text lines 119-127 and Shapiro et al. 2023). The KLD further allows us to obtain a strong dual formulation in Theorem 3.6 for the DRO-BAS-PE setting and tractable reformulations (see Sec. 3.5). Moreover, by using the KLD, DRO-BAS admits the formulation of the optimal radius and the worst-case distribution, which are very important for interpretability and applicability of the method, overcoming shortcomings of the KL-based BDRO. We will further discuss the challenges associated with extending the framework to other distances/divergences in future work (Sec. 5) (see response 3 to fJ7Y) and include the missed paper in our literature review.
5. **Typos and other comments**: We will correct the typos and properly define the function $h$, which corresponds to the scaling constant of the exponential family. | Summary: In this paper, the authors focus on Distributionally Robust Optimization and introduce two new ambiguity sets with the aim of informing the construction of ambiguity sets using Bayesian Statistics. More specifically, they use the posterior distribution to construct these ambiguity sets. In the first set, we consider all distributions whose distance from the posterior-predictive distribution is less than a parameter \(\epsilon\). In the second case, we take into account all distributions whose expected KL-divergence (w.r.t. the posterior distribution) from the nominal distribution is bounded by \(\epsilon\). For these two cases, the authors provide appropriate reformulations. They then demonstrate the performance of their approach on a newsvendor and a real-world portfolio optimization problem. They show that their approaches peform as good as the BDRO method or better while being easier to solve and taking less time.
## Update after Rebuttal
The authors have strengthened the paper with better numerical experiments. However, I decided to maintain my score after receiving the response of the authors.
Claims And Evidence: The theoretical results provided in the paper appear sound. The numerical experiments demonstrating the benefit of this approach over the BDRO method appear sound, and both real and synthetic data are used well. The key limitation, I believe, is the limited comparisons to alternative methods. While BDRO is indeed the most comparable method, it would be interesting to see evaluations against most common DRO sets such as standard KL divergence sets and sets with other metrics to evaluate the advantages and disadvantages of the Bayesian approach.
Methods And Evaluation Criteria: The authors demonstrate the performance of their method using a pareto frontier with the out-of-sample mean and variance of the solution. They use both real and synthetic data to demonstrate their approach. All this appears reasonable.
Theoretical Claims: I verified the key results associated with duality theory specifically Proposition 3.2, Proposition 3.3 and Theorem 3.6.
Experimental Designs Or Analyses: The experimental analysis presented in the paper is sound and is similar to existing work in this domain.
Supplementary Material: In the supplementary material, I went through the discussion on numerical experiments as well as some of the proofs associated with duality theory. I did not find any issues with this.
Relation To Broader Scientific Literature: The paper extends existing literature on the use of Bayesian Ambiguity Sets for distributionally robust optimization by introducing new types of ambiguity sets which combine together KL-divergence and posterior information from data. This allows us to simultaneously use data and prior beliefs.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: Strengths:
1. Novel ambiguity sets which incorporate both data, prior beliefs and distance metrics
2. Tractable reformulations for these ambiguity sets
3. Discussion of how to choose tolerance parameters.
4. Numerical experiments with both synthetic and real data.
Weaknesses
1. While the comparison with other Bayesian DRO approaches is present, it is difficult to evaluate how the Bayesian Ambiguity Sets perform when compared against other DRO approaches, such as Wasserstein ambiguity sets.
Other Comments Or Suggestions: 1. Have the authors also considered developing these sets for general $\phi$-divergence metrics instead of just the KL-divergence?
Questions For Authors: 1. I suggest the authors compare their approach against other standard ambiguity sets, so we can better understand how useful the Bayesian Ambiguity Sets are in practice.
2. Can the authors discuss how the worst-case distributions identified by the two BAS formulations and BDRO differ from each other?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for carefully considering our work and providing feedback and suggestions that greatly improve the paper:
1. **Additional comparisons to other DRO ambiguity sets**: Firstly, the suggested empirical-based ambiguity sets, based on the KL-divergence and Wasserstein distance, are fundamentally different to model-based approaches (e.g. DRO-BAS, BDRO) as they are fully data-driven and are not appropriate in cases where the decision-maker relies on a model to make a decision. There is already a discussion of the model-based vs empirical KL DRO setting in Appendix E.3, however, we will add a short statement in the introduction pointing the reader to this discussion.
To this end, we have already included a comparison to the standard, empirical KL divergence ambiguity set that you suggested in Appendix E.3 and Figure 7. To further strengthen our empirical comparisons, and following your suggestion, we have also now conducted a comparison to the Wasserstein-based ambiguity set centred on the empirical measure for a well-specified (Normal DGP) and a misspecified (Truncated Normal DGP) Newsvendor example (see Figure 8 of this anonymous link: https://github.com/ICML-anon-2025/paper-11717). We observe that, for M sufficiently large, DRO-BAS outperforms all empirical methods. We will include this new comparison in Sec. 4 and in the Appendix.
2. **How do the worst case distributions identified by BAS and BDRO differ?**: Crucially, for BDRO, a single worst-case distribution does not exist because BDRO considers an expected worst-case approach (see Eq. 2, Figure 1 and the discussion on lines 67-79) rather than a worst-case approach, as advocated by DRO methods. In contrast, since both DRO-BAS formulations correspond to a worst-case risk minimisation objective, we obtain the worst-case distributions in Sec. 3.6. If we sample $\theta_j \sim \Pi$ from the posterior for BDRO, then one can obtain a worst-case distribution for each $\mathbb{P}\_{\theta_j}$ via the argument in Hu & Hong (2013), eq. (8). However, notice that in general, the minimiser of an expected objective is not the same as the average of the minimisers of the individual objectives as $\min\{f(x) + g(x)\} \geq \min\{f(x)\} + \min\{g(x)\}$ for any functions $f$ and $g$. Hence, even looking at the posterior mean or mode of the worst-case minimisers of the inner worst-case objectives in Eq. 2 would not necessarily give us a single distribution $p$ that yields a worst-case risk of the form $\mathbb{E}_{\xi \sim p} [f(x,\xi)]$ corresponding to the risk minimised by BDRO. In the limiting case of infinite data observations ($n \rightarrow \infty$) all of BDRO, BAS-PE and BAS-PP concentrate to a KL ambiguity set based on the data-generating process $\mathbb{P}^\star$ hence the worst case distribution is asymptotically the same for all methods but it is different (or non-existent for BDRO) for finite sample sizes.
This further highlights a fundamental difference between BDRO, which opts for an expected worst-case objective, and DRO-BAS, which follows the common DRO route of defining a worst-case objective. DRO-BAS provides the decision-maker with an exact and interpretable formulation of the worst-case risk being minimised. We will expand further on these points in lines 60-63 where we discuss the main difference of our method to BDRO and also include the above points in Sec. 3.6, which deals with the DRO-BAS worst-case distribution.
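The inequality $\min\{f(x) + g(x)\} \geq \min\{f(x)\} + \min\{g(x)\}$ invoked in point 2 above is easy to verify numerically; a toy check with illustrative functions $f(x) = x^2$ and $g(x) = (x-1)^2$ (not from the paper):

```python
import numpy as np

x = np.linspace(-2.0, 3.0, 5001)  # grid containing the minimisers 0, 0.5, 1
f = x ** 2
g = (x - 1.0) ** 2

# Minimising the sum is not the same as summing the individual minima:
lhs = np.min(f + g)            # attained near x = 0.5, value ~ 0.5
rhs = np.min(f) + np.min(g)    # 0 + 0 = 0
assert lhs >= rhs
```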
3. **Extension to general phi-divergence metrics?**: This is something we are actively working towards and believe would be very valuable future work.
- The analysis for DRO-BAS-PP would involve the derivation of the dual formulation of the problem based on convex conjugate results for $\phi$-divergences. Results about the radius and the worst-case distribution might be more challenging and would rely on properties of the posterior predictive and $\phi$-divergences.
- The analysis for DRO-BAS-PE appears even more challenging: one would need to derive, in closed-form, an expression for the expected $\phi$-divergence under the posterior (similarly to Lemma 3.5). This would allow us to derive the convex conjugate form of the expected $\phi$-divergence, which is otherwise challenging to obtain. The convex conjugate expression of the expected divergence is essential in order to derive the strong duality result in Theorem 3.6, but the closed-form of the expected KL divergence further allowed us to obtain the results about the tolerance level selection (Sec. 3.4), a closed-form objective in the Gaussian case with linear cost function (Sec. 3.5) and the closed form of the worst-case distribution (Sec. 3.6).
- In the discussion, we will explain the challenges associated with extending BAS to other $\phi$-divergences and suggest this as important future work. We believe this framework opens the door to alternative BAS formulations grounded in expected distances or divergences, such as the $\phi$-divergence you mentioned.
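For context, the general $\phi$-divergence duality that such an extension for DRO-BAS-PP would build on (Ben-Tal et al., 2013) takes, up to regularity conditions, the form

```latex
\sup_{\mathbb{P} :\, D_{\phi}(\mathbb{P} \,\|\, \mathbb{Q}) \leq \epsilon} \mathbb{E}_{\mathbb{P}}\left[ f(x, \xi) \right]
= \inf_{\lambda \geq 0,\, \mu \in \mathbb{R}} \left\{ \lambda \epsilon + \mu + \lambda\, \mathbb{E}_{\mathbb{Q}}\!\left[ \phi^{*}\!\left( \frac{f(x, \xi) - \mu}{\lambda} \right) \right] \right\},
```

where $\phi^{*}$ is the convex conjugate of $\phi$ and $\mathbb{Q}$ is the nominal distribution; the DRO-BAS-PE analogue would additionally require handling the expected $\phi$-divergence under the posterior, which is the open challenge described in the bullet points above.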
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses and for answering my questions. After considering them, I have decided to maintain my score. | Summary: The paper provides two new ways to define the distributional robust counterparts for the Distributionally Robust Optimization (DRO) taking into account posterior information, called Bayesian Ambiguity Sets (BAS) . In particular, the authors address the problem of the worst-case risk optimization where the wort-case distribution is taken subject to an ambiguity set. The first set is BAS with posterior predictive (BAS-PP), that is defined by bounding the KL divergence by $\epsilon$ from the parametric distribution defined by posterior expectation. The authors propose to use it in the cases when distributions have bounded Moment Generation Function, since the DRO has a closed form in this case. For the more general distributions, the authors propose to use the BAS based on posterior expectation (BAS-PE), which bounds the expectation of the LK divergence to the potential parametric distribution, where the expectation is taken over the posterior distribution of a parameter. The authors prove that this formulation allows DRO to have a closed form for the case of exponential family of distributions. The authors demonstrate that the DRO with their ambiguity sets allows one-stage dual problem formulation, whereas the closest benchmark Bayesian DRO by Shapiro et al. (2023) needs to define two-stage optimization and requires more sampling. They also provide an empirical comparison it terms of computational complexity of solving the DRO including the sampling time, and compare the mean-variance trade-offs of the solutions obtained by different DRO formulations.
## Update after rebuttal:
I thank the authors for the detailed response and I keep my original score.
Claims And Evidence: Yes, as far as I can see. However, it would be good to add the direct references / hyperlinks to the proofs in the Appendix of each Lemma / Theorem directly after introducing them in the body of the paper.
Methods And Evaluation Criteria: Yes, it seems so.
Theoretical Claims: I tried to follow the derivations and claims in the body of the paper, and they all seem to be correct. I did not check the appendix.
Experimental Designs Or Analyses: The experiments seem to be quite thorough.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper provides a new way to define the ambiguity sets for the distributional robust optimization that take into account the bayesian posterior, and allow simpler optimization formulations for certain distribution families than the previous formulations.
Essential References Not Discussed: Not that I'm aware of.
Other Strengths And Weaknesses: The paper is very clearly written, very clean, and almost everything is properly defined. The first formulation, DRO_BAS_PP, was proposed before in the non-Bayesian framework, but is extended here to the Bayesian framework, as the authors discuss in the paper. The proposed second formulation, DRO_BAS_PE, seems to be novel. The closed-form formulation of the DRO in this case for the exponential family of distributions also seems to be novel.
Other Comments Or Suggestions: There are just a few typos that I noticed, and some comments I would like to make.
Typos:
- Line 268, Corollary 3.7: I think it should be $\mathcal A_{\epsilon}$ instead of $A_{\epsilon}$
Comments:
- Line 102 left side: "Let x be a decision variable that minimizes a stochastic objective function..." Maybe that should be "chosen to minimize"? Otherwise it sounds like it is already a solution.
- Line 198, right side: When defining exponential family with conjugate prior, could you please add the particular reference where it is taken from?
- Line 203 right side: Also, the function $h(\xi_i)$ in Definition 3.4 seems to be undefined. What does $h$ stand for?
- Please add the references to the proofs in the Appendix directly after the corresponding results in the body of the paper.
Questions For Authors: - Line 248 right side: "It follows that ...(21)". Where does this follow from? The flow here was not very clear to me; please clarify.
- Figure 2: Can you please explain why, as $\epsilon$ increases towards 1, the out-of-sample variance $v(\epsilon)$ grows again in some of the Pareto curves?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for carefully considering our work and providing feedback that greatly improves the paper:
1. **Clarification of line 248 right side**: $\epsilon^{\star}\_{\text{PE}}(n)$ in Eq. 20 is defined as the expected KL divergence between the data-generating process $\mathbb{P}^\star$ and model $\mathbb{P}\_\eta$, indexed by the natural parameter $\eta$. For any $\epsilon \geq \epsilon^\star\_{\text{PE}}(n)$ we have that $\mathbb{E}\_{\eta \sim \Pi}[d\_{KL}(\mathbb{P}^\star || \mathbb{P}\_\eta)] = \epsilon^\star\_{\text{PE}} \leq \epsilon$ and hence $\mathbb{P}^\star \in \mathcal{A}\_{\epsilon}(\Pi)$ by definition of the BAS-PE ambiguity set $\mathcal{A}\_\epsilon(\Pi)$. Since $\mathbb{P}^\star$ belongs to the ambiguity set, the worst-case risk will be at least as big as the risk under $\mathbb{P}^\star$ (since the supremum over a set is always at least as big as the value attained by any of its elements), resulting in Eq. 22. We will add this explanation to the main text and highlight its importance: if one chooses the radius at least as large as $\epsilon^\star\_{\text{PE}}(n)$ then the worst-case risk minimised by DRO-BAS-PE will upper bound the true risk under $\mathbb{P}^\star$.
2. **Explanation of variance behaviour for increasing values of $\epsilon$ in Fig. 2**: The reason is well-understood but quite challenging to convey compactly: the behaviour of the OOS variance $v(\epsilon)$ in the Newsvendor experiments is due to the behaviour of the variance of the solution (denoted by $v\_{x}(\epsilon)$) (see Fig. 10 in this anonymised link: https://github.com/ICML-anon-2025/paper-11717) across epsilons. In turn, the variance of the solution is driven by the Newsvendor asymmetric cost function and its interplay with the worst-case distribution for each $\epsilon$. The Newsvendor objective is piece-wise linear with 2 pieces, and hence, the true risk (expected cost under the DGP, in this example a Normal) has a similar two-piece behaviour. The optimal solution lies at the intersection of the two pieces of the true risk. The piece corresponding to larger solutions has a significantly smaller slope (see 4th plot of Fig. 10), leading to a smaller true risk.
- Let’s consider a fixed value of $M=25$. For small values of $\epsilon$, the variance of the solution $v\_{x}(\epsilon)$ is smaller because fewer distributions are included in the ambiguity set; hence, the obtained solution is fairly stable over replications (see first plot of Fig. 10). However, because BAS likely does not include the DGP, and the decision-making is risk-prone, the OOS variance of the cost function $v(\epsilon)$ is large (see the Pareto curve of the first plot of Fig. 4). In this regime, the OOS variance reduces as we increase epsilon and better capture the DGP.
- As $\epsilon$ increases further, the ambiguity sets start including a lot more distributions, leading to the mean solution moving to values larger than the optimal solution where the slope of the true risk is smaller (see 4th plot of Fig. 10). This makes sense as, as we increase $\epsilon$, we become more conservative. However, as $\epsilon$ increases to very big values (in this case $> 0.5$), the ambiguity set contains a lot of arbitrary distributions, significantly increasing the OOS cost. The methods then push the solution towards smaller values of $x$ as can be observed by the big variance $v\_{x}(\epsilon)$ on the first plot for larger values of $\epsilon$.
- This behaviour is common for all values of $M$, however, there is an important distinction: for smaller values of $M$ (25, 100), the error bars extend way below and above the optimal solution. If we associate this with the form of the true risk on the last plot, we can expect a very high variance $v(\epsilon)$ of the OOS cost. On the other hand, the empirical variance of the solution $v\_{x}(\epsilon)$ for large M ($M = 900$) does not extend below the optimal solution by a high degree meaning that as the optimisation becomes more exact, the methods suggest staying on values bigger than the optimal solution which correspond to the smaller slope of the true risk. This is why in the third plot of Fig. 4 in the main text ($M = 900$), the OOS variance $v(\epsilon)$ does not increase by a lot for big values of $\epsilon$.
- We believe this plot and the above explanation give great intuition on how the specific form of the cost function can affect the behaviour of the method and will hence include it in the Appendix.
3. **Other comments**: We will change the statement in line 102 to your suggestion. We will further add a reference to Definition 3.4, noting that we are following the notation of Murphy et al. 2023 for the conjugate exponential family. We accidentally omitted the definition of $h(\xi)$, which is referred to as the scaling constant.
4. **Typos and references to proofs**: We will correct the typo and add direct references to the location of each proof after each mathematical statement. | null | null | null | null | null | null |
Sum-of-Parts: Self-Attributing Neural Networks with End-to-End Learning of Feature Groups | Accept (poster) | Summary: This paper proposes a new approach to producing self-explainable (attributing) NNs. It does so after identifying limitations of previous approaches. The authors thereupon introduce a group-based SANN approach. They evaluate on several datasets and investigate multiple aspects of the approach.
Claims And Evidence: I think the claims are quite well supported by the experimental results, except for some which I have commented on in the other boxes.
Methods And Evaluation Criteria: I have also read through the relevant parts in the supplements, but I still don't understand what the groups are based on. The authors always mention ..., but I don't understand whether these are latent features or simply partitions of the input features (e.g., image patches). Could the authors clarify this here, but also make this more explicit in the main text?
Theoretical Claims: n/a
Experimental Designs Or Analyses: There are things I can't seem to find in the experimental section. How exactly or for which task do the authors evaluate the models e.g. for RQ1? I.e. what is the task for these datasets and what does the metric MSE tell us for this task? Could the authors give more details in terms of metrics for the evaluation section?
Supplementary Material: Yes, related to the method itself and referenced results.
Relation To Broader Scientific Literature: I think the work relates well to prior work both on the topics of explainability and SENNs in particular.
Essential References Not Discussed: I was wondering how this work relates to topics such as object factorization via approaches such as slot attention as the general idea seems quite similar and I think it would be valuable to discuss this in the context of the related works. Somewhat older, but Important references (there are most likely more novel approaches to search for):
Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, Thomas Kipf: Object-Centric Learning with Slot Attention. NeurIPS 2020
Gautam Singh, Yeongbin Kim, Sungjin Ahn: Neural Systematic Binder. ICLR 2023
This is a little old, but might have additional references: Klaus Greff, Sjoerd van Steenkiste, Jürgen Schmidhuber: On the Binding Problem in Artificial Neural Networks. CoRR abs/2012.05208 (2020)
Other Strengths And Weaknesses: I find the paper is well written and structured. The general idea seems interesting and novel.
I don't know where to write this in ICML's new template, so I'll write it here. I think the method is very cool and I actually lean towards accept, however I then noticed several things that are not optimal yet in the paper (see all of my questions and comments). And I think these need to be tackled for readers to properly follow the authors argumentations and claims. If the authors can comment and apply the suggestions throughout all of the boxes I would be very happy to raise my score to an accept.
Some weaknesses:
I find it bad to have several important, main results in the appendix. E.g., for RQ3, RQ7. In this case I would rather reduce the number of experiments in the main paper (the authors have done quite a lot, maybe move these completely to the appendix) and fill the free space with missing important results.
I am not sure about the correctness of RQ7's evaluations and what they tell us. Could the authors elaborate on this? How does identifying that, 5% more often for correct samples, the model is focusing on object partitions support the statement "Thus, explanations from SOP and other SANNs can help illuminate the reasoning behind model behaviors."
There seems to be a wrong mapping to sections in line 381: "As validated in Section 4.3," in the context of RQ8, what does section 4.3 have to do with RQ8?
I also find the conclusion in line 403 a little strong given the experimental evidence. Perhaps the authors could tone this down a little or argue for why they can make this statement from the one experiment.
Other Comments Or Suggestions: In line 201 the phrase "while the sparse number of groups ensures that the human interpretability." doesn't make sense grammatically.
Figure 3 is quite small to identify individual differences. Could the authors try enlarging this?
In line 326 the authors state "Explanations need to be semantically coherent such as relating to object segments or human-understandable concepts". I don't think this is correct. Explanations per se don't need to be this. We would hope that they are, so that they are aligned with our own human knowledge, but for explanations learned in an unsupervised manner there is no guarantee that this is the case.
Questions For Authors: What is the intuition why in RQ5 SOP does learn explanations that are more coherent with object segmentations? It is not trained so, so where does the influence come from?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the feedback! In the following response, we clarify the posed questions and commit to adjustments as needed.
## Experimental Setup Clarifications
- *What features are groups based on?* Groups are simply subsets of the input features (the $x_{G_i}$ from Definition 1). A SANN takes such groups and embeds them in a latent space (the $h(x_{G_i})$ from Definition 1) with a neural network to make predictions. In our experiments, $x_{G_i}$ manifests as subsets of **image patches and text tokens (line 261-262)**. We will make this more explicit by adding the following statement to Definition 1: “We refer to $x_{G_i}$, the subset of input features of $x$ corresponding to the subset $G_i$ as a group of features of $x$.”
- *Datasets and Metrics.* **On line 256-262, we gave the general categorizations of the tasks (image classification for ImageNet, image regression for CosmoGrid, and text classification for MultiRC), with further details in Appendix C**. To summarize: The classification tasks of ImageNet/MultiRC are to predict the right object in the image or answer the multiple choice question (Appendix C.1.1 and C.1.3). The regression task for CosmoGrid is to predict cosmological parameters of the universe from telescope images (Appendix C.1.2), where the MSE (Mean Square Error) measures how much the predicted cosmological parameters differ from true values (lower is closer and thus better).
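The clarified definition, $f(x) = \sum_i \theta(x)_i\, h(x_{G_i})$ with groups as subsets of input features, can be illustrated with a minimal numeric sketch (all functions and values below are hypothetical toys, not the SOP implementation; in SOP, $\theta$ is the sparse cross-attention selector and $h$ the frozen backbone):

```python
import numpy as np

# Toy input with 4 features and two (overlapping) groups G_1, G_2.
x = np.array([1.0, 2.0, 3.0, 4.0])
groups = [np.array([True, True, False, False]),   # G_1 = {x1, x2}
          np.array([False, True, True, True])]    # G_2 = {x2, x3, x4}

def theta(x):
    # Group scores; stand-in for SOP's sparse cross-attention selector.
    return np.array([0.5, 0.5])

def h(x_g):
    # Per-group embedding; stand-in for the frozen backbone on the masked input.
    return x_g.sum()

def sann_predict(x, groups, theta_fn, h_fn):
    """Group-based SANN: f(x) = sum_i theta(x)_i * h(x restricted to group G_i)."""
    scores = theta_fn(x)
    return sum(s * h_fn(np.where(g, x, 0.0)) for s, g in zip(scores, groups))

print(sann_predict(x, groups, theta, h))  # 0.5*(1+2) + 0.5*(2+3+4) = 6.0
```

Because the prediction is built only from the masked group inputs, the selected groups are the explanation by construction, which is the faithfulness property discussed in the rebuttal.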
## SOP for Model Debugging (RQ7)
The goal of debugging is to identify a “bug”, or the reason behind errors. Here, we find that when SOP is correct, it tends to be using more of the object, whereas when SOP is incorrect, it tends to use more of the background. This analysis shows that one potential bug behind the errors of SOP is when the model looks at the background as an inaccurate proxy for the object. **This finding is only reliable because of SOP’s (and other SANNs) faithfulness guarantee–since predictions depend solely on the selected feature groups**. Without the faithfulness guarantee, one then cannot be confident if the prediction depends on the explanation. Since SOP has more semantic objects and backgrounds in its groups (from RQ5), the analysis is applicable to more SOP predictions than other SANNs.
## Clarifying the Cosmology Study (RQ8)
**The mapping between Section 4.3 and RQ8 is correct.** Specifically, RQ8 asks a scientific question on the dependency of network predictions on cosmological structures. However, **analyzing these cosmological structures with SOP only makes sense if the SOP groups contain such cosmological structures to begin with. This fact was established previously in Section 4.3 (RQ5)**, which measured the semantic coherency of SOP groups in various settings including cosmological structures in CosmoGrid, the setting of RQ8. That is why RQ8 references Section 4.3. We will update the reference to refer specifically to RQ5.
In the final conclusion, we called the result “meaningful” as the conclusion was understandable to experts, and “scientific discovery” as these are relations that experts want to understand but currently do not. **We did not intend for this to be a final irrefutable claim: rather, it is an initial step into the unknown that allows follow-up work to independently confirm or investigate more deeply.** We will adjust the wording to reflect this. These claims were made jointly with expert cosmologist collaborators, who helped us write background, justification, and conclusions of RQ8.
## Relation to Object Factorization
At a high level, both SOP and methods such as slot attention use attention mechanisms to group input features without direct group supervision. **However, the goals (and corresponding design decisions) are quite different.**
SOP forms feature groups that are beneficial for the end-to-end prediction task, without trying to reconstruct the original input. In contrast, techniques like slot attention try to break an input into its parts by learning groups that can reconstruct the input. Slot attention forces competition between groups to contain disjoint components and get coverage over the entire input, whereas SOP allows for groups to both overlap or not cover irrelevant features. Overlapping information (e.g. in Figure 1) can benefit prediction, i.e. when objects have multiple relevant contexts/groups in the image. We will add this discussion to the related work.
## Other Comments
- We will correct the grammatical error in Line 201, and enlarge Figure 3 (a larger version can be found in Figure 4 in https://github.com/icml2025-3311/icml2025-3311/blob/main/ICML2025_3311_rebuttal.pdf ).
- For line 326, we meant for this to refer to human understandability being improved if the explanations are semantic, and will adjust accordingly.
- Lastly, we can certainly swap RQs to be fully in the appendix and enrich the results for other RQs in the main paper. If you can let us know which RQs should be moved, we can adjust accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed comments! I am happy with the clarifications and have update my rating accordingly. Regarding which RQ to move: personally I suggest to move RQ6 to the appendix as this is tied to RQ5 and I see more value in have more details of the other RQs.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for updating your recommendation. We also agree that moving RQ6 to the appendix would be the best. We will incorporate all the revisions into the final version. Thanks again for suggesting improvements that make our paper better! | Summary: This paper proposes a method for extending models to serve as self attributing neural networks (SANNs). The novelty is that it uses a group-based SANN where the grouping itself is learned end-to-end. SANNs are useful in that they provide an added explainable layer on top of black-box models. The paper's theoretical section shows that per-features SANN have a lower bound on the error that group-based SANN do not have. The architecture of the model is made of 3 parts: 1) a group predictor (selecting subsets of the input into groups and masking inputs), 2) a backbone model (pre-trained and frozen) applied to each masked subsets, 3) a scoring module that combines the backbone output for each group into a prediction via sparse cross-attention. The empirical section aims to show both that the method keeps the predictive power and that it provides semantically meaningful groups. On the performance, it is compared to other SANN SOTA approaches on 3 different datasets and shows it is superior to them. On the semantics of the grouping, three assessments are performed: 1) coherence (IoU and threshold-based purity are used to measure coherence and show superior behavior); 2) class-distinctiveness (this is evaluated by humans by checking if the class can be predicted by looking at the grouping); 3) semantic usefulness (do the groupings help in 2 tasks: model debugging - where model's behaviours in grouping more or less objects can be predictive of an over-reliance on background objects; cosmology-discovery assistance - where the method is shown to help understand how cosmic density is correlated to space voids).
## update after rebuttal:
I have considered the authors' rebuttals of all reviews. I believe that most issues have been addressed. Therefore, I raise my rating one notch.
Claims And Evidence: The claim that the method achieves better predictive performance compared to other SANN methods is well supported empirically. However, the claim that the groupings provided by the method are superior to those of other methods is not entirely supported, as the semantic utility is not compared to other methods for the cosmology experiment and shows similar behaviour to other methods for the model debugging experiment.
Methods And Evaluation Criteria: The evaluation criterion for accuracy is 'error', and it is not specified how it is calculated. More standard measures such as AUC and F1-score would be more meaningful. For semantic utility, IoU is used, but it is not well explained how it is calculated (using what type of annotations) and, intuitively, why the grouping would be more useful when intersecting with those annotations.
Theoretical Claims: The theoretical claims mostly formally proves that grouping allows for more expressibility than per-feature methods, which is not really new in my opinion and not quite center with respect to the novelty of the paper, which is more the end-to-end learnable grouping with model agnostic backbone.
Experimental Designs Or Analyses: The experimental design is fine and covers both accuracy preservation and semantic utility. I wish the accuracy of the backbone itself had been provided in the main table (1) in order to evaluate the accuracy degradation for prediction. Also, some of the reported errors for the baselines seem suspiciously high (> 0.7!) on the ImageNet task (those numbers are obtained by the authors, not lifted from the published reference).
Supplementary Material: I only reviewed appendix C.2 and C.3 for baselines and evaluation details.
Relation To Broader Scientific Literature: It seems the paper does refer to the related literature adequately, although i'm not super familiar with this field.
Essential References Not Discussed: It appears that the related works are properly referenced, see box above.
Other Strengths And Weaknesses: pros:
- a novel end-to-end method for SANNs that learns the grouping itself via masking.
- an interesting application to cosmological discovery where the method confirms that voids are more relevant to the estimation of space density than denser clusters.
cons:
- it is unclear how the number of groups is controlled. Is it a fixed number? If yes, it should be studied how its value affects performance.
- the model and overall system is relatively complex and would benefit from pseudo-code algorithms and especially a code repository, none are given.
- given that the appendix itself has 27 (!) pages, a table of contents would have been nice. Furthermore, 27 pages is too many. Please only keep what is absolutely necessary to understand the paper (first 8 pages).
- There are several typos and failed references left, please proof-read again.
Other Comments Or Suggestions: typos:
089-right: anddifferent
179-left: double 'in contrast'
201-right: unfinished sentence after interpretability.
393-left col: unclear explanation for (1) - missing a 'respectively'
394-right: (c) and (c) -> (c) and (d)
1337: however + on-the-other-hand
1365: missing words after incur
1104: bad reference
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We respond to your questions as follows:
## Backing up the Claim that SOP’s Groups are Superior
We would like to clarify a misunderstanding of the experiments: In referring to semantic utility, we believe the reviewer is confusing the claims on group quality with the wrong set of experiments.
Specifically, **experiments showing that SOP’s groups are semantically superior are in Section 4.3,** which validate the semantic coherence of the groups. These experiments measure the semantic meaning of the groups, and evaluate all baselines on all datasets, including cosmology.
We believe the reviewer is misinterpreting Section 4.4 as an evaluation of the groups. Instead, **Section 4.4 highlights how these semantic groups can be utilized in downstream tasks, not to measure groups**. In practice, practitioners will want to use and interpret the best performing model and have an easier time interpreting results that are more semantically aligned, which is SOP as validated respectively in Sections 4.2 and 4.3. For example, our cosmologist collaborators are only interested in interpreting models that can predict the ground truth parameters with low error to obtain trustworthy insights for scientific discovery.
## Clarification for Experiments Metrics
- *Error Metrics.* To our knowledge, **our error metrics are within standard practice in the literature for measuring the performance in these tasks.** ImageNet is almost universally evaluated with top-1 error/accuracy. Cosmogrid uses mean-squared error as it is a regression problem, and is also consistent with prior work. Reporting F1/AUC is not common in these settings. MultiRC can be evaluated with either accuracy or F1 score, and many before us have used accuracy without a significant difference. We reported accuracy to be consistent with ImageNet, but can easily change this to F1 (the results and findings are the same).
- *Exact formulation of IoU-based Semantic Utility.* **At the end of RQ5, we direct the reader to Appendix C.4.1 (line 1550) for detailed descriptions including the exact formulation for IOU, how it is calculated, and the annotations used,** which are object segmentations and human-annotated explanations. Semantic groups that align with objects in images or structures in mass maps are easier for humans to interpret in downstream analysis, as in RQ7/RQ8.
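As a concrete illustration of the IoU metric referenced above, a generic sketch of intersection-over-union between a selected group and a segmentation mask (the paper's exact formulation is the one in its Appendix C.4.1; this is only the standard definition on 0/1 masks):

```python
def iou(group_mask, segment_mask):
    """Intersection-over-union between a selected feature group and an
    object-segmentation mask, both given as 0/1 lists over pixels/patches."""
    inter = sum(1 for g, s in zip(group_mask, segment_mask) if g and s)
    union = sum(1 for g, s in zip(group_mask, segment_mask) if g or s)
    return inter / union if union else 0.0

# A group covering 2 patches vs. an object covering 2 patches, overlapping on 1:
print(iou([1, 1, 0, 0], [0, 1, 1, 0]))  # 1 / 3
```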
## Significance of Theoretical Claims
While the idea that grouping is more expressive than per-feature methods may seem intuitive, this has remained a heuristic with no formal proof. **Our work provides the first formal mathematical proof of this key property**. Furthermore, our theorem provides two key insights that go beyond this intuition. **First, our theorem proves that SANNs require only a finite number of groups**, contrasting other theoretical DL results that have classically assumed infinite width layers to achieve good performance. Second, our theory proves that it is **impossible for per-feature SANNs to achieve good performance.** Until now, it was not known whether per-feature SANNs could be improved with more data, larger models, or better training.
## Why do some baselines have high error?
In our experimental setup, **we carefully controlled for all baselines to have equal compute costs and equal sparsity when measuring accuracy**. This is critical in order to attribute better performance to the SANN approach rather than more computation or more features. In Appendix C.3 we describe the common setup: all methods use at most 20 forward passes with 80% sparsity (keeping 20% features). The high error rates arise from baselines failing to be efficient with respect to compute or sparsity. RISE-F typically requires thousands of random masks to be effective, which is impractical for ImageNet inference. The performance of XDNN and BCos-F is due to their design depending on having all the features (0% sparsity) to make accurate predictions.
## How is the number of groups controlled?
**The maximum number of groups is a hyperparameter that can be selected based on the user’s available computational resources.** We analyzed how different maximum numbers of groups affect performance on ImageNet in Figure 1 in https://github.com/icml2025-3311/icml2025-3311/blob/main/ICML2025_3311_rebuttal.pdf . We see that **performance increases with more groups until it plateaus at 5 groups**, indicating that our model can potentially be 4x more efficient!
## Request for pseudo-code and code repository
**The requested pseudo-code and code repository are already in the submission**. The pseudo-code is in Algorithm 1 at the top of Appendix B (page 24, line 1265-1279). The code repository is included in the supplementary material.
## Other comments:
- We will add the backbone performance to Table 1, which is 0.097 error for ImageNet, 0.00869 MSE for Cosmogrid, and 0.318 error for MultiRC.
- We will add a table of contents to add structure and clarity to the Appendix. | Summary: The paper introduces Sum-of-Parts (SOP), a framework for transforming any differentiable model into a group-based Self-Attributing Neural Network (SANN). The key innovation is the end-to-end learning of feature groups without requiring explicit group supervision. SOP addresses the limitations of per-feature SANNs, which struggle with high-dimensional, correlated data, by proving that group-based SANNs can achieve zero error if the groups align with the underlying correlations in the data. The framework consists of three components: a group generator, a backbone predictor, and a group selector. SOP achieves state-of-the-art performance on vision (ImageNet, CosmoGrid) and language (MultiRC) tasks while providing interpretable and faithful explanations.
Claims And Evidence: 1. Claim 1: Per-feature SANNs have a lower bound on error that grows with the number of features, especially when features are highly correlated.
Evidence: The paper provides theoretical proofs (Theorems 2.3 and A.2) showing that per-feature SANNs cannot model even simple polynomial functions with correlated features. Empirical results (Figure 2) show that the error grows exponentially with the number of features.
2. Claim 2: Group-based SANNs can achieve zero error if the groups align with the underlying correlations in the data.
Evidence: The paper proves (Theorems 2.4 and A.5) that group-based SANNs can perfectly model complex polynomials with zero insertion and deletion errors. Empirical results show that SOP outperforms other SANNs and post-hoc methods on vision and language tasks.
3. SOP achieves state-of-the-art performance while maintaining interpretability.
Evidence: SOP achieves the lowest errors on ImageNet (0.267), CosmoGrid (0.025 MSE), and MultiRC (0.366 error), outperforming baselines like SHAP-F, FRESH, and BagNet (Table 1). The learned groups are validated using quantitative metrics (e.g., intersection-over-union, purity) and human evaluations.
However, the authors do not compare against a recent self-interpretable work, i.e., AutoGnothi [1], which limits its trustworthiness. I suggest the authors compare SOP with AutoGnothi.
[1] Wang S, Tang H, Wang M, et al. Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models[J]. ICLR, 2025.
Methods And Evaluation Criteria: 1. Methods:
- Group Generator: Uses a multi-headed self-attention mechanism to dynamically generate feature groups for each input.
- Backbone Predictor: Makes predictions for each group using a pre-trained model (e.g., Vision Transformer, CNN, BERT).
- Group Selector: Assigns scores to each group using a sparse cross-attention mechanism, ensuring that only a few groups contribute to the final prediction.
2. Evaluation Criteria:
- Performance: Error rates (ImageNet, MultiRC) and mean squared error (CosmoGrid).
- Interpretability: Intersection-over-union (IOU) for ImageNet-S, threshold-based purity for CosmoGrid, and human distinction tasks.
- Faithfulness: Fidelity (KL-divergence between model predictions and summed attributions), insertion and deletion tests.
- Utility: Model debugging (correct vs. incorrect predictions) and scientific discovery (cosmology insights).
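The insertion/deletion faithfulness tests listed above can be sketched generically (a toy illustration of the standard deletion-style test, not the paper's exact protocol): features are removed in order of attributed importance and the drop in the model's score is tracked; a faithful attribution makes the score fall quickly.

```python
def deletion_curve(predict, x, importance_order):
    """Deletion test: zero out features from most to least important and
    record the model's score after each removal."""
    x = list(x)
    scores = [predict(x)]
    for i in importance_order:
        x[i] = 0.0
        scores.append(predict(x))
    return scores

# Toy model: score is the sum of the inputs; importance = feature magnitude.
predict = lambda x: sum(x)
x = [3.0, 1.0, 2.0]
order = [0, 2, 1]  # most to least important feature index
print(deletion_curve(predict, x, order))  # [6.0, 3.0, 1.0, 0.0]
```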
Theoretical Claims: 1. Theoretical Claim 1: Per-feature SANNs have a lower bound on error that grows with the number of features, especially for correlated data.
Support: Theorems 2.3 and A.2 prove that per-feature SANNs cannot model simple polynomial functions with correlated features. The error grows exponentially with the number of features (Figure 2).
2. Theoretical Claim 2: Group-based SANNs can achieve zero error if the groups align with the underlying correlations in the data.
Support: Theorems 2.4 and A.5 show that group-based SANNs can perfectly model complex polynomials with zero insertion and deletion errors.
Experimental Designs Or Analyses: 1. Datasets: ImageNet, CosmoGrid, and MultiRC.
2. Baselines: SANNs: XDNN, BagNet, FRESH.
3. Post-hoc Methods: LIME, SHAP, IG, GradCAM, RISE, Archipelago, MFABA, AGI, AMPE, BCos.
Please compare with Shapley-value-based methods like KernelSHAP [2], FastSHAP [3], ViT-Shapley [4] and AutoGnothi [1].
[2] Lundberg S M, Lee S I. A unified approach to interpreting model predictions[J]. Advances in neural information processing systems, 2017, 30.
[3] Jethani N, Sudarshan M, Covert I C, et al. Fastshap: Real-time shapley value estimation[C]//International conference on learning representations. 2021.
[4]. Covert I C, Kim C, Lee S I. Learning to Estimate Shapley Values with Vision Transformers[C]//The Eleventh International Conference on Learning Representations.
Supplementary Material: Yes. I review the code.
Relation To Broader Scientific Literature: I believe it is crucial to investigate self-attribution in terms of groups of features rather than individual tokens or features. However, AutoGnothi [1] already achieves faithful interpretability without compromising performance. It is therefore important for the authors to provide a deeper discussion of related work and offer a more thorough comparison of the results.
Essential References Not Discussed: [1] Wang S, Tang H, Wang M, et al. Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models[J]. ICLR, 2025.
Other Strengths And Weaknesses: 1, Computational Cost: Generating and evaluating groups for large datasets (e.g., ImageNet) can be computationally expensive.
2, Out-of-Distribution Data: The binarization of groups may create out-of-distribution data, although the paper argues that modern Transformers are robust to this.
3. Limited Human Evaluation: The human distinction task is conducted on a small subset of images (10 examples), which may not be representative.
Other Comments Or Suggestions: See above comments.
Questions For Authors: See above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We provide an additional PDF at https://github.com/icml2025-3311/icml2025-3311/blob/main/ICML2025_3311_rebuttal.pdf for additional figures and will respond to your questions below.
## Comparison with Additional Shapley-Based Baselines
We first would like to clarify a key distinction between SANNs and the referenced line of Shapley-based methods. Specifically, **these Shapley-based methods (SBMs) are not Self-Attributing Neural Networks (SANNs)**. Instead, these methods are faster approximations of the SHAP baseline that we already compare to. The goal of these SBMs is to efficiently approximate post-hoc explanation Shapley values, with later works (FastSHAP, ViT-Shapley, Auto-Gnothi) training additional surrogate models to estimate Shapley values. AutoGnothi, for example, trains multiple side networks to generate approximations of Shapley explanations. **These SBM explanations are crucially never passed into the backbone to make a prediction.** In contrast, in a SANN, the features in an explanation are directly passed into the backbone to guarantee that predictions rely solely on the explanation. Therefore, the explanations of AutoGnothi and other Shapley-based methods are more similar to post-hoc explanations and do not fall into the category of SANNs. This fact is reflected in SBM experiments: all the papers of FastShap, ViT-Shapley, and Auto-Gnothi only compare to post-hoc explanation methods and do not compare to any SANNs, as a direct comparison is not possible.
In our paper, we provide comparisons to SANN variants of post-hoc methods. Since we already compared to a SANN variant of SHAP (in Table 1, denoted SHAP-F), for completeness, we ran analogous comparisons with SBMs using their released codebases. Since the suggested methods involve training additional side networks, for fair comparison, we ensured that all baselines had equal or greater training resources than SOP. **We show the results in Table 1 in the additional PDF, where we find that their performance as a SANN is actually even worse than SHAP, and consequently worse than SOP.** Intuitively, this result is not too surprising since these methods are faster but more inaccurate approximations of SHAP, trading off accuracy of the Shapley value for speed. KernelSHAP was too expensive to be fairly run as a baseline, as it requires a minimum number of forward passes at least equal to the number of features to find the solution to their loss function in Theorem 2 in their paper and code [2], making it an order of magnitude more expensive than all other baselines.
## On Computational Cost of SOP
Since the reviewer brings up computational cost as a potential weakness, we point out that **our SANN experiments are orders of magnitude larger scale than those of the referenced SBMs**. The AutoGnothi and ViT-Shapley papers conduct experiments on small scale datasets such as ImageNette, which only has 10 easily distinguishable classes where it is easy to get 100% accuracy. In contrast, **we train SANNs on full ImageNet (1000 classes!), where SOP achieves SOTA performance with only one epoch of training.** Since we are evaluating SBMs in a significantly larger setting than originally proposed, we checked the accuracy of surrogate models to ensure these baselines were trained properly. We confirmed that the resulting surrogate models trained on masked inputs have comparable accuracy on ImageNet to other work in the literature [1, Figure 10] that also trains models on similarly masked inputs, so we are certain these baselines were sufficiently trained.
If the reviewer is still concerned about cost, we have also included an ablation study (Figure 1 in the additional PDF) showing that SOP performance on ImageNet saturates at just 5 groups, meaning we can further reduce inference costs by an additional factor of 4 beyond what we originally reported.
[1] Jain et al. Missingness Bias in Model Debugging. ICLR 2022.
## Human Distinction Task With More Samples
First, we point out that **our human distinction task follows the exact same protocol as the original HIVE paper published at ECCV** [2a], which uses 10 examples for evaluating human distinction tasks [2b]. We believe this to be partly because evaluating the task on a single example requires significant human effort, with the original study consisting of hundreds of evaluator tasks. **We have increased the study to 50 examples,** which amounts to thousands of evaluator tasks. In summary, we found results consistent with what we originally reported (that SOP is among the best, but that the distinction test cannot distinguish between the best), but with slightly smaller error bars and smaller differences between methods. The expanded study is shown in Figure 3 in the additional PDF.
[2a] Kim et al. HIVE: Evaluating the Human Interpretability of Visual Explanations. ECCV 2022.
[2b] https://github.com/princetonvisualai/HIVE/blob/main/materials/HIVE_suppmat.pdf
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I would like to point out that AutoGnothi is actually a SANN, which simultaneously outputs the prediction and its explanation. I suggest the authors compare the results with AutoGnothi.
---
Reply to Comment 1.1.1:
Comment: ## Reviewer’s mischaracterization of AutoGnothi vs. SANNs
**A network that simultaneously produces a prediction and explanation is not necessarily a SANN.** SANNs explicitly use feature subsets to make predictions, formalized as $f(x) = \sum_{i=1}^m \theta(x)_i h(x_{G_i})$ (Definition 2.1). This SANN framework aligns with prior work [1,2,3]. AutoGnothi generates predictions and explanations separately by using different heads on the last hidden states [4]. However, **this explanation is never used for feature selection to make the prediction, as stated in the AutoGnothi paper**. While AutoGnothi simultaneously produces explanations and predictions, it does not fall under the SANN framework and is therefore not a SANN.
This was explained in our [initial rebuttal](https://openreview.net/forum?id=r6y9TEdLMh&noteId=uKLJz7H3DW).
If the reviewer can concretely explain how the original AutoGnothi model can be formalized as a SANN, then we would be happy to categorize it as such.
## Initial rebuttal contains requested comparisons
**Requested comparisons to AutoGnothi, FastShap, and ViT-Shapley have already been reported on their SANN variants.** As we previously explained in detail in the [initial rebuttal](https://openreview.net/forum?id=r6y9TEdLMh&noteId=uKLJz7H3DW), these methods are faster, less-accurate approximations of SHAP. **SOP outperforms SHAP and all requested baselines**. Nonetheless, we are happy to include these comparisons in the final version.
[1] David Alvarez-Melis, Tommi S. Jaakkola. Towards Robust Interpretability with Self-Explaining Neural Networks. NeurIPS 2018.
[2] Brendel, W. and Bethge, M. Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet. ICLR 2019.
[3] Jain, S., Wiegreffe, S., Pinter, Y., and Wallace, B. C. Learning to Faithfully Rationalize by Construction. ACL 2020.
[4] Shaobo Wang, Hongxuan Tang, Mingyang Wang, Hongrui Zhang, Xuyang Liu, Weiya Li, Xuming Hu, Linfeng Zhang. Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Transformers. ICLR 2025. | Summary: The paper addresses the limitations of existing self-attributing neural networks (SANNs) in high-dimensional data. The authors theoretically prove a lower bound on the error of per-feature SANNs and demonstrate that group-based SANNs can overcome this limitation. The main algorithmic contribution is the Sum-of-Parts (SOP) framework, which transforms any differentiable model into a group-based SANN. SOP achieves state-of-the-art performance for SANNs on vision (ImageNet-S, CosmoGrid) and language (MultiRC) tasks. The learned feature groups are shown to be interpretable through quantitative metrics (performance at different sparsity levels, faithfulness) and semantic metrics (semantic coherence, human distinction). Furthermore, the paper demonstrates the utility of SOP explanations in model debugging and cosmological scientific discovery.
Claims And Evidence: Overall, the claims are well-supported by evidence; however, some minor points may require further clarification or additional evidence.
1. Performance: While SOP achieves strong results on insertion metrics, its performance on deletion metrics is somewhat weaker than certain baselines, such as Archipelago or FRESH, in specific cases. Although the authors explain this behavior as natural given SOP's reliance on multiple feature groups rather than a single group, this aspect could benefit from further empirical exploration to fully justify the claim.
2. Overhead: The authors propose SOP as a flexible and model-agnostic framework; however, they do not clearly discuss its computational overhead or scalability implications. Given that SOP involves dynamic attention-based group generation and multiple forward passes through pre-trained models per input example, computational efficiency could be a concern in practical applications.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. However, I have some doubts and would like the authors to comment on them.
1. Group Diversity: Do you think multi-head attention may produce redundant groups? If yes, methods like diversity regularization could help?
2. Deletion Bias: Deletion tests generally favor models using fewer features, slightly disadvantaging SOP’s multi-group approach.
Theoretical Claims: The theoretical claims are well-supported by rigorous proofs and empirical validation, particularly for Theorems 2.3 and 2.4. The authors also posit conjectures (e.g., Conjecture A.1 and A.2) suggesting exponential growth of insertion/deletion errors for monomials and binomials as feature dimensions increase. These conjectures are supported by empirical fits to numerical results but lack formal proofs.
Experimental Designs Or Analyses: The experimental designs and analyses are well-executed and provide convincing evidence for most claims made in the paper. However, there are some minor issues where further validation could strengthen the work.
1. Pre-Trained Backbones: SOP uses pre-trained models (e.g., ViT, BERT), while some baselines (e.g., BagNet) do not. Hence it is ideal to compare SOP to baselines using the same pre-trained backbone for fairness.
2. Computational Efficiency : SOP’s dynamic group generation and sparse attention may increase computational cost compared to baselines. Hence the authors should include a detailed analysis of training/inference times and memory usage.
3. Generalizability: The proofs focus on polynomial functions, which may not capture all real-world data patterns (e.g., non-polynomial interactions). I would like the authors to comment here.
Supplementary Material: The supplementary material provides extensive theoretical justifications, algorithmic details, experimental setups, and additional results that reinforce the claims made in the main paper. It is thorough and well-documented, leaving little room for ambiguity. While some conjectures remain unproven, they are strongly supported by empirical evidence. I did not check all the theorems in detail and it's definitely possible that I missed some details provided in the supplementary material. Overall, the supplementary material significantly enhances the credibility and reproducibility of the work.
Relation To Broader Scientific Literature: SOP's key contributions lie in its theoretical insights into the limitations of per-feature SANNs, its novel framework for end-to-end learning of semantically coherent feature groups without supervision, its state-of-the-art performance among SANNs, and its rigorous validation of interpretability and utility in diverse applications. These advancements significantly contribute to the ongoing research efforts in the field of interpretable machine learning.
Essential References Not Discussed: The paper does a thorough job of discussing related works in the context of self-attributing neural networks and interpretability.
Other Strengths And Weaknesses: The paper makes a compelling contribution to interpretable ML by addressing the trade-off between performance and faithfulness in SANNs. However, some minor weaknesses exist in my view.
1. Computational Cost: While the paper demonstrates strong performance, it does not extensively discuss the computational cost associated with the SOP framework. The use of attention mechanisms, especially multi-headed self-attention, can be computationally intensive. Providing an analysis of the training and inference time complexity and potentially comparing it to other SANN methods would be valuable for understanding the practical feasibility of SOP.
2. Hyperparameter Sensitivity: The paper mentions using a sparsity level of τ=20% for the group generator but does not delve deeply into the sensitivity of the results to different hyperparameter settings, such as the number of groups (m) or the sparsity level. An analysis of how these hyperparameters affect the performance and interpretability of SOP would strengthen the paper.
3. Interpretability of Learned Groups: While the paper presents quantitative and qualitative evidence for the interpretability of the learned groups, further visualization and analysis of the actual feature groups discovered by SOP could provide more intuitive insights. Understanding what kind of feature groupings the model learns for different tasks and classes could further enhance the understanding and trust in the explanations provided by SOP.
4. Cosmological Evaluation Metrics: The threshold-based purity metrics for CosmoGrid rely heavily on domain expertise. While justified by collaboration with cosmologists, these metrics may be less accessible or transparent to non-expert readers without additional context or sensitivity analyses.
Other Comments Or Suggestions: Typos:
1. L89: "anddifferent" -> "and different"
2. L211: "in and Figure" -> "in Figure"
Questions For Authors: It would be great if the authors consider below recommendations to improve the paper:
1. Provide a computational complexity analysis and discuss optimizations for scalability.
2. Include a discussion of potential biases inherited from pre-trained backbones and mitigation strategies.
Overall, the paper presents significant original contributions through rigorous theory, innovative methodology, strong empirical validation across multiple domains, and practical utility in scientific discovery. Minor weaknesses include limited exploration of computational overhead, and brief explanations regarding certain evaluation metrics, which I mentioned before. These weaknesses do not substantially undermine the paper's core contributions but indicate areas where additional clarity or evidence could further strengthen the work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for the valuable and encouraging review! We provide an additional PDF at https://github.com/icml2025-3311/icml2025-3311/blob/main/ICML2025_3311_rebuttal.pdf for additional figures.
## Explaining the Deletion Performance of SOP.
To explain how SOPs usage of multiple groups can affect the deletion, we show the average deletion curves for SOP in Figure 2 in the linked PDF. We see that even as we delete features from groups, SOP is able to maintain relatively high performance in some cases, resulting in a worse deletion score. **Such behavior is natural because the training objective in SOP encourages the group selector to select highly predictive groups, and multiple groups can compensate for the information missing in another group.**
## What is the computational complexity of SOP?
The cost of SOP (running time and memory) is dominated by the forward passes through the backbone predictor $h$, which can be seen in the definition of $f$ in Section 3. **Therefore, when using $m$ groups, the cost of an SOP forward pass is equivalent to $m$ forward passes through the backbone.** The costs of the group generator and the group selector are negligible in comparison, since each one is a much smaller attention module on sequences of $m$ inputs. We will make this cost explicit at the end of Section 3.
In our experiments, we control computation fairly across all baselines, which is discussed at length in Appendix C.2 & C.3. All methods can use at most 20 forward passes per inference with the exception of Archipelago which requires quadratic forward passes (Appendix C.3 line 1531-1533). For SOP, this amounts to **m=20 groups and thus 20 forward passes** per inference (Appendix C.2.2 line 1462-1463). SOP can match or beat all other baselines with equal compute.
A detailed summary of computation is in Table 2 in the linked PDF, which we will add to Table 2 of the main paper to make this information more visible.
Lastly, the training needed to fine-tune SOP modules (Appendix C.2.2 line 1464-1465) are minimal compared to training the backbone (e.g. one epoch for ImageNet).
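To make the cost claim concrete, here is an illustrative sketch (our own, not the actual SOP code) showing that one group-based forward pass over $m$ groups issues exactly $m$ backbone calls:

```python
import numpy as np

class CountingBackbone:
    """Wraps a predictor and counts forward passes (illustrative only)."""
    def __init__(self, h):
        self.h = h
        self.calls = 0
    def __call__(self, x):
        self.calls += 1
        return self.h(x)

def sop_style_forward(x, groups, scores, backbone):
    """One group-based forward: one backbone call per group, then a weighted sum."""
    outs = [backbone(np.where(g, x, 0.0)) for g in groups]
    return sum(s * o for s, o in zip(scores, outs))
```

With $m$ groups the counter ends at $m$, matching the claim that the group generator and selector are negligible next to the backbone passes.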
## Questions about Group Diversity.
Multihead attention and explicit diversity regularization could potentially change group diversity. We found that **simply using more attention heads in the group generator actually helped create more diverse groups,** an observation that we briefly mentioned in Appendix C.2.2 line 1462-1462. Concretely, we observed that using four heads in the group generator results in 32% fewer overlapping patches in groups than one head on ImageNet.
## Comparing SOP to Baselines with the Same Backbones.
We confirm that SOP is already compared to baselines with the same backbones whenever possible. Specifically, **we compared 10 model-agnostic baselines that use the exact same backbone model as SOP**, and 4 baselines that are architecture specific. This is summarized in the main paper in Table 1 (see “Model-Agnostic” column) and discussed at length in Appendix C.3. We included architecture-specific baselines as readers wanted to see such comparisons.
## Generalizability of Polynomials in the Theory.
We note that our theory tackles polynomials in the most general setting, with no restrictions on the type or degree of polynomials, and therefore has broad implications. Thanks to the Stone-Weierstrass theorem, every continuous function can be uniformly approximated by a polynomial function. **Since our result applies to any polynomial, it also applies to those that uniformly approximate real-world data patterns.**
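For reference, the polynomial form of the Stone-Weierstrass theorem underlying this argument can be stated as follows (standard textbook statement, not text from the paper):

```latex
% For any compact K \subset \mathbb{R}^n and continuous f : K \to \mathbb{R},
\forall \varepsilon > 0 \;\; \exists\, p \in \mathbb{R}[x_1, \dots, x_n] \;:\;
  \sup_{x \in K} \lvert f(x) - p(x) \rvert < \varepsilon
```

Since the approximating $p$ can be any polynomial, a result that holds for all polynomials transfers, up to $\varepsilon$, to the continuous functions they approximate.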
## Sensitivity to Sparsity / Number of Groups.
- *Sparsity Analysis*. **We have already done this sparsity analysis in Section 4.2 RQ2 (Figure 3)**, with further analysis in Appendix C.4.2 (Figure 18).
- *Number of Groups Analysis*. In Figure 1 in the linked PDF, we measure how the number of groups affects accuracy for ImageNet. Naturally, **having more groups improves accuracy**, and it saturates at 5 groups, suggesting that SOP can actually be 4x more efficient than originally reported.
## Further Visualization & Analysis of Groups.
**We have already included visualizations** in Figures 5+7 in the main paper as well as Figure 11~15 in the appendix for more visual examples, and **RQ5 and RQ6 do a semantic analysis of said groups**. Furthermore, **RQ7 and RQ8 provide insights on how different features groups learned with SOP affect predictions**.
## More Context for Cosmology Study.
We referenced additional detailed and accessible cosmology background in Appendix D, where we introduce the problem and discuss the kinds of findings and groups cosmologists find interesting.
## Biases in Pretrained Models.
Pretrained models are known to encode a range of social biases, including racism and sexism. One could utilize algorithms that edit models for fairness or debias them. Such biases equally affect all baselines in our work that use the same pretrained model.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, which has effectively clarified my doubts. I appreciate the effort you put into addressing my concerns.
Based on your reply, I have adjusted my rating accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your encouraging comment and for updating your recommendation. We appreciate your feedback on improving our paper and will incorporate all important points from our discussion into the revision. | null | null | null | null | null | null |
Learning to Plan with Personalized Preferences | Reject | Summary: This paper proposes a framework/benchmark, named Preference-based Planning (PbP), for learning preferences from human behavior and subsequently planning actions guided by these learned preferences. PbP is an embodied benchmark built upon NVIDIA Omniverse and OmniGibson, and provides a large-scale, structured environment for evaluating preference learning and personalized planning. The performance of the PbP framework/benchmark is mainly evaluated by leveraging extensive state-of-the-art (SOTA) algorithms.
-----------------update after rebuttal---------------
My concerns were addressed by the authors and I maintain my original score.
Claims And Evidence: There are several claims made in this paper, including but not limited to the followings:
- Claim: The planning adaptability can be improved via learning human preferences from few-shot demonstrations.
- Evidence: This paper validates and supports this claim via empirical evaluations. The empirical results demonstrate that the action prediction performance can be improved via incorporating learned preferences as intermediate representations.
- Claim: The PbP benchmark provides a comprehensive and systematic evaluation for preference-based planning.
- Evidence: This paper supports this claim by spanning 50 distinct scenes and encoding 290 unique preferences, with a comprehensive test set of 5,000 instances.
Most of these claims are supported by quantitative results; one potential limitation is that these results/claims are mainly validated via simulations, so the real-world applicability of PbP is not quite clear.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense in general. For example, generalization performance is evaluated by testing models on novel environments, a method widely used in the literature.
Theoretical Claims: No. There are no theoretical claims in this paper.
Experimental Designs Or Analyses: In general, the experimental designs are well-structured, since this paper mainly focuses on introducing a new benchmark. One potential issue is that error propagation in the two-stage learning is clearly to be expected, yet there is no comprehensive or deep analysis from this perspective. How does the noisy preference prediction affect downstream planning?
Supplementary Material: Sections B&C, not too much on the Baseline details of Section D.
Relation To Broader Scientific Literature: This work is based on preference learning, embodied AI benchmarks, etc.
Essential References Not Discussed: The paper discusses most relevant works.
Other Strengths And Weaknesses: - It is clear that there will be error propagation in the two-stage learning; however, there is no comprehensive or deep analysis from this perspective. How does the noisy preference prediction affect downstream planning?
- Most of the claims in this paper are mainly validated via simulations, the real-world applicability of PbP is not quite clear.
Other Comments Or Suggestions: - How diverse are user preferences in the benchmark? Does the benchmark sufficiently cover cross-cultural differences?
Questions For Authors: - There is often preference drift in practice, will the benchmark be able to handle evolving or conflicting user preferences over time?
- In practice, the amount of data may be limited. In order to achieve reliable preference inference in real-world applications, is there any estimations on the number of demonstrations required?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer kiee:
Thank you for your thoughtful review and constructive feedback.
> Some potential limitations include these results/claims are mainly validated via simulations, the real-world applicability of PbP is not quite clear.
Most of the claims in this paper are mainly validated via simulations, the real-world applicability of PbP is not quite clear.
We acknowledge the weakness regarding the lack of real-world demonstrations. We would argue that our study has a different focus. The preference definitions, together with the simulation based on prior research in the community, can be viewed as a simplified simulation of real-world human-AI interactions. We did not ground the task in more challenging settings involving real humans, because their preferences might change unpredictably and observations of their behaviors can be noisy and error-prone. Instead, we focused on fundamental reasoning and planning tasks where a stable preference can guide the entire process. Moving forward, however, it is indeed important to explore our current work in real-world scenes with diverse and unpredictable real-human behaviors. Adapting the pipeline to real-world scenarios is a valuable direction for future work.
> One potential issue is that it is clear that there will be an error propagation in the two-stage learning, however, there is not a comprehensive or deep analysis from this perspective. However does the noisy preference prediction affect downstream planning?
We have shown the results in Table 1, where the middle-row results use previously inferred noisy preference labels, and the bottom-row results use ground-truth preference labels for downstream planning. There is a significant performance drop when using noisy preference labels.
> How diverse are user preferences in the benchmark? Does the benchmark sufficiently cover cross-cultural differences?
Very important question. There are 290 pre-defined preferences in PbP. There are certainly some corner cases or cross-cultural differences not covered in this scope. However, as defined, these preferences can be seen as high-level abstractions of action sequences. The generalization of our preference modeling doesn't necessarily depend on the scope of these pre-defined preference definitions. The model learns from action sequences and finally outputs actions as well. The pre-defined preferences mainly serve as a guide to help us sample demonstrations and help model planning. While they may not cover all corner cases, they provide a substantial enough range to serve as a benchmark for evaluating the baselines.
Besides, our environment naturally supports extensions like cross-cultural differences or preference drift. So long as the task can be formulated as action sequences and sub-structures exist in the action distributions, the methodology can be applied, no matter how domains vary, whether humans have their own unique preferences, or whether they organize preferences according to cultural differences. Our proposed three-tiered hierarchical structure of preferences is designed from the perspective of how things happen in a household scenario.
> There is often preference drift in practice, will the benchmark be able to handle evolving or conflicting user preferences over time?
Yes. As the learned preferences are mainly high-level abstractions of action sequences, as long as the user's preference drift is demonstrated and can be observed in their behavior, the preference evolution can be learned and the machine's policy updated accordingly.
> In practice, the amount of data may be limited. In order to achieve reliable preference inference in real-world applications, is there any estimations on the number of demonstrations required?
Yes. This is exactly what Figure 7 shows. We conduct an ablation study on the number of demonstrations and show the results in various cases. Generally, more demonstrations mean better performance. In our experimental settings, we found that approximately five demonstrations are typically enough. However, real-world applications may require further validation, as they involve more complex perception challenges and more nuanced human behaviors than our controlled experimental environment. We also agree that a more rigorous study would be required to establish accurate scaling laws.
We sincerely welcome your further feedback. | Summary: This work attempted to develop agents capable of learning preferences from few-shot demonstrations and generalize across diverse household task-planning scenarios. in that pursuit, the work also presents the 'Preference-based Planning' (PBP) benchmark featuring a set of demonstrations rendered within a simulator, representing 290 different preferences spanning multiple levels from low-level action execution to spatial constraints to sequence-level preferences. The findings indicate that learned preferences may be useful intermediate representations for planning and that pure language models show potential for scalability over vision-language models.
### Update after rebuttal:
My major concerns and questions have been addressed. I do not have any further qualms about this work. Thus, I maintain my acceptance score.
Claims And Evidence: 1) In section 3, the motivations for formulating preference learning as few-shot learning from demonstrations are technically naive. Relevant literature in embodied AI and preference learning needs to be reviewed to back up the hypotheses and motivations for this approach, beyond the current evidence that compares the abilities of humans and asserts the difficulty of preference collection.
Methods And Evaluation Criteria: 1) Methods and evaluation criteria are largely sound as far as I saw. Empirical and ablation studies have been conducted to understand the impact of the number of demonstration examples in the prompts for the models. There is a sufficient comparison of text-based models and vision-based models.
Theoretical Claims: Section 3 - Formulating Preference-based Planning: The observations are stated to contain $S_i$, the egocentric observation sequence video, $A_i$, the action sequence, and $M$, the bird's-eye view of the scene maps. It is unclear whether or how the $M$ values (the bird's-eye views of the scene maps) are utilized in the model inputs. More details are needed explaining whether there are any explicit mappings other than $A_i$ between the $S_i$ and $M$ values. The work also needs to specify whether this was a design choice or an accepted practice for this kind of evaluation.
Experimental Designs Or Analyses: 1) Section 5.4: the generalization experiment details are insufficiently explained. The description says that demonstration and test videos are rendered with the same objects and identical rooms. It's unclear what difference is being leveraged in the hypothesis of the experiment. A side-by-side rendering of the two instances would be useful to clear up the conflation.
Supplementary Material: Yes, I have reviewed appendix A to C.
Relation To Broader Scientific Literature: The work's results are relevant for embodied AI and preference learning communities in that it highlights issues with preference learning with few-shot demonstrations and generalization of learned preferences across different settings, especially with multimodal models.
Essential References Not Discussed: Section 4.2 Constructing the PBP test set claims that the egocentric perspective is prioritized for certain reasons. It is essential to note that the VLMs have a frame of reference bias. Regardless of whether the authors knew about prior works proving the same such as [1], it is prudent to mention that such bias may well exist and that this work is aware of it when performing evaluations, especially as the work also presents a benchmark.
[1] Zhang, Zheyuan, et al. "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities." ArXiv, 2024, https://arxiv.org/abs/2410.17385.
Other Strengths And Weaknesses: ## Other Weaknesses:
1) An insufficient amount of work is done on failure analysis of the experiments. For example, it would have been useful to know which levels of preference learning are more prone to failure, and which kinds of models find them difficult.
Other Comments Or Suggestions: 1) Section 5.4, Table 3, Figure 6 - In some cases, 'gen' has been used to denote the generalization setting, and in other cases 'orig' has been used for the same. Are they different settings? If not, It is best to have consistent definitions.
Questions For Authors: 1) Section 5.3 two-stage learning, what do the auxiliary preference tokens look like? How do they differ for symbolic models vs vision models?
2) Does option-level include action-level preferences as well? If not, Why is option-level and sequence-level compared and not action-level preferences?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer rU32:
Thank you for your thoughtful review and constructive feedback.
> the motivations for formulating preference learning as a few-shot learning from demonstrations are technically naive. Relevant literature in embodied AI and preference learning needs to be reviewed.
Our formulation is based on two considerations:
i) Humans, even infants, are found to have the ability to detect others' preferences from only a few demonstrations. There is indeed a body of literature, especially in psychology, supporting this [1][2][3]. Recently in embodied AI, works like [7][8] demonstrate the importance of personalization in robot assistants and highlight the difficulty of preference inference and adaptation. So to facilitate embodied AI, it is necessary to test such ability.
ii) Under our home assistant setting, it is nearly impossible to collect a large amount of demonstrations for a specific person and task, and it is too tedious for users to choose or rank preferred trajectories when performing everyday tasks. So in a realistic setting, learning from observations of user behaviors is more natural than collecting preference data. There exists literature proposing similar opinions, such as reducing query times as far as possible [5][6], or using few-shot learning from demonstrations for adaptation [4].
We will add more literature discussion in the Intro part and Related work part.
[1] Choi et al. How do 3-month-old infants attribute preferences to a human agent?. Journal of experimental child psychology.
[2] Duh et al. Infants detect patterns of choices despite counter evidence, but timing of inconsistency matters. Journal of Cognition and Development.
[3] Baker et al. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nature Human Behaviour.
[4] Verma et al. Adaptagent: Adapting multimodal web agents with few-shot learning from human demonstrations.
[5] Hejna et al. Few-shot preference learning for human-in-the-loop. PMLR.
[6] Yu et al. Few-shot in-context preference learning using large language models.
[7] Hwang et al. Promptable behaviors: Personalizing multi-objective rewards from human preferences. CVPR.
[8] Hellou et al. Personalization and localization in human-robot interaction: A review of technical methods. Robotics.
> It is unclear whether or how the M - the bird's eye view are utilized in the inputs. More details are needed explaining any explicit mappings other than A_i between the S_i and M values. The work also needs to specify whether this was a choice of design for such kinds of evaluations.
The scene maps are not used as model input in our settings, as discussed in Sec. 4.2. Rather, they help illustrate the overall process of robot behaviors. Their exclusion from model inputs is a deliberate design choice to focus on egocentric perception. There are no explicit mappings other than $A_i$ (which can be human or robot actions). For evaluation, we kindly refer to the examples and discussion in our response to reviewer eCTV.
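A hypothetical sketch of this data layout (field names are our own, not from the paper) may help clarify what each model does and does not see:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Demonstration:
    """Illustrative container for one PbP demonstration (field names assumed).

    S_i: egocentric observation sequence; A_i: action sequence; M: bird's-eye
    scene map, which per the rebuttal is used for visualization only and is
    excluded from model inputs.
    """
    observations: List[str]          # S_i, e.g. frame identifiers
    actions: List[str]               # A_i, human or robot actions
    scene_map: Optional[str] = None  # M, never fed to the model
```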
> Generalization experiment
For the generalization experiment, scenes and objects can be randomly sampled for the same preference. We will add side-by-side renderings in the revision. Thank you for your suggestion.
> It is essential to note that the VLMs have a frame of reference bias.
We will explicitly highlight this point together with the references in revision.
> It would have been useful to know which levels of preference learning are more prone to failure or are difficult for what kind of models.
Tables 1-3 in the paper report model-level performance at different levels. Generally, all models struggle more with the sequence level than the option level, as dependencies between preference steps accumulate errors in long-horizon preference reasoning.
> 'gen' and 'orig' has been used for the same.
Sorry for the confusion. We will fix this confusion in revision.
> What do the auxiliary preference tokens look like? How do they differ for symbolic models vs vision models?
The auxiliary preference tokens are the preference labels we define in section 4.1. They are the same for symbolic models and vision models in the two-stage experiment setting.
> Does option-level include action-level preferences as well?
Yes. Action-level preferences are related to single actions. We did not explicitly include action-level preferences in the comparison because, as commonly defined, action preferences are related to specific actions, such as drinking juice vs. drinking coffee. A basic imitation policy can effectively learn these preferences, where models only need to remember and repeat. We would like to focus on more complex preferences that cannot be easily addressed through simple copying/imitation, and to test models in scenarios where preferences involve more nuanced or context-dependent decisions.
We sincerely welcome your further feedback. | Summary: The paper introduces Preference-based Planning (PBP), a benchmark for learning human preferences and integrating them into AI planning. The framework enables AI agents to infer user-specific preferences from a few demonstrations and apply them in task planning.
Claims And Evidence: Following claims are made with or without evidences:
1. AI agents can learn user preferences from limited demonstrations and generalize them across diverse planning tasks.
2. "Few-shot learning generalizes well across various scenarios" – No direct comparison with baseline retrieval-based or reinforcement learning based approaches.
3. "Preference learning significantly improves AI adaptability" – While performance gains are observed, no real-user evaluations validate whether these improvements translate to better human-AI interactions.
4. "PBP is a realistic simulation and real-time rendering of human preferences" – The benchmark relies on synthetic data, and no real-world demonstrations are included.
Methods And Evaluation Criteria: The proposed method leverages AI agents, with Levenshtein distance and accuracy as the metrics for performance quantification. However, more experiments are needed to see how the work is in line with the literature. There is a lot of research on RL-based preference learning (see the literature section).
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: They generated 50 distinct scenes, 290 unique preferences, and 5,000 test cases. Two experiments are designed:
1. End-to-End Action Preference Learning – Models generate action sequences directly from past demonstrations.
2. Two-Stage Learning & Planning – First, models predict user preferences, then use them for task planning.
Levenshtein distance and accuracy are used as performance evaluation metrics. However, a detailed comparison with SOTA is missing.
Supplementary Material: NA
Relation To Broader Scientific Literature: Yes. Preference learning is a trending topic in RL and current work leverages AI agents to learn preference and actions.
Essential References Not Discussed: Though relevant literature is included, RL-based methods are common in the literature and it would be interesting to see how the work compares to them. Many types of preferences are defined in the RL domain, but the paper doesn't mention which kind of preference it uses. See and cite the following papers for reference.
1. Advances in Preference-based Reinforcement Learning: A Review
2. Deep reinforcement learning from human preferences
3. Models of human preference for learning reward functions
Other simulation benchmarks can also be cited:
1. RoboCasa
Other Strengths And Weaknesses: The work leveraging embodied agents is interesting and reveals good accuracy; however, a detailed comparison with RL-based methods is missing.
The paper also lacks real-world validation; the current evaluation is on their own simulation. Do we have any results on existing benchmarks?
The paper has a limited scope of generalization – it only tests simulation-based preference learning without testing on other existing benchmarks.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer mzDn:
Thank you for your thoughtful review and constructive feedback.
> No direct comparison with baseline retrieval-based or reinforcement learning based approaches.
> Other Strengths And Weaknesses: a detailed comparison with RL based methods is missing.
Retrieval-based and RL-based approaches are intuitive candidates for this benchmark, but they face inherent limitations that make them unsuitable for the task. Retrieval-based methods rely on matching input queries to similar patterns in a memory pool. Although they can retrieve relevant demonstrations, they fail to address the core challenge of few-shot reasoning:
| | Stage 1 (openai) | Stage 2 (openai) | Stage 1 (jina-v2) | Stage 2 (jina-v2) |
|-------------|------------------|------------------|-------------------|-------------------|
| Option Level | 16.46 | 15.42 | 16.49 | 16.18 |
| Sequence Level | 14.01 | 24.08 | 14.72 | 26.54 |
These scores are notably poor (only marginally better than DAG-based methods) because retrieval-based systems, while capable of accessing more demonstrations, cannot effectively generalize from historical data to new tasks. We will add this baseline to our paper.
RL-based approaches, especially meta-RL, on the other hand, often need millions of training steps and explicit reward functions for each task. This requirement contradicts the few-shot nature of our task: we need an agent that can adapt to new tasks given only a few demonstrations in a complex environment, where current RL-based or meta-RL methods have not shown satisfactory results. Moreover, designing a reward function for each preference can be subtle and tedious, which is impractical given how diverse and nuanced human preferences are.
> no real-user evaluations validate whether these improvements translate to better human-AI interactions.
> The benchmark relies on synthetic data, and no real-world demonstrations are included.
> The paper also lacks Real-World Validation
We would argue that, as a first-of-its-kind study, our work focuses on developing an agent capable of the task in virtual environments before transitioning to real-world scenarios. The preferences we define, together with the simulation based on prior research in the community, can be viewed as a simplified model of real-world human-AI interactions. While not grounded in the real world, our design still focuses on fundamental reasoning and planning tasks where a stable preference can guide the entire process. Moving forward, however, it is indeed important to extend our current work to real-world scenes with diverse and unpredictable real-human behaviors. Adapting the pipeline to real-world scenarios is the future direction of our work.
> need more experiments to see how the used metrics is inline with the literature. There is a lot of research on RL based preference learning (see literature section).
RL baselines: see the claims above. Levenshtein distance: we kindly refer to the examples and discussion in our response to reviewer eCTV.
> Relation To Broader Scientific Literature. Essential References Not Discussed.
Thank you very much for your kind reminder.
Our work aligns most closely with PbRL works, as we learn from demonstrated action sequences that implicitly encode user preferences. PbRL mainly utilizes human preferences as feedback from experts, replacing numeric rewards to help models learn better [1]. Works like [2] explore goals defined in terms of human preferences between trajectory segments, while [3] proposes modeling human preferences as informed by each segment's regret. These are definitely works related to our research topic. We further extend beyond traditional preference-based RL settings in several ways. First, while most preference-based RL methods require extensive human feedback through pairwise comparisons or explicit reward signals, we focus on learning from minimal demonstrations that implicitly convey preferences. Second, rather than learning a single reward function or policy, our approach aims to identify and abstract generalizable preference patterns across diverse tasks and scenarios.
We will add this discussion and cite these related papers in our revision. We will also include RoboCasa in Sec 2.3 of our paper.
[1] Advances in Preference-based Reinforcement Learning: A Review.
[2] Deep reinforcement learning from human preferences.
[3] Models of human preference for learning reward functions.
> The paper has limited Scope of Generalization
As far as we know, there is a lack of benchmarks for embodied tasks that include systematically defined human preference or human behavior data. Thus, we propose a benchmark to fill this gap and see it as one of the contributions of our work. We will release the benchmark together with our baselines for the community to test and explore.
We sincerely welcome your further feedback. | Summary: The paper introduces a framework to enhance embodied AI planning by incorporating personalized preferences learned from limited human demonstrations. It proposes the Preference-based Planning (PBP) benchmark and shows that learned preferences serve as effective abstractions, improving personalized plan generation for embodied AI agents.
## update after rebuttal
The rebuttal has addressed my key concerns. I therefore update my rating from weak reject to weak accept.
Claims And Evidence: The proposed benchmark is novel within embodied AI. However, its significance is unclear, as simple symbol-based approaches already perform fairly well (Table 2), surpassing video-based counterparts.
The primary finding from the benchmark—improved personalized planning through learned preferences—is clearly demonstrated, but not particularly novel on its own.
Methods And Evaluation Criteria: The proposed benchmark uses Levenshtein distance to measure discrepancies between generated and ground truth action sequences. However, the rationale for choosing Levenshtein distance is unclear and should be better justified.
Theoretical Claims: The paper does not have theoretical claims
Experimental Designs Or Analyses: The proposed benchmark incorporates a comprehensive set of baselines and evaluations.
Supplementary Material: The supplementary code provides implementations of the key baselines in the proposed benchmark.
Relation To Broader Scientific Literature: The contributions relate closely to recent literature on preference learning and embodied AI.
Essential References Not Discussed: I do not see any major related work missing.
Other Strengths And Weaknesses: There appears to be a mismatch between the paper's framing and its actual contribution. The paper is framed as a new method for preference-based planning, but its main contribution seems to be the empirical benchmark and evaluation.
Other Comments Or Suggestions: No further comments
Questions For Authors: Could you elaborate on the strengths of Levenshtein distance as an evaluation metric?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer eCTV:
Thank you for your thoughtful review and constructive feedback. We appreciate your recognition of the benchmark’s novelty and the clarity of our primary findings. We address your concerns point by point:
> The proposed benchmark is novel within embodied AI. However, its significance is unclear, as simple symbol-based approaches already perform fairly well (Table 2), surpassing video-based counterparts.
Video-based parts are important in the benchmark. We mainly incorporate symbol-based parts to study how different modalities impact performance, for a more comprehensive evaluation. However, the benchmark's significance lies largely in its scalability and realism. To ensure our research can address real-world challenges in perception and noise, we must simulate human behaviors and their preferences within complex, real-world conditions. Therefore, video-based parts are more critical for potential future extensions, as real-world agents must process raw sensory inputs to infer preferences dynamically. Visual cues can offer nuanced insights into user preferences that are not explicitly stated in text, making them invaluable for applications where understanding subtle, non-verbal user behaviors matters.
Indeed, as you pointed out, the effectiveness of symbol-based LLMs in preference-learning tasks, as demonstrated in our experiments, underscores the relative ease with which these models can handle few-shot induction when provided with explicit action and preference labels. However, real-world grounding still presents a great challenge. As raw sensory information often includes a lot of noise, converting everything into well-organized text is nearly impossible. The poor performance of video-based approaches also illustrates these practical difficulties.
> The primary finding from the benchmark—improved personalized planning through learned preferences—is clearly demonstrated, but not particularly novel on its own.
Thank you for acknowledging our findings. While the idea of improved personalized planning through learned preferences is intuitive, our main contribution lies in creating the first realistic benchmark that extends this concept to embodied AI, modeling human preferences through observable behaviors across hierarchical levels, and leveraging machine learning methods to solve preference-guided planning in a scalable and systematic way. Our research spans a broad range of settings, revealing key insights; for instance, symbol-based approaches show promise in scalability, though significant challenges persist in both preference learning and planning, which had not been previously identified in the literature. We hope our work can serve as a foundation for future research.
> The proposed benchmark uses Levenshtein distance to measure discrepancies between generated and ground truth action sequences. However, the rationale for choosing Levenshtein distance is unclear and should be better justified.
> Could you elaborate on the strengths of Levenshtein distance as an evaluation metric?
Levenshtein distance is chosen as it quantifies sequential alignment between generated and ground-truth actions, penalizing deviations that violate preferences. Consider, for example, a mismatch in task ordering such as AAABCBCA (ground truth) vs. AABCBCAA (predicted). Note how similar the two sequences are; yet position-wise accuracy is only 37.5% (very low), while the normalized Levenshtein distance is 2/8 = 0.25 (smaller is better), correctly indicating that the prediction is close to perfect. Levenshtein distance is thus much more suitable in our setting. Unlike rigid exact-match metrics, it accommodates valid variations in execution while ensuring preference adherence. We will add more explanation in Sec 5.2. Levenshtein distance has also been widely used in sequence comparison [1-3].
[1] Yujian, Li, and Liu Bo. "A normalized Levenshtein distance metric." PAMI.
[2] Gu, Jiatao, Changhan Wang, and Junbo Zhao. "Levenshtein transformer." NeurIPS.
[3] Fanello, Sean Ryan, et al. "Keep it simple and sparse: Real-time action recognition." JMLR.
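Recomputing the toy example above with the standard dynamic-programming edit distance makes the contrast concrete (a hypothetical illustration, not the benchmark's evaluation code):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic DP edit distance (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

gt, pred = "AAABCBCA", "AABCBCAA"
acc = sum(x == y for x, y in zip(gt, pred)) / len(gt)   # position-wise accuracy
dist = levenshtein(gt, pred) / len(gt)                  # normalized edit distance
print(acc, dist)  # → 0.375 0.25
```

Only 3 of 8 positions match exactly, but two edits (delete one A, append one A) align the sequences, so the edit distance rewards the near-correct ordering that exact match misses.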
> There appears to be a mismatch between the paper's framing and its actual contribution. The paper is framed as a new method for preference-based planning, but its main contribution seems to be the empirical benchmark and evaluation.
We appreciate this observation. While we agree the benchmark is a key contribution, we would argue that our main contribution lies in a series of studies on machine learning of human preferences. The benchmark serves as a testbed to rigorously evaluate preference-based planning, enabling us to demonstrate novel and comprehensive research across a broad range of settings. Through extensive experiments, we provide new insights into preference-based planning.
We sincerely welcome your further feedback and suggestions to strengthen our work. | null | null | null | null | null | null |
Models of Heavy-Tailed Mechanistic Universality | Accept (poster) | Summary: Recent advancements in deep learning, including neural scaling laws, have highlighted the prevalence of heavy-tailed or power law behaviors in key network components such as the Jacobian, Hessian, and weight matrices. This phenomenon, termed heavy-tailed mechanistic universality (HT-MU), has been empirically linked to model performance, suggesting its fundamental role in deep learning success. To investigate the origins of this behavior, the study introduces a general class of random feature matrix models, the high-temperature inverse-Wishart ensemble. The model identifies three key factors contributing to heavy-tailed spectral densities: (i) complex correlation structures in data, (ii) lower training temperatures, and (iii) implicit bias in model structure leading to reduced eigenvector entropy. The study further explores the implications of HT-MU on learning dynamics, neural scaling laws, and optimizer trajectories.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: No, this paper doesn't propose any method or evaluation criteria.
Theoretical Claims: Yes, I checked the correctness of the proofs for theoretical claims. Proposition 3.1 provides the density of 'optimal features,' which suggests that a stochastic optimizer concentrates on regions with high marginal likelihood (also called model evidence).
Experimental Designs Or Analyses: Yes, I checked the soundness/validity and experimental designs or analyses. The numerical experiments of the paper effectively validates the results of its theoretical analysis.
Supplementary Material: No, this paper doesn't provide any supplementary material.
Relation To Broader Scientific Literature: To uniformly analyze and understand the power-law phenomenon in deep learning, the paper proposes the high-temperature inverse-Wishart ensemble model.
Essential References Not Discussed: No, there are not related works that are essential to understanding the (context for) key contributions of the paper, but are not currently cited/discussed in the paper.
Other Strengths And Weaknesses: **Strengths:**
1. This work proposes a novel high-temperature inverse-Wishart ensemble model, providing a unified theoretical framework to explain the emergence of the power-law phenomenon in deep learning.
2. The paper presents comprehensive numerical experiments, effectively empirically validating the effectiveness of its theory.
3. Overall, the paper is well-written and easy to follow.
**Weaknesses:**
1. The overall writing density of the paper is somewhat high. It is recommended to move some less important discussions on statistical physics to the appendix.
2. It is suggested that the authors add a separate experiment section in the main text to provide details on the experimental setup, conclusions, and their analysis.
3. Regarding the theoretical analysis of the neural scaling law, the authors seem to have only derived the power-law relationship concerning the amount of training data, without considering factors such as model size. It is recommended that the authors provide a more detailed discussion on the relationship between the high-temperature inverse-Wishart ensemble model and the neural scaling law.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and for taking the time to verify the correctness of our proofs. We appreciate that the reviewer finds the experiments to be sound and suitably validate our model class. We agree about the high writing density; there are many individual components and motivations in our discussion. In line with other reviewer comments, we are considering alterations to the second section to better emphasize relevant details and move away from some of the statistical physics (see comments to Reviewer RfHj for example). We will include all experimental details in the main text by adding a separate experiment section with the following:
5+1 phases of learning experimental details: For Figure 5, we train a MiniAlexNet on the CIFAR-10 classification task with different batch sizes. This MiniAlexNet is a simplified version of AlexNet containing six layers: the first three are convolutional layers, followed by max-pooling layers, and the last three are fully connected layers. The histograms in Figure 5 show the eigenvalues of $WW^T$, where $W$ is the trained weight matrix of the first fully connected layer with input dimension $192 \times 4 \times 4$ and output dimension 1000. Here $W$ is initialized from the centered normal distribution with variance $\sqrt{1/\mathrm{fan\_in}}$. For all the histograms in Figure 5, we trained the network using SGD with momentum, with a learning rate of 0.01 and a momentum parameter of 0.9 for 200 epochs. For each histogram, we repeat the experiment 3 times and average. The red dotted curves are numerical simulations of the density function of the HTMP with different $\kappa$ and $\gamma = 1000/(192 \times 4 \times 4)$.
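The empirical spectral density described above can be extracted from a weight matrix with a short NumPy sketch (hypothetical; the Gaussian initialization below stands in for a trained checkpoint, so the histogram here would approximate a random-like Marchenko-Pastur bulk rather than a trained heavy-tailed spectrum):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 192 * 4 * 4, 1000  # dimensions of the FC layer described above

# Stand-in for the trained weight matrix; in practice W is read from a checkpoint.
W = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_out, fan_in))

# Eigenvalues of WW^T, computed via singular values of W for numerical stability.
esd = np.linalg.svd(W, compute_uv=False) ** 2
hist, edges = np.histogram(esd, bins=100, density=True)
print(esd.shape)  # → (1000,)
```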
To obtain NTK spectral densities, we consider larger neural networks that are trained to near-zero loss (all $>99.8\%$ accuracy) on a subsampled dataset of 1000 entries through 200 epochs of a cosine annealing learning rate schedule with a 200-epoch period, starting from a learning rate of 0.05 with a batch size of 64. Each model comprises the following number of parameters:
- resnet9 (4.8M parameters)
- resnet18 (11.1M parameters)
- vgg11 (9.2M parameters)
- vgg13 (9.4M parameters)
- lenet (62K parameters)
- logistic (30K parameters)
- densenet121 (7.0M parameters)
The output layer of each model is altered from their ImageNet counterparts to classify with ten classes (for CIFAR-10, SVHN, and MNIST datasets).
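The cosine annealing schedule mentioned above can be written compactly as a sketch (assuming the standard form, annealing from the base rate to zero over one period; the exact implementation used may differ):

```python
import math

def cosine_lr(epoch: int, base_lr: float = 0.05, period: int = 200) -> float:
    # Anneal from base_lr at epoch 0 down to 0 at epoch == period.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / period))

print(cosine_lr(0), cosine_lr(100))  # base rate at the start, half of it midway
```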
We also recognize the relationship of the amount of training data to be a limitation. It is possible to complete the scaling law by including the power law dependence on the number of model features, although this differs from model size in a strict sense. Unfortunately, without a precise parameterization, it is difficult to establish such a law, but along with dependencies on individual model properties (e.g. depth, see comments to Reviewer Weri), we consider this of prominent interest in follow-up work. | Summary: This paper argues that many phenomena observed in neural scaling laws arises from universal random matrix theory effects which the authors term heavy tailed universality. The paper introduces a theory which breaks up the deep network optimization into an optimization over features and optimization over the last layer weights. The relative strength of learning for these different components are controlled by by a hyperparameter $\rho$. This leads to a *master model* for the feature matrices that the authors are able to compute the eigenvalue densities of such adapted kernels using techniques from random matrix theory. Depending on the hyperparameters of the model, there can be 6 types of observed feature spectra. The authors make connections to scaling law literature.
Claims And Evidence: The theoretical claims are supported by both proofs and numerical experiments.
Methods And Evaluation Criteria: The authors focus on smaller scale vision models but this is very appropriate for a theoretical paper.
Theoretical Claims: Yes, I read through many of the derivations in the Appendix.
Experimental Designs Or Analyses: Yes, the experiments appear correct.
Supplementary Material: I reviewed many Appendix section but focused on C-F.
Relation To Broader Scientific Literature: This paper studies an important problem of the origin of neural scaling laws and provides an interesting universality hypothesis about their origin beyond the lazy learning regime.
Essential References Not Discussed: There are several important references that were not referenced but I think should possibly be included
1. Li et al analyze how the hidden layer kernels in a Bayesian neural network change in a proportional limit where width and data diverge at fixed ratio https://journals.aps.org/prx/abstract/10.1103/PhysRevX.11.031059.
2. Zavatone Veth et al also analyze the distribution over kernels after training in Bayesian linear networks in a variety of scaling limits https://ieeexplore.ieee.org/abstract/document/9723137
3. Thamm et al empirically study the spectra of trained weight matrices in deep networks https://arxiv.org/abs/2203.14661. Their results may also be relevant to the final feature matrices.
4. Bordelon et al 2024 studied the dynamics of a random feature model concurrent with the work of Paquette et al 2024 and Lin et al 2024, also capturing compute optimal laws while also treating the effect of limited data in addition to limited training time or features https://openreview.net/pdf?id=nbOY1OmtRc. A recent follow up from that group rederived their result using techniques from random matrix theory https://arxiv.org/abs/2502.05074 including $S$-transform methods.
5. Bordelon et al 2024 had a follow up where they considered a simple model of rich (non-kernel) learning dynamics where the kernel is allowed to adapt that reproduced a faster scaling law in terms of source and capacity exponents https://arxiv.org/abs/2409.17858. This can be attributed to the changes to the kernel during optimization. I sense there could be a connection between their results and the theory in this work which allows the feature kernel to adapt beyond its prior.
Other Strengths And Weaknesses: The paper is technically strong and provides many experiments to support its claims. There are some remaining questions about how architectural details like depth and other earlier layers alter their master model (see questions below).
Other Comments Or Suggestions: 1. It could be useful to the reader to quickly define or outline each of the 5+1 phases either in the main text or in the supplementary material so that the reader would not need to consult prior works.
2. In line 951 it says "Appendix _"
Questions For Authors: 1. What is the role of model depth and other architectural details in this model (like widths of earlier layers)? Does all of this enter into the prior distribution over the feature matrix $\pi(M)$? Would it also effect the rate at which the feature matrix can evolve during learning? What if an earlier layer had a significant bottleneck in width compared to the final feature layer. This would likely change the expressivity of the network and decrease the flexibility of the final feature matrix during optimization.
2. In section 5.3, the authors consider optimization trajectories. However, the theory provides the distribution for the *final features.* Could the authors comment on the connection? Are they assuming that the features equilibrate more rapidly than the readout and thus the loss?
3. The authors claim that other works rely on power law assumptions in the data space rather than feature space. This is not universally true, see for example section 5.1 here https://arxiv.org/abs/2409.17858 where the data is uniformly distributed but the nonlinearity in the network causes a power law decay which sets the scaling law on that task. Could the authors revise this?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work, and for providing several references whose inclusion greatly enhances the working document! In particular:
- Thamm et al. do an excellent job of highlighting the phenomenon we are interested in and provides further evidence for our case (their Figure 4 shows a proportion of eigenvectors are non-uniform)
- We thank the reviewer for providing the neural scaling law paper by Bordelon et al. 2024, and we admit that this paper considered the power law assumption for the Fourier spectra of the kernels at initialization. This feature learning setup is similar to our power law assumption of the feature map and is the motivation of our neural scaling law section. They also consider a similar setup to the simultaneous training of weights and features in our Proposition 3.1. Very cool! We will modify the literature review in our neural scaling law section.
The other references seem to provide excellent examples of more fine-grained analyses with other precise models of training and architectures. We agree these provide excellent context and a better illustration of the state-of-the-art in the theory of feature learning.
In addition, we include the following references:
- (Pillaud-Vivien et al., 2018) also assume power law features.
- (Mezard & Montanari, 2009) for statistical mechanics of learning
- (Yang et al., 2023) "Test Accuracy vs. Generalization Gap" instead of Martin & Mahoney, 2021a for review papers on robust metrics
We recognize that we did not summarize the 5+1 phases of learning. We include the following at line 363:
In a sequence of papers, Martin & Mahoney observe 6 classes of empirical behaviors in trained weight matrices, comprising a smooth transition from a random-like Marchenko-Pastur to a heavy-tailed density, before experiencing rank collapse. Excluding rank collapse, the five primary phases are:
(a) Random-Like: Pure noise, modeled by a Marchenko-Pastur density.
(b) Bleeding-Out: Some spikes occurring outside the bulk of density.
(c) Bulk+Spikes: Spikes are distinct and separate from the Marchenko-Pastur bulk.
(d) Bulk-Decay: Tails extend so that the support of the density is no longer finite.
(e) Heavy-Tailed: The tails become more heavy-tailed, exhibiting the behavior of a (possibly truncated) power law.
The transition from (a) to (e) is also seen in (Thamm et al., 2022). This smooth transition between multiple phases is a primary motivation of this work. We find that this behavior is displayed by a combination of a nontrivial covariance matrix to capture the spikes, and the HTMP class with decreasing $\kappa$.
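For intuition, the contrast between phases (a) and (e) can be reproduced numerically (a hedged sketch, not the paper's experiment: heavy-tailed Student-t entries serve only as a convenient proxy for a trained heavy-tailed weight matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 3000  # aspect ratio gamma = n/m = 1/3

# Phase (a) Random-Like: iid Gaussian entries give a Marchenko-Pastur bulk
# with a hard edge near (1 + sqrt(n/m))^2 ~ 2.5.
W_mp = rng.normal(0.0, 1.0, (n, m)) / np.sqrt(m)

# Phase (e) Heavy-Tailed: heavy-tailed entries (Student-t, df = 2.5) push
# eigenvalues far beyond any finite bulk edge.
W_ht = rng.standard_t(df=2.5, size=(n, m)) / np.sqrt(m)

esd_mp = np.linalg.svd(W_mp, compute_uv=False) ** 2
esd_ht = np.linalg.svd(W_ht, compute_uv=False) ** 2
print(esd_mp.max(), esd_ht.max())  # heavy-tailed spectrum extends much further
```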
To answer your questions:
1. The depth and architectural effects of the structures are challenging to analyze due to the adopted general approach. These aspects can be studied empirically, necessitating further research. We admit that the model depth and other architectural details in this model will strongly affect the prior distribution of the feature matrix $\Phi$ and also the final feature matrix. In this work, we want to provide a new random matrix model (high-temperature MP law) which may resemble the spectral properties of the feature matrix $\Phi$. By tuning the parameters in HTMP, we can mimic the spectra of the feature matrices in different scenarios, which may be at initialization, may have been well trained, or may be associated with very complicated architectures. Although we do not have a clear picture of the relationship between the architectural details and spectra of the feature matrices, our random matrix model can potentially be viewed as an equivalent simplified model to study the feature matrices with very different architectural details.
2. Yes, we are assuming that features equilibrate more rapidly than the loss. In this way, we are examining trajectories at the end of training (the kernel learning regime in (Fort et al., 2020) "Deep learning vs. kernel learning") once the features are mostly trained. We elaborate more on this in the current version.
3. Interesting! So the input data is uniform, but this example still seems to assume a power law Fourier spectrum in the target function. Unless we are missing something, this still assumes heavy tails in the data (in the labels in this case). However, we have found that (Liao & Mahoney, 2021) "Hessian Eigenspectra" also shows how nonlinearities can influence the spectrum, although they do not prove scaling laws from this. In this case, conditions for the nonlinearity to exhibit heavy tails are still unclear, so we consider this a case of examining individual models.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thoughtful response. I think with the improved detail about the 5 phases and the improved comparison to prior works, this paper provides an novel and useful theoretical framework to model feature adaptation from the perspective of random matrix theory. Future works could extend this framework to incorporate more architectural details such as depth, nonlinearity, etc. In light of this, I will increase my score as I strongly favor acceptance. | Summary: This paper explores heavy-tailed mechanistic universality (HT-MU) in deep learning by proposing a new family of random feature matrix models based on the "high-temperature inverse-Wishart" ensemble. The paper reviews two mechanisms and presents a third one for the emergence of power laws in different matrices related to trained neural networks. These three mechanisms are (i) complex correlations in data, (ii) reduced training temperatures, and (iii) implicit architectural biases that affect eigenvector entropy. The authors provide theoretical results linking their model to neural scaling laws, the five phases of learning, and optimizer trajectory properties.
Claims And Evidence: While the theoretical framework is ambitious, the logical flow of the paper is extremely hard to follow.
The paper seems to review and reject the PIPO (Population Covariance) approach; they reject it based on "More recent analyses have shown how architectural decisions can alter the power law, but these hold only for specialized models." (line 245); however, they give no reference to such recent analyses.
Recursive model structure is reviewed and rejected in much the same manner.
Finally, they come to their suggestion of reduced eigenvector entropy. In this section, the manuscript makes a number of confusing statements.
- What do the authors mean by assuming $\Phi$ describes a positive-definite matrix? $\Phi$ was introduced as $n \times m$ feature mapping, with $n$ being the number of datapoints and $m$ the number of features. Generally this is not a square matrix, nor is it positive definite. When is such an assumption expected to hold?
- The manuscript moves on to stating "In this change of variables, it is typically assumed that the distribution of eigenvectors (Q) is uniform." Uniform over what domain? (the d-sphere?) Why is this assumption typical? It would be helpful to cite relevant works.
- The concept of eigenvector entropy is never introduced and a reference for it is never given. In fact, I was hard pressed to find references to it in the literature.
- The paper goes on to talk about "free eigenvectors", these are never defined.
- This is followed by: "Admittedly, the link between (5) and non-uniform eigenvector distributions is nontrivial, so several points of evidence are in order", the status of much of the statements that follow is unclear, is it a conjecture, an interpretation, or a known result?
The paper follows with applications. They start with the 5+1 phases of learning; this phenomenon is not nearly as widely recognized as scaling laws, so for the manuscript to be self-contained, it should include at least a brief explanation of what these are.
The authors need to overhaul the presentation to clearly delineate how their evidence supports each claim, ideally with concrete examples or references.
Methods And Evaluation Criteria: See above
Theoretical Claims: The paper makes it difficult to assess its claims. It is often hard to distinguish between known results, conjectures, and rigorous results in the text.
Experimental Designs Or Analyses: The authors show that finite-temperature inverse Wishart distributions fit experimental data in several settings, which they label according to a nomenclature that is not clarified in the text. Putting the latter point aside, since the theoretical basis for this distribution isn't clear, much further numerical evidence and precision tests are, in our view, needed to make this claim sound.
Supplementary Material: I found the appendices quite inaccessible.
Relation To Broader Scientific Literature: The manuscript cites works of Mahoney extensively, in a way that stands out compared to usual academic standards. This choice stands in sharp contrast to the relative thin reference to existing literature, e.g., citing the work of Martin & Mahoney along side a single other paper as examples of "statistical mechanics of learning", though it is not a review or a book, and there are much more prominent works in the field. The same repeats for "robust metrics to assess model quality", a work by Martin & Mahoney is cited together with a single additional paper, again, for such a rich field, a review would be more useful to the reader, or a choice of more canonical works.
At the same time, the paper simply lacks crucial references, for example, the paper mentions the Donsker-Varadhan variational formula (Appendix G.1) but does not cite the paper.
Additionally, the paper is full of jargon that is not common in the literature, for example, the paper cites Arous & Guionnet, 2008 for the statement "While independent matrix elements exhibiting near-ballistic power laws can give rise to heavy-tailed spectral densities" but the jargon "ballistic" does not appear in the cited work.
To continue this problematic line of misreference, they cite Hanin & Nica, 2020 to support the statement "These results assert power laws in the elements of feature matrices." but Hanin & Nica do not mention power laws.
These are just some examples I checked, I'm sure they are not the only ones.
I recommend the authors completely revise their choice of references, as they currently give a partial and skewed view of the literature.
Essential References Not Discussed: The whole choice on references should be revised.
Other Strengths And Weaknesses: Strengths
The paper takes on an ambitious goal of establishing and explaining a universal behavior in deep neural networks. This goal is ambitious and inspiring.
Weaknesses
The paper should be rewritten. At the current state, I gauge it would be inaccessible to the large majority of the ICML community, and the large majority of those whose research field is theory of DL.
Other Comments Or Suggestions: In Table 1, the manuscript mentions "observations" but gives no reference to these observations.
The manuscript's notation is confusing and inconsistent.
1. Eq. 3 includes $\tau$. $\tau$ is not presented before in the manuscript and it is only presented in Appendix G.1.
2. $\rho$ has at least three meanings, two appear in the "metatheorem", and then $\rho$ is introduced again as the ratio $\gamma / \eta$.
3. What is the definition of $N$ in section 4.3 and on? In section 5.2. it seems to be the dataset size, which was previously denoted by $n$.
Questions For Authors: Questions:
1. Below Eq.3 is it assumed that $M=\Phi \Phi^T$ is invertible? Is that mentioned in the manuscript? Is that supposed to be a pseudo-inverse?
2. Could the authors specify what they mean by "For more general classes of models, computing marginal likelihood becomes intractable" in line 195? What is being generalized? Is it the choice of loss function?
3. Later in line 207 the manuscript focuses on the scenario "However, if L(Θ∗, Φ) is constant in Φ , that is, any choice of Φ yields the same training loss". What could be an example of such a scenario?
4. Under the "Activation Matrices" subsection $y$ is introduced as a regression target, but in Appendix $G.1$ $y$ has a different definition, together with a new notation $\epsilon$ which is never introduced. Could the authors clarify their setting? Do the two definitions of $y$ somehow coincide?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful examination of our work and for the thoughtful feedback. We appreciate that they find our framework to be ambitious, and understand the concern about empirical verification of individual claims within our theory. One of our fundamental assumptions---that the eigenvectors are not uniformly distributed---was verified by a reference provided by Reviewer Weri (Thamm et al., 2022), which we will now cite. However, our primary goal in this paper is the development of a single theoretical model designed explicitly to display observed asymptotic spectral tail behavior (Theorem 4.3). Our discussion is used to motivate the construction of a variational family with the desired characteristics. We study this model to compare its overall predicted tail behavior against empirical phenomena. We cannot claim that this model represents specific architectures or matches empirical phenomena identically; our analysis cannot replace other fantastic work that is being conducted in this direction. To help focus our claims, we add the following to line 53:
Our central objective is to identify a plausible random matrix class that exhibits inverse Gamma law spectral behavior, and the smooth transition from Marchenko-Pastur to heavier-tailed densities observed in Martin & Mahoney (2021) and Thamm et al. (2022).
and replacing 63--65:
- construct a parametric family of spectral densities which includes the Marchenko-Pastur law, but allows for heavy-tailed spectral tail behavior in line with empirical observations (Figures 1 \& 5, Appendix I)
We replace lines 125--135 with the following:
However, PIPO also fails as a standalone learning theory, as it does not account for implicit model biases. If data _alone_ influences tail behavior, models trained on the same dataset should exhibit similar power laws (Section 4.1), which contradicts empirical findings (Yang et al., 2023). At present, theoreticians analytically examine the interactions of individual models in the presence of heavy-tailed data (Maloney et al., 2022), but such analyses are intractable at scale. Originally conjectured by Martin & Mahoney (2021), here, we look for a third alternative: a universal, model-agnostic mechanism that can give rise to different heavy-tailed spectral behaviors from the same dataset. We construct a sequence of hypotheses that lead to such a mechanism, which we refer to as "eigenvector entropy". Matrix models in the literature typically have eigenvectors that are Haar-uniformly distributed (maximum entropy) (Anderson et al., 2010); this is also represented by _delocalization_ (Bloemendal et al. 2014). Breaking this property, reducing entropy, our framework provides a family of variational approximations (HTMP) with the right qualitative behavior: arbitrary power law spectra, and inverse Gamma laws that can arise from model design alone.
Regarding more specific comments:
- $M$ should be positive-definite, not $\Phi$ (typo)
- "Near ballistic" has been changed to "power law exponents $\alpha < 3$"
- Donsker-Varadhan now cited
- Hanin & Nica discuss heavy-tailed log-normals; not power laws, but can appear indistinguishable, see (Clauset et al., 2007). We now make this clear.
- "Free eigenvectors" replaced with "eigenvectors $v_1,...,v_N$ with $v_1,...,v_d$ Haar-uniform and $v_{d+1},...,v_N$ fixed."
- The approximation (6) is proposed in accordance with patterned matrix models whose spectral densities we calculate in Appendix D. It is not exact, but designed to replicate qualitative phenomena.
- See response to Reviewer Weri regarding 5+1 phases.
- For further citations to cover claims made about the wider literature, see the response to Reviewer Weri.
- Section 2.5 of (Anderson et al. 2010. An Introduction to Random Matrices.) showed that eigenvectors of GOE/GUE are Haar uniformly distributed; we now include this.
- For general Wigner or sample covariance matrices, (Bloemendal et al. 2014. Isotropic local laws for sample covariance and generalized Wigner matrices) used local law results to prove delocalization for all eigenvectors, which means all the entries of the normalized eigenvectors are of order $1/\sqrt{N}$. For more details, we refer to the lecture notes on local laws by (Benaych-Georges & Knowles, 2016).
To answer your questions:
1. Yes, but this assumption is not needed if $M = \Phi \Phi^\top + \frac{\gamma}{2\tau} I$, so we take this in the updated version.
2. Thank you for the suggestion; this now reads "For more general losses..."
3. Linear regression with equal numbers of parameters and datapoints is one example, since $\min_\Theta L = 0$.
4. The Appendix includes $L^2$ regularization, but otherwise the two are equivalent. We now add $L^2$ regularization to the activation matrices section.
Please let us know if you have remaining questions about claims; space constraints prevent us from addressing further comments, but we are happy to provide evidence in the discussion period.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications and responses. The answers and corrections to the manuscript are certainly an improvement. However, I believe a major revision is required to make the paper accessible and accurate. I thus recommend rejection, and I encourage the authors to submit a majorly revised version to the next round. | null | null | null | null | null | null | null | null |
Bridging Protein Sequences and Microscopy Images with Unified Diffusion Models | Accept (poster) | Summary: This paper introduces CELL-Diff, a diffusion-based model capable of bidirectionally generating microscopy images from protein sequences and protein sequences from microscopy images. Using conditional morphology reference images (nucleus, ER, microtubules), the model combines continuous diffusion for images and discrete diffusion for sequences. CELL-Diff significantly improves image quality compared to previous methods, evaluated on the HPA and OpenCell datasets.
Claims And Evidence: The authors claim CELL-Diff provides high-fidelity microscopy image generation and accurate sequence-to-image and image-to-sequence transformations. While the image generation performance is strongly supported through quantitative (FID, IoU, MSF metrics) and qualitative evaluations, the claim of accurate image-to-sequence transformation lacks rigorous quantitative evidence. Evidence provided for sequence generation is limited to qualitative motif analyses.
Methods And Evaluation Criteria: - The methodological choice (combining continuous diffusion for images and discrete diffusion for sequences in a unified transformer-based U-Net) is interesting and sound.
- However, the authors' evaluation of image-to-sequence generation is weak, relying primarily on qualitative assessments rather than robust quantitative analyses. The paper could benefit from a larger-scale motif analysis / cluster overlap with localization etc.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design for image generation is solid with clear metrics and appropriate baselines.
Supplementary Material: The appendix has been reviewed; no additional supplementary materials were attached to the paper.
Relation To Broader Scientific Literature: The paper adequately discusses relevant works in the field.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: - Can you please share your thoughts on previous sections?
- How deterministic or diverse are the sequences generated from a given microscopy image?
- Is there any cycle consistency between the two directions? Say for a protein sequence, generate an image, and then feed that generated image back into the model to generate a sequence, do you get back similar sequence?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment 1:** The claim of accurate image-to-sequence transformation lacks rigorous quantitative evidence. Evidence provided for sequence generation is limited to qualitative motif analyses.
**Response:** To quantitatively evaluate the accuracy of the generated sequences, we used DeepLoc 2.1[1] to assess whether the generated sequences contain recognizable Nuclear Localization Signals (NLS) motifs. DeepLoc 2.1 is a deep learning-based classification model for predicting protein subcellular localization based on discrete annotations.
Specifically, we fused the generated sequences to the C-terminus of Green Fluorescent Protein (GFP) and input the fused proteins into DeepLoc 2.1. The model then predicted whether the fusion proteins contained NLS.
We compared CELL-Diff's performance with CELL-E2. For CELL-Diff, we used the 100 generated proteins listed in Table 3, while for CELL-E2, we evaluated the first 100 sequences from their supplementary materials, ranked by their generation scores.
According to DeepLoc 2.1, 78 out of 100 CELL-Diff-generated sequences were predicted to contain NLS motifs, while only 46 out of 100 CELL-E2-generated sequences were. This indicates that CELL-Diff has a higher probability of generating NLS-containing sequences than CELL-E2.
---
**Comment 2:** How deterministic or diverse are the sequences generated from a given microscopy image?
**Response:** To assess the diversity of sequences generated from a given microscopy image, we performed a diversity analysis on 100 generated NLS sequences. We used the following three metrics: Levenshtein Distance, Sequence Entropy, and Tanimoto Diversity. The results are shown below:
| **Model** | **Levenshtein Distance** | **Sequence Entropy** | **Tanimoto Diversity** |
|--------------|---------------------------|-----------------------|------------------------|
| CELL-E2 | 14.25 | 3.83 | 0.99 |
| CELL-Diff | 9.91 | 2.87 | 0.83 |
From the results, we observe that CELL-E2 produces more diverse NLS sequences compared to CELL-Diff. However, the lower diversity in CELL-Diff reflects its stronger conditioning on the input image, which leads to more biologically meaningful and consistent sequence patterns. As demonstrated in response to Comment 1, CELL-Diff generates NLS sequences more likely to be valid, as confirmed by DeepLoc 2.1 analysis.
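The diversity metrics in the table can be made concrete with a short sketch. The rebuttal does not specify the exact implementations, so the definitions below (mean pairwise edit distance and pooled amino-acid Shannon entropy) are illustrative assumptions; Tanimoto diversity is omitted since it depends on an unspecified fingerprint.

```python
# Illustrative implementations of two of the diversity metrics above:
# mean pairwise Levenshtein distance and pooled amino-acid Shannon entropy.
# These definitions are assumptions, not the rebuttal's exact procedure.
from collections import Counter
from itertools import combinations
import math

def levenshtein(a, b):
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mean_pairwise_levenshtein(seqs):
    """Average edit distance over all sequence pairs."""
    pairs = list(combinations(seqs, 2))
    return sum(levenshtein(a, b) for a, b in pairs) / len(pairs)

def sequence_entropy(seqs):
    """Shannon entropy (bits) of the pooled amino-acid distribution."""
    counts = Counter("".join(seqs))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

Lower values of either metric indicate more mutually similar generations, consistent with the interpretation that CELL-Diff conditions more strongly on the input image.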
---
**Comment 3:** Is there any cycle consistency between the two directions? Say for a protein sequence, generate an image, and then feed that generated image back into the model to generate a sequence, do you get back a similar sequence?
**Response:** In general, the space of protein sequences is much larger than the functional space of images. For instance, proteins with different NLSs can produce visually indistinguishable images, as they all localize to the nucleus. Therefore, if we start with one NLS sequence, generate an image showing nuclear localization, and then use that image to generate a new sequence, the resulting sequence will likely contain a different NLS but still direct the protein to the nucleus. Hence, sequence-to-image-to-sequence cycle consistency is not a suitable measure of the model's performance, as it does not guarantee the recovery of the same sequence.
However, image-to-sequence-to-image cycle consistency is more meaningful, as it evaluates whether the generated sequence preserves the same localization or pattern as the original input image. To assess this, we conducted a cycle consistency validation experiment using NLS localization.
We started with a protein image showing nuclear localization, similar to Figure 9 (left). We then used the model to generate 10 sequences fused to the C-terminus of GFP with different sequence lengths. Next, we fed the generated sequences back into the model to generate corresponding protein images.
To quantify the consistency, we measured the IoU similarity between the original and regenerated images. We compared CELL-Diff with CELL-E2, and the results are presented below:
| **Number of Amino Acids** | **CELL-E2** | **CELL-Diff** |
|----------------------------|-------------|----------------|
| 10 | 0.575 | 0.763 |
| 20 | 0.566 | 0.753 |
| 30 | 0.579 | 0.764 |
| 40 | 0.574 | 0.751 |
From the results, CELL-Diff demonstrates better cycle consistency than CELL-E2, as indicated by the consistently higher IoU scores. This suggests that CELL-Diff generates sequences that better preserve the localization and pattern information of the original input image.
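The IoU score used in this cycle-consistency check can be sketched in a few lines. The binarisation threshold and function names below are illustrative assumptions, since the rebuttal does not state how masks were derived from the fluorescence images.

```python
# Minimal sketch of IoU between two protein images, assuming masks are
# obtained by thresholding fluorescence intensity (threshold is illustrative).

def binarise(image, threshold=0.5):
    """Turn an intensity image (nested lists of floats) into a boolean mask."""
    return [[px > threshold for px in row] for row in image]

def iou(mask_a, mask_b):
    """Intersection-over-union of two same-shaped boolean masks."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # True counts as 1
            union += a or b
    return inter / union if union else 1.0
```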
---
[1]: Ødum, Marius Thrane, et al. "DeepLoc 2.1: multi-label membrane protein type prediction using protein language models." *Nucleic Acids Research* 52.W1 (2024): W215-W220. | Summary: The paper introduces CELL-Diff, a unified diffusion model designed for bidirectional transformations between protein sequences and their corresponding microscopy images. Given cell morphology images and a protein sequence, CELL-Diff generates corresponding protein images. Conversely, it can also output protein sequences from protein images. CELL-Diff integrates continuous and discrete diffusion models within a unified framework and is implemented using a transformer-based network. The model is trained on the Human Protein Atlas (HPA) dataset and fine-tuned on the Open-Cell dataset. Experimental results demonstrate that CELL-Diff outperforms existing methods in generating high-fidelity protein images.
Claims And Evidence: Yes
1. Claim: CELL-Diff facilitates bidirectional generation between protein sequences and images.
1. Evidence: The paper provides Figure 1 and discusses the model's ability to generate protein images from sequences and vice versa. The methodology section details how the model is trained to handle both types of generation. Experimental results in Section 5.3 and Appendix B also visually support this claim.
The claim is supported by the presented evidence.
2. Claim: CELL-Diff outperforms existing methods in generating high-fidelity protein images.
2. Evidence: The paper compares CELL-Diff with CELL-E2 and uses metrics like MSF-resolvability, IoU, and FID to demonstrate superior performance. Visual comparisons in Figure 4 and Appendix B also highlight the improved image quality.
The claim is well-supported by quantitative and qualitative evidence.
3. Claim: CELL-Diff can be applied for virtual screening of protein localization signals, virtual staining, and protein localization signal generation.
3. Evidence: Section 6.2 details these potential applications and provides supporting figures (Figures 5, 6, and 9) and generated sequence tables.
The applications are well-described, and the results seem promising, but further experimental validation would strengthen these claims.
Methods And Evaluation Criteria: Methods: The proposed CELL-Diff method combines continuous and discrete diffusion models within a unified framework. It employs a transformer-based U-Net architecture with cross-attention mechanisms. The training objective function includes noise prediction loss for the continuous diffusion model and masked value prediction loss for the discrete diffusion model. A latent diffusion model is used to reduce computational costs.
The methods are clearly described and seem appropriate for the problem. The combination of continuous and discrete diffusion, along with the transformer-based architecture, is a reasonable approach.
Evaluation Criteria: The paper uses metrics:
1. MSF-resolvability: This metric measures the capability to discern fine structural details in microscopy images.
2. IoU (Intersection over Union): This metric measures the similarity between two masks, used here to compare predicted and real protein image masks.
3. FID (Fréchet Inception Distance): This metric evaluates the similarity between the real and generated images in terms of their feature distributions.
These evaluation criteria are appropriate for assessing the quality and accuracy of generated microscopy images. MSF-resolvability is a particularly relevant metric for this task.
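For reference, FID fits Gaussians $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ to the Inception-feature distributions of real and generated images and computes the Fréchet distance between them (lower is better):

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\bigr)
```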
Theoretical Claims: The paper includes theoretical claims related to the diffusion models and the derivation of the training objective.
1. The forward process of the continuous diffusion model is defined in Equation 1, and the subsequent derivation of q(I_t∣I_0) is standard.
2. The reverse process is defined in Equation 2, which is also a standard formulation.
3. The ELBO (Equation 3) and its simplified form (Equation 4) are correctly presented.
4. The derivation of the loss function for OA-ARDM (Equation 7) appears to be correct.
5. The conditional ELBOs for the continuous and discrete diffusion models (Equations 8 and 9) are derived logically from the previous equations.
6. The combined loss function (Equation 12) is a straightforward combination of the individual losses.
Experimental Designs Or Analyses: The experiments are designed to evaluate the protein image generation performance of CELL-Diff and to demonstrate its potential applications.
The model is trained on the HPA dataset and fine-tuned on the OpenCell dataset.
The performance is compared with CELL-E2 using MSF-resolvability, IoU, and FID metrics.
Ablation studies are conducted to evaluate the effectiveness of the cross-attention mechanism.
Potential applications are demonstrated through virtual staining, virtual screening of protein localization signals, and localization signal generation.
Supplementary Material: Appendix A: Implementation of Discrete Diffusion Model
Appendix B: Protein image generation (additional results)
Relation To Broader Scientific Literature: The paper builds upon previous work in the field of predicting protein properties using learning-based methods. It cites examples such as predicting protein structure, interaction partners, and subcellular localization.
It is related to the development of generative models for designing functional proteins and drug-like molecules.
The work focuses on the relationship between protein sequences and their cellular functions, as characterized by microscopy images, particularly fluorescence microscopy.
It is specifically related to recent work that proposed CELL-E, a text-to-image transformer that predicts fluorescence protein images from sequence input and cell morphology condition, and its enhancement CELL-E2.
Essential References Not Discussed: Some references on Stable Diffusion.
Other Strengths And Weaknesses: Strengths:
1. The paper addresses an important problem: understanding the relationship between protein sequences and their cellular functions.
2. The proposed model has the potential to be a valuable tool for investigating subcellular protein localization and interactions, with potential applications in drug discovery and disease research.
3. The paper demonstrates the model's potential through several applications, including virtual staining, virtual screening of protein localization signals, and localization signal generation.
Weakness:
1. While generally clear, some parts of the paper, especially the technical details of the model and the training process, could be challenging for readers without a strong background in machine learning and diffusion models.
2. The datasets used in the experiments, while relevant, might be considered limited in size and diversity, which could affect the generalizability of the model.
Other Comments Or Suggestions: In the methodology section, providing more visual aids or diagrams to illustrate the diffusion processes and the network architecture could further improve clarity.
It would be beneficial to discuss the computational cost and scalability of CELL-Diff in more detail, as this is an important consideration for practical applications.
Questions For Authors: The authors mentioned that "Some parts of the paper, especially the technical details of the model and the training process, could be challenging for readers without a strong background in machine learning and diffusion models." Could the authors provide more clarification or additional explanations on these technical details to make the paper more accessible to a broader audience?
The authors used the HPA and OpenCell datasets for their experiments. Given the limited size and diversity of these datasets, could the authors discuss the potential impact of this limitation on the generalizability of the model and how future work could address this?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Due to the limitation on the number of characters, we provide a general response to the reviewer's comment.
### 1. The dataset size and model generalizability.
The generalizability of the sequence-to-image task depends on the downstream application.
One application is **virtual staining**. Traditional fluorescence microscopy images typically have four color channels, limiting the ability to visualize the spatial relationships between multiple proteins of interest. With CELL-Diff, we can virtually stain all proteins from the training dataset using the same cell morphology images. As shown in Figure 5, this enables the identification of potential protein-protein interactions. In this context, the generalizability pertains to the conditional cell morphology images. Since both the HPA and OpenCell datasets contain approximately 100 cells for each protein, the model can learn diverse cellular morphologies, achieving strong generalizability with respect to different cell morphologies.
Another application is **sequence-to-image prediction**, which involves generating images for unseen proteins. In cell biology, rather than predicting images for entirely artificial sequences, the focus is often on generating images for biologically relevant variants, such as protein truncations or point mutations. Consequently, the effective protein sequence space in practical applications is much smaller than in tasks like de novo protein design or unconditional sequence generation.
For specific biological problems, such as predicting protein phase separation or identifying disease-related mutations, task-specific datasets can be collected. These datasets are often easier to obtain compared to large-scale, general-purpose datasets. By fine-tuning CELL-Diff on task-specific datasets, the model's generalizability can be improved, making it adaptable to new biological contexts.
To further validate the generalizability of CELL-Diff, we expanded the NLS screening experiment to the INSP yeast NLS dataset[1], which contains 50 valid NLSs from yeast. We generated new proteins by fusing the yeast NLS sequences to the C-terminus of Green Fluorescent Protein (GFP), which is not included in the HPA or OpenCell datasets, and tested CELL-Diff's ability to predict the function of these yeast NLSs.
For the generated images, we quantified the median fluorescence intensity inside and outside the nucleus. If the median intensity inside the nucleus was higher than outside, we considered the model to have made a correct prediction. The total number of correct predictions is denoted as $N_{hit}$, and we computed the identification rate as $N_{hit}/N$, where $N$ is the total number of NLS-tagged proteins.
To further evaluate the model's biological reasoning, we tested different fusion patterns by increasing the number of NLS tags. In practice, proteins fused with multiple NLS sequences have a higher efficiency of entering the nucleus. Thus, we evaluated three configurations: GFP + NLS, GFP + NLS + NLS, and GFP + NLS + NLS + NLS, which correspond to the increasing effectiveness of nuclear localization.
We compared CELL-Diff's identification rate against CELL-E2, and the results are presented below.
| **Test Protein** | **CELL-Diff** | **CELL-E2** |
|-------------------------|---------------|-------------|
| GFP + NLS | 0.82 | 0.54 |
| GFP + NLS + NLS | 0.86 | 0.78 |
| GFP + NLS + NLS + NLS | 0.90 | 0.88 |
From the table, we observe that both CELL-Diff and CELL-E2 can effectively identify NLS-tagged proteins, with CELL-Diff demonstrating higher identification rates, especially for single NLS fusions. Additionally, the identification rate increases as more NLS tags are added, which aligns with real-world biological behavior. This experiment suggests that CELL-Diff captures some underlying biological logic rather than merely memorizing dataset-specific patterns.
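The hit criterion described above (median fluorescence inside the nucleus exceeding the median outside) can be sketched as follows; function names and the list-of-lists image representation are illustrative assumptions.

```python
# Sketch of the identification-rate computation: a generated image is a "hit"
# when the median fluorescence inside the nucleus mask exceeds the median
# outside it. Names and data layout are assumptions, not the actual pipeline.
from statistics import median

def is_nuclear(protein_img, nucleus_mask):
    """True if the median intensity inside the nucleus beats the outside."""
    inside, outside = [], []
    for row_p, row_m in zip(protein_img, nucleus_mask):
        for px, in_nucleus in zip(row_p, row_m):
            (inside if in_nucleus else outside).append(px)
    return median(inside) > median(outside)

def identification_rate(images, masks):
    """Fraction of NLS-tagged proteins correctly predicted as nuclear."""
    hits = sum(is_nuclear(img, m) for img, m in zip(images, masks))
    return hits / len(images)
```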
### 2. Technical details and visual aids.
We will include more technical details and visual illustrations of our method in the revision.
### 3. Computational cost and scalability.
The CELL-Diff model contains approximately 1 billion parameters, while the VAE used for latent diffusion has 42 million parameters. The model was trained on 2 NVIDIA H200 GPUs for approximately 10 days. CELL-Diff is scalable due to its latent diffusion framework, which reduces computational costs by operating in a lower-dimensional latent space. This enables the efficient generation of high-resolution images. Furthermore, CELL-Diff can be scaled by increasing the model size and utilizing larger computational clusters, making it adaptable for more complex biological datasets and tasks.
[1] Guo, Yun, et al. "Discovering nuclear targeting signal sequence through protein language learning and multivariate analysis." | Summary: This paper introduces CELL-Diff, a unified diffusion model that enables bidirectional generation between protein sequences and fluorescence microscopy images. It combines continuous diffusion for image synthesis and discrete diffusion for sequence prediction, integrating transformer-based cross-attention to fuse multimodal representations. The model is trained on Human Protein Atlas and OpenCell datasets, demonstrating improvements in protein localization prediction and potential applications in virtual staining and protein sequence design.
## Update after rebuttal:
I do not see sufficient justification to change my original rating. Please see my reply to rebuttal below.
Claims And Evidence: - The paper claims that CELL-Diff, a unified diffusion model, can bidirectionally transform between protein sequences and their corresponding fluorescence microscopy images by integrating continuous diffusion (for images) and discrete diffusion (for sequences) within a single framework.
- Quantitative metrics show better image generation than prior models. The concept of bidirectional transformation is novel in this context. Case studies suggest potential biological applications.
- Unseen proteins were not rigorously tested. CELL-Diff is most likely memorizing dataset-specific patterns. The paper oversells its claims. While diffusion models are powerful, they cannot overcome the fundamental limitation of tiny training datasets in an enormous search space. This is likely more of an interpolation method rather than a true sequence-to-image generative model.
Methods And Evaluation Criteria: - CELL-Diff conditions image generation on reference cell morphology images (e.g., nucleus, ER, microtubules). Uses latent diffusion models to process microscopy images efficiently. The training objective combines: Noise prediction loss for continuous diffusion (image generation). Masked value prediction loss for discrete diffusion (sequence prediction).
- Method uses several metrics that measures image clarity and fine structural details (MSF), assesses how well the generated protein images align with ground truth (IoU), evaluates similarity between generated and real images (Frechet Inception Distance)
Theoretical Claims: - The paper does not have any strong theoretical claims. Section 3.1 presents the standard formulation of continuous diffusion models. Section 3.2 discusses OA-ARDM for discrete data (Hoogeboom et al. 2022). Uses random ordering to make the model agnostic to token positions.
- Section 4 of the paper introduces the CELL-Diff model and describes how it integrates continuous and discrete diffusion models for bidirectional transformation between protein sequences and microscopy images. The authors claim to integrate continuous diffusion and discrete diffusion within a single framework. Continuous diffusion follows standard diffusion formulations. Discrete diffusion follows OA-ARDM (Hoogeboom et al., 2022). Not a novel theoretical contribution. The loss formulations are also standard for diffusion models and masked language modeling.
Experimental Designs Or Analyses: - The authors compare CELL-Diff to CELL-E2, a previous protein-to-image generation model. Cell-Diff improves image clarity over CELL-E2. Although visual inspection shows more detailed subcellular structures, none of the metrics used for comparison would necessarily correlate with biological accuracy.
- The image generation results seem promising within the dataset, but there is no proof of generalization to novel proteins. There is no held-out test set of completely unseen protein families, making it impossible to evaluate generalization. The sequence prediction claim is not validated by experts or in wet-lab studies. The methodological contribution (unified diffusion) is not rigorously tested. Molecular interaction predictions could be misleading. If two proteins often appear together in the dataset, the model might just learn their cooccurrence rather than discovering true interactions.
- While ambitious in scope, this paper attempts to tackle an inherently intractable problem without the necessary scale of data, established benchmarks, or biological validation, making its claims more speculative than substantive.
Supplementary Material: Supplementary material shows additional images and generated sequences.
Relation To Broader Scientific Literature: - The paper positions itself at the intersection of protein sequence modeling, fluorescence microscopy, and generative AI. Although the scope might be slightly different, there are many methods that use diffusion to predict structure from sequence (AlphaFold3, RoseTTAFold, Baek et al. 2021, RFDiffusion Waston et al., 2023, FrameDiff Wu et al. 2023). RFdiffusion and FrameDiff use diffusion models, but they generate structured, atomic-level representations, whereas CELL-Diff tries to map sequences directly to microscopy images. There are also diffusion-based generative models for de novo protein sequence design.
- While ambitious, structure prediction and protein design have strong theoretical foundations and are being tackled by leading labs with rigorous validation, making them credible scientific pursuits. In contrast, sequence-to-microscopy image generation lacks structural constraints, relies on limited data, and has no established evaluation metrics, making it far more speculative. Moreover, the state space of high-resolution microscopy images is vastly larger than that of 3D protein structures, further highlighting the impracticality of learning a direct mapping from sequence to image.
Essential References Not Discussed: These are not essential but literature review would be more complete with some of these diffusion-based structure prediction methods.
AlphaFold3 (Abramson et al., 2024) - AlphaFold that uses diffusion
RFdiffusion (Watson et al., 2023) – The first diffusion model for protein structure generation.
FrameDiff (Wu et al., 2023) – Rigid-body diffusion for protein structure prediction.
ProteinSGM (Trippe et al., 2022) – Diffusion for protein sequence design.
FoldFlow (Anand et al., 2022) – Normalizing flows for protein backbone generation.
Other Strengths And Weaknesses: Strengths:
- The paper introduces an ambitious multimodal diffusion framework that, if validated, could open new directions in bridging protein sequences and cellular imaging
Weaknesses:
- The training data is woefully small: The Human Protein Atlas (HPA) dataset contains around 10K proteins with fluorescence microscopy images. Even though prominent structure prediction methods also train on only hundreds of thousands of protein structures, the protein folding problem is not random: the number of viable structures is vastly smaller than the full combinatorial space. Microscopy images, on the other hand, have no equivalent constraints. The mapping from sequence to fluorescence image is far less structured than sequence-to-structure: the same protein can exhibit multiple localizations depending on cell type, environment, and modifications, with no universal, physics-driven constraints like those in protein folding.
- Microscopy Images Are Not Sufficient to Capture Protein Function: Protein function is not solely determined by sequence; it depends on post-translational modifications, cellular environment, and binding interactions. Even if a model memorized all available images, it would not generalize to unseen proteins effectively.
- Bidirectional Mapping Between Sequence and Image is Ill-Defined: A single protein sequence can adopt multiple conformations and localizations depending on cell type, post-translational modifications, and interaction partners. A one-to-one mapping between protein sequences and fluorescence images therefore does not exist.
- Diffusion Models May Be Overfitting: The authors claim CELL-Diff outperforms prior methods like CELL-E2, but if the model is trained on such a small dataset, it could be memorizing protein localizations rather than learning meaningful structure-function relationships, or hallucinating plausible but incorrect images that look visually appealing but lack biological relevance.
- No Evidence of Generalization to Unseen Proteins: A true sequence-to-image model should be tested on novel sequences never seen in training, but the paper does not provide convincing results for this. Without proper benchmarking on completely held-out protein families, this model might just be fitting noise or dataset-specific patterns.
Other Comments Or Suggestions: n/a
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 1

---

Rebuttal 1:
Rebuttal: Due to character limitation, we provide a general response to the comments.
We understand the reviewer’s concerns about the concept of sequence-to-image mapping, especially with limited data from HPA and OpenCell. While a universal sequence-to-image model requires massive data, CELL-Diff, as a smaller model, can still make practically relevant predictions for general cell biology. Specifically:
### 1. Feasibility of sequence-to-image mapping.
While the subcellular localization of a protein is influenced by cell type, state, and post-translational modifications, many proteins have a defined localization, often determined by sequence motifs (e.g., NLS for nuclear localization, signal peptide for ER translocation, and CAAX motif for lipid modification and subsequent plasma membrane localization). This premise underpins localization prediction models like DeepLoc[1] and MultiLoc[2], and CELL-Diff extends this concept by using images rather than discrete annotations, allowing quantitative description of multi-localization. For example, Figure 6 shows that CELL-Diff can characterize the non-binary effectiveness of different NLSs and NESs using the nuclear-to-cytoplasmic signal ratio from segmented images. The importance of understanding this quantitative effectiveness is illustrated by the need to add three separate NLSs to Cas9 for sufficient nuclear localization in CRISPR/Cas9 genome editing. Moreover, the input condition image implicitly conveys information about cell type and state. For instance, the CELL-E paper demonstrated the ability to predict the spherical shape of a cytoplasmic protein in a mitotic cell using a DNA-stain condition image.
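The nuclear-to-cytoplasmic signal ratio described above can be computed from segmented masks along these lines (a minimal sketch with toy arrays; the function and inputs are illustrative, not the authors' segmentation pipeline):

```python
import numpy as np

def nc_ratio(protein: np.ndarray, nucleus_mask: np.ndarray,
             cell_mask: np.ndarray) -> float:
    """Mean protein signal inside the nucleus divided by the mean signal
    in the cytoplasm (cell area minus nucleus)."""
    nucleus = nucleus_mask.astype(bool)
    cytoplasm = cell_mask.astype(bool) & ~nucleus
    return float(protein[nucleus].mean() / protein[cytoplasm].mean())

# Toy 1D "image": strong signal in the nucleus, weak in the cytoplasm.
img = np.array([9.0, 9.0, 3.0, 3.0])
nuc = np.array([1, 1, 0, 0])
cell = np.array([1, 1, 1, 1])
print(nc_ratio(img, nuc, cell))  # 3.0
```

A ratio well above 1 indicates nuclear enrichment (an effective NLS), while a ratio below 1 indicates nuclear exclusion (an effective NES), giving the non-binary "strength" readout mentioned above.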
### 2. The dimension of the image space.
Practical cell biology research often focuses on high-level image features like morphology and colocalization, rather than pixel-by-pixel data. In this sense, the meaningful image state space is likely smaller than the atomic coordinate structural space. As an illustration of this point, previous work (cytoSelf[3]) demonstrated the correlation between dimension-reduced image representations and stoichiometric protein-protein interactions and identified a new component of a protein complex solely by image similarity.
### 3. Dataset size and the validation of sequence-to-image prediction.
Although CELL-Diff's training data is smaller than that for structure prediction models, the HPA dataset covers over 60% of human proteins. Combined with ESM2 embeddings, CELL-Diff makes meaningful predictions. Figure 6 shows accurate localization predictions for NLS- and NES-fused biliverdin reductases, including the KLKIKRPVK sequence from E. coli protein Tus, acting as a mammalian NLS. We also tested GFP (from jellyfish and non-homologous to human proteins) fused to various yeast NLSs and demonstrated the additive effect of multiple NLSs. These results showcase CELL-Diff's ability to extract biological knowledge about nuclear importin recognition from limited data (Reviewer sVh2, 1. The Dataset Size and Model Generalizability.).
We must state that CELL-Diff is not a "true sequence-to-image generative model" capable of translating any random sequence into pixel-perfect cellular images, as structure prediction models do. Instead, it is built as a virtual experiment tool for typical sequence variabilities in cell biology research (mostly mutations of endogenous proteins). We will revise the manuscript to clarify its application and limitations.
### 4. Comparison with CELL-E2.
Our intention in developing CELL-Diff was indeed to improve image clarity over CELL-E2, whose limited resolution is its major weakness and prevents it from resolving finer subcellular structures beyond the nucleus and the nucleolus. We used standard image similarity metrics to compare the generated images in the test set against the ground truth (Table 1). The improvement in these metrics should correlate with biological accuracy.
### 5. Sequence generation.
For NLS sequence generation, our validation through amino acid composition analysis and clustering was exactly based on our expert knowledge of NLS. In addition, we have now further validated the generated sequences using the annotation-based localization prediction model DeepLoc 2.1[4], which indicates that 78% of the generated sequences are recognized as legitimate NLSs (Reviewer nmZP, Comment 1). Furthermore, our recent wet lab experiments testing 20 generated sequences confirmed that 10 of them exhibited NLS activity, providing direct experimental validation.
[1] DeepLoc: prediction of protein subcellular localization using deep learning.
[2] MultiLoc: prediction of protein subcellular localization using N-terminal targeting sequences, sequence motifs and amino acid composition.
[3] Self-supervised deep learning encodes high-resolution features of protein subcellular localization.
[4] DeepLoc 2.1: multi-label membrane protein type prediction using protein language models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal and additional experimental validation. I appreciate the effort to clarify the intended scope of CELL-Diff and its potential applications.
That said, I remain unconvinced on several key points. If subcellular localization is largely determined by known sequence motifs (e.g., NLS, signal peptides), the problem could arguably be framed more effectively as a classification task over discrete localization patterns — a well-established and more tractable approach. It remains unclear what scientific value is gained by generating microscopy images rather than predicting localization directly, especially given the challenges and ambiguities in mapping sequences to continuous image space.
The claim that the HPA dataset covers over 60% of human proteins is noted, but each protein can exhibit a wide range of visual phenotypes depending on cellular context, conformational state, interactions, and post-translational modifications. This undermines the assumption of stable, predictable mappings between sequences and images.
Crucially, I still see no evidence that the model generalizes beyond the training distribution. The authors do not explain how they prevent significant overlap between training and test proteins — a major concern, especially in light of potential memorization. While the wet-lab validation is promising, it is unclear whether these results are published or peer-reviewed, and how the tested sequences were selected. If they are close to the training distribution, this may further reinforce the concern of overfitting rather than generalization.
Finally, while improving image clarity is a stated goal, the biological utility of generating sharper synthetic images — as opposed to interpretable or validated biological outputs — remains questionable in the absence of a clearly defined downstream application.
For these reasons, I do not see sufficient justification to change my original rating.
---
Reply to Comment 1.1.1:
Comment: We respectfully disagree with the reviewer’s questioning of the premise of sequence-to-image mapping, particularly considering that this is a field that has already seen multiple peer-reviewed papers as well as preprints in the past couple of years [1,2,3,4].
We appreciate the reviewer’s recognition that, unlike protein structural models for which every atomic coordinate matters, the meaningful image state space for cell biology is not at the pixel-to-pixel level. The reviewer is correct that, mechanistically, the subcellular localization of a specific protein molecule is determined by factors such as its post-translational modification, its interaction with other partners, and the overall state of the cell. A cellular image used in our training data and generated by CELL-Diff, however, describes the expected localization of all molecules for a given protein under the implicit post-translational modification and interaction profiles at the resting state of a widely used cell line, a state and a model system ubiquitously used and known to be highly generalizable in cell biology research. In this sense, the image state space is much more deterministic and thus feasibly predictable, particularly with the help of pretrained protein language embeddings. For example, RAS protein can undergo reversible lipid modification and thus dynamically cycles between the plasma membrane and cytoplasm. The result, on the other hand, is a rather defined participation coefficient between these two subcellular localizations in an image.
Regarding the advantage of using image representations instead of discrete annotations for subcellular protein localization, the reviewer missed the explanations in our previous response. We would like to reiterate that image representations enable the description of multi- and variable localization patterns that are challenging for discrete annotations. This point is illustrated by the capability of CELL-Diff to characterize the ”strength” of nuclear localization and nuclear export signals using the nuclear-to-cytoplasmic ratio from segmented images instead of having to label the protein binarily as either “nuclear” or “cytoplasmic”. No existing localization annotation database has this gray-scale information. Moreover, the enhanced resolution of CELL-Diff also allows distinguishing proteins with subtle localization differences without being confined by pre-existing annotations. For example, Figure 8 shows the image prediction of two proteins, DDX5 and H1-0, both annotated to be “nucleoplasmic”. However, examining their image correlation with DNA staining reveals the specific enrichment of DDX5 at euchromatin, demonstrating the potential application of CELL-Diff in biological discovery.
Finally, in response to the reviewer’s question regarding generalization and sequence homology between the training and test data, we note that the yeast NLS test (see the response to Reviewer sVh2, 1. The Dataset Size and Model Generalizability) was picked to avoid any sequence homology with human proteins in the training data. Specifically, the test sequences consist of fusions between GFP (from jellyfish) and yeast sequences. Homology assessments using the Basic Local Alignment Search Tool (BLAST) confirmed that these sequences share no significant similarity with the training data (E-value $> 10^{-5}$). This setup minimizes the risk of memorization and instead requires CELL-Diff to generalize the biological principles of nuclear import recognition. The ability of the model to make accurate predictions on these out-of-distribution sequences demonstrates its capacity to generalize beyond the human proteome.
[1] Khwaja, Emaad, et al. "CELL-E: A Text-to-Image Transformer for Protein Image Prediction." International Conference on Research in Computational Molecular Biology. Cham: Springer Nature Switzerland, 2024.
[2] Khwaja, Emaad, et al. "CELL-E2: Translating proteins to pictures and back with a bidirectional text-to-image transformer." Advances in neural information processing systems 36 (2023): 4899-4914.
[3] Zhang, Xinyi, et al. "Prediction of protein subcellular localization in single cells." bioRxiv (2024).
[4] Kilgore, Henry R., et al. "Protein codes promote selective subcellular compartmentalization." Science (2025): eadq2634.

---

Summary: The paper, Bridging Protein Sequences and Microscopy Images with Unified Diffusion Models, presents CELL-Diff, a novel generative model that enables bidirectional transformations between protein sequences and fluorescence microscopy images. By leveraging a transformer-based U-Net architecture and integrating both continuous and discrete diffusion processes, CELL-Diff outperforms prior methods in generating high-resolution protein images. The model is trained on the Human Protein Atlas (HPA) dataset and fine-tuned on OpenCell, demonstrating its ability to reconstruct subcellular protein localization with improved fidelity. The proposed approach has significant implications for biomedical research, particularly in protein function prediction and cellular imaging.
Claims And Evidence: The paper claims that CELL-Diff facilitates accurate bidirectional transformation between protein sequences and their corresponding microscopy images, improving upon previous methods like CELL-E and CELL-E2. Experimental results support this claim, demonstrating that CELL-Diff produces higher-resolution images with better spatial fidelity. The authors provide quantitative metrics such as Maximum Spatial Frequency (MSF) resolvability, Intersection over Union (IoU), and Frechet Inception Distance (FID), all of which indicate that CELL-Diff outperforms baselines.
Methods And Evaluation Criteria: The methodology is well-defined, employing diffusion models in both continuous (for images) and discrete (for sequences) state spaces. The evaluation framework includes comparisons with prior models (CELL-E2) using established quantitative metrics.
Theoretical Claims: The paper does not introduce new theoretical developments but builds upon existing diffusion models. The combination of continuous and discrete diffusion processes is well-motivated.
Experimental Designs Or Analyses: The experiments are well-structured, with evaluations on multiple datasets and comparisons against prior work. The use of multiple quantitative metrics strengthens the findings. However, the potential for dataset biases or domain shifts between HPA and OpenCell is not explicitly explored.
Supplementary Material: No
Relation To Broader Scientific Literature: This work aligns with research on multimodal generative modeling, fluorescence microscopy, and protein function prediction. It extends prior work on text-to-image generation by applying diffusion models to biological data. The references to related work in protein structure prediction (e.g., AlphaFold) and generative models are appropriate.
Essential References Not Discussed: The paper covers relevant prior work but does not discuss alternative generative approaches, such as GAN-based models, which have also been applied to biological image synthesis. Including a comparison with these methods could provide a broader context for CELL-Diff’s contributions.
Other Strengths And Weaknesses: Strengths:
- Introduces an innovative bidirectional generative model for protein sequences and microscopy images.
- Demonstrates significant improvements over previous methods in image quality and sequence prediction.
- Uses well-established benchmarks and evaluation metrics.
Weaknesses:
- Limited discussion on potential biases in dataset selection and domain adaptation.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does CELL-Diff handle proteins with highly disordered or ambiguous subcellular localization?
2. Were any domain adaptation techniques used to mitigate differences between HPA and OpenCell datasets?
3. Could alternative generative architectures, such as GANs, be competitive with the proposed approach?
Code Of Conduct: Affirmed.
Overall Recommendation: 5

---

Rebuttal 1:
Rebuttal: **Comment 1:** The potential for dataset biases or domain shifts between HPA and OpenCell is not explicitly explored.
**Response:**
The main difference between HPA and OpenCell is that HPA is larger, but OpenCell has more consistent labeling and higher image quality. The effect of domain shift has already been explored in the previous CELL-E2 paper by testing different pretraining and finetuning arrangements using the same two datasets. Therefore, we did not repeat this evaluation for CELL-Diff.
---
**Comment 2:** The paper covers relevant prior work but does not discuss alternative generative approaches, such as GAN-based models, which have also been applied to biological image synthesis. Including a comparison with these methods could provide a broader context for CELL-Diff’s contributions.
**Response:**
Thank you for the comment. The baseline model used in our comparison, CELL-E2, is based on a VQGAN architecture, making it a GAN-related model. We agree that comparing CELL-Diff with alternative generative approaches would provide valuable context. However, these existing generative models are not specifically designed for this task and require significant adaptation. Implementing and optimizing these models involves careful tuning of hyperparameters and architectural modifications to achieve their best performance.
We are currently exploring other generative approaches, including the consistency model[1], and have obtained some preliminary results, see the following Table. We did not observe a consistent trend of difference across metrics. We hope that the growing interest in biological image synthesis will lead to the development of more specialized models, providing a richer set of baselines for comprehensive evaluation.
**Table 1: Comparison with consistency model.**
| Method | Cell image | MSFR (nm) ↓ | IoU ↑ | FID-T ↓ | FID-O ↓ |
|-----------|----------------|-------------|---------|----------|----------|
| CM | Nucl | 650 | 0.430 | 51.3 | 38.6 |
| CELL-Diff | Nucl | 641 | 0.484 | 60.1 | 51.1 |
| CM | Nucl,ER | 644 | 0.609 | 44.7 | 30.9 |
| CELL-Diff | Nucl,ER | 642 | 0.580 | 55.9 | 60.0 |
| CM | Nucl,MT | 645 | 0.606 | 45.4 | 31.5 |
| CELL-Diff | Nucl,MT | 644 | 0.616 | 51.0 | 47.6 |
| CM | Nucl,ER,MT | 645 | 0.619 | 44.6 | 31.8 |
| CELL-Diff | Nucl,ER,MT | 644 | 0.635 | 50.4 | 45.6 |
---
**Comment 3:** Limited discussion on potential biases in dataset selection and domain adaptation.
**Response:**
See Comment 1.
---
**Comment 4:** How does CELL-Diff handle proteins with highly disordered or ambiguous subcellular localization?
**Response:**
CELL-Diff handles proteins with highly disordered or ambiguous subcellular localization by leveraging the conditional cellular morphology image and the stochastic nature of the diffusion model.
The cellular morphology image provides structural context, capturing the cellular environment, such as the nucleus, ER, and microtubules, as well as the cell type and cell state implicitly contained in the morphological information. This conditioning image acts as a spatial prior, guiding the model to place proteins in realistic and biologically plausible locations.
Additionally, the diffusion process naturally captures stochastic variations, enabling CELL-Diff to model the inherent uncertainty of disordered or ambiguously localized proteins. As a result, the model can generate multiple plausible localizations for the same sequence when sampled multiple times, reflecting the biological variability of such proteins.
---
**Comment 5:** Were any domain adaptation techniques used to mitigate differences between HPA and OpenCell datasets?
**Response:**
We currently address the domain shift between the HPA and OpenCell datasets by using the pre-training and fine-tuning approach. However, we recognize that more advanced domain adaptation techniques could further improve integration between the two datasets. Moving forward, we plan to explore additional domain adaptation methods to mitigate the differences and better align the datasets for improved model performance.
---
**Comment 6:** Could alternative generative architectures, such as GANs, be competitive with the proposed approach?
**Response:**
See Comment 2.
[1] Song, Yang, et al. "Consistency models." (2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers and clarifications. I believe this work has a lot of potential practical usage: using the cell’s visual traits to generate protein localisations is a daring but extremely biologically useful use case. While the task is daring and very complex, the evaluation used in the paper is very sound and contributes toward solving this hard task.
Given the practical relevance of the task, no matter how conceptually hard it may be to achieve, and the soundness of the method and the evaluation proposed, I update my recommendation from Accept to Strong Accept. | null | null | null | null | null | null |
"Why Is There a Tumor?": Tell Me the Reason, Show Me the Evidence | Accept (poster) | Summary: Medical AI models effectively detect and segment tumors but often fail to provide explicit clinical reasoning, making their outputs less trustworthy in practice. Existing methods either localize abnormalities without justification or generate textual explanations without spatial grounding. To bridge this gap, the authors curate a dataset of 180K image-mask-rationale triples, verified by expert radiologists, and develop a self-supervised model that disentangles and localizes fine-grained clinical concepts without requiring pixel-level annotations.
Claims And Evidence: The authors mostly backed up their claims with evaluations. However, some claims lack explicit evidence, as detailed in the questions below.
Methods And Evaluation Criteria: The methods and evaluation criteria are clearly presented and justified.
Theoretical Claims: The theoretical claims have been checked and appear to be correct.
Experimental Designs Or Analyses: Soundness/validity of any experimental designs or analyses are checked.
Supplementary Material: No.
Relation To Broader Scientific Literature: The topic is important, and the paper makes a valuable contribution to explainable AI in medical imaging.
Essential References Not Discussed: The paper does not discuss relevant works such as: PadChest (2020): A large-scale dataset of over 160,000 Spanish chest X-rays with radiology reports and multi-label annotations. The recent PadChest-GR (2024) extension provides 4,555 images with grounded reports linking findings to bounding boxes, which directly relates to the authors' goal of grounding clinical concepts in images.
Other Strengths And Weaknesses: Strengths:
- The curated rationale dataset is a significant contribution to improving model explainability.
- The proposed self-supervised optimization method is innovative and addresses the need for fine-grained concept localization.
- The model demonstrates superior performance in segmentation and detection tasks.
Weaknesses:
- The disentanglement constraint assumes strict spatial separation of clinical concepts, which may not always hold due to overlapping or co-existing conditions.
- Lack of discussion on how to handle diffuse diseases that affect the entire image.
- Some experimental results require further clarification (see questions below).
Other Comments Or Suggestions: There is a small typo in line 220: "resample(.)"
Questions For Authors: - Loss Function Weighting: Do the Dice loss and InfoNCE loss have the same weight in Eq. 2? If so, can you elaborate on this decision?
- Disentanglement Constraint: The claim "clinically different concepts should highlight different regions in the image" does not always hold. Some diseases affect multiple regions or share visual features (e.g., pulmonary edema and pneumonia in X-rays), while others are diffuse. The localization constraint may address the first case, but how do you handle the second? Can you clarify how the model adapts to conditions requiring multiple highlighted areas?
- Comprehensiveness Score: Why is comprehensiveness scored lower than other criteria in Section 5.2?
- Table 2 - Second Average Column: What does the second "average" column in Table 2 represent? If it averages precision and AUROC, what is the motivation for this, given that they are different metrics?
- Backbone Choice: Why was MedSAM chosen as the backbone instead of U-Net? Have you compared performance between the two architectures?
- MedSAM Comparison: Is there a direct comparison between MedSAM and your rationale dataset? If so, what benefits does the rationale dataset provide in conjunction with MedSAM?
- Gaze Information: In similar works using different imaging modalities, gaze tracking has been incorporated for model explainability. Could gaze data be integrated into your approach to further enhance grounding?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your encouraging feedback. Please find our point-by-point responses to the comments below.
> **Q1:** Do the Dice loss and InfoNCE loss have the same weight in Eq. 2?
**A1:** The Dice loss and InfoNCE loss are balanced using a hyperparameter rather than having equal weights. We will revise the equation accordingly.
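A minimal sketch of the weighting described above (stdlib Python only; `lam` is a placeholder value, not the authors' actual setting):

```python
def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss on flattened masks: 1 - 2|P∩T| / (|P| + |T|)
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def balanced_loss(dice, infonce, lam=0.5):
    # Per the rebuttal, the Dice and InfoNCE terms are balanced by a
    # hyperparameter rather than weighted equally; lam is illustrative only.
    return dice + lam * infonce

mask = [0.0, 1.0, 1.0, 1.0]
# Perfect overlap → Dice term is 0, leaving only the weighted InfoNCE term
print(balanced_loss(dice_loss(mask, mask), infonce=0.2))  # → 0.1
```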
> **Q2:** Some diseases affect multiple regions or share visual features, while others are diffuse. The localization constraint may address the first case, but how do you handle the second? Can you clarify how the model adapts to conditions requiring multiple highlighted areas?
**A2:** Our method optimizes for a balance between overlap and separation of clinically distinct concepts rather than enforcing strict regional exclusivity. As shown by the visualizations in Figure 3 (right panel), the model can highlight both separate and partially overlapping regions corresponding to related concepts.
**How about diffused conditions?** In these scenarios, such as tumors occupying nearly the entire prostate gland, our method can still learn nuanced regions corresponding to distinct tumor-related concepts. However, for certain anatomical structures (e.g., transition zone), our approach may not localize them correctly because they are not visually discernible in such cases.
> **Q3:** Why is comprehensiveness scored lower than other criteria in Section 5.2?
**A3:** Comprehensiveness evaluates whether the rationale includes **ALL** details needed to support the tumor segmentation. The lower "strongly agree" percentage for comprehensiveness compared to other metrics is primarily due to its stricter requirement of mentioning ALL critical details. However, when combining the "strongly agree" and "agree" categories, comprehensiveness actually achieves a higher overall percentage than other metrics. This indicates that while some rationales might miss minor details, most successfully include nearly all relevant imaging features.
> **Q4:** What does the second "average" column in Table 2 represent?
**A4:** AUROC reflects patient-level diagnostic performance, while AP captures lesion-level detection performance. Following common practice [1], we report their average as a holistic measure of the model's performance.
> **Q5:** Why was MedSAM chosen as the backbone instead of U-Net? Have you compared performance between the two architectures?
**A5: Why do we focus on MedSAM?** We adopted MedSAM due to its state-of-the-art performance and recent success in building medical foundation models (e.g., BiomedParse). MedSAM is pre-trained on multiple modalities (MRI, CT, X-Ray, etc). This lays out a solid foundation when scaled up to other tumors. **How about using other backbones?** In response to the suggestion, we conducted experiments using TransUNet as the backbone. Cancer detection results are reported in the following Table 6. **Our method improves TransUNet’s baseline performance by 6.2%, demonstrating its generalizability to other backbones.**
Table 6. Cancer detection (AP and AUROC) with TransUNet as the backbone on the rationale dataset.
| | AP | AUROC | Average |
|---|:---:|:---:|:---:|
| Baseline | 0.416 | 0.811 | 0.614 |
| Ours | **0.463** | **0.840** | **0.652** |
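For reference, the "Average" column in tables like Table 6 above is the plain arithmetic mean of AP and AUROC, as the rebuttal explains in A4; a quick check against the reported values:

```python
rows = {"Baseline": (0.416, 0.811), "Ours": (0.463, 0.840)}
for name, (ap, auroc) in rows.items():
    # Arithmetic mean of the patient-level (AUROC) and lesion-level (AP) metrics
    print(name, (ap + auroc) / 2)
```

The small deviations from the printed table values are only rounding to three decimals.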
> **Q6:** Is there a direct comparison between MedSAM and your rationale dataset?
**A6:** MedSAM requires the user to manually input a bounding box to indicate the region of interest, so we can't directly compare it to our method, which doesn't require these prompts. Instead, we compared our model with other medical SAM-based methods (like MA-SAM and SAMed) that don't need prompts for fair comparisons. The results show that our model achieves significantly better performance, demonstrating the clear benefits of our method.
> **Q7:** Could gaze data be integrated into your approach to further enhance grounding?
**A7:** Thanks for the insightful suggestion. Gaze tracking could indeed enhance our framework. Gaze heatmaps could help validate whether the clinical concepts our model highlights align with regions where experts focus their attention during diagnosis. We noticed that one work [2] discussed setting up the eye-tracking system for the prostate cancer diagnosis. In future work, we will find out if there are open-source datasets with gaze data and explore using them to build more powerful models.
> **Q8:** Discussion of the mentioned reference.
**A8:** Thanks for highlighting these relevant references. We agree that these works closely relate to our approach of grounding clinical concepts. We'll add PadChest and PadChest-GR (2024) to the related work section in the revised version.
*References:*
[1] Saha et al. "Artificial intelligence and radiologists in prostate cancer detection on MRI: an international, paired, non-inferiority, confirmatory study." The Lancet Oncology 2024.
[2] Celik et al. "Eye Tracking System for Prostate Cancer Diagnosis Using Multi-Parametric MRI."
---
Rebuttal Comment 1.1:
Comment: The authors addressed my questions. Overall, this is an interesting work. I agree with reviewer M2zk regarding the failure case discussion.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer XDug,
We appreciate your recognition of our efforts in addressing your questions and are especially grateful that you found our work interesting.
Thank you again for taking the time to review our paper and engage with our rebuttal.
Sincerely,
The Authors | Summary: This work addresses a highly interesting topic by formulating the task of tumor localization, which involves explaining and identifying tumor regions in medical data. The authors construct a dedicated dataset for this task and establish methods to quantify both the performance and explainability of their approach.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: this is not a theory work.
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: The limits of fair medical imaging AI in real-world generalization. Nature Med
Essential References Not Discussed: No
Other Strengths And Weaknesses: #### **Strengths**
1. **Novel Framework for Tumor Localization**
- The authors propose a novel framework to analyze the reasoning behind tumor localization tasks, which is an innovative and underexplored area in medical imaging research.
2. **Dataset Contribution**
- The introduction of a new dataset specifically designed for this task is a significant contribution. If the authors release this dataset publicly, it could greatly benefit the research community and foster further advancements in the field.
3. **Quantifying Explainability**
- This work is the first to quantify the rationality and explainability of tumor localization, providing a systematic way to evaluate both performance and interpretability, which are critical in medical applications.
#### **Weaknesses**
1. **Generalizability to Other Modalities or Body Parts**
- Since all experiments are conducted on Prostate MRI scans, it raises concerns about the generalizability of the proposed method. Can this framework be extended to other modalities (e.g., CT, X-ray) or different body parts? The authors should clarify why they focused solely on Prostate MRI and discuss the potential limitations or adaptations required for broader applicability.
Other Comments Or Suggestions: please see the weakness part.
Questions For Authors: please see the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your encouraging feedback. The following are our responses to the comments.
> **Q:** Since all experiments are conducted on Prostate MRI scans, it raises concerns about the generalizability of the proposed method. Can this framework be extended to other modalities (e.g., CT, X-ray) or different body parts? The authors should clarify why they focused solely on Prostate MRI and discuss the potential limitations or adaptations required for broader applicability.
**A:** Thank you for raising this critical point. We fully agree that our method will be further strengthened with validation on additional tumor types (e.g., breast, lung, liver). **However, other tumors often do not have rationale datasets.** Although our paper provides a detailed pipeline to curate the rationale dataset, completing the entire curation process is non-trivial and cannot be finished in the short term. In this pilot study, prostate cancer serves as an impactful proof of concept for our method.
**Why is curating a rationale dataset non-trivial?** Obtaining a high-quality rationale dataset requires specialized radiologists’ annotations and extensive multi-institution coordination. For example, in our study, radiologists spent over 150 hours generating rationale data (as mentioned in lines 80–81), but due to clinical duty, they couldn’t dedicate full-time effort. The entire process takes two and a half months to finish. Moreover, data-sharing and legal constraints further delay the process.
**Why prostate cancer?** Prostate cancer is the second leading cause of cancer death in men. Delivering a high-performance AI model with improved interpretability already brings substantial value to clinical practices.
**Why is our method applicable to other tumors? (1) From the data perspective,** the core components to curate a rationale dataset are standard guidelines (PI-RADS) and clinical reports. For other tumor types, such guidelines (BI-RADS [1], Lung-RADS [2], and LI-RADS [3]) and reports are also available. Therefore, it is possible to represent the domain knowledge in a proper format (e.g., trees) using the guidelines and leverage automatic methods to generate the rationale data; **(2) From the model perspective,** different tumor types may use different medical imaging modalities for diagnosis, such as breast (X-ray, ultrasound, and MRI), lung (CT), and liver (CT and MRI) cancers. Our adopted backbone model (MedSAM) is pre-trained across multiple modalities (MRI, CT, X-ray, ultrasound, etc.), providing a solid foundation model to handle other tumors.
**We will revise our claims in the paper.** In the limitations section, we briefly discuss the paper’s current focus on prostate cancer, and we will revise the text to clarify that our method holds the potential to generalize to other tumors but is currently only evaluated in prostate cancer. Additionally, we are actively collaborating with leading cancer centers to **curate rationale datasets for breast and liver cancers**, with completion expected by Fall 2025. All rationale datasets will be open-sourced to accelerate community efforts.
*References:*
[1] Spak et al. "BI-RADS® fifth edition: A summary of changes." Diagnostic and interventional imaging 98.3 (2017): 179-190.
[2] Christensen et al. "ACR Lung-RADS v2022: assessment categories and management recommendations." Journal of the American College of Radiology 21.3 (2024): 473-488.
[3] Chernyak et al. "Liver Imaging Reporting and Data System (LI-RADS) version 2018: imaging of hepatocellular carcinoma in at-risk patients." Radiology 289.3 (2018): 816-830. | Summary: This paper addresses the challenge of enhancing interpretability in medical AI models for tumor detection and segmentation. The authors propose a novel framework that generates predictions supported by both clinical concepts and visual evidence. To achieve this, they curate a “first-of-its-kind” dataset (will make the dataset publicly available.) containing 180K image-mask-rationale triples, where rationales are high-quality textual justifications for clinical assessments. Additionally, they introduce a rationale-informed optimization method that disentangles and localizes fine-grained clinical concepts without requiring pixel-level annotations. Experiments on multiple medical benchmarks demonstrate superior performance in segmentation, detection, and rationale correctness compared to state-of-the-art models.
Claims And Evidence: The authors' experiments (both internal and external dataset) can support their claims. However, one weakness is that while the authors' method is claimed to be generalizable across different types of tumors, it has only been validated on prostate cancer, lacking verification on other types of tumors.
Methods And Evaluation Criteria: Is the formulation of PDT (PI-RADS Decision Tree) based on clinical standards, rather than merely “developed with radiologist alignment”. That is, whether there are clinical standards as a basis rather than the radiologist 's subjective opinion.
Theoretical Claims: The claims in the paper are validated through experiments rather than theoretical demonstrations. However, the explanations of the optimization formulas should be made clearer. Specifically, the meanings of $c'_k$, $\epsilon_1$, and $\epsilon_2$ are not clearly defined in Equations (3) and (4). Additionally, a more detailed loss and a clearer explanation of Equations (3) and (4) should be provided.
In the Localization constraint part, please provide examples to explain what is meant by “Our idea is that different concepts describing the same anatomical structure”.
Experimental Designs Or Analyses: 1. The authors should analyze failed cases in the segmentation examples (including true positives and false positives) along with their corresponding explanatory rationales. It is important to investigate whether incorrect segmentations lead to errors in the rationales and to evaluate the consistency between segmentation results and rationales. Such an analysis of failure cases would be more clinically meaningful in assisting doctors to assess the model's predictions.
2. The authors have only validated the experiments on the MedSAM backbone. As a general method, they should validate the universality of the proposed approach on different segmentation backbones (SwinUNETR, UNet, nnUNet).
Supplementary Material: The supplementary material is thorough, explaining the details of the dataset and the settings of the supplementary experiments.
Relation To Broader Scientific Literature: I find this to be an interesting work that addresses the limitations of traditional methods (e.g. GradCAM) in terms of medical AI interpretability. Additionally, the approach used for constructing the dataset is also noteworthy and provides valuable reference for future research.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Even though the authors designed a human evaluation to assess the quality of rationales, only two readers is insufficient. Besides the reader study, I think the authors should design an automated quantitative assessment of the rationales' quality to evaluate the accuracy of the tumor diagnosis.
Other Comments Or Suggestions: If the author can conduct verification on multiple tumors and multiple backbones (such as the more mainstream nnUNet), it would be more convincing.
Questions For Authors: 1. The authors' experiments (both internal and external dataset) can support their claims. However, one weakness is that while the authors' method is claimed to be generalizable across different types of tumors, it has only been validated on prostate cancer, lacking verification on other types of tumors.
2. Is the formulation of PDT (PI-RADS Decision Tree) based on clinical standards, rather than merely “developed with radiologist alignment”. That is, whether there are clinical standards as a basis rather than the radiologist 's subjective opinion.
3. The meanings of $c'_k$, $\epsilon_1$, and $\epsilon_2$ are not clearly defined in Equations (3) and (4). Additionally, a more detailed loss and a clearer explanation of Equations (3) and (4) should be provided.
4. In the Localization constraint part, please provide examples to explain what is meant by “Our idea is that different concepts describing the same anatomical structure”.
5. The authors should analyze failed cases in the segmentation examples (including true positives and false positives) along with their corresponding explanatory rationales. It is important to investigate whether incorrect segmentations lead to errors in the rationales and to evaluate the consistency between segmentation results and rationales. Such an analysis of failure cases would be more clinically meaningful in assisting doctors to assess the model's predictions.
6. The authors have only validated the experiments on the MedSAM backbone. As a general method, they should validate the universality of the proposed approach on different segmentation backbones (SwinUNETR, UNet, nnUNet).
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your encouraging feedback. The following are our point-by-point responses to the comments.
> **Q1:** Verification on other types of tumors.
**A1:** We fully agree that our method will be further strengthened with validation on additional tumor types (e.g., breast, lung, liver). **However, other tumors often do not have rationale datasets.** Although our paper provides a detailed pipeline to curate the rationale dataset, completing the entire curation process is non-trivial and cannot be finished in the short term. In this pilot study, we use prostate cancer to prove the concept of our method. Due to character limits, we provide a detailed discussion on why rationale data curation is non-trivial, why prostate cancer, and how our method generalizes to other tumors in our response to Reviewer W914. Please refer to that response for the full context.
> **Q2:** Is our PDT (PI-RADS Decision Tree) based on clinical standards?
**A2:** Yes, the PDT was derived directly from the latest PI-RADS v2.1 guideline, which is the widely adopted clinical standard for prostate MRI interpretation.
> Q3: Notation definition, a more detailed Loss, and clear explanation of Equations (3) and (4).
**A3:** $c_k$ and $c_k’$ denote distinct clinical concepts (e.g., $c_k$ = “heterogeneous signal intensity”; $c_k’$ = “non-circumscribed margin”). $\epsilon_1$ and $\epsilon_2$ are thresholding parameters that control the localization of the clinical concepts. Intuitively, **Equation 3** penalizes overlap between highlighted pixels (heatmaps) of clinically distinct concepts. **Equation 4** enforces concepts describing the same structure to activate within its anatomical boundaries.
We use the KKT condition and Lagrange multipliers to convert Equation 4 into unconstrained optimization. This leads to a loss function $\mathcal{L}$ that combines the main objective (segmentation + contrastive learning) $\mathcal{L}_{main} $ with the disentanglement $\mathcal{L}_d$ and localization $\mathcal{L}_c$ constraints. Contributions of the two constraints are balanced by Lagrange multipliers $\lambda$ and $\gamma$. The final loss is as follows:
$\mathcal{L} = \mathcal{L}_{main} + \lambda \mathcal{L}_d + \gamma \mathcal{L}_c$
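A toy numeric sketch of this combined objective (the 1-D heatmaps and multiplier values are hypothetical illustrations, not the paper's implementation):

```python
def overlap_penalty(h1, h2):
    # Disentanglement idea of Equation 3: penalize pixels where the heatmaps
    # of two clinically distinct concepts are simultaneously active.
    return sum(a * b for a, b in zip(h1, h2)) / len(h1)

def total_loss(l_main, l_d, l_c, lam=0.5, gamma=0.5):
    # Unconstrained objective: main term plus the two constraint terms,
    # weighted by Lagrange multipliers (placeholder values).
    return l_main + lam * l_d + gamma * l_c

h_margin = [0.9, 0.1, 0.0, 0.0]  # toy heatmap for "non-circumscribed margin"
h_signal = [0.0, 0.2, 0.8, 0.7]  # toy heatmap for "heterogeneous signal intensity"
print(total_loss(l_main=0.4, l_d=overlap_penalty(h_margin, h_signal), l_c=0.1))
```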
> **Q4:** Providing examples to explain the localization constraint.
**A4:** For example, "peripheral zone" (anatomical region) and "non-circumscribed margin" (feature of a lesion) should localize within the prostate gland, not unrelated regions like the bladder (outside the prostate).
> **Q5:** An analysis of failure cases would be meaningful in assisting doctors to assess the model's predictions.
**A5:** Thanks for this critical point. It exactly highlights the benefits of providing rationales for model prediction. **Rationales offer a new lens for doctors to access the prediction.** For instance, if the model fails to highlight a tumor in an image (no mask) but still mentions cancer-related concepts (like “obscured margins”), this mismatch warns doctors that the prediction might be wrong.
Our results in Table 3 indicate that the **prediction and rationales are not always consistent.** The task in Table 3 is to classify MRI scans as “tumor” or “not tumor”. We compared two methods. Method 1 uses the model’s segmentation mask (if the model draws a mask = tumor). Method 2 uses the model’s rationale (if cancer-related concepts are activated = tumor). Results showed rationales worked better, especially at the slice level, indicating that predictions and rationales don’t always align.
**Failure case analysis.** Tumors are often difficult to detect in transitional slices—those where the tumor is just starting to appear or fade—leading to potentially incorrect segmentations (e.g., missing a tumor or flagging a healthy area). However, the rationales could still highlight some tumor-related concepts. This inconsistency raises alarm to the doctors that the model potentially makes a wrong segmentation. We will include the failure case analysis in the final version.
> **Q6:** Validating the proposed approach on different backbones.
**A6: Why do we focus on MedSAM?** We adopted MedSAM due to its state-of-the-art performance and recent success in building medical foundation models (e.g., BiomedParse). MedSAM is pre-trained on multiple modalities (MRI, CT, X-Ray, etc). This lays out a solid foundation when scaled up to other tumors. **How about using other backbones?** In response to the suggestion, we conducted experiments using TransUNet as the backbone. Cancer detection results are reported in the following Table 6. **Our method improves TransUNet’s baseline performance by 6.2%, demonstrating its generalizability to other backbones.**
Table 6. Cancer detection (AP and AUROC) with TransUNet as the backbone on the rationale dataset.
| | AP | AUROC | Average |
|---|:---:|:---:|:---:|
| Baseline | 0.416 | 0.811 | 0.614 |
| Ours | **0.463** | **0.840** | **0.652** |
---
Rebuttal Comment 1.1:
Comment: The authors have addressed most of my concerns. Regarding the failure cases, I look forward to seeing their discussion in the final version of the paper. In particular, it would be important to clarify whether such inconsistencies could have a negative impact on doctors (e.g., by interfering with their judgment) or a positive one (e.g., serving as an alarm to alert doctors, as the authors suggested). Overall, this is an interesting work.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer M2zk,
We appreciate your recognition of our efforts in addressing your concerns.
Thanks for the thoughtful comments. We will include a dedicated discussion of the failure cases in the final version of the paper. Alongside the textual discussion, we also plan to include a figure that visualizes the failure cases (both the predictions and corresponding rationales), which we hope will provide clearer insights.
Due to character limitations, our initial response did not address the comments in the “Other Strengths and Weaknesses” section. Below, we provide detailed responses to these questions.
> **Q7:** Are two readers insufficient for rationale quality evaluation?
**A7: The two readers are expert radiologists** from the top cancer centers with 10+ years of clinical practice in prostate cancer. Given their heavy clinical duty, recruiting additional experts as readers is challenging. **Why not other readers?** In our preliminary study, we explored the possibility of including junior radiologists (attendings or residents). However, we identified large rating disagreements between juniors and experts. For example, in 50 studied cancer cases, the juniors and experts only agreed on the PI-RADS score for 68% of the cases (34 out of 50), while the two experts agreed on all cases. Therefore, we decided not to include juniors as readers in this pilot study.
> **Q8:** Automated quantitative assessment of the rationales' quality?
**A8:** For automated quantitative evaluation, one straightforward approach is to adopt metrics from machine translation, such as METEOR [1] or BLEU [2], to measure the similarity between generated and reference rationales. However, this approach requires radiologists to create gold-standard rationales as references.
*Reference:*
[1] Banerjee et al. "METEOR: An Automatic Metric for MT Evaluation With Improved Correlation with Human Judgments." ACL Workshop, 2005.
[2] Papineni et al. "BLEU: A Method for Automatic Evaluation of Machine Translation." ACL, 2002.
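As a toy illustration of the overlap idea behind such similarity metrics (this is clipped unigram precision only, not the cited METEOR/BLEU implementations; the example strings are hypothetical):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    # Fraction of candidate tokens also present in the reference,
    # with counts clipped as in BLEU's modified unigram precision.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    return overlap / max(sum(cand.values()), 1)

gold = "lesion shows obscured margins in the peripheral zone"
generated = "lesion shows clear margins in the peripheral zone"
print(unigram_precision(generated, gold))  # → 0.875 (7 of 8 tokens match)
```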
Thank you again for taking the time to review our paper and engage with our rebuttal—we truly appreciate your thoughtful feedback. | null | null | null | null | null | null | null | null |
Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers | Accept (poster) | Summary: This paper proposes Unisolver, a universal neural PDE solver designed as a “foundation model” for solving a broad range of PDEs. Unisolver leverages Transformer architectures pre-trained on a diverse set of PDEs. The model make use of many PDE components, incorporating information such as coefficients, boundary conditions, and notably LLM-based embeddings of PDE expressions. The model achieve better performance than existing models on a diverse dataset.
Claims And Evidence: The claims are supported by the evidence.
Methods And Evaluation Criteria: The method and the evaluation criteria make sense.
Theoretical Claims: Experimental work. No theoretical claim.
Experimental Designs Or Analyses: The experiments and the analyses are sound.
Supplementary Material: The supplementary material is a jupyter notebook that demonstrate the model using a small dataset.
Relation To Broader Scientific Literature: This work make progress toward "foundation model" for solving PDE.
Essential References Not Discussed: Not to the knowledge of the reviewer.
Other Strengths And Weaknesses: Strength:
- Extensive experimental evaluation across diverse PDE benchmarks.
- Detailed and thorough appendices.
- Well-written and easy to follow.
Weakness
- The proposed technique may be limited in its applicability, though it is a good combination of existing techniques.
- See Questions.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Could you clarify which numerical solver was used in Table 22? According to McGreivy & Hakim (2024) [1], after controlling for accuracy and resolution, FNO is only approximately 7× faster than traditional numerical solvers. Understanding the specific solver used would help contextualize the reported speedup.
2. For time dependent PDEs, how does the neural solver handle extrapolation to future time steps? Given that neural solvers are typically trained on fixed time intervals, it would be insightful to discuss the model’s ability to generalize beyond the training range.
3. For the 2D mixed PDEs, the datasets are mostly variants of the Navier-Stokes equations with diffusion. How does the model generalize, with and without fine-tuning, to new types of PDEs, especially PDEs with distinct behaviors, such as the inviscid Burgers' equation, the wave equation, or the Cahn–Hilliard equation?
Q2 and Q3 can be demonstrated without extensive training. While it is expected that performance will degrade for out-of-distribution samples, providing such examples would be valuable for understanding the model’s limitations and guiding future research directions.
[1] McGreivy, N., Hakim, A., 2024. Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations. Nat Mach Intell 6, 1256–1269.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer WyYo for providing a detailed review and insightful questions.
> **Q1:** "The proposed techniques may be limited in its applicability. Though it's a good combination of existing techniques."
Thank you for the feedback. Regarding the applicability, we would like to clarify that in the $\underline{\text{Incomplete Component Scenario}}$ section, the "incomplete" setting in $\underline{\text{Figure 7}}$ actually means inference with no PDE components provided. Therefore, our method can be trained with partial PDE information, and supports inference with no PDE components, which strengthens its applicability in real-world scenarios with incomplete PDE knowledge.
We acknowledge that our method builds upon some existing components; however, the novelty of our method lies in the systematic integration of PDE information into neural surrogate modeling through conditional embedding, which to our knowledge has not been explored in prior works.
> **Q2:** "Could you clarify which numerical solver was used in Table 22?"
Thank you for raising this question. In $\underline{\text{Table 22}}$, the numerical solver used for comparison is the pseudo-spectral solver adopted by Li et al. in the original FNO paper, which is not extensively optimized for the specific task. As noted in the paper you cited, FNO is reported to be up to 1,000 times faster than **a pseudo-spectral solver**, which is consistent with the result in $\underline{\text{Table 22}}$. We promise to cite the mentioned paper and discuss the efficiency improvement more rigorously.
> **Q3:** "How does the neural solver handle extrapolation to future time steps?"
Thank you for your valuable suggestions. We have provided the extrapolation behavior comparison between our model and baseline models in $\underline{\text{Appendix I.3}}$. By extending the prediction horizon to time steps beyond the training range, we observe that all models' performance drops, while our method still outperforms all other baselines.
> **Q4:** How does the model generalize, with and without fine-tuning, to new type of PDEs, especially PDEs with distinct behaviors, such as inviscid burger's equation, wave equation, or Cahn–Hilliard equation, etc.
We appreciate the reviewer's valuable suggestions on providing more PDE type generalization experiments.
Note that we have provided a new PDE generalization analysis in $\underline{\text{Appendix C.2}}$, where Unisolver trained on equations with polynomial order up to 2 demonstrates strong generalization capabilities to equations of polynomial order 3 via fine-tuning.
As per your request, we test our model's performance on the **2D wave equation** using 200 samples for training and 20 samples for evaluation. The 2D wave equation exhibits significantly different behaviors from the PDEs in the training dataset. The zero-shot and fine-tuning performance of our model and an FNO model trained from scratch is shown in the table below. Relative L2 is reported.
| Unisolver (Zero-shot) | Unisolver (Fine-tuned) | Unisolver (From scratch) | FNO (From scratch) |
| -------------------- | --------------------- | ----------------------- | ----------------- |
| 0.774 | 0.0078 | 0.0406 | 0.0667 |
The zero-shot performance of our model is not very impressive, showing that zero-shot generalization to PDEs with significantly different behavior is rather hard. However, when fine-tuned with only 200 samples, our model is able to achieve a relative error smaller than 1% on evaluation samples, which is far better than FNO trained from scratch and also better than Unisolver trained from scratch, demonstrating its strong generalization capability to different types of PDEs and its effectiveness in adapting to new PDEs with limited data. | Summary: The paper proposes a method for solving various types of PDEs by leveraging a pretrained LLM alongside known parameterizations in the form of equations and values. The symbolic equations are embedded using the pretrained LLM, while numerical values and boundary/initial conditions are incorporated separately through conditioning in a Transformer. The approach is evaluated in both in-domain settings (where equations and general conditions remain similar) and out-of-domain settings (where equations and/or their coefficients and parameterizations differ).
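Assuming the standard definition, the relative L2 metric reported above can be computed as:

```python
import math

def relative_l2(pred, target):
    # Relative L2 error: ||pred - target||_2 / ||target||_2
    num = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)))
    den = math.sqrt(sum(t ** 2 for t in target))
    return num / den

print(relative_l2([3.0, 4.5], [3.0, 4.0]))  # → 0.1
```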
Claims And Evidence: - Generalization ability: The proposed method appears to achieve strong performance in both in-domain and out-of-domain settings. This claim is supported by the quantitative results presented in Section 4.
- Universality claim: While the model demonstrates some OOD generalization capabilities through extended experiments on challenging benchmarks, the claim of universality may be overstated. The equations considered remain a relatively small subset of possible PDEs. To substantiate this claim further, additional validation with a broader range of equations, such as those used in mesh-based simulations (e.g., Pfaff et al., 2021), would be necessary.
- Theoretical analysis: Although the paper claims to provide theoretical analysis, I did not find any substantial theoretical justification in the text.
Reference:
- Pfaff et al. (2021), Learning Mesh-Based Simulation with Graph Networks
Methods And Evaluation Criteria: - The proposed method of incorporating prior information by encoding it within a sufficiently flexible framework is conceptually sound. I find it particularly interesting that separating the equation skeleton from the specific numerical values leads to improved performance, highlighting certain limitations of pretrained LLMs.
- The evaluation benchmarks are adequate for demonstrating the model’s generalization capabilities within the family of equations considered, particularly in a squared domain setting.
Theoretical Claims: Not applicable. Although the abstract mentions a "theoretical analysis of the PDE-solving process," I did not find any substantive theoretical claims in the paper.
Experimental Designs Or Analyses: I reviewed the experimental settings and did not find any specific issues.
Supplementary Material: Yes, I reviewed the supplementary material, specifically the additional results.
Relation To Broader Scientific Literature: The key contributions of this paper align with existing research on neural solvers, particularly in the context of enhancing OOD generalization by leveraging privileged information about the equations.
Essential References Not Discussed: To the best of my knowledge, there are no essential related works missing from the discussion.
Other Strengths And Weaknesses: Strengths:
- The results clearly demonstrate OOD generalization when leveraging privileged information about the equations.
- The approach of using LLMs to embed symbolic equations is an interesting direction.
- The inclusion of a CFD benchmark is valuable for showcasing the method in an applied setting, in which the model should have access to privileged information on the equations.
Weaknesses:
- The domain remains discretized on a grid, even in CFDBench, which further weakens the claim of universality.
Other Comments Or Suggestions: - I found the PCA analysis of embedded PDE conditions interesting but in need of improvement. I suggest visualizing it in 3D, as there are three varying conditions, which would better illustrate their distinctions across additional axes. Additionally, adjusting the color shading for each condition could help assess whether the condition embeddings follow a consistent trajectory that aligns with the order of prior values.
- I recommend that the authors soften their claim on universality, as it appears too strong given the current scope of the experiments.
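As a rough sketch of the suggested visualization (illustrative names only; this assumes the condition embeddings are available as a NumPy array and is not tied to the paper's code), the embeddings can be projected onto three principal components via SVD:

```python
import numpy as np

def pca_3d(embeddings):
    """Project condition embeddings onto the first three principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # Rows of vt are principal directions, ordered by singular value.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# Toy stand-in for embedded PDE conditions: 100 samples, 64-dim.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
coords = pca_3d(emb)
print(coords.shape)  # (100, 3)
```

Plotting `coords` on a 3D axis and coloring points by each condition's prior value (e.g. `scatter(..., c=values)`) would then show whether the embeddings trace a consistent trajectory aligned with the order of those values.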
Questions For Authors: Despite strong quantitative results, I noticed that Unisolver exhibits more artifacts in its solutions compared to other baselines, even compared with ViT (e.g., visible in Appendix E, Fig. 14). Do the authors have any insights into why this occurs and what might be causing these more frequent artifacts?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer GsAC for providing the insightful review and valuable suggestions.
> **Q1:** "The claim of universality may be overstated. The equations considered remain a relatively small subset of possible PDEs." "The domain remains discretized on a grid, even in CFDBench, which further weakens the claim of universality." "Soften their claim on universality."
We thank the reviewer for providing valuable feedback.
**(1) Regarding handling irregular geometry.**
We acknowledge the limitation of our current method in handling irregular geometries and have discussed this in $\underline{\text{Appendix K}}$. We highlight that this limitation is shared by existing PDE foundation models such as DPOT, MPP, and Poseidon, all of which focus on data defined on regular grids. One fundamental reason behind this is the lack of suitable large-scale PDE datasets on irregular meshes. To extend Unisolver to irregular geometries, one possible approach is to replace the current canonical Transformer with geometry-general PDE models like Transolver.
**(2) Regarding the potential overstatement of "universality".**
We agree that the claim of "universal" may overstate the current scope of our model, as our method does not handle all possible kinds of PDEs, but rather focuses on several diverse sets of PDEs. In our view, a "universal" neural PDE solver should be able to incorporate all possible PDE information and generalize across a wide family of PDEs; our method takes one step beyond existing approaches by systematically encoding the available PDE information. We promise to revise our title to **"Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers"** to more accurately reflect the scope and goal of our work.
> **Q2:** Although the paper claims to provide theoretical analysis, I did not find any substantial theoretical justification in the text.
We acknowledge the reviewer's concern. Our model design is inspired by theoretical insights on how PDE components influence the solutions, as shown in the motivating example in $\underline{\text{Section 3.1}}$. However, we would like to clarify that we do not attempt to provide formal theoretical analysis or guarantees in this work.
> **Q3:** Suggestions on improving visualization of embedded PDE conditions.
We sincerely thank the reviewer for providing the valuable suggestions. We provide an additional visualization of the learned PDE condition embeddings, which can be found through this anonymous link: https://anonymous.4open.science/r/rebuttal-4EC7/visualization.png. The new 3D plot provides a more insightful visualization of the learned embeddings, clearly reflecting how each coefficient impacts the embedded PDE conditions.
> **Q4:** Unisolver exhibits more artifacts.
We thank the reviewer for the detailed observation. $\underline{\text{Figure 14}}$ displays the error maps of each model, i.e., the absolute difference between the model predictions and the ground truth. Although there are visual artifacts in the error maps, it is important to note that the model predictions themselves do not display such significant artifacts. This can be seen in the full trajectory visualizations in $\underline{\text{Figure 17 and 18}}$. In fact, the absolute error is much smaller than the ground truth values, which makes the artifacts in the error maps more noticeable.
Claims And Evidence: 'Universal' is too big to say.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No Such.
Experimental Designs Or Analyses: Yes, all of them.
Supplementary Material: Yes, all of them.
Relation To Broader Scientific Literature: Foundational model is open question to the SciML community. This paper provides a tangent solution.
Essential References Not Discussed: No such.
Other Strengths And Weaknesses: 'Our models were trained on servers with 32 NVIDIA A100 GPUs, each with 40GB memory.' for such simple examples... How do you justify?
Other Comments Or Suggestions: No such.
Questions For Authors: No such.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 8zJe for providing valuable feedback and insightful questions.
> **Q1:** "'Universal' is too big to say."
Thanks for this rigorous review, which is very instructive for us.
**(1) We adopt "Universal" in the context of deep learning to highlight the model's flexibility and broad applicability.**
We appreciate the reviewer’s concern. Our use of the term "universal" is not meant to claim generality across all kinds of PDEs, but rather to highlight the **flexibility and broad applicability** of our model in **incorporating diverse PDE information and handling large-scale PDE datasets**. Similar usage also exists in the previous work as "Universal Physics Transformers" [1].
Specifically, our method allows the deep model to incorporate all available PDE information including PDE types, coefficients, boundary conditions, domain geometries and force terms. This allows the model to flexibly handle large-scale and diverse PDE datasets, as demonstrated by our experiments on three challenging benchmarks.
In particular, we have trained a single unified model for all considered 1D PDE datasets, and likewise a single model for 2D PDE datasets containing all kinds of PDE components, each covering a wide spectrum of PDE variations. This experimental setup demonstrates **a certain degree of universality**, as the models generalize well across diverse PDE families within their respective domains.
[1] Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators, NeurIPS 2024
**(2) We will change the title to "Towards Universal Neural PDE Solvers" for scientific rigor.**
Thanks for the reviewer's kind reminder, we acknowledge the potential overstatement of "universal" and we promise to revise our title to **"Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers"** to more accurately reflect the scope and position of our work. Similar usage also exists in previous work [2]. This revision will help clarify that the goal of our work is to make progress towards more generalizable and practical neural PDE solvers.
[2] Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior, NeurIPS 2023
> **Q2:** "For such simple examples... How do you justify?"
**(1) Not all the PDE-solving tasks require large computation resources.**
We apologize for the confusion. While we conduct experiments on a 32-GPU server, we would like to clarify that **not all our models were trained on 32 A100 GPUs**. The HeterNS dataset is a relatively small dataset and models were trained with a single GPU. For the 2D mixed PDEs dataset, we use 8 A100 GPUs to train our model, aligning with the training configuration reported in DPOT. For the 1D time-dependent PDEs dataset, our model was trained on 32 GPUs, which is justified by its large scale and high diversity, comprising over **3 million training samples** in total. The computational cost of Unisolver is shown below, which can also be found in $\underline{\text{Appendix I.7}}$.
| Benchmarks | HeterNS | 1D Time-dependent PDEs | 2D Mixed PDEs |
| - | - | - | - |
| GPU Hours | 24 | 3000 | 800 |
Besides, we also want to highlight that although Unisolver requires some time for training, once trained, it can generalize to new PDEs without retraining and efficiently generate solutions, as can be seen in the efficiency analysis in $\underline{\text{Appendix I.5}}$. This allows Unisolver to serve as an efficient surrogate of numerical solvers, significantly reducing the computational overhead, which further justifies the training cost.
**(2) Our PDE-solving benchmarks are among the hardest ones in current research community.**
Regarding the concern about simplicity, we respectfully argue that the 2D datasets are highly non-trivial, covering a wide range of complex PDE types and diverse PDE components. While the 1D dataset is relatively simpler from a numerical solving perspective, it is specifically constructed to cover a large variety of time-dependent PDEs, posing significant challenges for training and generalization from a deep learning perspective. Specifically, the 1D PDE family contains six polynomial coefficients, various viscosity and force terms, and three types of boundary conditions, all of which can vary simultaneously, imposing intricate challenges for the model in capturing the complex relationship between PDE component inputs and the corresponding solutions.
Finally, we would like to note that the datasets used in our paper **are not intended as end goals, but as a foundation for systematically studying the performance of deep surrogate models** across increasingly challenging PDE settings, paving the way towards more practical and powerful PDE foundation models. | Summary: This paper introduces Unisolver, a framework that conditions a transformer model on various physical parameters relevant to PDEs. The framework distinguishes between domain-wise components (such as equation symbols and coefficients) and point-wise components (such as external forcing). These are incorporated via adaptive layer normalization, with domain-wise components either extracted from a large language model (LLM) or modeled using an MLP, while point-wise components are patchified for compatibility with the transformer. The method is evaluated on three benchmarks: HeterNS, 1D time-dependent PDEs, and 2D mixed PDEs, where it is compared against baselines.
Claims And Evidence: The authors claim that Unisolver achieves strong performance in both in-distribution and out-of-distribution settings. The experimental results suggest that:
- The comparisons with baselines appear fair, with most methods receiving similar input information (except for ICON and PINO, which differ in conditioning).
- However, the "incomplete scenario" setup is somewhat unclear. Does it refer to partial information during training or only at inference?
- The interpretation of the learned PDE embeddings is ambiguous, making it difficult to assess the quality and significance of the extracted representations.
Methods And Evaluation Criteria: The proposed approach is conceptually interesting, as it attempts to create a unified conditioning mechanism for neural PDE solvers. However, a few concerns remain:
- The idea of conditioning a transformer on all available PDE information is relevant, but the method assumes full knowledge of the governing equation, which may not always be realistic.
- The use of an LLM for encoding equation symbols seems questionable. Does the LLM contribute meaningful information, or does it merely introduce additional complexity? The results in Table 7 seem to indicate that the LLM does not significantly improve performance, which raises concerns about the validity of this design choice.
- The introduction suggests that prior approaches fail to incorporate all available information, but this framing might be misleading—previous methods likely did not attempt such exhaustive conditioning because it is not always necessary.
Theoretical Claims: This is an experimental paper.
Experimental Designs Or Analyses: - The experimental setup appears sound, and the evaluation is conducted on diverse PDE scenarios.
- However, the interpretation of learned embeddings is not particularly insightful, making it difficult to assess whether the model genuinely understands PDE structure or is simply performing pattern recognition.
- The partially observable setting is not very convincing: the model still relies on 70% fully observed data, which is a relatively mild missing-data scenario.
Supplementary Material: I have checked the supplementary.
Relation To Broader Scientific Literature: This work aligns with research on generalizable PDE surrogate models and foundation models for PDEs.
Essential References Not Discussed: Key references are missing:
- MPP and Poseidon should be discussed earlier in the text.
- Generalization methods such as CODA [1], CAPE [2], and Zebra [3] should also be mentioned to provide a clearer contextualization of Unisolver’s novelty.
[1]Generalizing to New Physical Systems via Context-Informed Dynamics Model, Kirchmeyer et al, 2022.
[2]Learning Neural PDE Solvers with Parameter-Guided Channel Attention, Takamoto et al, 2023.
[3]Zebra: In-Context and Generative Pretraining for Solving Parametric PDEs, Serrano et al, 2024.
Other Strengths And Weaknesses: The distinction between neural PDE solvers and neural surrogates is not well discussed in the introduction. PDE solvers aim to directly solve the equation, whereas surrogates approximate numerical solutions efficiently.
Other Comments Or Suggestions: 1. Clarify the contribution of the LLM: How much does it actually help? If it provides limited gains, should it be removed?
2. Improve the "Neural PDE Solvers" section in Related Work: The discussion should better differentiate between neural solvers (PINNs, which enforce physics constraints) and neural surrogates (which approximate solutions efficiently).
3. Rephrase certain claims: For instance, stating that "most methods fail to incorporate all PDE information" is an overstatement—rather, existing methods prioritize different aspects of the problem based on their intended applications.
4. Consider an alternative conditioning approach: Instead of repeating equation information at each location, why not use a single token and cross-attention to encode the PDE domain information?
Questions For Authors: 1. What PDE knowledge does the LLM actually encode? Would removing the LLM affect performance?
2. Why use repeated encoding for PDE parameters? Would a single token with cross-attention be a more efficient alternative?
3. Did you find a difference between domain and pointwise embeddings ? Which ones are the most important ?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Z6hm for providing valuable feedback and suggestions.
> **Q1:** About the incomplete scenario setup and incomplete ratio.
**(1) Clarify our setting.**
Sorry for the confusion. We clarify the incomplete component scenario setup:
- **During training**, each PDE component (viscosity and force in HeterNS) is independently masked with 30% probability, resulting in 49% samples with full components.
- **During evaluation**, the **"incomplete"** setting in Figure 7 means **no components available**, while **"complete"** means full components.
The results demonstrate that our model can run inference with no PDE components, and that providing components boosts performance.
**(2) New experiment with a larger incomplete ratio.**
We perform an additional ablation where **each component is masked with 80% probability** during training, resulting in **only 4% of the data having full components**. As shown in the results below (averaged across multiple forces), our model outperforms FNO and maintains strong performance.
**No components available: (80% masked)**
|Viscosity|1e-5|5e-5|1e-4|5e-4|1e-3|
|-|-|-|-|-|-|
|FNO|0.1039|0.0485|0.0305|0.0097|0.0047|
|Unisolver|0.0647|0.0237|0.0147|0.0043|0.0022|
**Full component: (80% masked)**
|Viscosity|1e-5|5e-5|1e-4|5e-4|1e-3|
|-|-|-|-|-|-|
|FNO|0.1009|0.0473|0.0288|0.0093|0.0044|
|Unisolver|0.0644|0.0232|0.0136|0.0039|0.0021|
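A minimal sketch of the training-time masking described above (hypothetical shapes and names; not the authors' actual implementation): each component embedding is independently replaced by a shared learnable "unknown" token with some probability.

```python
import numpy as np

def mask_components(comp_embs, unknown_tokens, p_mask, rng):
    """Independently replace each component embedding (batch, n_comp, d)
    by its component type's shared 'unknown' token (n_comp, d)
    with probability p_mask."""
    batch, n_comp, d = comp_embs.shape
    mask = rng.random((batch, n_comp)) < p_mask  # True -> component masked
    out = np.where(mask[..., None], unknown_tokens[None], comp_embs)
    return out, mask

rng = np.random.default_rng(0)
embs = rng.normal(size=(4, 2, 8))   # e.g. viscosity + force embeddings
unknown = np.zeros((2, 8))          # learnable tokens (zeros for illustration)
out, mask = mask_components(embs, unknown, p_mask=0.8, rng=rng)
print(out.shape)  # (4, 2, 8)
```

Note that with a 30% per-component masking probability and two components, a sample keeps all of its components with probability 0.7 × 0.7 = 0.49, consistent with the 49% figure quoted above.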
> **Q2:** The method assumes full knowledge of the governing equation, which may not always be realistic.
According to $\underline{\text{Incomplete component scenario section}}$, our model does not rely on complete knowledge of governing equations and can be trained with partial PDE information, supporting inference with no PDE components. This enables application to real-world settings with incomplete PDE knowledge.
Besides, we want to note that beyond real world, simulation in CAE software is also valuable where complete information is easy to obtain and Unisolver can serve as an efficient surrogate.
> **Q3:** About contribution of LLM embedding, interpretation of learned PDE embeddings, and whether the model is simply performing pattern recognition.
As shown in $\underline{\text{Table 6}}$, incorporating LLM embeddings yields **an average improvement of 5.76%**, indicating they provide meaningful information. $\underline{\text{Table 7}}$ further shows that LLM embeddings outperform manually constructed symbolic embeddings on both in-distribution and downstream tasks, with **over 10%** improvement on Advection equation, highlighting better generalization performance of LLM embeddings and benefits beyond simple pattern recognition.
Regarding efficiency, as the LLM has been heavily optimized, generating embeddings incurs negligible computational overhead.
To improve interpretability, we provide additional visualizations: https://anonymous.4open.science/r/rebuttal-4EC7/visualization.png. We filter out equations with viscosity and force for clarity and annotate each cluster with a representative PDE formula. The visualization clearly shows the well-structured latent space of LLM embeddings.
> **Q4:** About statement on existing methods.
We agree that "fail to incorporate all PDE information" may be too strong. We will revise the statement to **"do not fully utilize all available PDE information"** to soften our claim.
> **Q5:** MPP and Poseidon should be discussed earlier; CODA , CAPE, and Zebra should be mentioned.
We will discuss MPP and Poseidon earlier in Related Works for clarity. We have already cited CAPE, and we will cite CODA and Zebra to better contextualize Unisolver’s novelty.
> **Q6:** About distinction between neural PDE solvers and neural surrogates.
We use the term *neural PDE solver* to refer to PINNs and neural operators, following prior works such as *Message Passing Neural PDE Solver* and *Transolver*. We also acknowledge that under the narrower definition in [1], only PINNs are considered neural solvers. Given this mixed usage of the concepts, we prefer to follow *Message Passing Neural PDE Solver*, which addresses a more closely related topic, and retain the term "neural PDE solver".
[1] Physics-informed machine learning: A survey on problems, methods and applications.
> **Q7:** About conditioning approach and repeating encoding.
Token repeating is an implementation trick similar to tensor broadcasting. We experiment with cross attention on HeterNS. As shown in the table below, our design outperforms cross attention in PDE information conditioning.
|Viscosity Generalization (Relative L2)|In-Dist|Zero-shot|
|-|-|-|
|Unisolver (Cross Attn)|0.01078|0.0416|
|Unisolver (Ours)|0.0098|0.0374|
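For context, the conditioning mechanism compared against cross-attention here is adaptive layer normalization (as described in the review summary). A generic NumPy sketch of this mechanism (dimensions and names are illustrative, not the authors' code): a domain-wise condition embedding produces a scale and shift that modulate the normalized solution tokens.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each token (row) to zero mean and unit variance."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln(tokens, cond, w_scale, w_shift):
    """Modulate normalized tokens (n, d) with scale/shift projected
    from a domain-wise condition embedding (c,)."""
    scale = cond @ w_scale  # (d,)
    shift = cond @ w_shift  # (d,)
    return layer_norm(tokens) * (1.0 + scale) + shift

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))       # patchified solution tokens
cond = rng.normal(size=(8,))             # embedded PDE condition
w_scale = rng.normal(size=(8, 32)) * 0.01
w_shift = rng.normal(size=(8, 32)) * 0.01
out = adaln(tokens, cond, w_scale, w_shift)
print(out.shape)  # (16, 32)
```

Because the same scale and shift are broadcast to every token, this is effectively the "token repeating" trick mentioned above, applied inside the normalization rather than via explicit duplication.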
> **Q8:** About difference between domain and point-wise embeddings.
The importance of domain-wise vs. point-wise components varies by dataset. In HeterNS, point-wise components (e.g., force terms) are more critical due to their strong influence on fluid patterns. For 1D and 2D mixed PDEs, both types contribute significantly in guiding the simulation.
---
Rebuttal Comment 1.1:
Comment: ### (1) Clarity of the setting
I remain a bit confused by the explanation. What exactly happens in the incomplete scenario at inference time? Is it possible to run the model without providing any conditioning vectors at all? If tokens must be supplied, how are they selected? Also, could this setup be extended to handle new dynamics?
### (2) Incomplete scenario
The two tables seem to show very similar performance. What is the interpretation of this? Does it mean the conditioning has limited impact in this setting?
### Answer to Q2
I agree that this remains a valuable setting to explore. As you correctly point out, Unisolver could serve as a powerful surrogate in that context.
### Answer to Q3
I see that it does help, but the gain doesn’t seem particularly significant. That said, I find the core value of the paper lies in the flexibility of the proposed architecture for handling various conditioning strategies, which is already an interesting contribution on its own.
### Answer to Q6
Apologies if I appear overly strict here, but I don’t think the referenced models should be referred to as "solvers", they are better described as surrogates. I believe we should be precise in our terminology.
### Answer to Q7
Thank you for providing the table. I appreciate the additional detail.
### Answer to Q8
Interesting point. In particular, it is not always clear whether the increased difficulty in modeling certain dynamics comes from complex forcing terms or from specific boundary conditions. It could be interesting to see if the proposed model can help identify the most critical factors of a given dynamics.
Thanks again for the clarifications. With your responses, I now have a better understanding of the paper and will increase my score to 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer Z6hm again for providing the thoughtful and constructive follow-up response to our rebuttal, as well as for raising the score. We also appreciate the time and care you have taken to provide further suggestions, which are very helpful in improving the clarity and rigor of our work.
Below, we make further clarifications for the remaining points of confusion.
**(1) Clarify the incomplete component scenario at inference time.**
Yes, the model can be run without providing any conditioning vectors at inference time. Note that **learnable tokens** are used to represent **the types of unknown PDE components**, rather than instance-specific values. For example, all viscosity coefficients share one learnable token, and the force terms share another. These learnable tokens serve as indicators to the model that the corresponding components are unknown. This design allows Unisolver to flexibly operate in three modes:
- Full conditioning: All PDE components are provided.
- Partial conditioning: A subset of PDE components is provided.
- Zero conditioning: No PDE components are provided.
The setup also allows the model to handle new dynamics in two ways. If PDE components of the new dynamics are known, they can be directly provided to the model, same as the "zero-shot" results in $\underline{\text{Table 3}}$. If the PDE components are unknown, learnable tokens can be used to represent the unknown components, allowing the model to still perform effectively.
**(2) Explain the results of the incomplete component scenario.**
We would like to highlight that the results in our rebuttal correspond to **a high masking ratio of 80%** during training, meaning that **only 4% of the training samples contain complete PDE information**. Therefore, the performance gap between full conditioning and no conditioning is reduced, as the model mainly learns to predict with limited component guidance.
Additionally, the model input in the HeterNS dataset contains ten history timesteps, which provides dynamic information of the fluid. Therefore, under this highly incomplete supervision, the model tends to rely more on the history inputs to infer the underlying dynamics, which weakens the impact of the components.
Regarding the distinction between "surrogates" and "solvers", we appreciate your emphasis on precise terminology. We will consider the use of terminologies more carefully in our future revisions. We promise to conduct a more comprehensive literature review to determine whether we should replace *neural solvers* with *neural surrogates* to better reflect the nature of our approach.
Thanks again for your support and dedication to our paper and for acknowledging our contributions. | null | null | null | null | null | null |
Inverse problems with experiment-guided AlphaFold | Accept (poster) | Summary: In this paper, the authors introduce a method to guide diffusion-based structure prediction models (e.g., AlphaFold 3) with experimental data to sample conformational ensembles.
Claims And Evidence: The claims are well supported by the results shown in the paper.
Methods And Evaluation Criteria: The baseline (AlphaFold 3) and the test sets seem relevant to evaluate the method.
Theoretical Claims: The paper does not introduce new theoretical claims.
In equations (1), (2) and (3), the authors should specify the underlying noise model, leading to the given log-likelihood functions.
Experimental Designs Or Analyses: The experimental setup for testing the possibility of guiding AlphaFold 3 with static electron density maps is valid.
However, in these experiments, most of the structure is also guided using the Substructure Conditioner loss (eq. 3). This is not, per se, an issue but:
* I only saw this information in the supplementary material and believe it would be important to emphasize this point in the main text to avoid confusion regarding what the method is capable of.
* The authors should discuss:
* Where does y (the Cartesian coordinates of the atoms in the anchored region) come from?
* How should we choose the set of anchored atoms (A)? It seems that we need to know in advance the regions where "new" conformations are likely to appear.
* What is the influence of the size (in terms of consecutive residues) of the non-anchored region? Does the method perform significantly worse when long regions are not anchored?
Furthermore, the maximum number of structures in the ensemble is set to 5.
* Does that limit the applicability of the method to cases where the conformational ensemble only has a few modes?
* Due to what constraints was this maximum number chosen?
Supplementary Material: I read all the supplementary material.
Relation To Broader Scientific Literature: To the best of my knowledge, this is the first method that applies the idea of guiding a pretrained diffusion model with *experimental* data to sample *conformational ensembles*.
Essential References Not Discussed: However, the authors do not cite important prior work in this space (i.e., guiding a protein diffusion model with experimental data). The following references should be discussed:
Fadini, Alisia, et al. "AlphaFold as a Prior: Experimental Structure Determination Conditioned on a Pretrained Neural Network." bioRxiv (2025).
Liu, Yikai, et al. "ExEnDiff: An Experiment-guided Diffusion model for protein conformational Ensemble generation." bioRxiv (2024): 2024-10.
Maddipatla, Sai Advaith, et al. "Generative modeling of protein ensembles guided by crystallographic electron densities." arXiv preprint arXiv:2412.13223 (2024).
Levy, Axel, et al. "Solving inverse problems in protein space using diffusion-based priors." arXiv preprint arXiv:2406.04239 (2024).
Other Strengths And Weaknesses: The description of the forward models is clear and accurate. The method is well described and seems reproducible.
Other Comments Or Suggestions: Typo on L139 (right column): [I]n some cases...
Questions For Authors: Please see my questions in "Experimental Designs Or Analyses".
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their review.
**Underlying noise model.**
*Eq. 1.* We assume a Laplace noise model. Here, the difference between $F_{o}$ and $F_{c}$ is drawn from a Laplace distribution centered at zero with unit scaling. This model, along with the Gaussian model, is used in electron density modelling. However, a more realistic noise model involves complex physics, as noise is introduced at the level of Fourier intensities. We will address this in future works.
*Eq. 2*: Instead of a typical noise model, we assume a piecewise noise model. If the distance is between the lower and upper bounds, then we assume a uniform noise distribution (0 loss). Otherwise, we assume a Gaussian-like noise distribution (quadratic loss). This model is common in NMR structure modeling (Lindorff-Larsen et al., 2005).
*Eq. 3*: We assume a Gaussian distribution with a fixed isotropic covariance as the noise model. Each $\mathbf{x}_{i}^{k}$ is assumed to be drawn from $\mathcal{N}(\mathbf{y}_{i}, \mathbf{I})$.
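In loss form, the three noise models correspond to the following (a simplified scalar/array sketch only; the actual implementations operate on structure factors, NMR distance restraints, and anchored atomic coordinates respectively):

```python
import numpy as np

def laplace_nll(f_obs, f_calc):
    """Eq. 1: Laplace noise on F_o - F_c gives an L1-type loss."""
    return np.abs(f_obs - f_calc).sum()

def flat_bottom(d, lower, upper):
    """Eq. 2: zero loss inside [lower, upper] (uniform noise),
    quadratic outside (Gaussian-like tails), as in NMR restraints."""
    return np.maximum(lower - d, 0.0) ** 2 + np.maximum(d - upper, 0.0) ** 2

def gaussian_nll(x, y):
    """Eq. 3: isotropic unit-covariance Gaussian on anchored atoms."""
    return 0.5 * ((x - y) ** 2).sum()

print(flat_bottom(np.array([1.5]), 1.0, 2.0))  # [0.] -- distance inside the bounds
```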
**Substructure Conditioner**
Due to space constraints, we could not include the substructure conditioner in the main text. We will move it there in the future. Re the other questions:
- $y$ is from AF3 predictions.
- To select the set of anchored atoms, we examine AlphaFold3 (AF3) predicted structures alongside density maps to identify regions that are either not faithful to the map or exhibit structural heterogeneity in the map. We then extract a 3D slice of the map around the region, including all the relevant atoms and applying a padding of 5Å along each axis.
- No, the method does not yield significantly worse results when long regions are not anchored. For instance, in Figure 2A and 3 (of manuscript), a 10-residue region is not anchored by the conditioner. Yet, we recover conformers that fit the density map well and the cosine similarity is comparable to PDB structures. We conducted additional experiments on 15 structures with conformational heterogeneity ([link](https://postimg.cc/XZt5h686)). For some, we optimized regions spanning up to 22 residues (5v2m and 6e2s), and for all cases, the cosine similarity is on par with the PDB (or better). We also included additional baselines like AlphaFlow and ESMFlow (Jing et al., 2024). However, it must be noted that the residue range length does increase the runtime for fitting an ensemble to our experimental observation ([link](https://postimg.cc/qgJK5wcR)).
**Maximum Samples in Ensemble**
- No, the method is effective even when the structure has a single mode. For instance, all structures in Tabs. A1 & A2 (in the manuscript) have a single mode. Here, the ensemble (selected using Algorithm 2) consists of similar-looking structures without the separation we typically observe in structures with multiple conformations. Additionally, the ensembles in Figs. 2 & A1 (single-mode structures) were selected using Algorithm 2 (manuscript).
- We heuristically found that the density is well explained by at most 5 samples, and adding more samples to the ensemble would overfit the noise in the density map without yielding a considerable increase in cosine similarity. In the attached figure ([link](https://postimg.cc/BP2nP0WB)), we plot the normalized cosine similarity against the number of samples in an ensemble of size 15. We see the cosine similarity stabilizes at 5 samples; beyond that, we either get a deteriorated fit to the density or overfit to noise – both undesirable.
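This saturation-based selection can be sketched as a greedy, matching-pursuit-style loop: keep adding the sample whose inclusion most improves the fit of the ensemble-average density to the observed map, and stop once the gain stalls. This is a simplified illustration with names of our own choosing; the actual Algorithm 2 in the manuscript may differ:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened density maps."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_ensemble(densities, observed, max_size=5, tol=1e-3):
    """Greedily grow the ensemble; stop when the improvement in
    cosine similarity falls below `tol` or `max_size` is reached."""
    chosen, best = [], -1.0
    while len(chosen) < max_size:
        scores = [
            cosine(np.mean([densities[j] for j in chosen + [i]], axis=0), observed)
            if i not in chosen else -np.inf
            for i in range(len(densities))
        ]
        i = int(np.argmax(scores))
        if chosen and scores[i] - best < tol:
            break  # adding more samples no longer helps: risk of overfitting noise
        chosen.append(i)
        best = scores[i]
    return chosen, best
```

With two complementary conformer densities and one noise sample, the loop picks both conformers and then stops, mirroring the plateau behavior described above.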
**Missing literature review**
- Fadini et al.: This work was released after the ICML deadline. It fits a single protein conformer to the static electron density map by optimizing AlphaFold2’s MSA contact maps. However, it cannot account for conformational ensembles (~15% of proteins exhibit non-trivial conformational heterogeneity). Proteins are better described as ensembles rather than single conformers.
- Liu et al.: This study samples structural ensembles using the str2str protein diffusion model, with guidance from cryo-EM experimental observation. While related, their focus is on a different modality than ours and does not address crystallographic density fitting or NOE observations.
- Maddipatla et al.: This method fits ensembles to crystallographic density maps using Chroma and captures multiple conformers to some extent. We observed that it fails to capture conformational heterogeneity when optimizing over a long residue range due to Chroma’s hierarchical formulation. Moreover, because the sequence conditioning in Chroma is only “promoted” and not imposed, we found it impossible to impose “global” structural constraints as those in the case of NMR.
- Levy et al.: They also use Chroma as their protein diffusion model and inherit similar problems with packing sidechains. Additionally, they do not use experimental electron density maps or distance restraints, but instead rely on synthetic data.
We will update the manuscript to include all aforementioned points. | Summary: This paper introduces Experiment-Guided AlphaFold3, a framework that integrates experimental data with deep learning priors to generate structural ensembles aligned with experimental observables. Standard protein structure predictors like AlphaFold3 produce single static structures, failing to capture conformational heterogeneity. The proposed method adapts AlphaFold3’s diffusion-based sampling to incorporate experimental constraints, refines structures via force-field relaxation, and selects ensembles maximizing agreement with experimental data. Density-Guided AlphaFold3 improves crystallographic modeling by generating structures more faithful to electron density maps. NOE-Guided AlphaFold3 refines ensembles to satisfy NMR-derived distance restraints, better capturing protein dynamics. The approach significantly reduces computational time compared to traditional crystallographic and NMR workflows. Results show enhanced structural accuracy over standard AlphaFold3 and, in some cases, even PDB-deposited structures. This work advances protein structure modeling by bridging the gap between deep learning predictions and experimental measurements.
Claims And Evidence: The paper primarily validates its approach through specific case studies, such as ubiquitin for NMR and selected crystal structures for X-ray modeling. However, it remains unclear how well the method generalizes across diverse protein families, particularly for flexible or disordered proteins and multi-domain assemblies. Without broader benchmarking, the extent to which Experiment-Guided AlphaFold3 captures conformational heterogeneity across structurally varied proteins is uncertain. Additional validation on a wider range of experimental datasets would strengthen the claim of broad applicability.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable and relevant, but they currently rely on a limited set of case studies. A more diverse benchmarking strategy, additional validation metrics, and explicit runtime comparisons would improve the strength of the evaluation.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experimental design is reasonable but has limitations in scope and validation. The method is tested on a few case studies (e.g., ubiquitin, selected crystal structures), raising concerns about generalizability to diverse protein families. The claim of computational efficiency is not backed by explicit runtime comparisons. Additionally, statistical analyses (e.g., RMSD distributions, significance testing) are missing. Broader benchmarking and more rigorous validation would strengthen the conclusions.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper builds on recent advances in deep learning-based protein structure prediction, particularly AlphaFold3 (Abramson et al., 2024), and addresses its limitation in capturing conformational heterogeneity. While previous methods like Rosetta (Baek et al., 2021) and MD-based NMR refinement (Lindorff-Larsen et al., 2005) have incorporated experimental data, they are computationally expensive. The work also relates to AlphaFlow (Jing et al., 2024) and ensemble modeling approaches in X-ray crystallography (Furnham et al., 2006; van den Bedem & Fraser, 2015) and NMR. However, it uniquely leverages AlphaFold3 as a structure prior, integrating experimental constraints to generate physically realistic and experimentally consistent structural ensembles. By bridging deep learning-based structure prediction with experimental refinement, this work contributes to both computational and experimental structural biology.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: A key concern is that the paper's primary audience seems to be structural biologists rather than the machine learning (ML) community. The writing style and structure are uncommon for an ML paper, with extensive discussions on case studies and biological validation rather than a focus on methodological advancements or generalizable ML insights. While this depth may make it a high-quality protein design or structural biology application paper, it is unclear whether it aligns with ICML’s core focus on ML innovation. The paper would be more suitable for ICML if it placed greater emphasis on the ML contributions, generalizable techniques, and broader computational impact rather than domain-specific experimental results.
Other Comments Or Suggestions: I initially feel it is inappropriate to accept this paper due to the weak experiments and the uncommon writing style. I am not a biologist, so I will consider raising my initial score after carefully reading other reviewers' opinions and seeing whether my major concerns are well addressed.
## Update after rebuttal
I appreciate the authors' significant efforts in conducting additional experiments, which have strengthened the paper. As a result, I have decided to raise my score. However, the unconventional writing style—unusual for a machine learning conference—still prevents me from assigning a higher score, despite finding the paper more interesting with the new results.
Questions For Authors: No more questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
> tested on a few case studies (e.g., ubiquitin, selected crystal structures)
This paper is a proof of concept that AlphaFold3 (AF3) can be guided by experimental observations to generate heterogeneous ensembles.
We note that extensive quantitative evaluation was presented in the Appendix (crystallography: Tabs. A1-3 – 44 structures; NMR: Tab. A4 – 13 structures in the manuscript). Our case studies were curated to showcase the method in biologically and methodologically interesting settings. For example, ubiquitin is the benchmark protein for NMR structure determination and was the sole subject of study of highly influential works in Nature (Lindorff-Larsen et al., 2005) and Science (Lange et al., 2008). It is essential for any NMR structure determination work to benchmark performance on ubiquitin.
In crystallography, our benchmarks comprise several “difficult” and interesting cases that we curated, on which AF3 consistently fails (Tabs. A1-3). As a control, we cover “simple” cases, where AF3 performs well, to show our approach does not deteriorate AF3’s performance (Tab. A2).
That said, considering the reviewer’s request, we curated an additional set of results for both crystallography (+15 proteins) and NMR (+27 proteins). The updated quantitative results are available at these links: https://postimg.cc/XZt5h686, https://postimg.cc/Wtws9D06.
We added new baselines to existing tables (Tab. A3 and A4): https://postimg.cc/6yhdQS9C, https://postimg.cc/FYQKksH6
We are currently extending this study to refit the entire PDB.
> Unclear how well the method generalizes across diverse protein families
- *Flexible proteins:* AF3 structures are generally rigid and do not accurately capture the flexibility of proteins. Protein flexibility is captured by NMR in two ways: (i) in solution-state NMR, proteins are generally more flexible in solution than in crystals; (ii) NMR order parameters, e.g. N-H S², measure the backbone flexibility of a protein because they capture the time-expectation of the N-H bond vector. In Fig. A3, we compare the backbone flexibility of the recovered ensemble against the experimentally measured order parameters and demonstrate that we successfully capture this flexibility, whereas unguided AF3 does not.
- *Disordered proteins:* Because AF3 was trained on crystal structures, it fails to predict intrinsically disordered proteins. While very interesting, this is beyond the scope of this work and deserves a dedicated study.
- *Multidomain assemblies:* One of the reasons we chose AF3 as the generative model is its ability to predict the structure of protein-ligand and multi-protein complexes. We currently have promising preliminary results of extending the proposed methods to such cases, however, we believe that this deserves a dedicated study.
> primary audience seems to be structural biologists than ML community
We understand the reviewer’s concern. We agree that our approach is of immediate use to structural biologists, and our writing style might be unorthodox to the ICML audience. Yet, it was the ML community that pioneered the development of protein structure generative models, such as AlphaFlow (ICML 2024). Our perspective is that one of the primary uses of a protein structure generative model is its usage as a structural prior while solving inverse problems. To our knowledge, ours is the first work that employed these structural priors to solve real-world inverse problems. This is inherently a computational problem (not a biological one) addressed with computational tools. While we believe that the tools developed in our work directly impact structural biology workflows, they also introduce a new benchmark for validating future developments in modeling protein structure priors on different downstream tasks. Furthermore, our work might inspire different ways of solving these important real-world inverse problems.
> computational efficiency, runtime comparisons
For crystallography (Table: https://postimg.cc/qgJK5wcR), our approach samples a batch of 16 proteins in ~7 minutes for 300+ residue systems, adding minimal latency vs. unguided AF3. For NMR (Table: https://postimg.cc/jndm3jrZ), we generate ensembles of 20 conformations in ~10 minutes—significantly faster than restrained MD methods like [CYANA](https://link.springer.com/article/10.1007/s10858-015-9924-9), while modeling distance restraints as true ensemble statistics.
>Broader benchmarking and more rigorous validation would strengthen the conclusions.
To validate NMR, we evaluate backbone flexibility using N-H S² parameters (Fig. A2). For crystallography, we quantify ensemble heterogeneity via bimodality scores (subsection A3/Fig. A2), outperforming baselines in comparative benchmarks. (https://postimg.cc/F794Vc04, https://postimg.cc/dh5rV8jt, https://postimg.cc/VS5MrdK9). We note that we _don't_ guide our ensembles on either of the validation metrics. | Summary: This paper proposes to use AlphaFold3 as a structure prior for protein crystal structure determination from cryo-EM or NMR experiments. Specifically, it uses AlphaFold3 to predict the initial structure and then use the gradient from the electron densities to guide the diffusion module of AlphaFold3 to refine its structure to agree with experimental observation. The method is evaluated on crystallographic electron density data in PDB and shows good performance.
Claims And Evidence: The claims made in this paper looks convincing.
Methods And Evaluation Criteria: The evaluation criteria make sense, but the experiment section would benefit from more baselines.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experimental design would benefit from more baselines, such as [1].
[1] Accelerating crystal structure determination with iterative AlphaFold prediction, Acta Crystallogr D Struct Biol, 2023
Supplementary Material: Yes I reviewed the supplementary section.
Relation To Broader Scientific Literature: This paper presents the first method that uses classifier guidance to refine AlphaFold3-predicted structures. This contribution is original.
Essential References Not Discussed: [1] This paper should cite: Accelerating crystal structure determination with iterative AlphaFold prediction, Acta Crystallogr D Struct Biol, 2023.
Other Strengths And Weaknesses: The proposed method is the first to use classifier guidance to refine AlphaFold3-predicted structures. This contribution is important.
Other Comments Or Suggestions: Suggestion: the author should move some important figures to the main paper rather than in the supplementary material.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their review.
**More baselines**
Following the reviewer’s suggestion, we extended the evaluation for both the X-ray crystallography and the NMR experimental observations.
We include the following additional baselines:
- AlphaFlow (Jing et al., 2024)
- ESMFlow (Jing et al., 2024)
And specifically for X-ray crystallography altlocs benchmark (Table A3) we also compared against:
- Chroma (Ingraham et al., 2022)
- Chroma-guided (Maddipatla et al., 2024)
The updated tables for X-ray have been included in this [link](https://postimg.cc/6yhdQS9C).
**X-ray Crystallography.** We particularly focused on Table A3 as it captures structural heterogeneity (altlocs). We note that, in all cases, guided AlphaFold3 (AF3) outperforms all other baselines, and in some cases, we are able to fit structures to density better than the structures deposited to the PDB.
We further extended the violin plot in Figure A2 to include the aforementioned baselines ([link1](https://postimg.cc/F794Vc04), [link2](https://postimg.cc/dh5rV8jt), [link3](https://postimg.cc/VS5MrdK9)). We notice that our method captures bimodality better than all other methods (including recent experiment-guided approaches from Maddipatla et al. 2024). We hope that these baselines help justify our claims even further.
*Regarding Terwilliger et al. 2023 [1]*: We attempted to include the comparison to [1] as the reviewer suggested; however, we did not find a publicly available codebase for [1]. We hope the reviewer understands that it is difficult to replicate a comprehensive study in this timeframe. Additionally, based on our understanding of the paper, its goal was not to recover structural heterogeneity given the density.
**NMR.** In a similar vein, we added comparison to AlphaFlow and ESMFlow (Jing et al. 2024) for the NMR structure determination benchmark (Table A4). We attempted running Chroma (both unconditional and NOE-conditioned), however, we observed that due to lack of explicit sequence conditioning in Chroma, the produced ensembles completely deviated from the true structures. Our results suggest that the structural ensembles produced by NOE-guided AlphaFold adhere to the constraints better than all other baselines, and in half of the cases, they produce a better agreement to the constraints compared to the deposited NMR structures resolved with MD, while taking a tiny fraction of the runtime ([link](https://postimg.cc/FYQKksH6)).
**Note Regarding “Relation to Broader Scientific Literature:”**
While we are the first to use classifier guidance to refine AF3 predictions, we would like to emphasize that we are proposing a new approach to performing classifier guidance on diffusion models. Specifically, regular classifier-guided diffusion is i.i.d. in that it encourages each sample in the batch to independently reduce a loss function. In contrast, we propose a non-i.i.d. classifier-guided diffusion that encourages the entire batch to jointly reduce a loss function.
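The distinction can be illustrated on a toy squared-error observable (our sketch, not the paper's implementation): standard i.i.d. guidance differentiates a per-sample loss, whereas the non-i.i.d. variant differentiates a loss on the ensemble mean, coupling the samples:

```python
import numpy as np

def iid_guidance_grads(x, y):
    """Standard classifier guidance: each sample x_i independently
    minimizes L_i = ||x_i - y||^2, so grad_i = 2 (x_i - y)."""
    return 2.0 * (x - y)

def ensemble_guidance_grads(x, y):
    """Non-i.i.d. guidance: the batch jointly minimizes
    L = ||mean(x) - y||^2, so grad_i = (2/N) (mean(x) - y)."""
    n = x.shape[0]
    return np.tile(2.0 / n * (x.mean(axis=0) - y), (n, 1))
```

In this toy case, samples on opposite sides of the observation receive zero ensemble-guidance gradient once their mean already explains it, whereas i.i.d. guidance would pull every sample toward the same point — which illustrates why the joint formulation can preserve heterogeneity.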
**Not citing Terwilliger et al. 2023 [1]**
We apologize to the reviewer for the oversight. We will add a comprehensive discussion of [1], along with the references suggested by Reviewer 4.
We hope this clears all concerns. | Summary: This paper examines the inverse problem of resolving protein structure from experimental data and capturing the heterogeneity arising from the dynamic nature of proteins as an ensemble. To do so, it guides the diffusion module of AF3 with experimental data to satisfy NOE constraints and substructure likelihoods.
Claims And Evidence: I think the key claim here is that we can get a better ensemble of structures out of AF3 that obeys structural heterogeneity if we already have our hands on some x-ray crystollography or NMR data, or known atomic substructures. On the case study proteins, shown in the Appendix, the experimental data guided results have better cosine similarity. The authors also introduce an algorithm to select samples based on the matching pursuit algorithm.
Methods And Evaluation Criteria: **Method**: The method "hacks" the diffusion module in AF3, which outputs an ensemble of structures. The method "refines" this output based on given experimental data. The likelihood of the experimental observation is calculated given each individual ensemble member (for substructures) and for the ensemble average (for electron density and interatomic distances). At the end, AMBER relaxation is used, similar to AF2. Most of the results are in the Appendix. Quantitative metrics include cosine similarity. Results are calculated on selected case study proteins.
**Evaluation**: The evaluation criteria are frankly pretty unclear to me -- as I'll describe throughout this review, I'm not sure why the baseline of AF3 is chosen since it accepts such different inputs (i.e. it seems trivial to compare a version of the structure prediction model _with_ the structural experiment data as input against a version _without_ it).
Theoretical Claims: n/a
Experimental Designs Or Analyses: Three data terms are considered:
* crystallographic electron density maps
* nuclear Overhauser effect (NOE) restraints
* sub-structure conditioning using known atom locations
**Question**: what is used as ground truth here? Do you still solve the inverse problem in some other way in order to assess the solution from the AF3-guided method?
From the way I understand it right now, the experiment is almost saying "if you have the result, and guide the prediction with the result, you'll perform better", which feels trivial -- I'll be coming back to this point throughout the review, but I think it's hard for me to appreciate the rest of this paper without clearing up this confusion.
Supplementary Material: The appendix includes more details on the algorithms used, how data was preprocessed, and additional scientific background.
Relation To Broader Scientific Literature: not qualified to comment, but my (possibly uninformed) hunch is that actual methods for solving structure from NMR etc. are more useful, like CryoDRGN etc.?
Essential References Not Discussed: not qualified to comment, but baselines are generally too vague / doesn't seem to match the problem statement in Section 3
Other Strengths And Weaknesses: Strengths:
* As I'll mention throughout this review, I don't fundamentally understand why we need to predict the structure if we already have structure experimental results, though I think this could just be due to me not being familiar enough with this subject area. I think guiding AF3 is a pretty cool engineering feat, but I don't understand this problem enough to comment on how this is better than previous methods.
Weaknesses:
* As I understand it right now, this problem is sort of a trivial one from the ML perspective (i.e. if you guide the prediction with the result, then of course it should do better than not having the result, right?) -- I'm hoping maybe this is only because I'm missing some key piece of understanding, so I'm happy to have a conversation about this during the discussion period.
* Related to my overall confusion about the problem setting: the presentation of the paper is not very well-suited for a ML conference. For example, there aren't any baselines except what is already deposited in the PDB, and it's pretty hard to find what's used for ground truth for the evaluations in Section 6
Other Comments Or Suggestions: Nitpicks:
* using the word "inverse problems" in the title feels too broad; can we use something more specific here?
* There isn't really a "results" section in the main text. We have a description of the experiment, and the results described in words all in the same paragraph, and need to flip to the appendix to find the results. From a subjective presentation perspective, I'd prefer to see a smaller / less developed Figure 2 and see more of the graphs in the original text.
* Related works are sprinkled throughout the paper; this makes it hard for the ML audience to directly understand what the comparable problems are.
* The article makes frequent allusions to NOE in the introduction without explaining what it is beyond the abbreviation -- this is a simple fix that can really make this work more accessible to more parts of the ML audience.
Overall: this seems like a really interesting paper! With the current presentation, though, it seems more useful at a journal where it can reach the right domain experts
Questions For Authors: I've sprinkled forms of these questions through out the review, but for completeness:
1. Why we can't just get the structure from the experimental NMR /x-ray crystallography data using the existing ways of solving this inverse problem, rather than go through AF3-guidance?
2. Why were these baselines (AF3, existing PDB deposits) chosen? These seem to be solving a very different problem setting, with very different input information available.
3. What is the ground truth that's used for the results in Section 6?
4. Am I correct in understanding that the problem setting we're looking at is that of resolving the original heterogeneous protein structure from the experimental NMR/x-ray crystallography data? If this is the case, it feels like that we should be comparing to CryoDRGN etc.
Again -- I don't feel fully confident in my understanding of this area, so I'm happy to have a discussion about this with the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We hope our answers explain the purpose and novelty of the proposed framework and invite the reviewer to ask further questions.
**Clarification 1: The problem is not trivial**
Raw experimental observations do not contain the atomic model. **In crystallography**, the raw experimental data are intensities of the X-ray diffraction patterns collected at different orientations. Phases, required to reconstruct the electron density, cannot be directly measured; they are estimated to produce a rough electron density map, which is then iteratively refined (partially manually): improving the fit of the model to the density yields better phase estimates, which in turn improve the map.
Traditionally, the structure is represented as a product of marginal distributions, i.e., Gaussian distributions centered at atom locations with B-factors representing the uncertainty in each atom's position. This representation is limited since: 1. ~15% of protein structures exhibit multiple backbone conformers (modes), reinforcing the fact that the electron density is an ensemble measurement; 2. it loses information crucial for mechanistic structural biology studies.
We believe ours is probably the first approach to sample from the joint distribution of the entire set of atoms. This is possible because of AF3's sequence-conditioned diffusion model. We believe that ensembles of realizations sampled from the joint distribution are necessary to study complicated effects like allostery. We are currently working on refitting the structures in the PDB with ensembles instead of B-factors.
In **biomolecular NMR**, through-space dipolar couplings are measured and assigned to imply pairwise distance restraints within the structure/ensemble. Recovering the structure/ensemble from distance restraints requires restrained molecular dynamics simulations, taking hours to days to produce one sample. Typically, hundreds of candidate structures are simulated, and the ~20 lowest-energy ones are selected. Not only is this approach time-consuming, it also fails to treat the NOE restraint as an ensemble measurement [Fowler NJ et al., 2022, Structure]. Attempts at ensemble MD [Lindorff-Larsen et al., 2005; Lange et al., 2008] are extremely computationally expensive and have been tested on only a handful of proteins. Our approach accelerates this workflow by at least two orders of magnitude. We are currently running NMR-based structure determination for the entire PDB. We believe downstream analyses of the refitted NMR structures could potentially lead to scientific discoveries.
**Major concern 2: Baselines.**
Our method is not an “improved version of AlphaFold” and solves a different problem. AF3 is an inductive model capable of predicting the structure from the sequence in the absence of any experimental input. We use AF3 as a prior in solving the inverse problem of recovering the structure given the experimental input (transductive method). AF3 performs well on many protein regions and our benchmarks were chosen as “difficult” and interesting cases where AF3 fails consistently. We show that guiding with experimental data produces structures consistent with the experiment and often fits the electron density better than the PDB structures.
We compare the PDB structure as the state of the art solution and unconditional AF3, to quantify the information gained due to the conditioning by the experimental input. “Simple” cases in which AF3 already performs well, emphasize that the experimental guidance does not deteriorate performance (Tables in the Appendix). We extended our experiments to include additional baselines and proteins. The updated quantitative results are available at the following link - X-ray: https://postimg.cc/XZt5h686, NMR: https://postimg.cc/Wtws9D06.
*No real “ground truth” exists*. The observables contain only indirect information about the underlying atomic structural ensemble and cannot be compared to directly. The electron density does not contain atom labels, and in NMR only distance constraints are observed directly. Hence there is no real “ground truth”. However, we still quantify and report quality-of-fit criteria as is customary in the field.
**Minor comments:**
> CryoDRGN
CryoDRGN solves inverse problems in cryo-EM, outputting electrostatic potential maps rather than atomic models, and is not relevant to the proposed methods.
> NOE
Sec. 3.2 explains how NOE is manifested in the form of interatomic distance constraints. We will add a brief explanation about the physics of the underlying phenomenon.
> “inverse problems” feels too broad
We can refine it to “Experiment-guided AlphaFold for characterizing protein structure ensembles” or similar.
>Need to flip to appendix for results
With more results than can fit within the space limitations, we had to relegate some of them to the Appendix. For the main text, we favored visuals conceptually demonstrating the different test cases. It is not ideal, and we will attempt a better distribution of the results.
Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples | Accept (poster) | Summary: This paper introduces "Flow of Reasoning" (FOR), a novel, data-efficient method for finetuning large language models (LLMs) to achieve divergent reasoning, i.e., generating multiple, diverse, and valid solutions to multi-step reasoning problems. FOR formulates LLM reasoning as a Markovian flow on a directed acyclic graph and adapts principles from Generative Flow Networks (GFlowNets) to train the LLM to sample reasoning paths with probabilities proportional to the problem's reward. The authors show, across six challenging reasoning benchmarks (including BlocksWorld, Game24, Rubik's Cube, 1D-ARC, GSM8k, and ProntoQA), that FOR, with minimal training examples (around 15), significantly outperforms various baselines including supervised finetuning, reward-maximization reinforcement learning, and prompting-based methods, in terms of both solution accuracy and diversity. The method leverages efficient exploration techniques including local search and a prioritized replay buffer to improve the training process.
## update after rebuttal
As discussed during the rebuttal period, I think some minor issues still exist, and the authors need to add some details to the paper. I would like to maintain my score.
Claims And Evidence: 1. The paper focuses on 6 main benchmark tasks. It also performs some testing of OOD transfer from smaller to larger problems. But how representative are these benchmarks of all multi-step reasoning problems? There is limited discussion of the types of reasoning they don't cover. For example, are there types of reasoning (e.g. causal, counterfactual) for which this approach is expected not to be a good choice?
2. Computational Cost of GFlowNets vs. Alternatives: The introduction and related work sections highlight the potential computational cost of search-based inference methods (ToT, RAP). While FOR amortizes inference cost into training, the paper acknowledges in Appendix C.3 that FOR's training time is significantly higher than SFT and even PPO (Table 7: FOR 6833s, SFT 196s, PPO 1740s). This crucial point needs to be discussed more prominently in the main text, not just the appendix. The trade-off between inference speed and training cost should be explicitly addressed. The claim of efficiency is somewhat misleading without this context. The paper argues for amortized inference, but a user concerned with overall computational cost (including training) might still prefer a slower inference method with much faster training.
Methods And Evaluation Criteria: 1. The paper uses a diversity metric based on manually annotating 50 test examples to evaluate semantic differences between solutions in GSM8K. While this is a reasonable approach, manual annotation can be subjective, and 50 examples might be insufficient for a robust evaluation of diversity in a task as open-ended as mathematical reasoning. The paper should acknowledge the limitations of this approach and, if possible, explore alternative or supplementary diversity metrics that are less reliant on manual annotation, or increase sample size. For instance, using different paraphrases, could be an automated way to detect different solution paths.
Theoretical Claims: N/A
Experimental Designs Or Analyses: 1. The paper mentions that search-based methods (ToT, RAP) and O1-series are evaluated with limited runs due to time and budget constraints. This is a significant limitation, as it can lead to an inaccurate assessment of their performance, especially regarding diversity and creativity, which are inherently stochastic. The paper should either conduct more runs (ideally) or clearly acknowledge the potential for underestimating these baselines' capabilities.
2. The ablation study analyzes the impact of removing each component individually. It would be helpful to also discuss potential interactions between these components. For example, is the replay buffer more important when local search is not used? A more comprehensive ablation study, though potentially computationally expensive, would explore removing combinations of components.
Supplementary Material: I reviewed the supplementary material (Appendices A - H).
Relation To Broader Scientific Literature: The paper leverages GFlowNets, extending their application beyond established domains like molecule generation to multi-step reasoning, similar to but distinct from concurrent work like GFN-CoT by focusing on reasoning steps rather than token-level generation. The work incorporates ideas from reinforcement learning exploration, adapting methods like local search and prioritized replay buffers to the GFlowNet framework for LLMs. Finally, it positions itself within the broader context of LLM prompting and fine-tuning, presenting an alternative approach that attempts to address data requirements of supervised methods and the lack of diversity focus in reward-maximization reinforcement learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. While applying GFlowNets to LLM reasoning is novel in this specific formulation, the core techniques (trajectory balance, local search, replay buffer) are adapted from existing GFlowNet literature.
2. The paper acknowledges Takase et al. (2024) as concurrent work but states it is limited to math problems. A more thorough comparison, even if brief, would be valuable. Are there other concurrent works on applying GFlowNets or similar diversity-seeking methods to LLMs that should be acknowledged and differentiated?
3. The algorithm description could be more precise. For example, the "Sample from training dataset" step needs more detail. How are examples selected? Randomly? With specific criteria? The update of the replay buffer D also needs clarification. Are all sampled trajectories added, or only those exceeding a certain reward threshold?
Other Comments Or Suggestions: N/A
Questions For Authors: 1. The paper acknowledges in the appendix that FOR's training time is significantly higher than SFT and PPO. Could you discuss this trade-off between inference speed and training cost more prominently in the main text? In what scenarios would the increased training cost of FOR be justified by the improved inference performance?
2. Have you explored any methods to reduce the training cost of FOR, such as more efficient exploration strategies or approximations of the GFlowNet objectives?
3. How does the performance of FOR change with different base LLMs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and suggestions.
- Q1. How representative are the benchmarks? Are there reasoning types where it fails?
A1: Some tasks, like GSM8K and Game24, are popular LLM benchmarks, while others, like BlocksWorld and 1D-ARC, are known to be challenging for LLMs. FoR remains a general framework for multi-step reasoning: whenever a problem can be decomposed into intermediate states and has a well-defined reward, we expect FoR to be applicable.
- Q2: Tradeoff between amortized inference vs. slower inference with faster training. When training cost of FoR is justified by its better inference performance?
A2: Unlike ToT/RAP, which lack diversity, FoR enables efficient amortized inference with diversity. Though training takes longer, it eliminates the repeated, expensive inference-time searches of ToT/RAP, keeping the overall cost low when many tasks must be solved. In such cases, the initial training cost is largely offset by the benefits of efficient inference. Compared to SFT/PPO, FoR has higher training costs but is highly data-efficient, needing only ~15 examples, which reduces data collection costs.
- Q3: Diversity metric (human annotation & automatic)
A3: Following your advice, we use a paraphrase-based method [1] with GPT-4o to automatically evaluate solution diversity on GSM8K. Please see detailed results in A2 to reviewer C9xd due to page limit. The results show that FoR’s diversity is still higher than baselines by using GPT4o, although it consistently overestimates.
- Q4: Evaluation of diversity for search-based methods and O1-series.
A4: We ran ToT-DFS multiple times on the Blocksworld task with the same setup as in the paper. The results are shown below:
|Method|2-step Acc.(%)|4-step Acc.(%)|4-step Diversity|6-step Acc.(%)|6-step Diversity|
|-|-|-|-|-|-|
|ToT-DFS|40.0|42.9|1.0|31.3|1.1|
|FoR|100|98.4|1.3|78.4|1.3|
Search methods (ToT/RAP) show limited diversity despite multiple runs. Table 1 in the paper shows ToT/RAP need 4-40× FoR's inference time per sample, making an equal-sample comparison unfair in compute time. We'd like to clarify that Table 1 shows that O1-mini achieves decent accuracy but limited diversity across multiple runs.
- Q5: Replay buffer is more important without local search?
A5: We add an additional ablation study by removing the local search to assess the impact of the other components in FoR. See the results below:
|Method|4-step Acc.(%)|4-step Div.|6-step Acc.(%)|6-step Div.|
|-|-|-|-|-|
|FoR w/o local search|89.7|1.2|53.9|1.3|
|w/o replaybuffer|78.6|1.1|34.3|1.2|
|w/o ϵ-sampling|83.3|1.1|49.5|1.1|
|w/o augmented reward|71.4|1.0|30.3|1.2|
Table 5 in the paper and the above results show that removing the replay buffer causes a smaller performance drop in FoR with local search (4-6%) than without it (11-19%), indicating that the replay buffer is more critical when the local search is removed.
- Q6: While applying GFlowNets to LLM reasoning is novel, core techniques are adapted from existing GFlowNet literature.
A6: Thanks for recognizing our novel formulation, which allows us to adapt existing approaches to the new multi-step LLM reasoning domain. This ability to reuse and generalize, rather than reinvent everything from scratch, is a key advantage of our work.
- Q7: Related works.
A7: We'll add more discussion on Takase et al. (2024) in the revision. Their work generates diverse math solutions via token-level diversity, similar to Hu et al. (2024) (mentioned in our Introduction, Lines 95–96). Oh et al. (2024) [2] follow these lines and apply GFlowNets to LLM preference optimization. In contrast, FoR targets broader reasoning tasks beyond math or preference learning, modeling thought process structures with GFlowNets.
- Q8: Details on training data sampling and replay buffer update mechanism.
A8: We used random sampling from the full training set. For the replay buffer, we set the capacity to store 50 trajectories, new trajectories will replace the lowest-reward ones when the buffer is full. These details will be added to Appendix D.
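The update rule described in A8 (fixed capacity of 50, with new trajectories replacing the lowest-reward stored one when full) can be sketched as follows. This is our illustrative reading of the mechanism, not the authors' implementation; the class and method names are hypothetical.

```python
import heapq

class RewardReplayBuffer:
    """Fixed-capacity buffer keyed on reward via a min-heap:
    when full, a new trajectory evicts the lowest-reward stored one."""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self._heap = []    # entries: (reward, insertion_id, trajectory)
        self._counter = 0  # tie-breaker so trajectories are never compared

    def add(self, trajectory, reward):
        entry = (reward, self._counter, trajectory)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif reward > self._heap[0][0]:
            # push the new entry and pop the current minimum in one step
            heapq.heapreplace(self._heap, entry)

    def trajectories(self):
        return [t for _, _, t in self._heap]

    def __len__(self):
        return len(self._heap)
```

A trajectory with reward below the current minimum of a full buffer is simply discarded, so the buffer monotonically accumulates higher-reward trajectories for replay.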
- Q9: More methods (e.g. efficient exploration) to reduce the training cost.
A9: We use (1) local search and (2) ϵ-sampling (Sec. 4.7) to encourage efficient exploration. In Game24, we use offline data to reduce model's exploration costs. Other methods to accelerate training are a promising future direction.
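The ε-sampling exploration mentioned in A9 — occasionally taking a uniformly random next step instead of the policy's sampled step — can be sketched as below. This is a hedged illustration; `epsilon_sample` and its arguments are our hypothetical names, not the paper's code.

```python
import random

def epsilon_sample(step_probs, epsilon=0.1, rng=random):
    """With probability epsilon, pick a uniformly random next reasoning step
    (exploration); otherwise sample from the policy's step distribution."""
    steps = list(step_probs)
    if rng.random() < epsilon:
        return rng.choice(steps)
    return rng.choices(steps, weights=[step_probs[s] for s in steps])[0]
```

Setting epsilon to 0 recovers pure on-policy sampling, while larger values trade sample quality for broader exploration of the reasoning DAG.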
- Q10: Different base LLMs.
A10: We test LLama-3-8B, Qwen-2.5-7B, and InternLM2.5-7B-Chat on Blocksworld (one task due to rebuttal constraints), and FoR consistently outperforms baselines (see A5 to reviewer c9xd due to page limit).
References:
[1] Michail et al. "PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models." COLING 2025.
[2] Oh Joon Kwon, et al. GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets. EMNLP 2024 | Summary: The paper introduces a novel fine-tuning method called Flow of Reasoning (FOR) that trains large language models (LLMs) to generate diverse, high-quality multi-step reasoning paths using minimal training examples. The key idea is to formulate the reasoning process as a Markovian flow on a directed acyclic graph (DAG) and to leverage GFlowNet-inspired training objectives. By assigning probabilities to entire reasoning trajectories proportional to an (unnormalized) reward, FOR encourages the discovery of multiple valid solution paths. Extensive experiments on six reasoning tasks demonstrate that FOR not only improves overall accuracy but also significantly enhances the diversity and creativity of the generated solutions compared to standard methods such as supervised fine-tuning (SFT), reward-maximizing reinforcement learning , and various prompting-based approaches.
Claims And Evidence: The claims made in the submission are well-supported by empirical evidence:
1. The superiority of FOR over baselines is demonstrated across six diverse reasoning tasks with consistent improvements in accuracy, diversity, and creativity metrics (Tables 1-6).
2. The ablation studies (Table 5) convincingly show the contribution of each component (local search, augmented rewards, replay buffer, ε-sampling).
3. Data efficiency is well-demonstrated through comparisons with SFT trained on varying amounts of data (Figure 3).
4. The case studies in Figures 5-6 provide qualitative evidence of FOR's ability to discover diverse solution paths.
Methods And Evaluation Criteria: The proposed method’s formulation of multi-step reasoning as a Markovian flow is both novel and well-motivated. Using the trajectory balance constraint to link the flow to rewards is theoretically sound and aligns with recent advances in GFlowNets. The evaluation criteria—accuracy, diversity (semantic differences among correct solutions), creativity (unique solutions), and runtime efficiency—are appropriate for the task at hand and provide a comprehensive picture of the method’s strengths and limitations.
Theoretical Claims: The paper primarily presents a conceptual framework rather than making formal theoretical claims requiring proofs. The authors adapt existing GFlowNet theory to the domain of reasoning steps in a sound manner.
Experimental Designs Or Analyses: The experimental designs are robust:
* Task Diversity: Evaluations span across several reasoning domains (embodied, mathematical, spatial, abstraction), which reinforces the generality of the approach.
* Baselines and Metrics: The comparisons with both prompting-based methods and various fine-tuning strategies, along with clearly defined evaluation metrics, add rigor to the analysis.
One point for further improvement would be a deeper analysis of how hyperparameters (especially in reward design) affect the outcomes. Additionally, exploring the method’s performance with varying amounts of training data could further highlight its data efficiency.
Supplementary Material: The supplementary materials include additional details on:
* Prompt design and local search procedures.
* Extended experimental results and ablation studies.
* Detailed derivations for the training objectives.
While the supplementary content is comprehensive, providing even more in-depth discussions on hyperparameter sensitivity and the derivation of the trajectory balance constraint would be beneficial.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the current research LRMs, which could potentially be an approach to refine the thinking and exploration process of the current reasoning trajectory.
Essential References Not Discussed: To my knowledge, there are no essential references not discussed.
Other Strengths And Weaknesses: Strengths:
* Novel Formulation: The flow-based perspective and the integration of GFlowNet techniques are both innovative and promising.
* Comprehensive Evaluation: The extensive experimental validation across multiple reasoning tasks is a significant strength.
* Data Efficiency: The method’s ability to work with minimal training data is particularly compelling.
Weaknesses:
* Complexity in Derivations: Some of the theoretical derivations might be challenging for readers not already familiar with GFlowNets.
* Hyperparameter Sensitivity: The reliance on handcrafted reward designs and specific hyperparameters might limit the method’s generalizability without further analysis.
* Scalability: While the approach works well on the tasks presented, its performance when scaling to larger models or more open-ended tasks is not fully explored.
Other Comments Or Suggestions: Suggestions:
* Consider including a more detailed discussion on the sensitivity of the method to different hyperparameter settings, especially in the reward design.
* A discussion on potential computational overhead or scalability challenges when applying the method to larger models or datasets would strengthen the paper.
* Clarifying the limitations and potential failure modes of the method could provide a balanced view of its applicability.
* Try to generalize the current FoR to wider range of challenging reasoning tasks, such as Math and Code.
Questions For Authors: Question:
1. How sensitive is FOR to the reward design? Could you provide guidelines for designing rewards for new tasks to help understand the generalizability of your approach?
2. The training cost of FOR is significantly higher than SFT (Table 7). Are there optimizations that could make the approach more computationally efficient without sacrificing performance?
3. How does FOR perform when the number of possible reasoning paths grows very large? Are there scaling limitations when applying this to more complex reasoning tasks?
4. Have you explored combining FOR with other techniques like verifiers or self-consistency approaches? This might address some of the limitations for longer-range reasoning.
5. How did you determine the reward weights (e.g., λ values) for each task? Is there a systematic approach to tuning these hyperparameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for recognizing our strong performance across 6 benchmarks, ablation studies, data efficiency, and case studies.
- Q1: Hyperparameters selection and analysis? Guidelines for designing rewards? Handcrafted rewards?
A1: Please refer to Section 4.7 and Figure 3 for the hyperparameter analysis on a small training set (i.e., 15). We search for the best-performing λ values on each task's training set.
We further assess how the intermediate reward weight λ affects test set performance. Results for varying λ values are shown below:
|FoR|λ=0 Acc.(%)|λ=0 Div.|λ=0.5 Acc.(%)|λ=0.5 Div.|λ=1 Acc.(%)|λ=1 Div.|λ=1.5 Acc.(%)|λ=1.5 Div.|λ=2 Acc.(%)|λ=2 Div.|
|-|-|-|-|-|-|-|-|-|-|-|
|4-step|90.5|1.2|92.9|1.2|97.6|1.2|98.5|1.3|95.2|1.3|
|6-step|47.1|1.2|71.7|1.3|75.8|1.3|78.4|1.3|76.8|1.3|
The results show that performance consistently improves as λ increases up to 1.5, but drops a bit at λ= 2 across 4- to 6-step settings. The influence of λ becomes more pronounced with more steps.
For the guidelines, we use the generalizable common principles: 1. For tasks with likely correct outputs (e.g., GSM8K), a binary success reward suffices. 2. For challenging reasoning tasks (e.g., BlocksWorld, Game24) with sparse rewards, intermediate rewards from prior works (Hao et al., 2023) (e.g., LLM log-likelihood, rule-based) help.
Regarding handcrafted rewards, in GSM8K, we simply use a standard outcome reward (0/1). Additionally, we use the reward model Qwen2.5-Math-PRM-7B, achieving 62.62% accuracy and 1.31 diversity (please see A2 to reviewer c9xd due to page limit). These two reward designs show that FoR does not rely on sophisticated reward design, supporting general applicability.
We will include all these in the revision.
- Q2: Varying amounts of training data.
A2: We ran additional experiments on the 6-step BlocksWorld task using Llama-3-8B with varying training sizes {1, 15, 30, 45, 60}, following the same setup as Section 4.7. The results are shown below:
|Method|n=1 Acc.(%)|n=1 Div.|n=15 Acc.(%)|n=15 Div.|n=30 Acc.(%)|n=30 Div.|n=45 Acc.(%)|n=45 Div.|n=60 Acc.(%)|n=60 Div.|
|-|-|-|-|-|-|-|-|-|-|-|
|SFT|15.0|1.0|40.0|1.0|50.0|1.0|60.0|1.1|70.0|1.0|
|FoR|45.0|1.3|80.0|1.3|85.0|1.3|90.0|1.3|90.0|1.4|
FoR consistently outperforms SFT across all data sizes—for example, with just one training example, FoR achieves a relative 200% higher accuracy and a relative 30% higher diversity. This highlights FoR’s data efficiency even in low-resource settings.
- Q3: Theoretical derivations.
A3: Please refer to Appendix B for background information on GFlowNets.
For the trajectory balance (TB) constraint, please see Section 3.2. Below is a short derivation of the TB constraint:
Let $P_F(τ) = \prod_{t=1}^{n} P_F(s_t \mid s_{t-1})$ be the forward trajectory distribution, and $P_B(τ) = \prod_{t=1}^{n} P_B(s_{t-1} \mid s_t)$ be the backward trajectory distribution.
The **TB constraint** requires: $P_F(τ) \cdot Z(s_0) = P_B(τ) \cdot R(s_n)$.
This constraint aligns the flow allocated to $τ$ under the forward policy with reward-scaled backward probability, enabling proper credit assignment.
We will add these in the revision.
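In log space, the TB constraint above corresponds to a squared residual between the forward flow and the reward-scaled backward probability. A minimal sketch of that residual (our illustration under the definitions in A3, not the authors' code; the function name and per-step log-probability lists are hypothetical):

```python
import math

def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, reward):
    """Squared log-space residual of the TB constraint
    P_F(tau) * Z(s_0) = P_B(tau) * R(s_n), i.e.
    (log Z + sum_t log P_F(s_t|s_{t-1})
     - log R(s_n) - sum_t log P_B(s_{t-1}|s_t))^2."""
    residual = log_Z + sum(log_pf_steps) - math.log(reward) - sum(log_pb_steps)
    return residual ** 2
```

When the forward trajectory flow exactly matches the reward-scaled backward probability, the residual, and hence the loss, is zero.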
- Q4: Scalability and open-ended tasks.
A4: For larger models, we run additional experiments with LLaMA-3-70B and Qwen2.5-72B, and FoR consistently achieves better accuracy and diversity (see A5 to reviewer c9xd due to page limit). On the open-ended GSM8K benchmark, FoR also achieves stronger performance, highlighting its scalability.
- Q5: Limitations and potential failure modes.
A5: Please see the Limitations in Appendix H and will move it to the main text. For potential failure modes, FoR, like other on-policy RL methods, may struggle in sparse-reward settings. However, FoR mitigates this by the augmented intermediate rewards (Sec. 4.7), which provide dense guidance, and by leveraging off-policy data (Sec. 4.3 Game24).
- Q6: More reasoning tasks, such as Math and Code.
A6: We’d like to clarify that FoR has already been evaluated on GSM8K, a math benchmark (Sec. 4.6), as well as on ARC-1D (Sec. 4.5), which involves Python program synthesis.
- Q7: Optimizations to be computationally efficient?
A7: We'd like to clarify that although FoR’s training is slower, it is data-efficient (Section 4.7), requiring only ~15 training examples—keeping the overall training cost low. Two potential further optimization directions are: 1. Off-Policy Data: Parallel off-policy training can reduce the computational cost (Appendix H). 2. Intermediate rewards: accurate reward functions (e.g., robust reward models) enable faster convergence.
- Q8: A large number of reasoning paths? Scaling limitations?
A8: FoR performs well with large reasoning spaces. For example, the Game24 task involves ~8,000 distinct reasoning trajectories per sample, showing that FoR could scale to complex reasoning tasks.
- Q9: Other techniques like verifiers?
A9: We use rule-based rewards as verifiers in Rubik’s Cube and 1D-ARC (Sec. 4.4&4.5). Your suggestion is also a promising direction. | Summary: The paper introduces Flow of Reasoning (FOR), a method for training Large Language Models (LLMs) to generate diverse, high-quality reasoning paths with minimal training examples. The authors formulate multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph, adapting Generative Flow Networks (GFlowNets) to train LLMs to sample reasoning paths with probabilities proportional to their rewards.
The key innovation is enabling "divergent reasoning" - generating multiple valid solutions to a problem rather than just maximizing rewards for a single solution path. FOR incorporates local search with destroy-and-reconstruction processes to augment training trajectories, and uses both online and offline exploration strategies including replays and ε-sampling.
The method demonstrates superior performance across six challenging reasoning tasks (BlocksWorld, Game24, Rubik's Cube, 1D-ARC, GSM8k, and ProntoQA), outperforming prompting-based methods and fine-tuning approaches in both accuracy and solution diversity with only ~15 training examples.
Claims And Evidence: The claims are well-supported by comprehensive experiments:
1. Performance comparisons with numerous baselines (CoT, ToT, RAP, SFT, PPO, GFN-CoT)
2. Detailed ablation studies demonstrating the contribution of each component
3. Quantitative metrics for accuracy, diversity, and creativity
4. Analysis of data efficiency showing FOR's effectiveness with limited examples
5. Well-designed case studies illustrating how FOR discovers multiple correct solutions
The methodology is sound and the empirical results convincingly demonstrate FOR's advantages.
Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem:
* The six reasoning tasks cover diverse domains (embodied, mathematical, spatial, abstraction, etc.)
* The metrics (accuracy, diversity, creativity) directly address the paper's goal of divergent reasoning
* The baselines represent state-of-the-art methods in both prompting and fine-tuning approaches
* The ablation studies isolate the contributions of individual components
The reward designs for each task are thoughtfully crafted to balance accuracy and exploration.
Theoretical Claims: There does not seem to be a concrete theoretical claim.
Experimental Designs Or Analyses: The authors:
* Report performance across multiple runs with standard deviations
* Define clear metrics that directly measure their objectives
* Include OOD testing to demonstrate generalization capabilities
* Provide detailed implementation specifications in the supplementary material
Supplementary Material: I reviewed the supplementary material including 1) Algorithm details (Appendix D & E), 2) Prompt templates for all tasks, 3) Case studies showing solution diversity, etc.
Relation To Broader Scientific Literature: The paper builds on three critical areas:
LLM reasoning: Extends CoT/ToT approaches by enabling diverse reasoning paths
GFlowNets: Adapts them from molecule generation to structured reasoning
Diverse sampling methods: Provides a principled alternative to beam search
Unlike previous applications of GFlowNets with LLMs, FOR implements higher-level modeling at the reasoning step granularity rather than token level, which is key to its success in reasoning tasks.
Essential References Not Discussed: The paper generally covers relevant literature thoroughly.
Other Strengths And Weaknesses: Strengths:
* Novel formulation of LLM reasoning as a Markovian flow problem
* Data efficiency (works with ~15 examples)
* Comprehensive evaluation across diverse reasoning tasks
* Clear ablation studies demonstrating component contributions
* Practical runtime comparable to efficient baseline methods
Weaknesses:
* The paper doesn't address integration with larger models or more complex real-world tasks
* The trajectory diversity metric could be more sophisticated to better capture semantic differences
* Limited exploration of other reward formulations that might further enhance diversity
* No analysis of whether diversity truly benefits downstream applications
Other Comments Or Suggestions: Nothing critical to comment
Questions For Authors: 1. How would the method perform with even larger models (70B+)? Would the gains in diversity increase or decrease relative to baselines?
2. Could the approach be extended to iterative reasoning refinement settings where solutions are revised based on feedback?
3. Have you explored whether a meta-learning approach could reduce the need for task-specific reward engineering?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for recognizing our innovative Markovian flow approach to multi-step LLM reasoning with GFlowNets, our strong performance across six benchmarks, and our method’s data efficiency.
- Q1: More complex real-world tasks?
A1: We evaluate FoR on six benchmarks that pose significant challenges for current LLMs (e.g., embodied reasoning in BlocksWorld and math reasoning in GSM8K). These tasks capture core difficulties encountered in real-world scenarios. For instance, the BlocksWorld task requires not only spatial reasoning but also practical world knowledge, closely mirroring real-world planning problems.
- Q2 The diversity metric to capture semantic differences?
A2: Thank you for your suggestion. We intentionally use human annotation for GSM8K to ensure the evaluation is accurate and reliable, despite the high cost of it. Here, we run additional experiments using a paraphrase-based method [1] to automatically evaluate the diversity with GPT-4o to measure the number of different solutions for each problem. The results on GSM8K are shown below:
|Method|Acc. (%)|Div. (Human)|Div. (GPT-4o)|
|-|-|-|-|
|CoT (2-shot)|45.72|1.12|1.60|
|SFT (α = 1.0)|52.69|1.13|1.63|
|FoR|57.39|1.26|1.72|
|FoR w/ Process Reward Model (PRM)|62.62|1.31|1.77|
Compared to human annotation, GPT-4o overestimates the reasoning diversity, indicating that these automatic metrics currently do NOT provide more reliable evaluation. So, we leave this evaluation method as future work.
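For clarity, the diversity figure used throughout these tables — the average number of semantically distinct correct solutions per solved problem, with equivalence classes assigned by a human annotator or a paraphrase detector — can be computed as below. This is our sketch of the metric's definition; the function name is hypothetical.

```python
def solution_diversity(labeled_solutions):
    """labeled_solutions: one list per problem, each element a class label
    assigned to a correct solution (e.g. by human annotation or GPT-4o
    paraphrase judgments). Returns the mean number of distinct solution
    classes per solved problem; unsolved problems (empty lists) are skipped."""
    counts = [len(set(labels)) for labels in labeled_solutions if labels]
    return sum(counts) / len(counts)
```

Under this definition a value of 1.0 means every solved problem has exactly one distinct solution, so values like 1.3 indicate roughly one extra distinct solution for every third problem.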
- Q3: Reward formulations that enhance diversity?
A3: Thanks for your inspiring comment. We agree that explicitly incorporating diversity into the reward formulation may further enhance solution diversity. We will discuss this in the next version.
- Q4: Analysis of diversity truly benefits downstream applications
A4: Thanks, we'll further highlight this in the revision. We'd like to clarify that diversity indeed benefits downstream tasks by improving accuracy, as mentioned in the Introduction (Lines 101-104). For a detailed analysis, please refer to Section 4.7 (Experiment Discussion) and Appendix F. We find that diversity promotes exploratory behaviors (e.g. in Game 24, it explores a large trajectory space), thereby enhancing the robustness of the model (Fig. 6).
- Q5: Scale with larger models, and diversity gains compare to baselines?
A5: Given the time and resource constraints in our research lab during the rebuttal period, we ran additional experiments to evaluate FoR's scalability and diversity gains using LLaMA-3 (8B & 70B) and Qwen2.5 (7B & 72B) on the BlocksWorld task. Results, shown below, indicate that FoR consistently improves accuracy and diversity with larger models. Compared to CoT baselines, FoR exhibits greater diversity gains as model size increases. This suggests that the gains of FoR become more pronounced as model capacity increases. Even with minimal data (15 examples), FoR yields clear improvements, highlighting its potential for robust performance across different base models. The results are shown below and will be included in the revision.
|Model|4-Step Acc. (%)|4-Step Div.|6-Step Acc. (%)|6-Step Div.|
|-|-|-|-|-|
|CoT 5-shot (Llama3-8B)|28.57|1.05|15.82|1.05|
|CoT 5-shot (Llama3-70B)|45.23|1.05|46.46|1.11|
|FoR (Llama3-8B)|98.41|1.27|78.44|1.33|
|FoR (Llama3-70B)|100.00|1.38|87.65|1.40|
|FoR (Qwen2.5-7B)|100.00|1.24|86.86|1.36|
|FoR (Qwen2.5-72B)|100.00|1.41|90.13|1.46|
|FoR (InternLM2.5-7B-Chat)|100.00|1.26|83.83|1.31|
- Q6: Iterative reasoning refinement based on feedback?
A6: Yes, FoR supports iterative refinement. During both training and inference, by incorporating feedback—from the model itself or an additional model—into the state, subsequent state predictions refine solutions iteratively. This aligns with R1/O1's approach, where each step either refines the current state or advances the reasoning. For example, in BlocksWorld, given a current block position, the model predicts an action like 'move blue onto yellow,' and then generates feedback on whether to proceed or refine. Based on the state including this feedback and block positions, the model predicts the next state, iteratively refining its actions or proceeding. While we haven't tested this, iterative refinement based on feedback is a promising future direction.
- Q7: A meta-learning approach to reduce the need for task-specific reward engineering?
A7: While we haven't directly explored meta-learning approaches, we ran an additional experiment on an existing process reward model, Qwen2.5-Math-PRM-7B, for the GSM8K task. Table in A2 indicates that using this reward model with FoR improves performance over the previous 0/1 outcome reward function, achieving a 9% relative increase in accuracy and a 4% relative gain in diversity. This shows a promising step towards reducing the need for task-specific reward engineering.
References:
[1] Michail et al. (2025) "PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models." COLING 2025. | null | null | null | null | null | null | null | null |
Manipulation Inversion by Adversarial Learning on Latent Statistical Manifold | Reject | Summary: This paper aims to improve GAN inversion methods to achieve both good reconstruction and realistic editing. Several findings in the paper indicate that to truly invert an image back to the latent space, it is better not only to avoid settling for a local minimum (which harms the realism of editing), but also to preserve the manifold of the latent space.
Claims And Evidence: I find the several findings from section 3 of the paper quite persuasive and interesting.
However, I am not fully convinced by the performance of this paper. Although the authors claim that better manipulation inversion equals better inversion, what matters is how well editing works after inversion. There is enough evidence that the method performs better than others in manipulation inversion, yet I cannot find clear evidence that this paper also performs better in editing realism.
Methods And Evaluation Criteria: The proposed method makes sense, as the previous point-based inversion methods do not guarantee good editing after inversion.
The metrics for inversion are reasonable. But as I mentioned above, not enough metrics about editing are presented.
Theoretical Claims: The main assumption seems to be Lemma 4.2. I think it is valid given that the assumption holds.
Experimental Designs Or Analyses: The given experiments look good.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: Not to my knowledge.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: 1. Strengths - This paper attempts to solve the general trade-off between inversion accuracy and editing performance in GAN. The findings in the paper and the proposed method is quite interesting and make sense to me.
2. Strengths - This paper shows better performance in manipulation editing.
1. Weaknesses- As I mentioned in "Claims And Evidence", more comparisons are needed to show the improvement in this paper in editing.
2. Weaknesses - I think there is an important experiment for the authors to complete: given a ground-truth latent code and the generated image, when the image is inverted to obtain an inverted latent, how similar are the ground-truth latent and the inverted latent? The similarity can be measured both in distance and in how the two latents would respond given the same editing (the latter should be where this paper performs better than the baselines).
3. Weaknesses- The writing in Section 4 is hard to follow. I understand the objective of the separate equations, yet the overall pipeline is still unclear to me. For example, the $L_{j}$ and $L_{s}$ from Eqn. (9) are not introduced in the paper. Also, the computation of S and J in Eqn. (5) needs further discussion beyond referring to another paper.
Other Comments Or Suggestions: No.
Questions For Authors: I will increase my score if the following questions can be answered.
1. Better writing for the method part. What's the entire pipeline? What's the meaning of the missing terms? See the "Claims And Evidence" for more detail.
2. More evaluation on inversion accuracy and editing performance. Please see the "Other Strengths And Weaknesses" and "Claims And Evidence" for more detail.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We wish to sincerely thank the reviewer for the valuable comments and insightful suggestions.
**[Q1 Weakness 1: Editing Performance]**
Indeed, as pointed out by Lemma 4.2 of our manuscript, editing performance is in accordance with the accuracy of manipulation inversion. Our best performance on manipulation inversion thus verifies the superiority of our method for image editing. We also newly conducted extensive evaluations on image editing, in which our method consistently achieves the best editing performance. We refer to our response to Reviewer#1qCg [Q3] for more details.
**[Q2 Weakness 2: Experiment of Inverting $w$]**
Yes. We agree with the reviewer that the distance between the ground-truth and inverted latent codes is also crucial to evaluate the effectiveness of our method. As illustrated in [Figure 2](https://anonymous.4open.science/r/icml2025_14926/fig_w.pdf), we thus conducted new evaluations regarding the latent space MSE, and report the results in [Table 4](https://anonymous.4open.science/r/icml2025_14926/table_w.pdf). As can be seen from this table, our method, by effectively optimizing the manipulation inversion with our VAT strategy, consistently achieves the best accuracy. We further evaluate the MSE between editing images, by choosing the smile direction, and also report the results in [Table 4](https://anonymous.4open.science/r/icml2025_14926/table_w.pdf), which again achieves the superior performances.
**[Q3 Weakness 3: Clarifying Section 4]**
Indeed, the random manipulation is sampled by our VAT strategy proposed in Section 4, in which one manipulation direction is optimized at each iteration. More specifically, for each latent code $\mathbf{w}$, we compute the perturbation loss $\psi(\mathbf{w}, \mathbf{v})$, and iteratively solve for the worst-case direction $\mathbf{v}^*$ via power iteration on the Hessian of $\psi$, i.e., (Golub \& der Vorst, 2000) of our manuscript, ensuring maximal disruption to the inversion consistency. The worst-case direction $\mathbf{v}^*$ is then used to optimize our encoder to achieve the manipulation inversion. We further provide [Algorithm 1](https://anonymous.4open.science/r/icml2025_14926/alg.pdf) to depict the overall pipeline of our method.
Moreover, $\mathcal{L}\_{j}$ and $\mathcal{L}\_{s}$ are the losses that correspond to the semantic and Jacobian terms. More specifically, by combining Equation (6) and (8) in our manuscript, we are able to calculate the Riemannian gradient given the target loss $\psi(\mathbf{w},\mathbf{v})=\mathbf{d}^T\mathbf{d}$, where $\mathbf{d}=f(g(\mathbf{w}+\beta\frac{\mathbf{v}}{||\mathbf{v}||_2}))-\beta\frac{\mathbf{v}}{||\mathbf{v}||_2}-\mathbf{w}$. In this way, we are able to move a step further based on Equation (6) of our manuscript, as follows
$$
\nabla_r\phi(\mathbf{w})=\nabla_e\phi(\mathbf{w})^T(\mathbf{S}^T\mathbf{S}+\mathbf{J}^T\mathbf{J})=2\mathbf{d}^T(\mathbf{S}^T\mathbf{S}+\mathbf{J}^T\mathbf{J}),
$$
where $\phi(\mathbf{w})=\psi(\mathbf{w},\mathbf{v})$. Therefore, the achieved Riemannian gradient via our established manifold is equivalent to calculating the Euclidean gradient of the modified target loss $\psi(\mathbf{w},\mathbf{v})=\mathbf{d}^T(\mathbf{S}^T\mathbf{S})\mathbf{d}+\mathbf{d}^T(\mathbf{J}^T\mathbf{J})\mathbf{d}$, and we thus denote $\mathcal{L}\_{j}=\mathbf{d}^T(\mathbf{J}^T\mathbf{J})\mathbf{d}$ and $\mathcal{L}\_{s}=\mathbf{d}^T(\mathbf{S}^T\mathbf{S})\mathbf{d}$, such that the manipulation inversion loss can be effectively optimized via our final loss given by Equation (9) of our manuscript.
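The gradient identity above can be checked numerically. In this sketch, random matrices stand in for the semantic directions $\mathbf{S}$ and the generator Jacobian $\mathbf{J}$; we verify that $2\mathbf{d}^T(\mathbf{S}^T\mathbf{S}+\mathbf{J}^T\mathbf{J})$ matches the Euclidean gradient of the modified loss $\mathcal{L}_s+\mathcal{L}_j$ with respect to $\mathbf{d}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder matrices: in the manuscript, S holds semantic directions and
# J is the generator Jacobian; random matrices suffice to check the identity.
d = rng.normal(size=5)
S = rng.normal(size=(3, 5))
J = rng.normal(size=(4, 5))

M = S.T @ S + J.T @ J  # symmetric metric of the latent statistical manifold

# Riemannian gradient as stated in the rebuttal: 2 d^T (S^T S + J^T J).
riemannian_grad = 2.0 * d @ M

# Euclidean gradient (w.r.t. d) of the modified loss
# L_s + L_j = d^T (S^T S) d + d^T (J^T J) d, via central differences.
def modified_loss(x):
    return x @ (S.T @ S) @ x + x @ (J.T @ J) @ x

eps = 1e-6
euclidean_grad = np.array([
    (modified_loss(d + eps * e) - modified_loss(d - eps * e)) / (2 * eps)
    for e in np.eye(5)
])

max_err = np.max(np.abs(riemannian_grad - euclidean_grad))  # numerically ~0
```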
Furthermore, $\mathbf{J}$ is essentially the Jacobian of the pre-trained generator, which can be computed efficiently by the gradients during each iteration, namely, via backpropagation with respect to pixel outputs (following Ramesh et al., 2018). On the other hand, $\mathbf{S}$ consists of semantic directions, obtained by either supervised or unsupervised manners, which formulates the semantic linear space in the latent space; this is obtained by Shen et al., 2020. We wish to thank the reviewer for pointing out the ambiguity of our Section 4, and we will elaborate more on this, including the calculation on $\mathbf{S}$ and $\mathbf{J}$, in the revision.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the rebuttal.
I will increase my rating, with the belief that the author will improve their writing, adding the more qualitative and quantitative evaluations in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive opinion and valuable comments! In the revised version, we will further improve our writing, and comprehensively include more qualitative and quantitative evaluations to solidify the demonstration of our advantages. | Summary: This paper aims to enhance the editing ability of current GAN inversion methods. First, this paper investigates the properties of the latent space of StyleGAN and obtains three interesting findings (Sec. 3). Based on these findings, this paper proposes adversarial learning for latent manipulation inversion and an anisotropic Gaussian distribution for latent features. Experimental results show improved reconstruction quality in latent manipulation inversion.
Claims And Evidence: The quality of editing pertains not only to the preservation of image details that are not intended for modification but also to the accuracy/quality of the desired attribute (direction). I believe that a more effective manipulation inversion can enhance detail preservation. However, I do not think it necessarily improves the accuracy/quality of the target attributes.
Methods And Evaluation Criteria: No, this paper does not evaluate how the proposed method affects the quality and accuracy of the editing results. For example, does the proposed method reduce the accuracy of adding or removing glasses?
Theoretical Claims: Yes.
Experimental Designs Or Analyses: This paper does not evaluate how the proposed method affects the quality and accuracy of the editing results.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The findings of StyleGAN's latent space are interesting; they provide a deeper understanding of StyleGAN and GAN inversion. The analytical method for these findings can be generalized to more common latent spaces.
Essential References Not Discussed: The essential related works are well-discussed and cited.
Other Strengths And Weaknesses: ### Strengths
+ A thorough analysis of the latent space in StyleGAN, with potential generalization to broader latent spaces, such as the h-space in Diffusion Models.
### Weaknesses
- There is a lack of a comprehensive study on how the proposed method affects editing accuracy and quality. For example, the Fréchet Inception Distance (FID) score of the edited results and the accuracy of adding or removing an attribute. These metrics are commonly used in other studies on image editing.
- I do not see the definitions of $L_j$ and $L_s$ in (9).
Other Comments Or Suggestions: N/A
Questions For Authors: 1. As mentioned above, please discuss or evaluate how the proposed method affects the editing quality/accuracy.
2. Please provide the definitions of $L_j$ and $L_s$.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Many thanks for the insightful suggestions.
**[Q1 Weakness 1: Quality and Accuracy of the Editing Results]**
Indeed, the realism and accuracy of image editing are tightly related to the accuracy of manipulation inversion, as proved in our manuscript. We further conducted comprehensive evaluations of editing performance; we refer to our response to Reviewer#1qCg [Q3] for more details. Notably, for several target attributes with distinct semantics, we calculate CLIP scores to assess the semantic accuracy of the edited images, in which our method achieves superior performance. Our method also obtains the lowest FID and highest ClipIQA scores, verifying the superior quality of the edited images. This is due to the manipulation-inversion-oriented optimization, in which advanced domains are searched and optimized in the latent space of StyleGAN.
**[Q2 Weakness 2: Clarifying $\mathcal{L}_j$ and $\mathcal{L}_s$]**
Many thanks for pointing out this. Indeed, by combining Equation (6) and (8) in our manuscript, we are able to calculate the Riemannian gradient given the target loss $\psi(\mathbf{w},\mathbf{v})=\mathbf{d}^T\mathbf{d}$, where $\mathbf{d}=f(g(\mathbf{w}+\beta\frac{\mathbf{v}}{||\mathbf{v}||_2}))-\beta\frac{\mathbf{v}}{||\mathbf{v}||_2}-\mathbf{w}$. In this way, we are able to move a step further based on Equation (6) of our manuscript, as follows
$$
\nabla_r\phi(\mathbf{w})=\nabla_e\phi(\mathbf{w})^T(\mathbf{S}^T\mathbf{S}+\mathbf{J}^T\mathbf{J})=2\mathbf{d}^T(\mathbf{S}^T\mathbf{S}+\mathbf{J}^T\mathbf{J}),
$$
where $\phi(\mathbf{w})=\psi(\mathbf{w},\mathbf{v})$. Therefore, the achieved Riemannian gradient via our established manifold is equivalent to calculating the Euclidean gradient of the modified target loss $\psi(\mathbf{w},\mathbf{v})=\mathbf{d}^T(\mathbf{S}^T\mathbf{S})\mathbf{d}+\mathbf{d}^T(\mathbf{J}^T\mathbf{J})\mathbf{d}$, and we thus denote $\mathcal{L}\_{j}=\mathbf{d}^T(\mathbf{J}^T\mathbf{J})\mathbf{d}$ and $\mathcal{L}\_{s}=\mathbf{d}^T(\mathbf{S}^T\mathbf{S})\mathbf{d}$, such that the manipulation inversion loss can be effectively optimized via our final loss given by Equation (9) of our manuscript. | Summary: This article introduces a manipulation inversion method for GAN models. It constructs the generative manifold using different editing vectors to create a more stable and reliable inversion space.
Claims And Evidence: This article conducts extensive experiments to demonstrate that their method achieves state-of-the-art (SOTA) performance. However, based on their theory, the approach could potentially work with multiple directions, though this is not explicitly stated.
Methods And Evaluation Criteria: This research is helpful to improve the quality of the image inversion.
Theoretical Claims: This article claims to apply $w'=w+v$, re-encode, and min-max the similarity between the results and $v$ on the manifold space. However, after reading, there are some issues that should be discussed:
1. The goal is to use different directions to construct the estimated manifold. However, due to the high dimensionality of the latent space, it is challenging to sample all possible directions or to optimize the manifold of the space effectively.
2. In Equation (7) of Section 4.3, the objective is to maximize $v$, but since $v$ is a vector, the meaning of this maximization is unclear. If the intention is to maximize $||v||$, there is a discrepancy because the equation normalizes $v$ to $v/||v||$, making it a unit vector that does not influence the maximization directly.
Experimental Designs Or Analyses: In inversion research, the primary focus should be on the editing of inverted images. However, there is limited experimental work addressing this aspect. For face images, you can compute the ID metric [1] to demonstrate that your method preserves identity consistency between the original, inverted, and edited images.
Also, you should report the time and memory usage of your method.
[1]Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4690–4699, 2019.
Supplementary Material: no supplementary material
Relation To Broader Scientific Literature: Previous methods use one-way computation; this method optimizes the encoder manifold via multiple inversions.
Essential References Not Discussed: [1] Bhattad, Anand, et al. "Make it so: Steering stylegan for any image inversion and editing." arXiv preprint arXiv:2304.14403 (2023).
[2] Wang, Tengfei, et al. "High-fidelity gan inversion for image attribute editing." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[3] Yao, Xu, et al. "Feature-style encoder for style-based gan inversion." arXiv preprint arXiv:2202.02183 (2022).
[4]Roich, Daniel, et al. "Pivotal tuning for latent-based editing of real images." ACM Transactions on graphics (TOG) 42.1 (2022): 1-13.
Other Strengths And Weaknesses: Strengths:
Introduces a new method to invert images; the results are more stable and reliable.
The article is well written.
Weaknesses:
Please see the questions below.
Other Comments Or Suggestions: no
Questions For Authors: First, the goal is to use different directions to construct the estimated manifold. However, due to the high dimensionality of the latent space, it is challenging to sample all possible directions or to optimize the manifold of the space effectively.
Secondly, in Equation (7) of Section 4.3, the objective is to maximize $v$, but since $v$ is a vector, the meaning of this maximization is unclear. If the intention is to maximize $||v||$, there is a discrepancy because the equation normalizes $v$ to $v/||v||$, making it a unit vector that does not influence the maximization directly.
Thirdly, there are some other method you should compare with:
[1] Bhattad, Anand, et al. "Make it so: Steering stylegan for any image inversion and editing." arXiv preprint arXiv:2304.14403 (2023).
[2] Wang, Tengfei, et al. "High-fidelity gan inversion for image attribute editing." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[3] Yao, Xu, et al. "Feature-style encoder for style-based gan inversion." arXiv preprint arXiv:2202.02183 (2022).
[4]Roich, Daniel, et al. "Pivotal tuning for latent-based editing of real images." ACM Transactions on graphics (TOG) 42.1 (2022): 1-13.
Fourthly, when the latent point is initialized in a high-density region, i.e., on hubness latents [5], the inversion of the image will fail. Would your method have a similar problem?
[5] Liang, Yuanbang, et al. "Exploring and exploiting hubness priors for high-quality GAN latent sampling." International Conference on Machine Learning. PMLR, 2022.
Finally, while your claims suggest that using multiple directions could be effective, this approach is not reflected in your experimental results.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for the insightful comments.
**[Q1 Theoretical Claims: Multiple Directions]**
**Questions for Authors 1: Handling Multiple Directions in Manifold Construction:** Indeed, our manifold is established based on the semantic direction $\mathbf{S}$ and Jacobian matrix $\mathbf{J}$. The Jacobian matrix $\mathbf{J}$ is defined by excessive sampling in the latent space. However, we effectively employed the gradients of the generator to obtain $\mathbf{J}$, following (Ramesh et al., 2018) of our manuscript; this avoids the excessive sampling procedure. Our manipulation inversion requires sampling $\mathbf{v}$ when calculating the loss of Equation (1) in our manuscript, which is intractable. We thus propose the VAT strategy, by choosing the worst-case $\mathbf{v}^*$. The worst-case $\mathbf{v}^*$ can also be effectively solved by power iteration on the Hessian of $\psi$, i.e., (Golub & der Vorst, 2000) of our manuscript. We also provide our detailed pipeline in the newly added [Algorithm 1](https://anonymous.4open.science/r/icml2025_14926/alg.pdf).
**Questions for Authors 2: Clarification on Equation (7) and Maximization of $\mathbf{v}$:** Eq. (7) aims to maximize the perturbation loss $\psi(\mathbf{w},\mathbf{v})$, by choosing the worst-case $\mathbf{v}^*$. By constraining $\mathbf{v}$ to a unit vector ($\frac{\mathbf{v}}{||\mathbf{v}||_2}$), we enforce the maximization by focusing on directions, instead of infinitely increasing the scales in the latent space. The scalar $\beta$ governs the perturbation strength instead. This way, the resulting $\mathbf{v}^*$ identifies the direction that most disrupts inversion consistency. This way, our VAT-related loss in Equation (7) of our manuscript essentially formulates an upper bound of the primary manipulation inversion loss given by Equation (1) of our manuscript. Based on this, minimizing $\psi(\mathbf{w},\mathbf{v}^*)$ thus ensures the robust and effective convergence of our encoder $f$ to arbitrary perturbations.
**[Q2 Experimental Designs Or Analyses: Editing Performance and Method Complexity]**
Indeed, as pointed out by Lemma 4.2 of our manuscript, editing performance is in accordance with the accuracy of manipulation inversion. Our best performance on manipulation inversion thus verifies the superiority of our method for image editing. We also newly conducted extensive evaluations on image editing, in which our method consistently achieves the best editing performance. We refer to our response to Reviewer#1qCg [Q3] for more details.
Regarding the complexity, instead of excessively sampling multiple directions, the proposed VAT strategy effectively reduces the computational complexity of our method. However, our method needs to calculate the Jacobian of the generator, thus requiring two forward passes during training. Consequently, our method was trained on a single NVIDIA GeForce RTX 4090 GPU, with a total training time of about $80$ hours; by comparison, the FSE baseline consumed 50 hours. Both methods had comparable memory requirements. Inference with our method does not increase computational complexity or memory cost compared to existing state-of-the-art GAN inversion methods.
**[Q3 Questions for Authors 3: Comparing Methods]**
Ref. [1] operates in Z space for better inversion accuracy and editing. However, we did not find the publicly available codes within the tight rebuttal period. Ref. [2] (HFGI) and Ref. [3] (FSE) are two state-of-the-art comparing methods and have been already compared in our manuscript. For Ref. [4], the proposed PTI is an optimization-based method, which fine-tunes the generator to allow for improved accuracy for each image, instead of the encoder-based methods for all images including Refs. [2,3] and our method. This requires excessive computation to search for the best per image during the inference. In the rebuttal, we also report the new comparisons of PTI in [Table 1](https://anonymous.4open.science/r/icml2025_14926/table_edit.pdf), [Table 2](https://anonymous.4open.science/r/icml2025_14926/table_id.pdf), and [Table 3](https://anonymous.4open.science/r/icml2025_14926/tab_reconstruction_quatitative_results.pdf), in which our method also achieves the best performances for both reconstruction and editing.
**[Q4 Questions for Authors 4: Hubness Problem]**
Indeed, we did not observe the hubness problem empirically. This may be due to two possible aspects: (a) **Pretrained Encoder Initialization:** The fixed encoder maps inputs to semantically stable regions in the latent space, avoiding the hubness-prone initialization that commonly appears with random sampling. (b) **Multi-Constraint Optimization:** Jacobian regularization stabilizes gradients, while semantic constraints anchor optimization to plausible latent regions, jointly preventing convergence to hubness-dominated local optima. We believe further analysis of the hubness problem could improve inversion, which is left for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I appreciate the clarifications, but I still have a few questions:
1. In the updated Algorithm 1, there are some square symbols—could you clarify what they represent?
2. In #1qCg [Q3], it is mentioned that the ClipDiff score is defined. However, the results are not presented as they are in Algorithm 1 and Tables 1, 2, and 3. Could you provide more details or clarify this?
3. Would it be possible to include an example of hubness initialization in the final version? It sounds like a significant improvement, and I would appreciate a concrete demonstration.
But I'm happy to improve the score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer very much for the valuable comments and insightful suggestions. We are glad that we've addressed your questions!
**[Q1 Clarifying Algorithm 1]**
The square symbols in [Algorithm 1](https://anonymous.4open.science/r/icml2025_14926/alg.pdf) indicate squared $l_2$-norms. More specifically, the perturbation loss $\psi(\mathbf{w}, \mathbf{v}) = ||f(g(\mathbf{w} + \beta \frac{\mathbf{v}}{||\mathbf{v}||_2})) - \beta \frac{\mathbf{v}}{||\mathbf{v}||_2} - \mathbf{w}||_2^2$ measures the squared $l_2$ distance between the perturbed reconstruction $f(g(\mathbf{w} + \beta \frac{\mathbf{v}}{||\mathbf{v}||_2})) - \beta \frac{\mathbf{v}}{||\mathbf{v}||_2}$ and the original latent code $\mathbf{w}$, where the subscript $||\cdot||_2$ denotes the $l_2$ norm and the superscript $||\cdot||^2$ is the squaring operation. Similarly, the manipulation inversion loss $\psi(\mathbf{w}, \mathbf{v^*})$ also employs the squared $l_2$-norm to quantify reconstruction error from the encoder output.
**[Q2 Clarifying ClipDiff Score]**
Yes, we introduce ClipDiff to evaluate editing performance on the Church dataset, where ground-truth attribute directions are not available. We report ClipDiff in [Table 1](https://anonymous.4open.science/r/icml2025_14926/table_edit.pdf), under the “Church Editing” section. This is because, unlike human faces with predefined semantic attributes (e.g., eyeglasses), the Church dataset lacks ground-truth annotated attributes, so the edit directions are obtained via GANSpace in an unsupervised manner. Therefore, we cannot rely on traditional metrics such as the ID and CLIP scores in [Table 2](https://anonymous.4open.science/r/icml2025_14926/table_id.pdf), which require ground-truth labels. On the other hand, [Table 3](https://anonymous.4open.science/r/icml2025_14926/tab_reconstruction_quatitative_results.pdf) reports reconstruction accuracy, for which the ClipDiff score may not be suitable.
To address this, we develop ClipDiff to evaluate editing effectiveness, defined as the cosine distance between the CLIP image embeddings of the input and edited images. A larger ClipDiff score indicates a distinct semantic shift, capturing the editing content. We also use ClipIQA as a complementary metric to assess the perceptual quality of the edited images. This ensures that editing performance is evaluated as being both semantically meaningful and visually coherent.
We therefore report both ClipDiff and ClipIQA scores for the Church dataset in [Table 1](https://anonymous.4open.science/r/icml2025_14926/table_edit.pdf), under the “Church Editing” section. From this table, our method achieves the highest ClipDiff ($\uparrow0.4273$) and ClipIQA ($\uparrow0.5104$), indicating that our edits are both semantically distinct and of superior perceptual quality compared to all the baselines. We shall further clarify our ClipDiff-based comparisons in our revised version, together with comprehensive evaluations of editing and of the comparing methods.
**[Q3 A Concrete Demonstration of Hubness Initialization]**
We appreciate the insightful feedback! Following the suggestion, we conducted additional experiments on hubness latent features. Indeed, the input to our method is real-world images, and we added new experiments on the latent features inverted from real-world images, which act as the initialization for the StyleGAN generator.
We then calculated the portion of inverted latent features falling into high-density regions, i.e., qualifying as hubness latent features that deteriorate inversion. More specifically, since the threshold $t$ determines the minimum number of $k$-nearest points for the current latent feature to be regarded as a hub, we inverted the latent features from $10k$ test images and evaluated the numbers of hubness latent features under varying $t$ thresholds. We report the results in [Figure 3](https://anonymous.4open.science/r/icml2025_14926/fig_hubness.pdf) and [Table 5](https://anonymous.4open.science/r/icml2025_14926/table_hubness.pdf), in which the default setting of $t$ is 50 in the suggested Ref. [5]. As can be seen from this figure and table, our approach consistently results in the smallest numbers of hubness latent features across the $10k$ samples and different $t$ thresholds, whereas baseline methods such as FSE, E2Style and pSp still exhibit considerable concentration in these problematic areas. For the default setting of $t=50$, our method yields non-hubness latent features for all $10k$ samples. This statistically demonstrates that our encoder-driven mapping effectively avoids the high-density regions (commonly referred to as hubness) in the $W$-space of StyleGAN, which negatively impact inversion quality as pointed out by the reviewer.
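The hub-counting procedure can be sketched as follows. This is an illustrative reading rather than the authors' code: a latent is flagged as a hub when its $k$-occurrence (the number of other latents whose $k$-nearest-neighbor lists it appears in) reaches the threshold $t$, and a toy configuration with one engineered hub stands in for the inverted $W$-space codes.

```python
import numpy as np

def count_hubs(latents, k=5, t=50):
    """Count latents whose k-occurrence (how often they appear among the
    k nearest neighbors of the other latents) reaches the threshold t."""
    n = latents.shape[0]
    # Pairwise squared distances, with self-distance pushed to infinity.
    d2 = ((latents[:, None, :] - latents[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]               # k-NN indices per latent
    occurrence = np.bincount(knn.ravel(), minlength=n)  # k-occurrence counts
    return int((occurrence >= t).sum())

# Toy configuration with one engineered hub: a point at the origin plus 60
# satellites on orthogonal axes. Each satellite is at distance 1 from the
# origin but sqrt(2) from every other satellite, so with k=1 the origin is
# everyone's nearest neighbor and its k-occurrence is 60.
satellites = np.eye(64)[:60]
latents = np.vstack([np.zeros((1, 64)), satellites])

n_hubs = count_hubs(latents, k=1, t=50)  # only the origin qualifies as a hub
```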
In our revised version, we will further clarify this, emphasizing that the observed advantage is a natural property of our pretrained encoder initialization and manifold-assisted optimization in the latent space of StyleGAN.
## update after rebuttal
Thank the authors for more information provided in rebuttal. I think many of my concerns have been addressed, including the implementation details and quantitative editing performance. But the additional visual examples still don't show any advanced editing case except the current glasses, smile and makeup. So I would just maintain my current ratings of weak leaning toward acceptance.
Claims And Evidence: The observations and analyses of existing issues of current algorithms are accurate and reasonable to me. I would suggest presenting more visual examples of the current limitations, failure cases, etc. Fig. 2 shows a framework but lacks details.
Methods And Evaluation Criteria: The idea of introducing a unit random manipulation to train the inverter is novel and effective. I'm wondering how this random manipulation is sampled. The paper mainly illustrates it mathematically but lacks implementation details.
The evaluation for reconstruction is well conducted, while that for editing is not good enough. Please see below "Experimental Designs Or Analyses" section.
Theoretical Claims: I checked the proofs and formulas in the paper and didn't find issues. I think they're clearly listed and deduced and easy to follow.
Experimental Designs Or Analyses: I think the major issue of the experiments and results is that the editing performance is less evaluated compared to the reconstruction part. The paper claims the editing quality as the key issue to address but lacks related evaluations.
Only Fig. 4 in the main paper displays some editing results, while the effects of "smiling" or "makeup" are naturally trivial. The second row of "eyeglasses" doesn't show a clear change compared to other methods.
Figs. 7 and 8 in the appendix show more results, but the total number of examples is still limited, the effects are not significant enough to apply, and no comparisons are made.
No table is provided to measure the editing performance. Although it's not as easy as for reconstructions to measure since there is no ground truth, there are still some indirect metrics such as CLIP score between the edited images and the editing prompt.
The datasets used in the experiments are also limited to human faces and cars. StyleGANs were also originally trained on the bedroom, church, etc. datasets, and there are many other third-party pre-trained StyleGANs to leverage, so that readers can better understand visually how the editing performance varies across scenarios and attributes.
Supplementary Material: The supplementary material provides additional visual results and theoretical proofs, which have been discussed in above questions.
Relation To Broader Scientific Literature: In my understanding, the idea of introducing a random unit manipulation to train an editing-plausible inverter might be extended to using human feedback or reinforcement learning, as compared with the discriminator used in this work.
Essential References Not Discussed: I think this paper has cited sufficient related references.
Other Strengths And Weaknesses: Please see above.
Other Comments Or Suggestions: I have no other comments.
Questions For Authors: I have no other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Many thanks for the valuable and insightful comments!
**[Q1 Claims and Evidence: More Visual Examples on Analysis]**
Yes. We have conducted further analysis on the generating latent space of StyleGAN, revealing the possible limitations for GAN inversion. More specifically, Figure 2-(a) of our manuscript essentially depicts our Finding 1, and we further add new visual examples in [Figure 1](https://anonymous.4open.science/r/icml2025_14926/fig_findings.pdf)-(a), which further highlights the multi-domain characteristics.
Regarding our Finding 2, Figure 2-(b) of our manuscript emphasizes the anisotropy property within the latent space. We also provide more visual examples in [Figure 1](https://anonymous.4open.science/r/icml2025_14926/fig_findings.pdf)-(b), by additionally calculating the identity (ID) values. From this figure, we can conclude that for the same scale, different manipulation directions exhibit distinct anisotropy across IDs and semantics.
Moreover, our Finding 3, together with Figure 3-(c) of our manuscript, reveals the inconsistent variation. We also clarify the details and provide more visual illustrations in [Figure 1](https://anonymous.4open.science/r/icml2025_14926/fig_findings.pdf)-(c), which demonstrate the non-monotonic trends of MSE in the image space, when consistently increasing the editing scale. Therefore, the above analysis reveals the characteristics of the latent space, in which existing GAN inversion methods may be limited during the optimization. For example, the majority of existing GAN inversion methods optimize the image MSE, which may not ensure the semantic consistency and accuracy within the latent space as pointed out by our Finding 3.
**[Q2 Methods And Evaluation Criteria: Lack Implementation Details]**
Indeed, the random manipulation is sampled by our VAT strategy, in which one manipulation direction is optimized at each iteration. More specifically, for each latent code $\mathbf{w}$, we compute the perturbation loss $\psi(\mathbf{w}, \mathbf{v})$, and iteratively solve for the worst-case direction $\mathbf{v}^*$ via power iteration on the Hessian of $\psi$, i.e., (Golub \& der Vorst, 2000) of our manuscript, ensuring maximal disruption to the inversion consistency. The worst-case direction $\mathbf{v}^*$ is then used to optimize our encoder to achieve the manipulation inversion. We further provide [Algorithm 1](https://anonymous.4open.science/r/icml2025_14926/alg.pdf) to depict the overall pipeline of our method.
**[Q3 Experimental Designs Or Analyses: Editing Performance]**
Indeed, for GAN inversion, evaluations of editing performance are *ad hoc* due to the lack of ground truth, as the reviewer also points out. This motivated us to establish a new proxy for evaluating editing realism, i.e., the accuracy of manipulation inversion, whose equivalence is proved via Lemma 4.2 of our manuscript. Therefore, Fig. 4 of our manuscript demonstrates that our method can precisely invert back various editing directions, which equally proves its superior editing performance.
We further conducted new evaluations based on the reviewer's suggestions, i.e., using the widely applied FID, ID, and Clip scores to evaluate editing performance in face scenarios, together with ClipDiff and ClipIQA scores for church scenarios. Please note that for face scenarios, we know the semantics of the editing attribute and are thus able to calculate Clip scores as suggested by the reviewer. For church scenarios, however, we cannot access the ground-truth semantics of the editing attribute, so calculating IDs and Clip scores is infeasible. Instead, we employ the editing directions from GANSpace and calculate the difference of Clip features between unedited and edited images (named ClipDiff) to verify the effectiveness of editing. This is calculated by
$$
\mathrm{ClipDiff} = 1 - \cos\left(\phi(\mathbf{x}_{\mathrm{original}}), \phi(\mathbf{x}_{\mathrm{edited}})\right)
$$
where $\phi(\cdot)$ denotes the Clip image encoder. Moreover, since the church scenario only contains 300 test images, FIDs can vary significantly, so we instead calculate ClipIQA scores to evaluate the subjective quality of edited images.
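As a sketch (assuming the Clip image features have already been extracted; `clip_diff` is an illustrative helper, not our released code), the score reduces to one minus the cosine similarity of the two feature vectors:

```python
import numpy as np

def clip_diff(feat_original, feat_edited):
    """ClipDiff = 1 - cosine similarity between the Clip image features
    of the unedited and the edited image (features are precomputed)."""
    a = np.asarray(feat_original, dtype=float)
    b = np.asarray(feat_edited, dtype=float)
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos
```

A larger ClipDiff thus indicates a larger semantic change between the unedited and edited images.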
We report the results in [Table 1](https://anonymous.4open.science/r/icml2025_14926/table_edit.pdf), in which our method consistently achieves the best editing performances, across different scenarios and attributes. Our superior editing performance is also in accordance with the best performance of manipulation inversion, in which their equivalence has been pointed out in our manuscript.
**[Q4 Relation to Broader Scientific Literature]**
Yes, using human feedback or reinforcement learning instead of our VAT mechanism is expected to be useful when extending our manipulation inversion method to large-scale generative models, where the latent space becomes even more complicated. We leave this as interesting future work.
On Teacher Hacking in Language Model Distillation | Accept (poster) | Summary: The paper introduces the phenomenon of "teacher hacking," where using a fixed offline dataset for distillation degrades performance, and proposes solutions like online data generation and increased data diversity to mitigate this issue.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The key contributions of the paper are well-grounded in the existing scientific literature and provide new insights and practical strategies for improving the distillation process of language models. The paper effectively builds upon previous research while addressing an understudied limitation of knowledge distillation, contributing to the broader understanding of language model training and optimization.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a novel phenomenon, teacher hacking, and provides a systematic framework for its analysis. The findings have important implications for the development of more robust language models.
2. The authors conduct extensive experiments on multiple datasets and model configurations, providing a thorough understanding of the teacher hacking phenomenon and its mitigation strategies.
3. The paper offers practical strategies to mitigate teacher hacking, such as using online data generation and increasing dataset diversity, which can be directly applied in real-world scenarios.
Weaknesses:
1. The paper lacks significant theoretical claims or proofs, focusing primarily on empirical observations. A more theoretical understanding of the teacher hacking phenomenon could strengthen the paper.
2. The paper does not provide a comprehensive comparison with other distillation methods or techniques that could potentially mitigate teacher hacking. This limits the understanding of the relative effectiveness of the proposed strategies.
3. While the authors use multiple datasets, the scope is limited to specific tasks such as summarization, translation, and instruction following. Expanding the scope to include other tasks and datasets could provide a more comprehensive understanding of the phenomenon.
4. The paper does not provide a clear differentiation between teacher hacking and reward hacking, nor does it explore how insights from one phenomenon could inform the other. This limits the understanding of the unique aspects of teacher hacking and its broader implications.
5. The paper lacks detailed descriptions of the experimental procedures, and the code used for the experiments is not open-sourced. This makes it difficult for other researchers to reproduce the results and verify the findings, limiting the transparency and credibility of the research.
Other Comments Or Suggestions: See weaknesses above.
Questions For Authors: See weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank Reviewer jZCq for the valuable feedback. In the following, we address the questions raised in the review.
> **The paper lacks significant theoretical claims or proofs, focusing primarily on empirical observations. A more theoretical understanding of the teacher hacking phenomenon could strengthen the paper.**
We agree that theoretical statements could strengthen the paper, but we find the derivation of the theory very challenging since the statement of teacher hacking relies heavily on the presence of multiple models as well as on the optimization procedure itself. We leave theoretical explanations as a promising direction for further work.
> **The paper does not provide a comprehensive comparison with other distillation methods or techniques that could potentially mitigate teacher hacking. This limits the understanding of the relative effectiveness of the proposed strategies.**
In the Appendix, we provide an experiment with multiple distillation losses and data generation strategies, none of which were shown to be resistant to the effect of teacher hacking.
> **While the authors use multiple datasets, the scope is limited to specific tasks such as summarization, translation, and instruction following. Expanding the scope to include other tasks and datasets could provide a more comprehensive understanding of the phenomenon.**
While more experiments are always welcome, we believe our experiments encompass three relatively different tasks.
> **The paper does not provide a clear differentiation between teacher hacking and reward hacking, nor does it explore how insights from one phenomenon could inform the other. This limits the understanding of the unique aspects of teacher hacking and its broader implications.**
The effects of teacher hacking and reward hacking are different because there is no reward function during the distillation procedure and no teacher during the reinforcement learning from human feedback. However, we can interpret teacher hacking as an analogy to reward hacking that happens during the other part of the post-training pipeline. Nevertheless, the roots of both effects come from over-optimization of imperfect proxy objectives, but the nature of proxies is different.
> **The paper lacks detailed descriptions of the experimental procedures, and the code used for the experiments is not open-sourced. This makes it difficult for other researchers to reproduce the results and verify the findings, limiting the transparency and credibility of the research.**
Unfortunately, we cannot open source our code. However, we believe we have provided ample experimental details in the appendix to reproduce all our experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and clarifications. The paper introduces teacher hacking, an interesting and novel phenomenon. However, it lacks sufficient theoretical analysis to support and contextualize this finding. A more robust theoretical foundation would help fully realize its potential impact. Given the theoretical focus of the paper and the modest experimental improvements, I am inclined to maintain my current evaluation. While the work shows promise, it would benefit from further theoretical exploration in future research directions.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer jZCq for highlighting that "The paper introduces teacher hacking, an interesting and novel phenomenon," which is the main goal of our paper. We also agree with Reviewer jZCq that "it would benefit from further theoretical exploration in future research directions." We hope this current paper would be a first step in that direction. | Summary: This paper identifies and formally defines the phenomenon of teacher hacking, which describes a tendency of the student LM to "overfit" to the teacher model instead of the ground-truth, golden oracle distribution we want it to learn. The authors identify the use of a fixed offline dataset as a key reason for teacher hacking to occur and advocate the use of online data generation to avoid teacher hacking during LM distillation. They also highlight data diversity as a key factor and suggest that teacher hacking can be effectively detected by observing when the optimization deviates from polynomial convergence laws.
## Update after rebuttal
I am satisfied with the authors' rebuttal and keep my score of 3.
Claims And Evidence: The claims made by this paper are quite clear, and the evidence shows satisfying support for them. I don't have questions regarding this part.
Methods And Evaluation Criteria: The evaluation criteria and dataset selection make sense to support the claim of this work.
Theoretical Claims: This paper mainly discusses an empirical phenomenon and does not highlight theoretical analysis; there is some theoretical discussion based on JS and KL divergences regarding the loss used in LM distillation, and this part looks fine to me.
Experimental Designs Or Analyses: The experimental settings are generally sound. I have the following questions about the experimental design and result analysis:
Q1: You designed two stages for the experiment, and in the first stage, the oracle LM generates an oracle dataset for SFT on both the teacher and student models to provide an initial checkpoint. Why would it be necessary to do this for the student model, given that the student model needs to learn from the teacher model in the second stage by distillation (and implicitly learns from the oracle model anyway)?
Q2: Not enough analysis of the absence of teacher hacking when using online data sources. Why does the teacher hacking problem naturally disappear in the online data source distillation setting? Intuitively, the more (diverse) data samples used in distillation, the more closely the student model should mimic the teacher's behaviour. It is not surprising that the proxy metric continues to decrease with longer training, but it is surprising (and lacks explanation) why the golden metric can also generalize without degradation. Can the authors further elaborate on this and provide some insights?
Supplementary Material: I have reviewed each section of Appendix and don't have additional questions regarding them.
Relation To Broader Scientific Literature: The paper identifies degradation of learning effects in the context of language model distillation. The core idea is not very new and the results obtained here are not so surprising, but this work indeed provides some take-away messages for language model training practitioners.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Paper writing is well-structured, and the take-away information is quite clear and easy to consume, which is an advantage of this paper.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank Reviewer KCu2 for the helpful feedback! In the following, we answer the questions raised in the review.
> **Q1: You designed two stages for the experiment, and in the first stage, oracle LM generates oracle dataset for SFT on both teacher and student models to provide an initial checkpoint. Why would it be necessary to do this for student model?**
We totally agree; it is indeed possible to distill directly, using the pretrained (PT) model as an initial checkpoint. There are two main reasons why we utilize SFT instead of PT:
- The quality of the student generation should not be too poor in order to benefit from online student generations;
- We want to draw a more direct analogy with RLHF, which is typically performed with an initial SFT checkpoint.
> **Q2: Not enough analysis of the absence of teacher hacking when using online data sources. Why does the teacher hacking problem naturally disappear in the online data source distillation setting? [...] Can authors further elaborate on this and provide some insights?**
We do not have a solid theoretical explanation for this behavior. In our understanding, when the student model observes the teacher's logits in the same context for too long, it starts memorizing them rather than generalizing to the ground-truth behavior. However, the precise threshold between memorization and generalization in our setting is not clear, and we leave its identification as an interesting direction for further work.
Additionally, we can connect the observed behavior with the increase in diversity of the dataset, which is known to be highly beneficial for the generalization abilities of the network (see, e.g., Bukharin et al, 2024, Chen et al. 2024, Zhang et al., 2025).
#### References
Alexander Bukharin, Shiyang Li, Zhengyang Wang, Jingfeng Yang, Bing Yin, Xian Li, Chao Zhang, Tuo Zhao, and Haoming Jiang. 2024. Data Diversity Matters for Robust Instruction Tuning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3411–3425, Miami, Florida, USA. Association for Computational Linguistics.
Chen, H., Waheed, A., Li, X., Wang, Y., Wang, J., Raj, B., & Abdin, M. I. (2024). On the Diversity of Synthetic Data and its Impact on Training Large Language Models. arXiv preprint arXiv:2410.15226.
Zhang, C., Zhong, H., Zhang, K., Chai, C., Wang, R., Zhuang, X., ... & He, C. (2025). Harnessing Diversity for Important Data Selection in Pretraining Large Language Models. ICLR 2025
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. The generalization/memorization point of view is interesting and makes sense. | Summary: This work investigates a novel phenomenon termed "teacher hacking," where student language models (LMs) over-optimize to imperfections in the teacher model during knowledge distillation, leading to degraded performance on the true objective. The authors propose a controlled experimental setup involving an oracle model (ground-truth distribution), a teacher model distilled from the oracle, and a student model distilled from the teacher. Through systematic experiments, they demonstrate that teacher hacking occurs when using fixed offline datasets for distillation but can be mitigated using online data generation techniques. The study highlights data diversity as a critical factor in preventing teacher hacking and provides practical strategies to address the issue.
Pros:
1) The paper identifies and formally defines the "teacher hacking" phenomenon, drawing an insightful analogy to "reward hacking" in reinforcement learning from human feedback (RLHF). This perspective bridges gaps in understanding distillation limitations.
2) The semi-synthetic framework with an oracle model provides a rigorous way to measure ground-truth performance (golden metrics) and proxy metrics (teacher-student alignment). This setup allows clear detection of teacher hacking through U-shaped proxy-golden curves.
3) The authors provide actionable solutions, such as online data generation, increasing prompt diversity, and expanding datasets with multiple completions. These strategies are validated across multiple tasks and model sizes.
4) The study includes experiments across diverse tasks (summarization, translation, instruction following), model architectures (T5 variants), and loss functions (forward/reverse KL, Jensen-Shannon). This breadth strengthens the generalizability of findings.
Cons:
1) Dataset Diversity Limitations:
* While the paper emphasizes data diversity, the experiments on the WMT-14 en-de translation task show minimal impact of dataset diversity on teacher hacking (Fig. 9). This inconsistency suggests the phenomenon may be task-dependent, with translation tasks being less sensitive to diversity manipulations.
* The "x0.5 prompts, 2x gen" and "x0.2 prompts, 5x gen" experiments (Fig. 6) reduce prompt diversity but increase generations per prompt. However, the analysis does not quantify the trade-off between prompt diversity and generation redundancy, leaving ambiguity about optimal resource allocation.
2) Model Size and Capacity Mismatch:
* The experiments distilling T5-large to T5-small show proxy metric increases indicative of classical overfitting rather than teacher hacking (Fig. 11). This suggests the framework may conflate overfitting with teacher hacking when model capacity gaps are large, weakening the specificity of the teacher hacking diagnosis.
* The study focuses on T5-based models, limiting generalizability to other architectures (e.g., transformer-decoder LMs). The phenomenon's dependence on architectural differences remains unexplored.
3) Experimental Design Gaps:
* The offline-online data mixture experiments (Fig. 13) use fixed α values (10%, 50%, 90%) but do not systematically vary α across a continuous range. This prevents identifying the minimum online data proportion required to suppress teacher hacking.
* The golden metric improvements from increased generation budgets (Fig. 7) are marginal for proxy metrics, suggesting diminishing returns. The paper does not analyze cost-benefit trade-offs for different generation strategies.
4) Task-Specific Sensitivity:
* Teacher hacking effects are more pronounced in instruction-following tasks (Natural Instructions) than translation tasks (WMT-14 en-de). The paper does not investigate why certain tasks are more susceptible, potentially due to inherent dataset properties or evaluation metrics.
5) Evaluation Metric Limitations:
* The golden metric (distance to oracle) is task-agnostic and may not correlate with downstream task performance. The paper lacks task-specific evaluations (e.g., BLEU for translation, ROUGE for summarization) to validate practical implications.
Claims And Evidence: The claims in the submission are generally supported by clear and convincing evidence, though with some notable exceptions:
**Supported Claims**
- Teacher hacking occurs during knowledge distillation with fixed offline datasets: Strongly supported by experimental results showing U-shaped proxy-golden curves where golden metrics (distance to oracle) deteriorate while proxy metrics (distance to teacher) improve (Fig. 4, Fig. 5).
- Teacher hacking can be detected by deviations from polynomial convergence laws: Convincingly demonstrated through log-log plots comparing online and offline training dynamics, where offline methods show clear deviations from expected convergence patterns (Fig. 5).
- Online data generation effectively mitigates teacher hacking: Well-supported across multiple tasks and model sizes, showing consistent improvement in golden metrics when using online data sources (Fig. 5, Fig. 8).
**Problematic Claims**
- Data diversity is the key factor in preventing teacher hacking: While supported by experiments manipulating dataset diversity (Fig. 6, Fig. 9), this claim has limitations:
* The impact of diversity varies significantly across tasks (minimal effect on translation tasks)
* The analysis doesn't quantify the trade-off between prompt diversity and generation redundancy
* The experiments don't establish diversity as the sole or primary factor, as other aspects of data quality may also play roles
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for investigating teacher hacking in LM distillation:
- Controlled experimental setup: Effectively isolates and measures teacher hacking through oracle, teacher, and student models.
- Golden and proxy metrics: Capture both ground-truth performance and distillation alignment.
- Diverse experimental scenarios: Demonstrate generalizability across tasks, model sizes, and loss functions.
Some aspects could be strengthened with task-specific evaluations and more rigorous dataset diversity quantification. Overall, the methods and criteria are well-suited to the problem.
Theoretical Claims: The paper makes several theoretical claims about teacher hacking, including its definition, detection via deviation from polynomial convergence laws, and mitigation strategies. These claims are logically consistent and supported by mathematical formulations and experimental evidence. The theoretical framework for measuring distances between language model distributions (forward/reverse KL divergence, Jensen-Shannon divergence) is sound and appropriately applied in the analysis. The paper does not present formal proofs requiring rigorous verification but rather builds its arguments on established information-theoretic measures and empirical validation. The theoretical claims appear correct within the context of the problem and experimental setup presented.
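For reference, the divergence measures discussed here reduce to short expressions on categorical distributions; a minimal sketch (illustrative only, not code from the paper, and assuming strictly positive probability vectors):

```python
import numpy as np

def forward_kl(p, q):
    """KL(p || q) for strictly positive categorical distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def reverse_kl(p, q):
    """KL(q || p), the other direction used in distillation losses."""
    return forward_kl(q, p)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by log 2."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * forward_kl(p, m) + 0.5 * forward_kl(q, m)
```

These definitions make the asymmetry of the two KL directions, and the symmetry of JS, easy to verify numerically.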
Experimental Designs Or Analyses: Some minor issues could be addressed:
- The impact of dataset diversity shows variability across tasks (particularly translation tasks), suggesting task-dependent effects that aren't fully explored.
- The distinction between classical overfitting and teacher hacking could be more clearly established in some model size comparisons.
- The experimental analysis would benefit from more systematic variation of parameters in the offline-online data mixture experiments.
Overall, the experimental designs are robust and valid for investigating the teacher hacking phenomenon.
Supplementary Material: I reviewed relevant parts of the supplementary material that relate to the experimental designs and analyses, including additional experiments on different datasets, model size variations, loss function comparisons, and details on hyperparameters and dataset configurations.
Relation To Broader Scientific Literature: The paper's key contributions relate to broader scientific literature as follows:
- Teacher hacking analogy: Extends reward hacking research in RLHF, linked to Goodhart's law (Amodei et al., 2016; Gao et al., 2023).
- Controlled setup: Similar to experimental designs in reward hacking studies (Gao et al., 2023), using golden/proxy metrics.
- Mitigation strategies: Align with ML principles emphasizing data quality/diversity.
- Convergence analysis: Relates to scaling law research (Kaplan et al., 2020).
- Knowledge distillation foundations: Builds on established techniques (Hinton et al., 2015; Sanh et al., 2020).
Essential References Not Discussed: The paper adequately covers relevant prior work in knowledge distillation and reward hacking, but some recent works could be discussed, e.g., KD for LMs [1,2] and reward hacking in RLHF [3].
[1] Dual-Space Knowledge Distillation for Large Language Models. In Proc. of EMNLP 2024.
[2] Revisiting Knowledge Distillation for Autoregressive Language Models. In Proc. of ACL 2024.
[3] Mitigating Reward Hacking via Information-Theoretic Reward Modeling. In Proc. of NeurIPS 2024.
Other Strengths And Weaknesses: Strengths
- Originality: The paper introduces the novel concept of "teacher hacking" in LM distillation, analogous to reward hacking in RLHF, and provides a systematic framework to study it.
- Significance: The findings have practical implications for developing more reliable and safe language model distillation pipelines.
- Clarity: The paper is well-structured with clear explanations of the experimental setup, methodology, and results.
Weaknesses
- Limited architectural exploration: The study focuses on T5-based models, limiting generalizability to other architectures.
- Task-specific evaluations: The paper lacks comprehensive task-specific metric evaluations to validate practical implications.
Other Comments Or Suggestions: - Terminology: Ensure consistent terminology usage.
- Task metrics: Include task-specific metrics for comprehensive assessment.
- Architectures: Experiment with different model architectures.
- Dataset diversity: Use quantitative metrics to measure dataset diversity.
Questions For Authors: see weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank Reviewer z8Ux for the detailed and valuable feedback! In the following, we address the issues raised in the review.
## Dataset diversity limitations.
> **The analysis doesn't quantify the trade-off between prompt diversity and generation redundancy**
> **The experiments don't establish diversity as the sole or primary factor, as other aspects of data quality may also play roles**
> **Dataset diversity: Use quantitative metrics to measure dataset diversity.**
We emphasize that our study focuses not just on the diversity of the prompts but on the diversity of the entire prompt-completion dataset. We found that our approach is the only one that satisfies the following three key properties:
(1) Preserving the conditional distribution of completions given the prompt – ensuring that answer quality remains the same.
(2) Reducing the total diversity of the dataset.
(3) Preserving dataset size – eliminating data quantity as a confounding factor.
We are unaware of any experimental setup that maintains dataset size and generation redundancy while modifying prompt diversity. However, if the reviewer can suggest one, we would happily implement it. The closest related work, Song et al. (2024), varied prompt dataset diversity by increasing the number of prompts sampled. This violates (3) and prevents us from isolating the effect of data diversity from total data quantity.
Following Song et al. (2024), we report dataset diversity as the ratio of unique bigrams in tokenized prompt-completion pairs to the total number of tokenized bigrams, multiplied by the dataset size. Additionally, we report (in parentheses) the ratio of each dataset's diversity to its diversity at a 1:1 prompt-generation ratio.
| Dataset \ prompt-generation ratio | 0.2:5 | 0.5:2 | 1:1 | 1:2 | 1:3 |
|-----------------------------------|------------|------------|------------|------------|------------|
| XSum | 4813.3 (0.47) | 7292.1 (0.72) | 10140.1 (1.0) | 10645.8 (1.05) | 11112.9 (1.1) |
| WMT-14 | 48079.4 (0.77) | 54511.5 (0.87) | 62652.0 (1.0) | 85855.5 (1.37) | 105242.1 (1.68) |
| Natural Instructions | 8464.1 (0.45) | 13311.6 (0.71) | 18751.5 (1.0) | 19631.7 (1.05) | 20430.0 (1.08) |
These measurements explain why the influence of diversity reduction on WMT is much smaller: diversity changes in the prompt-completion dataset are not as dramatic as in the summarization or instruction-following datasets. Another factor is that the prompts and completions in the WMT-14 dataset are much shorter than in XSum and Natural Instructions. We will integrate this discussion into the manuscript.
#### Reference:
Song et al. (2024). Scaling data diversity for fine-tuning language models in human alignment. COLING-2024.
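For concreteness, the diversity score described above could be computed roughly as follows (an illustrative sketch; the exact tokenizer and bigram counting in our experiments may differ):

```python
def bigram_diversity(tokenized_examples):
    """Diversity score in the spirit of Song et al. (2024): the ratio of
    unique bigrams to total bigrams over all tokenized prompt-completion
    pairs, multiplied by the number of examples in the dataset."""
    bigrams = []
    for tokens in tokenized_examples:
        # collect consecutive token pairs from each example
        bigrams.extend(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    return len(set(bigrams)) / len(bigrams) * len(tokenized_examples)
```

Repeating completions for the same prompt adds many duplicate bigrams, which lowers the unique-to-total ratio and hence the score, matching the trend in the table above.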
## Model Size and Capacity Mismatch
> **[...] the framework may conflate overfitting with teacher hacking when model capacity gaps are large, weakening the specificity of the teacher hacking diagnosis.**
> **Limited architectural exploration**
We agree that it would be interesting to study the phenomenon of teacher hacking on different architectures and with different model capacity gaps. However, we leave this direction for further work.
## Experimental Design Gaps
> **[...] The paper does not analyze cost-benefit trade-offs for different generation strategies.**
> **The experimental analysis would benefit from more systematic variation of parameters in the offline-online data mixture experiments.**
We agree that it would be valuable to identify the minimal additional compute needed to suppress the effect of teacher hacking. We leave the exploration of optimal trade-offs as a direction for further work.
## Task-Specific sensitivity
> **[...] The paper does not investigate why certain tasks are more susceptible, potentially due to inherent dataset properties or evaluation metrics.**
We agree that studying teacher hacking on a larger variety of datasets and tasks might be interesting, especially given the task-dependent strength of some of our recommendations. We leave this study as a promising direction for further work.
## Evaluation metric limitations
> **Task metrics: Include task-specific metrics for comprehensive assessment.**
The decision to use the distance to the oracle as the only ground-truth evaluation metric is connected to the final problem we are solving. In particular, ROUGE/BLEU metrics between oracle and student generations would serve only as a very noisy and imperfect proxy for the distance between the student and oracle distributions, which we can instead estimate with less noise. At the same time, using these metrics with human references would violate the assumption of our setup that the oracle model is the source of the ground-truth distribution.
## Missing related work
We will happily add suggested references to the related work section. | null | null | null | null | null | null | null | null |
Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples | Accept (poster) | Summary: This paper investigates the effect of “difficult” examples in preference optimization (particularly, in the context of DPO).
It finds that these examples harm performance, and propose a data selection algorithm to filter these examples to be applied before DPO.
Claims And Evidence: The claims are mostly convincing. Certain claims, e.g., “Selective DPO … reduces undesired hallucinations,” are not directly supported by evidence. The authors should remove this claim if they do not specifically investigate hallucinations.
Methods And Evaluation Criteria: There are a few areas in which evaluations could be improved (in decreasing order of importance):
1. It feels strange that the authors compare Selective DPO to DPO variants like SimPO which involve an algorithmic change rather than data selection. Rather than comparing Selective DPO to other preference optimization algorithms, to illustrate the value of data selection the authors should be applying data selection to each of these algorithms and showing improvement over the base algorithm (without data selection). For example, they should compare SimPO to Selective SimPO.
2. The authors mention other data selection methods in the related works (e.g., confidence-based selection) but do not include these as baselines for their method. These feel like more important baselines than other preference optimization algorithms like SimPO.
3. Currently, the authors only evaluate performance on model-based preference evaluations (a model rates how much it likes a response, or which of two responses it prefers). This evaluation can be biased and opaque. It may also be of interest to measure the effect of this data selection method on benchmarks with a ground-truth. For example, does this improve MMLU accuracy?
Theoretical Claims: The paper does not include theoretical claims.
Experimental Designs Or Analyses: In the definition of “learned step”, why is the reference model being considered? It feels like it should only depend on the model itself. In particular, even the reference model might be able to reliably distinguish certain preferred vs. rejected answers, but in this formulation it wouldn’t be distinguishing any of them.
Supplementary Material: I did not review supplementary material.
Relation To Broader Scientific Literature: Previous work has suggested that language models should be trained on data on which they exhibit high confidence, e.g., on facts that are well-known [1]. This is the same principle as the authors apply to alignment here. These works should be discussed and cited.
[1] Ghosal, Gaurav, Tatsunori Hashimoto, and Aditi Raghunathan. "Understanding finetuning for factual knowledge extraction." arXiv preprint arXiv:2406.14785 (2024).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The authors provide a nice analysis of an intuitive principle for data selection. They are rigorous in explaining its effectiveness (they make an effort to rule out issues like labeling errors as alternative causes for why removing difficult examples helps). The experiments illustrating that model capacity plays a role in difficulty are quite convincing and valuable.
Weaknesses:
Besides the weaknesses described in the individual sections above, this paper may be of limited significance if these findings only hold in a preference optimization setting, and, in particular, for DPO (rather than for its variants or RLHF).
Other Comments Or Suggestions: No.
Questions For Authors: The authors seem to be hinting at a broader claim about data selection: that difficult examples hinder performance. Why do they constrain their experiments to an alignment/preference optimization setting? Do these results transfer to other settings (e.g., SFT, pre-training)? If not, what makes preference optimization particularly amenable to this sort of data selection?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and insightful comments. We address the concerns below:
**Q1) Claims about hallucinations**
We revised the statement “reduces undesired hallucinations” to “generates policies that have lower NLLs”.
---
**Q2) Application to other DPO variants**
Our study evaluates data selection on two datasets and four LLMs, showing significant gains. We agree that testing additional DPO variants is valuable. However, extending our method is non-trivial, as the curriculum relies on model, dataset, and loss function. To address your concerns, we include comparisons with other data selection baselines (see Q3).
**Q3) Comparison with more data selection baselines.**
The mentioned confidence-based selection method selects data using reward margins from golden reward functions. We reproduce their idea using GPT-4 ratings: $r^*(x,y_w) - r^*(x,y_l)$ and sort samples by this reward margin. Results are reported in [this link](https://selective-dpo.pages.dev/) (https://selective-dpo.pages.dev/), labeled with `Reward Margin (Descending, GPT-4)` .
We observe no consistent performance benefit, suggesting that reward margin is not a reliable selection metric in our setting. This may stem from our focus on real datasets, in contrast to the mentioned work using manually corrupted labels.
---
**Q4) Evaluation on MMLU**
We report MMLU and related results in Table 8 (page 17). Performance is broadly consistent across DPO variants. One exception is GSM8K, where SelectiveDPO-Mistral-7B often produces correct answers in dialogue form rather than the strict format `### <The Answer>`. This issue is resolved by including ~10% more math-style examples during training.
---
**Q5) Definition of the learned step**
We defined the *learned step* as the step after which the *implicit reward model* can distinguish the preferred from rejected answers with a large probability: $P(y_w > y_l | x) = \sigma(\beta\log\frac{\pi_{\theta}(y_w |x)}{\pi_{\text{ref}}(y_w|x)} - \beta\log\frac{\pi_{\theta}(y_l |x)}{\pi_{\text{ref}}(y_l |x)}) > \sigma(\delta)$. This follows the DPO formulation and reflects the intuition that the LLM is secretly a reward model. Defining the *learned step* using only $\pi_\theta$ is indeed cleaner and worth further exploration.
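For concreteness, the learned-step criterion can be sketched numerically. This is only an illustrative reading of the definition, not the authors' implementation; the per-example log-probabilities under the policy and reference model are assumed to be precomputed, and all names are hypothetical.

```python
import math

def preference_prob(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO implicit-reward preference probability:
    P(y_w > y_l | x) = sigmoid(beta * [(log pi - log pi_ref) margin])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return 1.0 / (1.0 + math.exp(-margin))

def is_learned(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, delta=0.5):
    """An example counts as 'learned' once the preference probability
    exceeds sigma(delta)."""
    threshold = 1.0 / (1.0 + math.exp(-delta))
    return preference_prob(logp_w, logp_l, ref_logp_w, ref_logp_l, beta) > threshold
```

With a zero implicit-reward margin the probability is exactly 0.5, which falls below $\sigma(\delta)$ for any $\delta > 0$, so such an example is never counted as learned.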
---
**Q6) Missing related work**
We appreciate the reviewer’s reminder. The mentioned work observes similar trends in **SFT** training. However, its definition of “unfamiliar knowledge” does not directly apply to **alignment** tasks. We agree the work is very relevant and have now cited it in our Related Work section.
---
**Q7) Scope of the data selection strategy**
We thank the reviewer for recognizing our contribution. The proposed selection strategy is tailored for alignment data (prompt–preferred–rejected). It does not transfer directly to pre-training or SFT tasks, which use raw corpora or prompt–completion pairs and would require adapted difficulty metrics. However, we note that many SFT data selection papers (including the one mentioned) are not tested on alignment or pre-training tasks. This is not typically viewed as a limitation.
---
We sincerely thank the reviewer once again. We have revised the paper to include additional baselines and discussions addressing the raised concerns. With these updates, we hope the reviewer may find the work strengthened and reconsider their evaluation. | Summary: The paper starts w/ an observation that preference samples have different difficulty level (i.e., how easy/hard it is to learn for abgiven model w/ a different capacity). The paper posits that harder examples deteriorates preference alignment due to the examples being too hard for a model to learn. The way to quantify this is using the earliest training time that the sample is learned correctly.
In Section 3, the paper shows experimental results confirming that this is indeed the case: the number of samples classified as easy/hard correlates w/ the model's size. Next, the paper proposes to use validation loss as a proxy. The paper then shows that including hard samples deteriorates the model's performance, and proposes Selective-DPO: DPO w/ hard samples discarded.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: The paper is related to the DPO and data selection for DPO literature.
Essential References Not Discussed: Yes, I think there is 1 paper that is very related, but not cited: https://arxiv.org/abs/2410.08847
Another thing that I think is somewhat related (but the authors should correct me if I'm wrong) is training dynamics: https://arxiv.org/abs/2009.10795
Other Strengths And Weaknesses: 1. I feel like it will be beneficial to compare with, or study the relation to, more complicated data selection techniques that use other criteria, such as: https://arxiv.org/abs/2410.08847
2. Another thing I'm wondering is: does the validation loss/learning time change for each sample as we increase the number of epochs? For example, if a sample becomes correct at epoch 2 but then becomes incorrect again at epoch 3, what does this mean?
Other Comments Or Suggestions: See Strength/Weakness
Questions For Authors: See Strength/Weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and for pointing us to highly relevant related works. Your kind comments have helped us better position our contribution. Below we address the concerns in detail:
**Q1) Missing related work.**
Thank you for highlighting `arxiv 2410.08847`. This work focuses on identifying and filtering out training examples that cause *likelihood displacement* in DPO. We believe our work carries different motivations: we study how example difficulty impacts alignment, while they analyze which samples cause *likelihood displacement* in DPO. We now include this paper in the Related Work section and highlight the distinction.
Regarding `arxiv 2009.10795`, we appreciate your mention of this important work on training dynamics. Their findings—easy samples have little value, ambiguous ones aid generalization, and hard samples are helpful despite possibly containing noise—are insightful. However, our conclusions differ in several ways:
- In alignment, hard samples consistently degrade performance.
- Simple data errors cannot fully explain this degradation—we introduced a series of experiments to support this.
- Larger models benefit from difficult examples, as shown in Figure 5.
We believe this difference stems from the models under study: our work focuses on LLM alignment, while theirs analyzes small models and classification tasks.
---
**Q2) Comparison with complicated data selection**
We agree that comparing with more advanced data selection methods (e.g., CHES scores from Arxiv 2410.08847) would strengthen our work. However, conducting this comparison is non-trivial and requires:
- Calculating model-specific CHES scores on our datasets.
- Establishing principled thresholding strategies for selection.
We are actively working on this and will update our results once we complete a fair and thorough comparison.
Alternatively, we add comparisons against other intuitive data selection metrics including *perplexity gap*, *reward margin*, and *completion length* as suggested by other reviewers. The results are available at [this link](https://selective-dpo.pages.dev/) (https://selective-dpo.pages.dev/). We hope these comparisons would alleviate your concern.
---
**Q3) Evolution of validation loss**
To better illustrate our intuition, we visualize the *preference probability* metric: $P(y_w >y_l |x)=\sigma(r(x,y_w) - r(x,y_l))$, which closely aligns with validation loss: $VL = -\log P(y_w > y_l|x)$.
As shown in the new results (Figure 13), many easy samples are learned early. Roughly 40% of the samples remain difficult throughout training, indicated by consistently low preference probabilities.
In response to your question: yes, we do observe a small subset of samples that become “correct” first and “incorrect” after. These samples tend to lie between the easiest and hardest, suggesting the model intermittently understands them—consistent with the “ambiguous instances” concept in the training dynamics work you cited.
---
We appreciate your encouragement to consider easy and ambiguous examples more deeply. While our current work emphasizes the negative impact of overly difficult samples, we agree that a fuller picture—including the role of ambiguous instances—would benefit the field.
Once again, we thank the reviewer for the valuable suggestions and for pointing us to relevant research. Your comments have significantly strengthened the scope and clarity of our revision. | Summary: This paper investigates the impact of difficult samples in DPO settings and finds that overly difficult examples can be detrimental to LLM alignment. Following the curriculum learning (CL) pattern, which organizes examples from easy to difficult, the authors propose Selective DPO. This method utilizes the original DPO loss (referred to as validation loss) as an alternative to the typical metric in CL, the learned step, as a measure of difficulty. The empirical results highlight the effectiveness of Selective DPO, revealing that using only half of the Ultrafeedback examples can achieve better performance compared to using all the data.
Claims And Evidence: The claims within this paper are well supported by empirical evidence.
Methods And Evaluation Criteria: The proposed method, Selective DPO, appears meaningful for LLM preference alignment and could enhance the efficiency of the process.
Theoretical Claims: This paper does not contain theoretical claims.
Experimental Designs Or Analyses: The experimental design is valid and well-organized, encompassing a variety of ablation studies. These include different base models, learning rates, and hybrid DPO schemes, such as other DPO-series algorithms that use selectively easy examples.
Supplementary Material: No supplementary material is provided with this manuscript.
Relation To Broader Scientific Literature: This paper confirms the practicality of using curriculum learning in LLM preference alignment.
Essential References Not Discussed: There are no essential references missing.
Other Strengths And Weaknesses: **Strengths**
- The proposed method, Selective DPO, could largely improve data efficiency in LLM alignment.
- This paper discusses what are difficult examples for LLMs, clarifying that these are not simply data errors in practice.
**Weaknesses**
- The essence of curriculum learning lies in the difficulty metric. Although the authors have discussed simpler metrics in Section 3.1 through the learned step, from a data selection standpoint it would be more intuitive to conduct experiments that explore and compare these with simpler or more relevant metrics, such as completion length or attention scores.
- Compared to DPO-series baselines, this work does not focus on constructing a DPO-based loss function, but instead on data selection for LLM preference alignment. Therefore, it would be more relevant to compare it with data selection baselines, such as [1], which is already mentioned in this work.
[1] Curriculum Learning with Quality-Driven Data Selection, NeurIPS 2024.
Other Comments Or Suggestions: The related work could be improved. More recent data selection work are missing, for example, [1-4].
[1] "Instag: instruction tagging for analyzing supervised fine-tuning of large language models", ICLR 2024.
[2] "A preliminary study of the intrinsic relationship between complexity and alignment", ACL 2024.
[3] "Improving Data Efficiency via Curating LLM-Driven Rating Systems", ICLR 2025.
[4] "Rule-Based Rating and Selection of LLM Training Data", arxiv 2024.
Questions For Authors: Does curriculum learning also apply to the supervised fine-tuning phase beyond preference alignment? Are there any existing studies that support this?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the insightful suggestions regarding comparisons and related work. We address each comment in detail below:
**Q1) Comparison with other difficulty metrics.**
Prior work [0] has examined *prompt length* and *attention scores*, finding limited benefits for alignment. Building on this, we conducted experiments with: *completion length*, *perplexity*, *perplexity gap*, and *reward margin*. Full results are available in [this link](https://selective-dpo.pages.dev/) (https://selective-dpo.pages.dev/).
None of these metrics consistently outperformed our validation-loss-based approach. Notably, sorting by *completion length* (ascending) led to model collapse: the model overfit to short completions and failed to recover, highlighting the potential risks of overly simplistic heuristics.
**Q2) Comparison with additional data selection baselines.**
The suggested baseline [1] targets multimodal LLMs and introduces *perplexity* to select high quality samples for **SFT**. Following their idea, we implemented two variants in the **DPO** setting:
- *Perplexity of chosen*
- *Perplexity gap*
To avoid arbitrary thresholding, we followed a consistent protocol: (1) sort examples by the metric, (2) train with fixed hyperparameters (Table 3, page 15), and (3) evaluate performance across data percentages. Results are available at the shared link. Key findings:
- *Perplexity of chosen* improves over random sampling, suggesting it is a viable scoring function for curriculum learning.
- However, when used as a selection filter, it does not clearly distinguish “useful” from “harmful” examples—all data partitions appear beneficial.
---
**(Q3) Missing related work.**
Thank you for pointing out relevant papers. We have reviewed and will include them in our revised Related Work section:
- [1] emphasizes diversity and complexity for SFT data selection.
- [2] introduces a tree-based measure of data difficulty and finds complex SFT data contributes the most.
- [3] proposes LLM-rated quality scores and emphasizes the role of data quality in SFT.
- [4] explores a general data selection framework for pre-training and fine-tuning.
---
**(Q4) Can curriculum learning benefits SFT?**
Our work centers on **data selection for alignment**, not SFT. We discuss curriculum learning primarily as a tool to investigate example difficulty for alignment tasks. While we are encouraged by positive signs (e.g., Figure 3), we acknowledge that the following discussion is preliminary and may lack the nuance of dedicated studies.
- **CL in alignment**: We observe modest gains in our ablation (Figure 3) and expect greater benefits with refined pacing functions. However, designing these strategies is outside the scope of this work.
- **CL in SFT**: This remains a promising area. Prior studies ([5], [6]) show benefits for learning robustness and reasoning tasks.
- **Difficulty-aware selection in SFT**: Although we focus on alignment, the core idea—that overly difficult examples may hurt small models—may extend to SFT. In particular, [7] reports similar challenges in distillation, where small models underperform when exposed to overly complex teacher outputs. However, we do not think transferring our findings to SFT is straightforward, since it would require redesigning the *learned step* and *validation loss* (SFT data has a different format).
[5] YODA: Teacher-Student Progressive Learning for Language Models, Arxiv 2024
[6] Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond, Arxiv 2025
[7] Small Models Struggle to Learn from Strong Reasoners, Arxiv 2025
---
Once again, we thank the reviewer for the insightful comments and references. We appreciate the positive comments regarding our experiment design, and we respectfully hope that the reviewer could reevaluate our work given the responses to your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttals. I appreciate the experiments with additional baselines or metrics. However, as I said before, while the paper primarily focuses on data selection, the main experiments predominantly compare it to DPO-series baselines rather than exploring data selection for LLM preference alignment. It would be more meaningful to center the primary experiments around data selection efforts. Given this discrepancy, I believe the experimental design has significant issues that require substantial revisions. While I recognize and appreciate the effort put into implementing the baseline, I feel it's insufficient to conclusively demonstrate an overall performance improvement. Therefore, I will retain my original score.
---
Reply to Comment 1.1.1:
Comment: **Thank you for your thoughtful comments.** We also appreciate your earlier positive remarks regarding our ***experimental design and analysis***. In response to your current concern:
**Data selection for alignment remains an underexplored area.** As detailed in our related work (see Section 7 and Appendix, p.14), a few prior efforts target noisy-label settings [1–3]. However, such methods are limited and may not generalize well when label quality is generally high and noise is not the main bottleneck (we verified this in Table 1). Another piece of evidence is that, among the eight works mentioned by reviewers, only one [4] pertains to **preference data selection**—and it targets refusal alignment for unsafe prompts, which differs from our goal of general preference alignment.
**SFT data selection methods are not directly applicable to preference data.** The structural mismatch between SFT and DPO datasets limits the applicability of existing SFT data selection techniques to DPO. While SFT data typically comprises (prompt, completion) pairs, DPO training requires (prompt, preferred, rejected) triplets.
In response to the reviewers' insightful suggestions, we have implemented several SFT-style scoring functions. Our comparison now includes: **9 DPO-series algorithms** like WPO and SimPO; **4 existing data selection (correction) methods**: label flipping, label smoothing, CHES (thanks to **Reviewer 4NoX**) [4], and reward margin (confidence-based score) [1]; and **3 techniques borrowed from the SFT data selection literature**: perplexity gap, perplexity of chosen, and completion length of chosen. We hope this comprehensive comparison addresses your concern.
We sincerely thank the reviewer for your valuable time, effort, and thoughtful comments, which greatly contribute to strengthening the alignment research community. We hope our work also advances this goal by highlighting the significant yet overlooked role of data selection in alignment.
[1] Impact of preference noise on the alignment performance of generative language models. arXiv 2024
[2] Secrets of RLHF in large language models part ii: Reward modeling. arXiv 2024
[3] A note on DPO with noisy preferences and relationship to IPO. arXiv 2023
[4] Unintentional unalignment: likelihood displacement in direct preference Optimization. ICLR 2025
[5] Curriculum learning with quality-driven data selection. arXiv 2024
---
As requested by the reviewers, we have conducted additional benchmarking experiments (for expanding Table 1) comparing our approach with other data selection methods. Specifically, we evaluate:
- **CHES (lowest 50%)**: an algorithm introduced in [4], originally designed for refusal alignment on unsafe prompts.
- **RM (highest 50%)**: a data selection strategy from [1] that filters out low-confidence samples, identified by GPT-4-generated reward margins.
- **PPL (middle 50%)**: a SFT data selection method proposed in [5]. We select samples with medium-level perplexity on chosen responses following their idea.
The benchmarking results are presented below.
| Mistral-7B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| DPO | 15.1 | 12.5 |
| SimPO | 21.5 | 20.8 |
| WPO | 24.4 | 23.7 |
| `CHES(lowest 50%)` | 18.91 $\pm$ 0.74 | 16.5 $\pm$ 1.13 |
| `RM(highest 50%)` | 16.21 $\pm$ 0.66 | 13.13 $\pm$ 1.21 |
| `PPL(middle 50%)` | 17.34 $\pm$ 0.62 | 15.40 $\pm$ 1.10 |
| Selective DPO | 27.1 $\pm$ 0.63 | 28.9 $\pm$ 1.31 |
| Llama-3-8B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| DPO | 18.2 | 15.5 |
| SimPO | 22.0 | 20.3 |
| WPO | 23.1 | 22.2 |
| `CHES(lowest 50%)` | 17.12 $\pm$ 0.69 | 15.91 $\pm$ 1.11 |
| `RM(highest 50%)` | 19.7 $\pm$ 0.61 | 16.12 $\pm$ 1.24 |
| `PPL(middle 50%)` | 15.3 $\pm$ 0.59 | 15.68 $\pm$ 1.10 |
| Selective DPO | 24.9 $\pm$ 0.77 | 25.3 $\pm$ 1.36 |
| Qwen-2.5-7B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| DPO | 17.8 | 15.9 |
| SimPO | 27.2 | 23.4 |
| WPO | 28.2 | 24.5 |
| `CHES(lowest 50%)` | 17.2 $\pm$ 0.72 | 16.1 $\pm$ 1.18 |
| `RM(highest 50%)` | 18.0 $\pm$ 0.66 | 16.3 $\pm$ 1.20 |
| `PPL(middle 50%)` | 13.72 $\pm$ 0.59 | 16.40 $\pm$ 1.14 |
| Selective DPO | 28.0 $\pm$ 0.63 | 26.4 $\pm$ 0.90 |
| Gemma-2-9B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| DPO | 19.0 | 16.4 |
| SimPO | 25.7 | 21.6 |
| WPO | 30.1 | 26.7 |
| `CHES(lowest 15%)` | 12.41 $\pm$ 0.65 | 9.19 $\pm$ 0.92 |
| `CHES(lowest 50%)` | 18.91 $\pm$ 0.75 | 16.54 $\pm$ 1.14 |
| `RM(highest 50%)` | 19.24 $\pm$ 0.78 | 15.46 $\pm$ 1.13 |
| `PPL(middle 50%)` | 21.63 $\pm$ 0.78 | 17.53 $\pm$ 1.18 |
| Selective DPO | 29.1 $\pm$ 0.66 | 29.3 $\pm$ 1.02 | | Summary: This paper focuses on the alignment performance w.r.t. data difficulties. The central claim is that the difficult data points exceeds model capabilities, and therefore harm the alignment results.
To start with, it is crucial to define the difficulty measure. The authors use the "learned step" as a metric to quantify data difficulty, and find that this variable remains similar across runs on different data splits and shuffles. However, this metric requires evaluation at every gradient update, which is time consuming. The authors propose to use validation loss as a proxy metric and show its strong correlation with the learned step.
Using the proxy metric, data is ordered by its difficulty, as given by trained reference models, and the authors keep the easiest $\tau$% of the data to sequentially train DPO models.
Results show that SelectDPO is superior to DPO and other variants on chat tasks. Ablation studies on reference models, $\tau$%, and the weak-to-strong curriculum complete the analysis of SelectDPO.
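The selection step described in this summary can be sketched as follows; this is an illustrative reading under the assumption that per-example validation losses from a trained reference model are available, with all names hypothetical rather than taken from the authors' code.

```python
def select_easy_fraction(examples, val_losses, tau=0.5):
    """Keep the easiest tau-fraction of preference examples, ordered
    easy-to-hard, using per-example DPO validation loss from a trained
    reference model as the difficulty proxy (lower loss = easier)."""
    order = sorted(range(len(examples)), key=lambda i: val_losses[i])
    k = int(len(examples) * tau)
    return [examples[i] for i in order[:k]]
```

The returned list can then be fed to DPO training in order, which combines the curriculum (easy-to-hard ordering) with the filtering of overly difficult examples.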
Claims And Evidence: ### Main Claims:
* Learned step is a proper metric for difficulty evaluation and shows consistency across runs. Supported by Figure 2.
* Validation loss is a good proxy metric for difficulty evaluation. Supported by Figure 2: strong correlation with the learned step.
* Data difficulty ordering (SelectDPO) outperforms random shuffling. Supported by Figure 3.
* Difficult data are not all noisy data. Supported by Figure 4.
* Larger models can benefit from more difficult problems. Supported by Figure 5.
* SelectDPO outperforms other DPO variants (Table 1).
### Weakness & Question
* Label flipping (Figure 4) cannot fully support the claim on data error. It is possible that the "difficult" subset contains more mislabeled data, say 30%, and flipping causes the mislabeled fraction to become 70%. In that case, the performance drop after flipping would be expected.
Methods And Evaluation Criteria: Methods and eval make sense.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiment looks good overall.
Some questions and weaknesses:
* Considering the variance of the chat benchmarks, authors should report the variance in the result table for better understanding of the performance.
* Is the reference model crucial to be the same as the policy model? Figure 9 shows 7B curriculum is better than 3B's one, what about other 7B models curriculum? Is the reference model type important here?
* Is the validation loss superior to other data selection strategies? E.g., the perplexity gap.
* The DPO validation loss may have a length bias. Does the data selection strategy based on validation loss have a length bias too?
Supplementary Material: N/A
Relation To Broader Scientific Literature: This work aligns the line of work in data selection, such as perplexity-based selection.
This work is distinguished from other works by:
(1) its selection strategy
(2) detailed ablations on different hypotheses, design choices, and hyperparameters
Essential References Not Discussed: The paper does not miss important paper to the best of my knowledge.
Other Strengths And Weaknesses: Strengths:
* The paper is very clear
* The experiment is overall solid.
Weakness:
* See above
Other Comments Or Suggestions: ## update after rebuttal
I keep my score.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and constructive feedback, particularly regarding the label flipping experiment and the perplexity gap baseline. These comments help clarify and reinforce our central contribution: that alignment performance is critically influenced by the mismatch between model capacity and example difficulty. Please check our new results at [this link](https://selective-dpo.pages.dev/) (https://selective-dpo.pages.dev/) and the following response:
**Q1) Label flipping experiment (Figure 4a)**
Figure 4a tests whether label noise is the primary cause of performance degradation on difficult examples. Flipping all difficult samples did not improve performance, suggesting that label noise alone is unlikely to explain the difficulty. While partial noise (e.g., 30%) may exist, we do not claim the data is noise-free—only that noise is not the dominant factor.
To address your concern, we flipped only those examples identified as both difficult and potentially mislabeled by a reward model (`Skywork/Skywork-Reward-Gemma-2-27B-v0.2`, 1,414 examples in Qwen2.5). This targeted flipping (`Label Flipping (Skywork)`) also showed no consistent benefit across four models, reinforcing our conclusion. Notably, the original labels are from GPT-4.
**Q2) Reporting variance**
All figures already include standard error bars across runs. In addition, we report standard error (over 3 runs) in the result tables for completeness. Here are the results on Alpaca Eval 2:
| Mistral-7B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| SimPO | 21.5 | 20.8 |
| WPO | 24.4 | 23.7 |
| Selective DPO (LoRA) | $\textbf{25.4} \pm0.80$ | $\textbf{27.4} \pm1.26$ |
| Selective DPO | $\textbf{27.1} \pm 0.63$ | $\textbf{28.9} \pm1.31$ |
| Llama-3-8B-SFT | Length-Controlled Win Rate | Win Rate |
| --- | --- | --- |
| SimPO | 22.0 | 20.3 |
| WPO | 23.1 | 22.2 |
| Selective DPO (LoRA) | $ 21.1 \pm 0.73$ | $18.3 \pm 1.14$ |
| Selective DPO | $\textbf{24.9} \pm 0.77$ | $\textbf{25.3} \pm 1.36$ |
**Q3) Curriculum transfer**
Figure 9 compares a model’s own curriculum with that of a smaller model trained on the same (pre-training and SFT) data. Results show that a model benefits more from its own curriculum. To test cross-model transfer, we trained Qwen2.5-7B using curricula from Mistral-7B and Qwen2.5-32B. The results confirm our conclusion: **a model’s own curriculum is most effective**.
**Q4) Comparison with perplexity gap.**
While various data selection methods exist for SFT, such as perplexity-based filtering, they are not directly applicable to DPO-style preference data due to fundamental differences in format: SFT data consists of (prompt, completion) pairs, while DPO uses (prompt, preferred, rejected) triplets. This mismatch makes direct application of SFT techniques unsuitable for alignment. To address this gap, we implemented *perplexity of chosen*, *perplexity gap*, etc.
To assess their effectiveness without relying on arbitrary thresholding, we adopted a consistent evaluation protocol: (1) sort examples by each metric, (2) train DPO using the hyper-parameter set in Table 3 (page 15), and (3) evaluate whether performance drops after removing a portion of the data.
None of these alternatives yielded consistent improvements over our validation-loss-based selection strategy.
**Q5) Length bias in the selected data.**
As shown in Figure 10 (page 18), selected (easier) examples tend to have slightly shorter responses and smaller length gaps (preferred minus rejected). This pattern arises because DPO loss often assigns higher loss to longer examples. Since our strategy filters out high-loss (difficult) samples, it indirectly favors examples with smaller length gaps. While this introduces a mild length bias, we agree that it warrants future investigation.
**We sincerely appreciate your insightful suggestions.** These have helped clarify the scope and robustness of our findings. We will incorporate the new experiments and discussions into the revised manuscript. Please let us know if this response sufficiently addresses your concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. I maintain my inclination to accept the paper after reading the authors' rebuttal and new results.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for suggesting improvements to our label-flipping experiments and for proposing the insightful perplexity-gap baseline. These additions have significantly enhanced the quality of our manuscript. Please kindly inform the AC if our response fully addresses your concerns. | null | null | null | null | null | null |
Parallel Simulation for Log-concave Sampling and Score-based Diffusion Models | Accept (spotlight poster) | Summary: The authors of this paper study two separate but related problems: sampling from an isoperimetric distribution and also sampling from a score-based diffusion model. Under
some assumptions on the target distribution $\pi$---e.g. it satisfies a
log-Sobolev inequality (isoperimetric) and is $\beta$-log smooth---the
authors demonstrate an approach to generate samples arbitrarily close to being
sampled from the isoperimetric target or a score-based diffusion model that only takes
$O(\log(d/\epsilon^2))$ iterations, while the current state of the art takes
$O(\log^2(d/\epsilon^2))$ iterations. This result assumes that the number of
cores available to parallelize computation is $\Theta(d/\epsilon^2)$, but
they provide analogous results in the case they are bottlenecked on the
number of cores. Their results also typically require more space than some
alternatives, which they attribute to the more volatile paths of the
overdamped Langevin diffusion vis-a-vis the underdamped Langevin
diffusion. E.g., alternatives usually require
$\tilde{O}(d^{1.5}/\epsilon^2)$ space while this approach requires
$\tilde{O}(d^2/\epsilon^2)$.
The paper is purely theoretical in nature, and to prove their above result,
they build upon previous work that uses Picard iterations to subdivide the
sampling problem across time. However, rather than treating each subdivided
segment as independent and parallelizing across the different stages, the
authors propose a method that updates each segment based on both the previous time slice and the previous stage of the current time slice to propagate their approximations.
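To make the reordering concrete, here is a toy round-count comparison (an illustrative sketch with hypothetical helper names, not code from the paper): the sequential schedule finishes all Picard iterations on one time slice before moving on, while the diagonal schedule updates every cell (n, j) with n + j equal to the current round in parallel, since cell (n, j) only needs (n-1, j) and (n, j-1) from earlier rounds.

```python
# Sketch of the round counts behind the claimed improvement; both N and J
# are O(log(d/eps^2)) in the paper's setting, so N*J rounds become N+J-1.

def sequential_rounds(num_slices: int, picard_iters: int) -> int:
    # One slice at a time, each needing picard_iters sequential rounds.
    return num_slices * picard_iters

def diagonal_rounds(num_slices: int, picard_iters: int) -> int:
    # Anti-diagonals n + j range over 0 .. (num_slices-1) + (picard_iters-1).
    return num_slices + picard_iters - 1

N = J = 8  # stand-ins for O(log(d/eps^2)) quantities
print(sequential_rounds(N, J), diagonal_rounds(N, J))  # prints: 64 15
```

This is why the iteration complexity drops from $O(\log^2(d/\epsilon^2))$ to $O(\log(d/\epsilon^2))$ even though the total amount of work per cell is unchanged.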
## update after rebuttal
Based on the authors' feedback and other reviews, I'm inclined to keep my score of a weak accept, mostly out of my lack of confidence.
Claims And Evidence: There are no empirical studies to back up their claim. That being said, the
authors provide a sketch of the proof in Section 3 and also a
detailed proof of their main claims in the appendix. E.g., Theorem 4.2, which establishes
the superior rate (in terms of iterations) for approximately sampling from an isoperimetric
distribution, is proved in Appendix B, while Theorem 5.4, which establishes a similar result but for approximately sampling from a score-based diffusion model, is tackled in Appendix C.
Methods And Evaluation Criteria: There are no empirical analyses conducted in this paper.
Theoretical Claims: No; I only skimmed the proofs in the appendix, so these were not checked line-by-line.
Experimental Designs Or Analyses: Again, there are no empirical results in this paper.
Supplementary Material: Only briefly. I looked through Appendices A-C.
Relation To Broader Scientific Literature: The Algorithm 1 presented in the paper could provide a way to speed up the
algorithms that sample from diffusion-based methods when the number of
compute resources (e.g. RAM and cores) is high. If demonstrated empirically,
this could have a substantial effect on the genAI industry.
Essential References Not Discussed: To the best of my knowledge, no.
Other Strengths And Weaknesses: The paper's core contribution seems to be reordering the manner in which the
Picard iterations are taken to approximate sampling from the path of the
SDE. This appears to be an interesting idea, although the significance is a
bit tempered by the extra space complexity needed to execute this approach.
The paper is only theoretical in nature, so it is hard to state its
practical significance. Its case would be greatly bolstered if there were empirical
work demonstrating its efficacy. That being said, it appears to produce results on
the Pareto frontier of time and space complexity for sampling from an SDE.
The paper is fairly well written, although there are a few mistakes here and
there.
Other Comments Or Suggestions: L128: capitalized "The"
L144: Might have missed it, but what's V?
L183: Is the integral correct? Seems like the $f_t$ should be $f_s$?
L194: "girds"
Questions For Authors: [Q1] Are there any ways to empirically validate whether the approaches
outlined here hold? If so, could they be added to the paper and compared
against alternatives?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and constructive feedback. We are encouraged by your recognition of the core contribution—**the diagonal reordering of Picard iterations to achieve improved parallel iteration complexity, the soundness of the theoretical analysis,
and the potential applicability of our approach in high-compute environments (e.g., with sufficient RAM and cores)**. We address the specific concerns and questions raised below:
**"Are there any ways to empirically validate whether the approaches outlined here hold? If so, could they be added to the paper and compared against alternatives?"**
We agree that empirical validation is important. While we are not aware of existing implementations of our diagonal update scheme, we believe it could be adapted to diffusion-based samplers by modifying their update scheduling. We will add a discussion on this potential direction in the revised version.
**Regarding typos:**
We will carefully proofread the manuscript and correct all grammatical and typographical errors to improve the clarity and overall presentation in the revised version. | Summary: The proposed paper introduces a novel parallel sampling technique that significantly enhances the time complexity of both sampling under isoperimetry and score-based diffusion model inference problems from $O(\log^2 d)$ to $O(\log d)$. The primary algorithmic innovation lies in the parallel sampling across time slices, rather than sequentially updating each time slice. Furthermore, the authors provide a corresponding lower bound, demonstrating the optimality of their proposed algorithm.
Claims And Evidence: The assertions regarding the algorithmic and technical novelty, as well as the enhancement in time complexity, are substantiated, well-defined, and compelling.
Methods And Evaluation Criteria: The proposed parallel sampling algorithm appears to be logically sound and presents a promising avenue for enhancing the time complexity, thereby making a significant contribution to the field.
Theoretical Claims: I have reviewed the appendices for the proofs, and they appear to be sound.
Experimental Designs Or Analyses: This is theoretical work and no experiment is involved.
Supplementary Material: The appendices comprise the supporting evidence for the primary assertions presented in the main content. I have reviewed the appendices to ascertain the veracity of the claims and the supporting proofs.
Relation To Broader Scientific Literature: The work is well-positioned in the literature. The $O(\log d)$ time complexity is clearly an improvement from $O(\log^2 d)$ in [1] in the context of diffusion models and in [2] in the context of sampling.
[1] Chen, Haoxuan, et al. "Accelerating diffusion models with parallel sampling: Inference at sub-linear time complexity." Advances in Neural Information Processing Systems 37 (2024): 133661-133709.
[2] Anari, Nima, Sinho Chewi, and Thuy-Duong Vuong. "Fast parallel sampling under isoperimetry." The Thirty Seventh Annual Conference on Learning Theory. PMLR, 2024.
Essential References Not Discussed: The literature review is comprehensive.
Other Strengths And Weaknesses: The paper is well-structured, effectively summarizing the related works, proposing the novel algorithm, and presenting results in a mathematically rigorous manner. The authors have also included numerous explanatory remarks throughout the paper to facilitate comprehension.
Other Comments Or Suggestions: It would be more beneficial to provide additional intuitive explanations and justifications for the validity of the claim $L_n^j \leq a L_{n-1}^j + b L_n^{j-1}$. It appears that only some intuition for Problem (a) is presented in the main content, which still lacks certain details (“make use of the contraction of gradient descent”). Consequently, readers may require a thorough exploration of the proofs to gain a comprehensive understanding of the algorithm. In my opinion, this formula is the essence of the algorithm and even the entire paper. It would be preferable to explicitly express $a$, $b$, and (in the presence of $\delta^2$) $c$ in terms of the length of the time slice, Lipschitz constant, and other relevant parameters. Furthermore, it would be beneficial to provide additional discussions on the selection of $J$, $N$, and $P$ (which is not adequately explained in the main content).
Questions For Authors: - Can the space complexity be improved from $O(d^2)$ to $O(d^{3/2})$?
- Is it possible to extend this parallel sampling technique to discrete diffusion models?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your detailed and positive feedback. We greatly appreciate your recognition of **our novel parallel sampling technique, the clarity of the paper’s structure, the soundness of the theoretical analysis, and the potential of our approach to substantially improve time complexity and contribute meaningfully to the field**. Below, we address your specific suggestions and questions:
**"...provide additional intuitive explanations and justifications for the validity of the claim $L_n^j\leq aL_{n−1}^j+bL_{n}^{j−1}$...selection of $J$, $N$, and $P$..."**
We appreciate your thoughtful observation that this recurrence forms the core of our analysis. Recall we denote the truncation error at the $n$-th time slice and the $j$-th iteration as $L_n^j$ in the squared sense.
- **For Problem (b):** by Young’s inequality and the definition of the exponential scheme, we can bound the change of $L_n^j$ in one update as
$$
L_n^j = a L_{n-1}^j + b L_n^{j-1}
$$
with $a = O(e^{h/2})$ and $b = O(L^2 e^{h/2} h^2)$. Thus, by choosing the time step $h$ sufficiently small relative to the Lipschitz constant $L$, we ensure convergence along the Picard direction (indexed by $j$). Specifically, this choice guarantees that $b < 1$ and $a > 1$, with $a$ remaining bounded by a constant.
- **For Problem (a):** since the score itself is not Lipschitz smooth, an additional score estimation error term $c \delta^2$ appears in the one-step update of $L_n^j$. To ensure the total score estimation error remains bounded, it is necessary to have $a, b < 1$. Similarly, by Young’s inequality and the definition of the Euler–Maruyama scheme, we can bound the change of $L_n^j$ as
$$
L_n^j = a L_{n-1}^j + b L_n^{j-1} + c \delta^2
$$
with
$$
a = 1 - 0.1 \frac{\beta h}{\kappa} + O(\kappa)(3 \beta^2 h^2)^P, \quad
b = O((\beta^2 h^2)^P), \quad
c = O(\kappa \delta^2 h^2).
$$
Here, intuitively, $a$ comes from the contraction of the gradient mapping with an additional term from the Picard direction, $b$ reflects convergence along the Picard direction, and $c$ accounts for the accumulation of score estimation error $\delta$ over time length $h$, with an additional scaling by $\kappa$ due to Young's inequality.
- **On the choice of $J$, $N$, and $P$**:
To ensure convergence along the Picard direction, we choose the step size as $h \approx 1/\beta$. The total time horizon is $\alpha \log(d/\varepsilon^2)$ for Problem (a), and $\log(d/\varepsilon^2)$ for Problem (b), leading to a total of $N = O(\log(d/\varepsilon^2))$ time slices in both cases.
Since Picard iterations converge exponentially fast, it is sufficient for each time slice to undergo $O(\log(d/\varepsilon^2))$ updates to ensure the desired overall accuracy. This implies that the total number of diagonal iterations satisfies $J = N + O(\log(d/\varepsilon^2))$.
For the parameter $P$, which controls the number of steps within each blockwise Picard update, we require $a < 1$ in the recurrence relation to ensure contraction in Problem (a). This imposes the condition $P = \Theta(\log \kappa).$
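The convergence behaviour the recurrence encodes can be checked numerically. Below is a toy simulation (our own illustration, with arbitrary contraction constants `a` and `b` both below 1, not the paper's values) of $L_n^j \leq a L_{n-1}^j + b L_n^{j-1}$: each slice starts with unit truncation error, slice 0 is exact, and repeated diagonal updates drive the error of the final slice down.

```python
# Toy check that the recurrence contracts when a < 1 and b < 1;
# constants are illustrative, not derived from the paper.
a, b = 0.9, 0.2        # hypothetical contraction factors
N, J = 20, 40          # number of time slices and Picard updates

L = [[0.0] * (J + 1) for _ in range(N + 1)]
for n in range(1, N + 1):
    L[n][0] = 1.0      # initial truncation error of slice n before any update
# L[0][j] stays 0: the first slice starts from the known initial condition.

for n in range(1, N + 1):
    for j in range(1, J + 1):
        L[n][j] = a * L[n - 1][j] + b * L[n][j - 1]

print(L[N][J])  # tiny: the accumulated error is suppressed geometrically in j
```

Errors can transiently grow as they propagate across slices (via $a$), but the geometric factor $b^j$ along the Picard direction dominates, which is the mechanism behind choosing $J = N + O(\log(d/\varepsilon^2))$.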
**"Can the space complexity be improved from $O(d^2)$ to $O(d^{3/2})$?"**
Yes—if only total variation (TV) convergence is required, applying our method to underdamped Langevin dynamics or ODE-based implementations of diffusion models is sufficient to achieve a space complexity of $O(d^{3/2})$. However, extending this to obtain similar guarantees under KL divergence remains an open question, which we leave as an important direction for future work.
**"Is it possible to extend this parallel sampling technique to discrete diffusion models?"**
This is a great question. At a high level, our method operates by regrouping discretized grids along the time horizon and updating them in a diagonal fashion. We believe this diagonal scheduling approach offers a general framework for enabling parallelism along the time direction. While adapting the method to discrete diffusion models may require specific modifications, we believe the core strategy remains applicable and represents a promising direction for future work.
---
Rebuttal Comment 1.1:
Comment: I would like to express my gratitude to the authors for their response. With the reviewers' suggestions implemented and certain parts of the paper clarified, which will enhance potential readers' comprehension of the algorithm and proof methodology, I believe this work makes a significant contribution to the field. Consequently, I have revised my evaluation from 4 (Accept) to 5 (Strong Accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind feedback and updated evaluation. We truly appreciate your support! | Summary: The paper obtains novel rates for the parallel complexity of sampling, both in the gradient oracle (MCMC) setting and the score-based denoising setting. The rates are O(log d/eps), which is sharp (in epsilon). The proof is based on a refined Picard iteration scheme, with careful analysis in order to obtain the rate.
Claims And Evidence: The authors claim a parallel complexity of O(log d/varepsilon) for MCMC under standard assumptions, and for score-based models (SDE variant) under Lipschitz score (which is one of the standard settings). The authors provide detailed proofs for their claims.
Methods And Evaluation Criteria: This is not applicable to the paper.
Theoretical Claims: As discussed, the theoretical claims relate to the parallel complexity, which ultimately arises from a detailed analysis of a Picard iteration scheme. The results seem rigorous on a brief skim.
Experimental Designs Or Analyses: The paper is not empirical in nature, and so does not contain experiments.
Supplementary Material: I have skimmed the proof and the core argument seems to be rigorous. I did not check all the technical details.
Relation To Broader Scientific Literature: The primary references are discussed; this result is related to the parallel complexity of MCMC samplers studied in Anari et al. and subsequent works. Whereas those works obtained $\log^2(d/\varepsilon)$ rates, the present work obtains the same with a single logarithm.
Essential References Not Discussed: As far as I am aware, the most relevant references have been covered.
Other Strengths And Weaknesses: The primary weakness of this result is that the analysis feels somewhat incremental, utilizing the same Picard scheme as prior work, albeit improved.
Nonetheless, the improvement in logarithmic factor (which is highly important in the parallel setting) guarantees a tight rate, and the work will be useful for future follow-ups concerning the underdamped Langevin or ODE variant of the score-based generative model.
Thus, I think this work has merit and should be accepted as it stands. Nonetheless, if the authors are capable it would be helpful to supplement this work with any additional results.
Other Comments Or Suggestions: Line 129 is unfinished.
Line 425: parallalizable -> parallelizable
Questions For Authors: A more thorough discussion of condition number (kappa) dependence in both time and space would be appreciated, with reference to prior work.
How difficult is it to handle the underdamped Langevin dynamics? What are the barriers and why can the current analysis not immediately give results?
Can the Lipschitz score estimation assumption be removed, similar to the work from Benton et al.?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and positive evaluation of our work, particularly your recognition of **the sharp iteration complexity in terms of $\varepsilon$ and the rigor of our analysis**. We also appreciate your conclusion that the work merits acceptance. Below, we address your questions and comments in detail:
**"A more thorough discussion of condition number (kappa) dependence in both time and space would be appreciated, with reference to prior work."**
Thank you for pointing this out. We will revise the paper to include a more explicit discussion of how the condition number $\kappa$ affects both the step size and the overall iteration complexity. In particular:
- **For iteration complexity:**
Our method achieves an iteration complexity of $O(\kappa)$, which matches the state-of-the-art sequential query complexity established by [AC24]. In the parallel setting, we believe it is inherently difficult to break the $O(\kappa)$ barrier using the Picard method, as the length of each time slice must scale inversely with the smoothness in order to preserve non-expansiveness along the Picard direction.
- **For space complexity:**
Our method builds on the regrouping of scheduled grids along the time direction, and such schedual was proposed by [VW19]. Since we update all grids simultaneously in the worst case, the space complexity is given by $d N_{\text{seq}} = d \kappa^2$, where $N_{\text{seq}} = \kappa^2$ is the number of time steps required in the sequential setting.
[AC24] Shifted Composition III: Local Error Framework for KL Divergence, Jason M. Altschuler, Sinho Chewi, 2024
[VW19] Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: isoperimetry suffices, 2019
**"How difficult is it to handle the underdamped Langevin dynamics? What are the barriers and why can the current analysis not immediately give results?""**
Applying our method to underdamped Langevin dynamics, as in [ACV24], yields $\log(d/\varepsilon^2)$ iteration complexity and improved space complexity $d^{3/2}/\varepsilon^2$. However, due to the failure of the triangle inequality for KL divergence, our current analysis cannot establish optimal iteration complexity for KL convergence—our main goal—so we leave this as future work.
[ACV24] Fast parallel sampling under isoperimetry. Nima Anari, Sinho Chewi, Thuy-Duong Vuong, 2024
**"Can the Lipschitz score estimation assumption be removed, similar to the work from Benton et al.?"**
We believe not, at least for Picard-type methods, as the Lipschitz condition is crucial to ensure convergence along the Picard direction.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their remarks. I would note that, if it is possible to obtain some iteration complexity in KL(discretized process | true process), this could still be of value as it would already give some improvements to the parallel complexity in total variation.
Furthermore, the Lipschitz score estimation assumption is typically tackled by a varying step-size schedule. Perhaps a similar idea could be attempted here?
---
Reply to Comment 1.1.1:
Comment: - Thank you for the remark. We agree that establishing iteration complexity in terms of KL(discretized process | true process) would indeed be valuable, as it can yield improvements in the parallel complexity under total variation as well. We believe this is a promising direction for future work.
- Good question! A varying step-size schedule is a standard approach for handling large Lipschitz constants, as discussed in [YFZS24]. In our setting, if the scheduled time points are regrouped so that each time slice has length bounded by $O(1/\text{smoothness})$, convergence along the Picard direction can be maintained. Under this adjustment, our diagonal-style update remains applicable and effective. More broadly, we believe our regroup-and-diagonal-update framework can be applied to any schedule satisfying this time-slice condition.
[YFZS24] Lipschitz Singularities in Diffusion Models, Zhantao Yang et al., 2024. | Summary: Parallel sampling methods propose to speed up sampling by more efficiently simulating diffusions. Prior work splits up the simulation interval into $\log(d)$ chunks and performs $\log(d)$ iterations on each chunk sequentially. This yields an overall complexity of $\log(d)^2$. This paper proposes to remove the need for the $\log(d)$ sequential steps by communicating between the chunks diagonally as illustrated in their Figure 1. Two close settings of simulating a diffusion with an approximate score are analyzed in the paper: the overdamped langevin diffusion for sampling from un-normalized densities and the reverse diffusion of an OU process. In both cases, the authors show that the $\log(d)$ sequential steps are unnecessary if the chunks communicate diagonally.
## update after rebuttal
The authors have fixed their misstated theorem and have clarified their contribution. I think their diagonal update scheme is interesting and I agree that it does introduce some differences with prior analyses. I raise my score as a consequence.
Claims And Evidence: The paper claims are the following:
1. Theorem 4.2: We can sample from smooth distributions verifying the log-sobolev inequality in $\log(d/\epsilon^2)$ steps by simulating the overdamped langevin diffusion with the diagonal communication scheme in Figure 1. Evidence: The authors provide proofs for this claim by closely extending the work of Anari et al 2024.
2. Theorem 5.4: We can simulate the reverse diffusion with a learnt, uniformly Lipschitz score, in $\log(d/\epsilon^2)$ steps. Evidence: The authors closely build on the proofs of Chen et al 2024 to show that their diagonal communication scheme obviates the need for $\log(d)$ sequential steps.
Methods And Evaluation Criteria: .
Theoretical Claims: Issues with Theorem 4.2: It appears that theorem 4.2 is misstated. A crucial ingredient seems to be the gradient mapping contraction which is used in lemma B.5. This only holds for strongly log-concave distributions. If my understanding is correct, Theorem 4.2 does not apply to the broader class of smooth LSI distributions. If my understanding is wrong, I kindly ask the authors to clarify.
Experimental Designs Or Analyses: .
Supplementary Material: .
Relation To Broader Scientific Literature: The related work is discussed well. The work consists of very close modifications of prior analyses to incorporate the diagonal communication scheme.
Essential References Not Discussed: .
Other Strengths And Weaknesses: The paper is very well written. The statements of problems a and b could be made more precise by explicitly stating the simulated diffusion.
The only minor weakness I see is the (understandable) closeness to prior work and the lack of discussion on the assumptions:
- The $L_\infty$ score approximation for problem a is in my view an unnecessary notational burden and does not capture any realistic setting. Approximate scores are either approximate in weaker metrics or are more common in the problem b setting.
- The uniform Lipschitz assumption in problem b is very strong. The Lipschitz constant of the score can (or, in cases with well separated mode, always) depends on the dimension and it would be beneficial to discuss it as it affects the stepsize $h$ if my understanding is correct.
Other Comments Or Suggestions: .
Questions For Authors: 1. Could the authors clarify if their theorem 4.2 is misstated? Is it missing the strong-log-concavity assumption?
2. Could the authors expand on the possible dimension dependence on the uniform Lipschitz assumption and its effects on their bound?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful questions and appreciation of our work, particularly recognizing the clarity of the writing and **the effectiveness of the diagonal communication scheme for parallel sampling**. Below, we address your questions and comments:
**"...gradient mapping contraction...missing the strong-log-concavity assumption?"**
You're absolutely right—the contraction property used in Lemma B.5 indeed requires the strong log-concavity of the target distribution. We will revise Theorem 4.2 to explicitly state this assumption and clarify its role in ensuring convergence under the diagonal update scheme. Additionally, we will update all related claims and discussions throughout the paper to consistently reflect this assumption.
**"...by closely extending the work of Anari et al 2024... build on the proofs of Chen et al 2024... (understandable) closeness to prior work..."**
While we adopt several standard tools in parts of our analysis—such as interpolation arguments (e.g., VW19 or Section 4.2 of Che23), and Girsanov-based comparisons in Appendix C.2 for the diffusion model—we emphasize that our work addresses analytical challenges not present in prior literature.
In particular, our **diagonal update scheme** introduces **changing start points**, which presents analytical challenges not encountered in prior work. As discussed in Section 3, this required new analysis to control error propagation along the Picard iteration and time direction. As Reviewer KnEh pointed out, the recurrence
$$
L_n^j = a L_{n-1}^j + b L_n^{j-1} + c \delta^2
$$
captures the core of our theoretical analysis. We appreciate this insight and will provide additional intuition for this recurrence, highlighting its importance and distinguishing our analytic contributions from previous work.
[VW19] Rapid convergence of the unadjusted langevin algorithm: Isoperimetry suffices.
[Che23] Sinho Chewi. Log-concave sampling.
[CRYR24] Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity
**"The L∞ score approximation... unnecessary notational burden... does not capture any realistic setting..."**
We would like to clarify that the $L^\infty$ approximation follows the same formulation as in [ACV24]. As discussed in their paper, this form of approximation is natural in the context of discrete sampling, where approximate scores can be efficiently computed in $O(1)$ iterations using access to a weighted counting oracle or Laplace transform. Furthermore, in spin glass models, the score function can be approximated in the $L^\infty$ norm using approximate message passing as shown in [AM22].
Moreover, since the final accumulated error in our method depends on the score approximation error at each queried point with varying weights, a natural refinement is to define a weighted version of Assumption 5.1. Also, another alternative relaxation is to adopt a bias-variance decomposition, as in Assumption 4 of [BCESZ22], though it is still formulated in an $L^\infty$-style metric.
[ACV24] Fast parallel sampling under isoperimetry.
[BCESZ22] Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo
[AM22] Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization
**"The Lipschitz constant of the score... affects the stepsize $h$...Could the authors expand on the possible dimension dependence on the uniform Lipschitz assumption and its effects on their bound?"**
We acknowledge that the uniform Lipschitz assumption in Problem (b) is indeed strong; however, it is a common simplification adopted in prior works [CRYR24], [CCLL22]. As discussed in [SBDD22], the Lipschitz constant must be large to capture multimodal distributions, reflecting a trade-off between model expressivity and training stability. In particular, in settings with well-separated modes, the score's Lipschitz constant can grow significantly, even becoming unbounded near the zero point [YFZS24].
To ensure convergence of the Picard iterations under these conditions, the product $L_s^2 e^{h_n} h_n$ must be sufficiently small. This requirement implies that the length of each time slice, $h_n$, should scale as $O(1/L_s^2)$. Consequently, the number of time slices becomes $N = O(L_s^2 \log d)$, leading to an **overall iteration complexity of $O(L_s^2 \log d)$**.
Furthermore, our regrouping of time grids provides a flexible framework that could help mitigate Lipschitz singularities in other diffusion models, such as E-TSDM, as studied in [YFZS24]. We will expand on this point in the revised version.
[CRYR24] Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity
[CCLL22] Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
[YFZS24] Lipschitz Singularities in Diffusion Models
[SBDD22] Can Push-forward Generative Models Fit Multimodal Distributions? | null | null | null | null | null | null |
How Effective Can Dropout Be in Multiple Instance Learning? | Accept (poster) | Summary: The authors investigate the effectiveness of dropout in Multiple Instance Learning (MIL), asserting that removing the top-k most important instances within a bag enhances performance and generalization, even under noise attacks. Additionally, they introduce a dropout method, termed MIL-Dropout, which integrates into existing MIL frameworks.
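The top-k instance-dropping idea described in the summary can be sketched as follows (a minimal NumPy illustration with made-up names and scores, assuming per-instance importance scores such as attention weights are available; dropping is implemented here as zeroing, whereas the paper may remove the instances entirely):

```python
import numpy as np

def drop_top_k_instances(bag, scores, k):
    """Zero out the k instances with the highest importance scores.

    bag:    (num_instances, feat_dim) array of instance features
    scores: (num_instances,) importance scores (e.g. attention weights)
    """
    keep = np.ones(len(bag), dtype=bool)
    keep[np.argsort(scores)[-k:]] = False   # mark the top-k scoring instances
    out = bag.copy()
    out[~keep] = 0.0                        # drop (zero) those instances
    return out

rng = np.random.default_rng(0)
bag = rng.normal(size=(6, 4))               # a toy bag of 6 instances
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.05])
dropped = drop_top_k_instances(bag, scores, k=2)
print(dropped[1].sum(), dropped[3].sum())   # the two top-scoring rows are zeroed
```

Because the operation only masks instances before the MIL aggregator, a scheme like this can be slotted in front of ABMIL, TransMIL, and similar pipelines, which matches the plug-in usage reported in the rebuttal tables.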
Claims And Evidence: Based on the experimental analysis, the manuscript claims that MIL-Dropout enhances the performance of existing MIL methods across five MIL benchmark datasets and two WSI datasets while incurring negligible computational cost.
Methods And Evaluation Criteria: Yes, the proposed method supports the claims, but more evaluation is required.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The experimental analysis on five MIL benchmark datasets and two WSI datasets demonstrates that MIL-Dropout enhances the performance of existing MIL methods.
Supplementary Material: A. Investigation Experiment Design
Relation To Broader Scientific Literature: Integrating the proposed dropout approach can enhance the performance of Multiple Instance Learning (MIL).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - The manuscript fails to clearly articulate its key findings and contributions.
- The related work and literature survey are highly limited. The authors should engage more with existing research, outlining relevant findings that informed their approach.
- Additional background information on Multiple Instance Learning (MIL) should be included in the introduction.
- The manuscript does not effectively present the impact or effectiveness of dropout in MIL or how performance varies as a result.
- In Figure 4 (Right), the legend is missing.
Other Comments Or Suggestions: - In Figure 4 (Right), the legend is missing.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our work and for providing many valuable comments. Please find our detailed responses to the reviewer’s concerns below:
---
1. **Unclear Contributions:**
Our investigation reveals two key findings:
- **Finding 1:** DropInstance produces flatter local minima and improves generalizability compared to DropNeuron.
- **Finding 2:** DropInstance mitigates gradient direction errors during the learning trajectory, thereby enhancing data fitting and overall performance. Both findings are summarised in the manuscript.
2. **Insufficient Literature Review:**
We agree with the reviewer’s comment regarding the limited discussion of related work. In the revised version, we will include a more comprehensive review of relevant studies.
3. **MIL Background:**
Due to space limitations in the main paper, we will provide additional background information on Multiple Instance Learning (MIL) in the supplementary material.
4. **Experimental Results:**
The results reported in Tables 1 and 2 reflect the performance changes achieved by incorporating our proposed dropout method, which is designed to enhance the performance of existing MIL approaches. We will also add further visualizations in the supplementary material to illustrate the differences in model performance before and after applying our method.
5. **Legend Clarification:**
We will revise the legend in the updated version to improve clarity.
6. **Insufficient Evaluation:**
In line with suggestions from other reviewers, we have conducted additional experiments, as shown below.
- **Because UNI was trained on TCGA and Camelyon, using those features risks data leakage. To avoid this, we evaluate UNI features on the EBRAINS dataset by running the experiment three times**.
|Model|Accuracy|F1|Δ Accuracy|Δ F1|
|:--|--:|---:|---:|--:|
|ABMIL|65.4|68.7|—|—|
|+MIL Dropout|70.4|73.2|+ 5.0|+ 4.5|
|TransMIL|67.4|74.4|—|—|
|+MIL Dropout|71.3|79.4|+ 3.9|+ 5.0|
|DSMIL|67.4|74.4|—|—|
|+MIL Dropout|69.3|76.0 |+ 1.9|+ 1.6|
|DTFD|53.4|63.6|—|—|
|+MIL Dropout|64.8|69.8|+ 11.4|+ 6.2|
|ILRA|64.8|74.4|—|—|
|+MIL Dropout|66.8|76.0|+ 2.0|+ 1.6|
|CAMIL|60.7|68.5|—|—|
|+MIL Dropout|69.1|75.6|+ 8.4|+ 7.1|
|R2T|57.9|66.1|—|—|
|+MIL Dropout|64.8|69.8|+ 6.9|+ 3.7|
- **Below are the results on Camelyon16 (ImageNet pre-trained) using the latest methods mentioned. We add MIL‑Dropout via two linear layers before feeding features into the MIL module**.
|Model|Accuracy|F1|AUC|Δ Accuracy|Δ F1|Δ AUC|
|:-|--:|---:|--:|--:|--:|--:|
|ILRA|85.8|81.4|88.4|—|—|—|
|+MIL Dropout|87.2 |86.4 |90.1 |+1.4|+5.0|+1.7|
|CAMIL|84.8|82.6|86.8 |—|—|—|
|+MIL Dropout|86.4|85.7|91.2 |+1.6|+3.1|+4.4|
|R2T|83.1|81.6|84.5|—|—|—|
|+MIL Dropout|85.6|84.7|87.4 |+2.5|+3.1|+2.9|
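To make the integration point above concrete, here is a minimal sketch of what "two linear layers before the MIL module, with instance-level dropout between them" could look like. Names, shapes, and the drop function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def project_with_mil_dropout(bag, w1, w2, drop_fn, training=True):
    """Hypothetical wiring: project instance features through two
    linear layers, applying an instance-level drop function between
    them at training time only."""
    # First linear projection of the (n_instances, d) bag.
    h = relu(bag @ w1)
    # Dropout is applied only during training, so inference cost
    # is unchanged.
    if training:
        h = drop_fn(h)
    # Second projection; this output would feed the MIL aggregator.
    return relu(h @ w2)
```

Because the drop function is skipped when `training=False`, the forward pass at inference is identical to the baseline, matching the rebuttal's claim of no extra parameters or inference overhead.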
- **We evaluated our method on survival prediction using two TCGA datasets (LUAD and BRCA), following the protocols of [4,5]**.
| Model | TCGA‑LUAD | +MIL-Dropout | Δ | TCGA‑BRCA | +MIL-Dropout| Δ |
|--|-:|-:|--:|--:|---:|----:|
| ABMIL | 65.7 | 67.7 | +2.0 | 72.8 | 75.2 | +2.4 |
| DSMIL | 61.4 | 63.4 | +2.0 | 68.8 | 72.1 | +3.3 |
| TransMIL | 64.3 | 67.2| +2.9 | 72.1 | 74.7 | +2.6 |
| DTFD (AFS) | 62.0 | 65.3 | +3.3 | 71.1 | 73.6 | +2.5 |
| DTFD (MaxS) | 65.9 | 68.3 | +2.4 | 72.8 | 75.7 | +2.9 |
- **Our MIL dropout can be seamlessly integrated with other MIL methods, as it is orthogonal to them. We conducted experiments using our patch features and observed performance improvements, further demonstrating the flexibility and effectiveness of our dropout.**
|Model (running 3 times on CAMELYON16, ImageNet)|Accuracy|F1|AUC|Δ Accuracy|Δ F1|Δ AUC|
|:---|----:|---:|----:|---:|----:|----:|
|PAM[1]|85.0|83.2|86.7|—|—|—|
|+MIL Dropout|86.2|84.3|87.7 |+1.2|+1.1|+1.0|
|DPSF[2]|—|—|—|—|—|—|
|[3]|—|—|—|—|—|—|
> **[2][3]: The full open-source code has not been released; we believe our MIL dropout can be integrated into these methods.**
---
We hope our response addresses your concerns. Please let us know if you have any further questions. Thanks!
---
**References**
[1]. Unleash the Power of State Space Model for Whole Slide Image with Local Aware Scanning and Importance Resampling (TMI 2024)
[2]. Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification (CVPR 2024)
[3]. Boosting Whole Slide Image Classification from the Perspectives of Distribution, Correlation and Magnification (ICCV 2023)
[4]. Jaume, G., et al. (2024). Modeling dense multimodal interactions between biological pathways and histology for survival prediction.CVPR.
[5]. Song, A. H., et al. (2024). Multimodal prototyping for cancer survival prediction.ICML.
---
Rebuttal Comment 1.1:
Comment: The rebuttal adequately addresses my major concerns, particularly regarding the evaluation details, which are now satisfactory. However, I expect the authors to address the remaining weaknesses.
---
Reply to Comment 1.1.1:
Comment: **Thank you very much for reviewing our response and providing your positive feedback—we truly appreciate your time and consideration.**
Regarding the remaining weaknesses mentioned by the reviewer, we are unclear about what additional basic findings or clarifications are expected from us. **We have already presented our key findings in the manuscript (see Section 3.1) and detailed the performance variations of MIL dropout in the ablation study**. Is the reviewer suggesting that we provide a comparison with other dropout methods, including a review of the related dropout literature?
**Since our work is the first to investigate dropout in MIL**, we have identified several other dropout methods for comparison and plan to incorporate them in future revisions.
| Model | Accuracy | F1 | AUC |
|--------------------|----------|------|------|
| ABMIL | 86.3 | 85.0 | 86.0 |
| +Our MIL Dropout | 87.2 | 86.4 | 90.1 |
| +PDL | 86.1 | 84.3 | 85.2 |
| +Concrete Dropout | 85.3 | 84.2 | 84.0 |
| +guided dropout | 82.9 | 81.2 | 81.4 |
| +alpha-dropout | 83.7 | 81.8 | 78.1 |
**We are glad to have resolved the issues you mentioned. If any concerns remain, please let us know (edit the previous comment), and we will respond as soon as possible. May we kindly ask if you'd be willing to update your review rating to reflect your current satisfaction level?**
Best Regards,
The authors | Summary: This paper investigates the effectiveness of Dropout in Multiple Instance Learning (MIL), particularly for histopathological whole-slide image classification. To address the overfitting issue in conventional two-stage MIL training (feature extraction + aggregation) caused by noisy embeddings and weak supervision, the authors propose a counter-intuitive MIL-Dropout approach: By discarding top-k important instances within bags, this method reduces gradient directional errors and converges to flatter optima. The framework contains two novel modules: 1) An averaging-based attention mechanism for efficiently determining instance importance, and 2) A query-based instance selection strategy for identifying instances to discard. Experiments on five MIL benchmark datasets and two WSI datasets demonstrate that MIL-Dropout significantly enhances existing MIL methods’ performance with negligible computational overhead.
Claims And Evidence: Yes, the claims made in the manuscript are largely supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method(s) and/or evaluation criteria (e.g., benchmark datasets) are appropriately justified for the current problem or application.
Theoretical Claims: Yes, I have examined the validity of the theoretical claims underlying the proposed approach as presented in the article, primarily the Top-K instance selection method described in Section 4 and the associated query mechanism for selecting G number of instances. After examining the references cited in that section (Woo et al., 2018; Park et al., 2018; Tu et al., 2019a; Ilse et al., 2018), I confirm that the theoretical basis of the proposed method is logically sound and free from issues.
Experimental Designs Or Analyses: Yes, I have checked the validity of experimental designs and analyses from Section 6.1 to Section 6.4. The experimental design and analysis conducted for the proposed method in the article are methodologically sound and empirically valid.
Supplementary Material: I have reviewed the supplementary material in Appendix A.
Relation To Broader Scientific Literature: The MIL-Dropout proposed in this paper enhances existing MIL models by discarding Top-K important instances, whose fundamental contribution lies in establishing a novel instance sampling strategy. This aligns with findings from prior studies (e.g., [1][2][3][4]) that proposed diverse instance sampling strategies to improve model performance. However, it should be emphasized that the proposed sampling strategy primarily operates through shallow feature extractors in conventional MIL frameworks, constituting a methodological distinction from earlier approaches.
[1] Unleash the Power of State Space Model for Whole Slide Image with Local Aware Scanning and Importance Resampling (TMI 2024)
[2] Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification (CVPR 2024)
[3] MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification (TMI 2023)
[4] Boosting Whole Slide Image Classification from the Perspectives of Distribution, Correlation and Magnification. (ICCV 2023)
Essential References Not Discussed: To the best of my knowledge, the key technical contribution of the proposed MIL-Dropout in this paper is unique: By discarding Top-K important instances through shallow feature extractors in existing MIL frameworks, it facilitates model convergence towards superior performance. Although prior work (e.g., [1] proposing bag filtering eliminate partial instances, [2] directly discarding top instances) has made analogous contributions, the novel dropping strategy developed in this study operates within inner neural network layers, demonstrating distinct technical innovation.
[1] Dgmil: Distribution guided multiple instance learning for whole slide image classification. (MICCAI 2022)
[2] Bi-directional weakly supervised knowledge distillation for whole slide image classification(neurips 2022)
Other Strengths And Weaknesses: Strengths:
1.The paper introduces a novel application of dropout in MIL by targeting the top-k important instances, diverging from traditional neuron-level dropout.
2.The proposed method is designed as a plug-and-play module that can be seamlessly integrated into the majority of existing MIL frameworks
Weaknesses:
1.The proposed method exhibits dependence on parameter selection (K and G), which may hinder its generalization to broader WSI datasets. When encountering new datasets, re-optimization of these parameters may be required to achieve optimal performance.
2.While 10-fold cross-validation was applied on five MIL benchmarks, this was not conducted for the WSI datasets, which may hinder the evaluation of the proposed MIL-Dropout’s robustness in real-world clinical diagnostic scenarios.
Other Comments Or Suggestions: Some minor issues:
1.The computational complexity analysis lacks concrete experimental validation. For instance, there is no quantification of the additional memory overhead when integrating MIL-Dropout into existing frameworks like ABMIL or TransMIL, nor is there systematic evaluation of its impact on training and inference time.
The experiments did not incorporate the features from the latest foundation model (e.g., UNI and CONCH). These advanced features may narrow the performance gap between existing MIL methods with and without MIL-Dropout integration, thus diminishing the significance of this work.
Questions For Authors: 1.Can the K instances selected by MIL-Dropout and their associated G instances be visualized? Do these selected instances exhibit pathological relevance or interpretability?
2.Is the proposed MIL-Dropout limited to shallow feature extractors in existing MIL methods? For example, can it be integrated into other modules like the Transformer in TransMIL?
3.What advantages does MIL-Dropout offer over existing instance sampling strategies (e.g., test-time sampling in [1], dynamic sampling in [2], Bag Filter in [3])? Can MIL-Dropout outperform these approaches?
[1] Unleash the Power of State Space Model for Whole Slide Image with Local Aware Scanning and Importance Resampling (TMI 2024)
[2] Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification (CVPR 2024)
[3] Boosting Whole Slide Image Classification from the Perspectives of Distribution, Correlation and Magnification. (ICCV 2023)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Thank you very much for taking the time to review our work and for providing such valuable feedback. In response to the reviewer's concerns, we address the following point:**
1. **Related Literature:**
Regarding the sampling-related papers mentioned by the reviewer, our method differs, and it is orthogonal to those methods, allowing for integration with them. In the revised version of our paper, we will cite these papers and include an extended discussion in the related works section.
2. **Computational Analysis.**
As noted in the manuscript's complexity analysis, the proposed dropout is not used during inference, adds no extra parameters, and incurs a negligible computational overhead during training, similar to other dropout methods.
|Based on Camelyon16(one epoch)|FLOP(M)|Params(K)|Training time(sec)|Inference time(sec)|
|---|---:|---:|---:|---:|
|ABMIL|872.26|174.27|5.86|3.06|
|+ Dropout|872.26|174.27|5.86|3.06|
|+ Our MIL Dropout(K=20,G=10)|872.26|174.27|5.86|3.06|
3. **UNI Feature.**
Consistent with requests from other reviewers, we have replicated the relevant experiments here for clarity. Because UNI was pre-trained on TCGA and Camelyon data—raising the possibility of data leakage—we conducted additional experiments using the independent EBRAINS dataset. The results of these experiments are shown below.
|Model|Accuracy|F1|Δ Accuracy|Δ F1|
|:-|-:|-:|-:|-:|
|ABMIL|65.4|68.7|—|—|
|+MIL Dropout|70.4|73.2|+ 5.0|+ 4.5|
|TransMIL|67.4|74.4|—|—|
|+MIL Dropout|71.3|79.4|+ 3.9|+ 5.0|
|DSMIL|67.4|74.4|—|—|
|+MIL Dropout|69.3|76.0 |+ 1.9|+ 1.6|
|DTFD|53.4|63.6|—|—|
|+MIL Dropout|64.8|69.8|+ 11.4|+ 6.2|
|ILRA|64.8|74.4|—|—|
|+MIL Dropout|66.8|76.0|+ 2.0|+ 1.6|
|CAMIL|60.7|68.5|—|—|
|+MIL Dropout|69.1|75.6|+ 8.4|+ 7.1|
|R2T|57.9|66.1|—|—|
|+MIL Dropout|64.8|69.8|+ 6.9|+ 3.7|
Although our MIL dropout still offers improvements when better patch features are used, the performance gains may be less substantial (roughly 0.5 to 1.5 points). Nevertheless, as demonstrated by the experiments, foundation model features often do not perform optimally on private datasets, making our approach particularly suitable for such scenarios.
4. **Visualization**
Although MIL dropout dynamically selects the top-k instances and the similar-instance group G in each training iteration, our MNIST toy experiments (using the target instance "9") confirm that both the top-k selection and G effectively focus on the digit "9". We plan to include these visualizations in the supplementary material.
5. **Additional Experiments**
Currently, our experiments primarily apply MIL dropout within shallow feature extractors. We also attempted to integrate it into the classifier following the MIL module (by adding extra linear layers to host MIL-Dropout), which we consider effectively equivalent, and the performance remained largely unchanged. Furthermore, integrating MIL dropout into a transformer architecture was unsuccessful, likely because MIL dropout interferes with layer normalization.
|Model (running 5 times on CAMELYON16, ImageNet)|Accuracy|F1|AUC|
|:--|--:|-:|-:|
|TransMIL|84.7|83.3|86.5|
|+MIL Dropout in shallow extractor|86.0|84.9|89.4|
|+MIL Dropout in classifier |87.6|82.5|88.7|
|+MIL Dropout in Transformer|64.5|52.5|52.5|
**MIL dropout fails to converge when integrated into transformer blocks.**
6. **Extra Comparison:**
Our MIL dropout can be seamlessly integrated with other MIL methods, as it is orthogonal to them. We conducted experiments using our patch features and observed performance improvements, further demonstrating the flexibility and effectiveness of our dropout.
|Model (running 3 times on CAMELYON16, ImageNet)|Accuracy|F1|AUC|Δ Accuracy|Δ F1|Δ AUC|
|:---|-:|---:|----:|---:|----:|----:|
|PAM[1]|85.0|83.2|86.7|—|—|—|
|+MIL Dropout|86.2|84.3|87.7 |+1.2|+1.1|+1.0|
|DPSF[2]|—|—|—|—|—|—|
|[3]|—|—|—|—|—|—|
> **[2][3]: The full open-source code has not been released, and we believe our MIL dropout can be integrated into these methods.**
---
### **We appreciate your time and hope that our response addresses your concerns. We will also add a discussion of the literature references/citations (as you mentioned) in the revision. If you have any further questions, please let us know.**
---
**References**
[1]. Unleash the Power of State Space Model for Whole Slide Image with Local Aware Scanning and Importance Resampling (TMI 2024)
[2]. Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification (CVPR 2024)
[3]. Boosting Whole Slide Image Classification from the Perspectives of Distribution, Correlation and Magnification (ICCV 2023)
They propose a MIL-specific dropout method named MIL-Dropout. It entails dropping the top K instances, along with G instances which are highly similar to these top K instances. MIL-Dropout can be applied to various MIL approaches such as ABMIL and TransMIL.
The proposed MIL-Dropout is evaluated on five MIL benchmark datasets, and on two WSI-level classification datasets. When combined with various MIL approaches (e.g. ABMIL), the performance is consistently improved at least to some extent.
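The dropping rule summarized above (drop the top-K instances plus the instances most similar to them) can be sketched in a few lines. This is an illustrative reconstruction in numpy, not the authors' code: the averaging-based importance score and cosine similarity used here are assumptions standing in for the paper's attention mechanism.

```python
import numpy as np

def mil_dropout(bag, k=2, g=2):
    """Illustrative MIL-Dropout sketch: rank instances of a
    (n_instances, dim) bag by a pooled importance score, drop the
    top-k, plus the g instances most similar to each dropped one."""
    n, d = bag.shape
    # Importance via average pooling over feature dimensions
    # (a stand-in for the paper's averaging-based attention).
    importance = bag.mean(axis=1)
    top_k = np.argsort(importance)[-k:]
    # Cosine similarity between every instance and the top-k set.
    normed = bag / (np.linalg.norm(bag, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed[top_k].T          # shape (n, k)
    drop = set(int(i) for i in top_k)
    # For each top-k instance, also mark its g nearest neighbours.
    for col in range(sim.shape[1]):
        added = 0
        for idx in np.argsort(sim[:, col])[::-1]:
            if int(idx) not in drop:
                drop.add(int(idx))
                added += 1
            if added == g:
                break
    out = bag.copy()
    out[list(drop)] = 0.0                   # zero out dropped instances
    return out, sorted(drop)
```

On a toy bag where row means increase monotonically, the call with `k=1, g=1` zeroes the highest-importance instance and its closest neighbour while leaving the rest of the bag untouched.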
##########
Update after the rebuttal:
I remain borderline on this paper.
I think the proposed MIL-Dropout potentially could be at least quite useful, but I tend to agree with reviewer 3cHV that there are some remaining concerns.
I will thus keep my current score of "_2: Weak reject (i.e., leaning towards reject, but could also be accepted)_".
Claims And Evidence: Overall, yes.
Methods And Evaluation Criteria: Overall, yes.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The experimental setup is OK overall, but could include one or two more WSI-level datasets/tasks. Could also include a more recent histopathology foundation model as the patch-level feature extractor.
Supplementary Material: Briefly went through the appendix.
Relation To Broader Scientific Literature: OK discussion of related work in Section 2.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths:
- The studied problem is quite interesting, if dropout can be used to consistently improve the performance of e.g. ABMIL models within computational pathology, this would definitely be valuable.
- The proposed MIL-Dropout method makes sense overall, and can be applied to various common MIL approaches.
- The performance of various MIL approaches is consistently improved at least to some extent on two WSI-level classification tasks (Table 2).
Weaknesses:
- The paper could be a bit more well written, it contains quite a few typos etc.
- The experimental evaluation only includes two WSI-level datasets. No proper external validation.
- The WSI-level experiments don't utilize any of the recent histopathology foundation models as patch-level feature extractors.
- The proposed MIL-Dropout is not compared against any other dropout variant as a baseline.
Other Comments Or Suggestions: Questions/Suggestions:
- For Figure 5.c, could you show more examples? Is this actually a general trend?
- The results in Table 2, are these with optimized values for K and G? I.e., different values for CAMELOYON16 vs TCGA-NSCLC, different values for ImageNet model vs SimCLR model, for ABMIL vs TransMIL etc? If so, might these reported performance gains for MIL-Dropout not be somewhat overstated? Because, if K and G are chosen just a bit differently, the performance might be worse than for the non-dropout baselines?
Minor things:
- "3. Preliminary." --> "3. Preliminary"?
- Line 66: "First, Different" --> "First, different".
- Line 68: "we focus on integrating the proposed a general" --> "we focus on integrating the proposed general"?
- Line 83: "the dropouts based" --> "the dropout methods based"?
- Line 139: "The learning of an MIL can be achieved" --> "The learning of a MIL model can be achieved"?
- Line 158: "In an MIL framework", I would personally write "a MIL framework" instead, since I say "MIL" like the word "mill", but maybe that's just me.
- Line 137: "reduce gradient direction errors in learning trajectory" --> "reduce gradient direction errors in the learning trajectory"?
- Figure 4 caption: "smallest GDE, training loss , and" --> "smallest GDE, training loss, and".
- Line 205: "(e.g.Dropout)" --> "(e.g., Dropout)"?
- Line 269: "In a previous investigation, the attention map, which indicates the importance of each instance, is obtained from Eq. (11)", "previous investigation" --> "previous iteration"? Or what is this previous investigation, a previous paper?
- Line 290: "Our MIL-Dropout only adds additional two hyperparameters" --> "Our MIL-Dropout only adds two additional hyperparameters"?
- Line 299: "The complexity is substantially reduced to the fact that O(N D(l)) due to K is typically much smaller than N", I don't understand, reformulate?
- Line 317: "MIL aggregators and their variants to valid" --> "MIL aggregators and their variants to validate".
- Line 644: "A.1.1. GRDIENT MEASURE EXPERIMENT", typo.
- Figure 5 caption, "and similarity instance S" --> "and similarity instances G"?
Questions For Authors: 1. Could you add at least one more WSI-level dataset/task? E.g. another TCGA dataset, or the PANDA dataset?
2. Could you add results when using a pathology foundation model as patch-level feature extractor, e.g. UNI? Does MIL-Dropout consistently improve the performance also in this setting?
Justification of overall recommendation:
Quite interesting and solid paper. Nothing groundbreaking, but the proposed MIL-Dropout could definitely be at least quite useful. However, I think the current experimental evaluation for WSI-level tasks should be extended a bit. Thus, I am currently leaning towards reject.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **We sincerely thank the reviewer for the valuable feedback, and for taking the time to read and engage with our paper. Below we address the reviewer’s concerns point by point.**
---
### 1.**Questions/Suggestions**:
- **Experimental setup:**
Figure 5(c) reports ablation results on two datasets using ImageNet‑pretrained features. All experiments in Table 2 use the optimal values of K and G identified in Figure 5(c).
- **Performance degradation with high drop rate**:
- Dropping a large proportion of instances inevitably degrades performance — not only for our proposed MIL‑Dropout but also for standard/vanilla dropout.
- When most features or instances are zeroed out, the model cannot distinguish positive from negative instances (ReLU outputs zero for both), while the bag-level label remains positive. This prevents effective learning.
- In our Camelyon16 experiments, aggressive dropout led to a failure of convergence for all methods; see the results below.
|Model|Accuracy|
|:---|---:|
|ABMIL|86.3|
|+Dropout(p=0.4)|66.2|
|+Dropout(p=0.8)|53.4|
|+Dropout1D(p=0.4) |63.1|
|+Dropout1D(p=0.8)|51.3|
|+Our MIL-Dropout (K = 100, G = 5)|59.9|
|+Our MIL-Dropout (K = 400, G = 5)|55.7|
- **Optimal dropout threshold:**
- By restricting dropout to at most 10% of the average instance count (either via Top‑K selection or a dropout probability _p_ = 0.1), we observe consistent performance improvements across datasets.
> **We appreciate the reviewer’s suggestion and will add more detailed explanations in the revised manuscript, and will also correct the spelling errors you pointed out.**
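The 10% rule of thumb above can be expressed as a tiny helper for choosing the Top-K budget. This is a hypothetical sketch of the heuristic, not the paper's exact tuning procedure.

```python
def topk_budget(bag_sizes, frac=0.10):
    """Cap the number of dropped instances at `frac` of the
    average bag size (illustrative heuristic; assumed, not the
    authors' tuning code)."""
    avg = sum(bag_sizes) / len(bag_sizes)
    return max(1, int(frac * avg))
```

For example, bags averaging 200 instances would give a budget of 20 dropped instances, while very small bags fall back to dropping a single instance.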
### 2.**Extra WSI task/datasets Evaluation**:
- **Task & Datasets:**
We evaluated our method on survival prediction using two TCGA datasets (LUAD and BRCA), following the protocols of [1,2].
- **Evaluation Metric:**
Concordance index (C‑index), reported in %. We set K = 10 and G = 10 for MIL-Dropout.
- **Results:**
| Model | TCGA‑LUAD | +MIL-Dropout | Δ | TCGA‑BRCA | +MIL-Dropout| Δ |
|-------------|----------:|-------------:|-----------:|----------:|-------------:|-----------:|
| ABMIL | 65.7 | 67.7 | +2.0 | 72.8 | 75.2 | +2.4 |
| DSMIL | 61.4 | 63.4 | +2.0 | 68.8 | 72.1 | +3.3 |
| TransMIL | 64.3 | 67.2 | +2.9 | 72.1 | 74.7 | +2.6 |
| DTFD (AFS) | 62.0 | 65.3 | +3.3 | 71.1 | 73.6 | +2.5 |
| DTFD (MaxS) | 65.9 | 68.3 | +2.4 | 72.8 | 75.7 | +2.9 |
> **These results demonstrate that our MIL Dropout consistently improves all baseline methods on WSI-based survival prediction**.
### 3.**Foundation Model Features**:
- **UNI Features:**
As also requested by Reviewer 3, we evaluated our method using UNI features. Because UNI was pretrained on TCGA and Camelyon data—which poses a risk of data leakage—we conducted additional experiments using the independent EBRAINS dataset.
|Model|Accuracy|F1|Δ Accuracy|Δ F1|
|:---|------------:|-------:|-----------:|----:|
|ABMIL|65.4|68.7|—|—|
|+MIL Dropout|70.4|73.2|+ 5.0|+ 4.5|
|TransMIL|67.4|74.4|—|—|
|+MIL Dropout|71.3|79.4|+ 3.9|+ 5.0|
|DSMIL|67.4|74.4|—|—|
|+MIL Dropout|69.3|76.0 |+ 1.9|+ 1.6|
|DTFD|53.4|63.6|—|—|
|+MIL Dropout|64.8|69.8|+ 11.4|+ 6.2|
|ILRA|64.8|74.4|—|—|
|+MIL Dropout|66.8|76.0|+ 2.0|+ 1.6|
|CAMIL|60.7|68.5|—|—|
|+MIL Dropout|69.1|75.6|+ 8.4|+ 7.1|
|R2T|57.9|66.1|—|—|
|+MIL Dropout|64.8|69.8|+ 6.9|+ 3.7|
> **Our MIL Dropout consistently improves other MIL methods.**
---
We hope our response addresses your concern. If you have any further questions, please let us know. Thanks!
---
**References**
[1]. Jaume, G., et al. (2024). *Modeling dense multimodal interactions between biological pathways and histology for survival prediction.* CVPR.
[2]. Song, A. H., et al. (2024). *Multimodal prototyping for cancer survival prediction.* ICML.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply.
I have read the other reviews and all rebuttals.
The other reviews are mixed, I don't think they give me any obvious reasons to change my score.
The authors have provided a quite comprehensive rebuttal, with some new results. MIL-Dropout has been demonstrated to be applicable to even more MIL approaches, and I appreciate the added survival prediction experiments.
I do however agree with reviewer 3cHV that there are some remaining concerns.
As pointed out by reviewer 3cHV in their response, it's not true that UNI has been trained on TCGA or Camelyon16. Thus, I would really like to see UNI results being added for these datasets (and preferably also for the TCGA survival prediction task).
Also, the question _"For Figure 5.c, could you show more examples? Is this actually a general trend?"_ in my review has not been discussed.
Currently, I'm borderline on this paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for their continued engagement and thoughtful feedback. Please find our clarifications below:
---
## UNI feature extractors on TCGA datasets
As mentioned in our previous rebuttal, we have conducted survival prediction experiments on two TCGA datasets: **TCGA-LUAD** and **TCGA-BRCA**, whose features are extracted by the **UNI feature extractor**.
We would like to respectfully emphasize that our study primarily focuses on the impact of Dropout in **general Multiple Instance Learning (MIL) frameworks, rather than specifically on WSI feature extractors.**
**While we understand the value of including more feature extractors, we believe that continually expanding the scope of experiments may detract from our main contribution.**
---
## Clarification on Figure 5.c
Yes, the observed behavior is consistent across examples, i.e., more positive instances are explored. In the final version of the paper, we will include several more examples on different datasets in the supplementary material to provide a more comprehensive view.
---
We greatly appreciate your thoughtful comments and the opportunity to further improve our work. | Summary: This paper mainly studies the Dropout scheme for Multi-Instance Learning (MIL). First, this work reveals the superiority of DropInstance over traditional Dropout scheme through empirical experiments on a WSI dataset. Then, this work investigates the suitable scheme for DropInstance, Top-K instance dropping. Finally, a MIL-Dropout approach is proposed for MIL. This approach leverages average pooling to determine the importance of instances, which eases the dependence on instance's attention scores, followed by dropping the Top-K instances to implement the aforementioned Top-K instance dropping. Experiments on classical MIL benchmark and WSI datasets demonstrate the effectiveness of the proposed method.
## update after rebuttal
The authors' rebuttal fails to directly answer my questions and concerns. My detailed responses to the authors' rebuttal could be found in the thread. Thus, my major concerns remain as follows:
- Experimental settings are not aligned with the common settings in WSI classification, which leads to the concerns about the convincingness of the experimental results.
- Inherent limitations of the proposed method, e.g., being sensitive to the setting of K and relying on careful fine-tuning, could largely limit the impact of this work.
- Some theoretical claims lack convincing justifications. In a nutshell, I don't think the theoretical claims could be made from the empirical experiments on only one dataset. Convincing justifications are need to support the authors' theoretical claims.
Claims And Evidence: Some claims are not supported by convincing evidence. My major concerns lie on
- The effectiveness of Top-K instance dropping: It is only verified on WSI classification task and one dataset. I am curious about whether it could be generalized to more tasks or datasets, so as to demonstrate the generalization of the proposed method.
- No theoretical analysis is conducted to support this paper's core argument. However, the authors claim "whereas an in-depth analysis reveals its theoretically intriguing properties." at lines 65 and 66 on page 2.
Methods And Evaluation Criteria: **Methods**: Most designs seems reasonable. However, these designs are completely driven by the empirical results on a single dataset. This raises critical concerns about the generalization of the proposed method.
**Evaluation Criteria**: seems reasonable.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I have checked the experimental design of this paper. My concerns are as follows:
- In the experiments, WSIs are processed by ResNet-50 to obtain bag features for MIL. However, this way is out-of-date. Current papers in this field generally adopt more advanced pretrained foundation models like UNI (Nature Medicine, 2024) and CONCH (Nature Medicine, 2024). In other words, the experimental settings are not aligned with the typical settings in the field of MIL-based WSI classification. This calls into question the effectiveness of the proposed method.
- MIL-based WSI classification is an active topic undergoing rapid development. This paper only includes the MIL baselines proposed in 2023 or earlier years. Many state-of-the-art MIL methods for WSI classification are not compared in this paper, *e.g.*, CAMIL (ICLR, 2024) and R2T (CVPR, 2024).
Supplementary Material: N/A
Relation To Broader Scientific Literature: No obvious relation
Essential References Not Discussed: It is not clear the relation of this paper with the reference [https://arxiv.org/pdf/2308.10112]. I have given this concern in my comments to the authors. Hope the authors could clarify this in the rebuttal.
Other Strengths And Weaknesses: Strengths:
- This paper investigates the Dropout scheme for MIL. This topic is under-explored in MIL as far as I know.
- Some empirical studies are conducted to justify the proposed scheme. Most of them are intuitive and easy to understand.
Weaknesses:
- Lack of theoretical analysis to prove the generalization of the proposed MIL-Dropout method.
- Some statements are not rigorous for academic papers and should be reworded.
- The experimental settings are not typical and fail to align with the common settings in WSI classification tasks.
In summary, I acknowledge that the authors investigate an interesting problem in MIL. However, most claims are only supported by empirical results, which are not convincing enough to prove the generalization of the proposed method. Essential theoretical analysis would be better. Moreover, the experimental settings in WSI classification tasks are not aligned with common settings, which call into question the real performance of the proposed method. Given these critical issues, I think that the current paper can't get published in ICML and it should be improved substantially in terms of theoretical analysis and experiments.
Other Comments Or Suggestions: Please see *Questions For Authors*
Questions For Authors: My main questions and concerns are as follows:
- For the statement, "However, this suboptimal training scheme suffers from ”noisy” feature embeddings, typically resulting in reduced performance.", written on page 1, I am curious about why the authors assert the training scheme is sub-optimal. As far as I know, the two-stage learning scheme, patch feature extracting + MIL aggregator, is a standard framework for WSI classification.
- For the statement, "an in-depth analysis reveals its theoretically intriguing properties", written on page 2, I fail to find any theoretical analysis related to this part. Could the authors provide more details for this statement?
- For the statement, "dropping important instances not only results in performance gain but also shows theoretical guarantees.", written on page 1, I also fail to find theoretical guarantees; instead, the empirical results on one WSI dataset are observed.
- For the statement, "achieves better generalization by converging to flatter local minima (see example in Fig 1 (Right) and evidence in Fig. 3)", written on page 1, I am curious about whether this is the same in other WSI datasets or this is the same when the pretrained model is changed to UNI or CONCH but not ResNet-50.
- For the Top-K instance dropout, there are two severe limitations:
- How to determine K. Would the performance be sensitive to this? How can I believe this method is a stable one for MIL?
    - If the number of positive instances is less than K, what will happen?
- A pretrained ResNet-50 model is used to extract features from patches. This setting is out-of-date. Could the authors provide the main results for modern pretrained models like UNI (Nature Medicine, 2024) or CONCH (Nature Medicine, 2024)?
- The baselines presented in Table 2 are from work published in 2022 or earlier. Could the authors provide results for state-of-the-art MIL baselines, like ILRA (ICLR, 2023), CAMIL (ICLR, 2024), and R2T (CVPR, 2024)?
- The relation of this paper to the reference [https://arxiv.org/pdf/2308.10112] is not clear.
References:
[1] UNI: Towards a General-Purpose Foundation Model for Computational Pathology
[2] CONCH: A visual-language foundation model for computational pathology
[3] ILRA: Exploring low-rank property in multiple instance learning for whole slide image classification
[4] CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
[5] R2T: Feature re-embedding: Towards foundation model-level performance in computational pathology
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **We greatly appreciate the reviewer’s valuable feedback. Below is our response:**
1. **Sub-optimal** refers to using a feature extractor that is not trained jointly with the MIL module, resulting in non-optimal embeddings. Even strong feature extractors (self-supervised foundation models, e.g., UNI) achieve only ~0.60 accuracy on EBRAINS. Prior work has begun exploring end-to-end training strategies, which empirically outperform two-stage approaches [1].
2. **Claim of Theoretical analysis:**
We will revise the text to be more precise and accurately reflect the level of theoretical discussion presented in the paper.
Our claims are based on established theoretical and empirical findings (Keskar et al., 2016; Liu et al., 2023):
- Smaller sharpness leads to flatter local minima, which generally improves generalization.
- Gradient variance is widely used to evaluate the optimizers.
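As an illustration of the sharpness notion invoked above, here is a minimal Monte-Carlo probe. This is a hedged sketch: `sharpness_estimate` and the toy losses are hypothetical, written in the spirit of Keskar et al.'s epsilon-ball definition; the exact metric used in the paper may differ.

```python
import numpy as np

def sharpness_estimate(loss_fn, params, eps=1e-3, n_samples=20, seed=0):
    """Monte-Carlo sharpness estimate: largest normalized loss increase
    over random perturbations on the eps-sphere around `params`.
    Simplified sketch following Keskar et al. (2016) in spirit."""
    rng = np.random.default_rng(seed)
    base = loss_fn(params)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.normal(size=params.shape)
        delta *= eps / np.linalg.norm(delta)   # project onto the eps-sphere
        worst = max(worst, loss_fn(params + delta) - base)
    return worst / (1.0 + base)                # normalized loss rise

# toy quadratics: higher curvature -> larger estimated sharpness
flat = sharpness_estimate(lambda w: 0.5 * (w ** 2).sum(), np.zeros(4))
sharp = sharpness_estimate(lambda w: 50.0 * (w ** 2).sum(), np.zeros(4))
```

A flatter loss surface (smaller curvature) yields a smaller estimate, which is the property the sharpness tables in this rebuttal compare.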
3. **Analysis of additional datasets:**
We observed the same phenomenon on **TCGA** in ViT pre-trained on ImageNet. Models like UNI or CONCH were trained on TCGA and Camelyon, introducing data leakage; thus, downstream evaluation on those datasets is not meaningful. In Following ViT based experiments, DropInstance shows the same effect described in the manuscript:
|Method (TCGA - ViT)|Sharpness (ε=1.00e-03)|
|:-----|----------------------:|
|DropNeuron|1.36e-7|
|**DropInstance**|**1.16e-8**|
|Baseline|1.06e-6|
4. **Top-K:** Standard dropout also degrades performance when a large proportion of features or instances are zeroed out. If all positive instances are dropped, positive and negative instances become indistinguishable (ReLU outputs zero), while the bag label remains positive, preventing the model from learning. In our Camelyon16 experiments, aggressively dropping instances caused all methods to fail to converge.
|Model|Accuracy|
|:---|---:|
|ABMIL|86.3|
|+Dropout(p=0.4)|66.2|
|+Dropout(p=0.8)|53.4|
|+Dropout1D(p=0.4) |63.1|
|+Dropout1D(p=0.8)|51.3|
|+Our MIL Dropout (k = 100, G= 5)|59.9|
|+Our MIL Dropout (k = 400, G= 5)|55.7|
However, limiting dropout to no more than 10% of the average instance count (either by selecting Top-K instances or by using a dropout rate of p=0.1) consistently improved performance. Our ablation study confirms this result, and we will add more details to the revised manuscript.
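The Top-K instance dropout discussed in this point can be sketched as follows. All names here are hypothetical; the paper's grouping parameter G and exact importance criterion (e.g., attention weights) are not reproduced:

```python
import numpy as np

def topk_instance_dropout(bag, scores, k):
    """Zero out the k instances with the highest importance scores.

    bag:    (N, D) array of instance features
    scores: (N,) importance scores (e.g., attention weights)
    Hypothetical sketch; the paper's exact selection may differ.
    """
    k = min(k, bag.shape[0])           # guard: never drop more instances than exist
    top_idx = np.argsort(scores)[-k:]  # indices of the k most important instances
    dropped = bag.copy()
    dropped[top_idx] = 0.0             # zero the selected instances
    return dropped

rng = np.random.default_rng(0)
bag = rng.normal(size=(50, 8))         # a bag of 50 instances, 8-dim features
scores = rng.random(50)
out = topk_instance_dropout(bag, scores, k=5)
```

Keeping k well below the bag size (here 5 of 50, i.e., 10%) matches the regime the rebuttal reports as consistently beneficial; dropping most instances degrades all methods.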
5. **Extra-Experiments:**
Below are the results on Camelyon16 (ImageNet-pretrained features) using the latest methods mentioned. We add MIL-Dropout via two linear layers before feeding features into the MIL module.
|Model|Accuracy|F1|AUC|Δ Accuracy|Δ F1|Δ AUC|
|:---|------------:|-------:|-------:|-----------:|----:|----:|
|ILRA|85.8|81.4|88.4|—|—|—|
|+MIL Dropout|87.2 |86.4 |90.1 |+1.4|+5.0|+1.7|
|CAMIL|84.8|82.6|86.8 |—|—|—|
|+MIL Dropout|86.4|85.7|91.2 |+1.6|+3.1|+4.4|
|R2T|83.1|81.6|84.5|—|—|—|
|+MIL Dropout|85.6|84.7|87.4 |+2.5|+3.1|+2.9|
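The integration described above ("two linear layers before feeding features into the MIL module") might look like the following numpy sketch. All names and layer sizes are hypothetical; the actual architecture may differ:

```python
import numpy as np

def mil_dropout_head(x, w1, b1, w2, b2, drop_fn=None):
    """Two linear layers applied to patch features before the MIL
    aggregator, with an optional instance-dropout step in between.
    Hypothetical sketch of the integration described in the rebuttal."""
    h = np.maximum(x @ w1 + b1, 0.0)  # first linear layer + ReLU
    if drop_fn is not None:
        h = drop_fn(h)                # e.g., Top-K instance dropout (training only)
    return h @ w2 + b2                # second linear layer

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))                   # 32 instances, 16-dim patch features
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
out = mil_dropout_head(x, w1, b1, w2, b2)       # (32, 8) re-embedded instances
```

Because the head only re-embeds instances, it can be prepended to any of the MIL aggregators in the table (ILRA, CAMIL, R2T) without modifying them.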
Because UNI was trained on TCGA and Camelyon, using those features risks data leakage. To avoid this, we evaluate UNI features on the EBRAINS dataset by running the experiment three times.
|Model|Accuracy|F1|Δ Accuracy|Δ F1|
|:---|------------:|-------:|-----------:|----:|
|ABMIL|65.4|68.7|—|—|
|+MIL Dropout|70.4|73.2|+ 5.0|+ 4.5|
|TransMIL|67.4|74.4|—|—|
|+MIL Dropout|71.3|79.4|+ 3.9|+ 5.0|
|DSMIL|67.4|74.4|—|—|
|+MIL Dropout|69.3|76.0 |+ 1.9|+ 1.6|
|DTFD|53.4|63.6|—|—|
|+MIL Dropout|64.8|69.8|+ 11.4|+ 6.2|
|ILRA|64.8|74.4|—|—|
|+MIL Dropout|66.8|76.0|+ 2.0|+ 1.6|
|CAMIL|60.7|68.5|—|—|
|+MIL Dropout|69.1|75.6|+ 8.4|+ 7.1|
|R2T|57.9|66.1|—|—|
|+MIL Dropout|64.8|69.8|+ 6.9|+ 3.7|
Our MIL Dropout consistently improves other MIL methods, with the largest gains on noisier UNI features—showing its effectiveness when patch features are suboptimal.
6. **Relation to arXiv:2308.10112**
PDL is an unpublished work, and our MIL Dropout complements and extends the theory-grounded analysis behind PDL. While PDL methods are largely intuition‑driven and derive DropInstance from nonlinear interpolation without deeper analysis, our work demonstrates that DropInstance leads to flatter minima and better generalization. We further show that Top‑K dropout reduces gradient variance. We appreciate the reviewer’s pointer to this paper and will include a detailed discussion in the revised manuscript.
|Model|Accuracy|F1|AUC|
|:---|---:|---:|---:|
|ABMIL|86.3|85.0|86.0|
|+Our MIL Dropout|87.2|86.4|90.1|
|+PDL|86.1|84.3|85.2|
|+Concrete Dropout|85.3|84.2|84.0|
|+guided dropout|82.9|81.2|81.4|
|+alpha-dropout|83.7|81.8|78.1|
Due to space constraints, the comparison above is limited to PDL and other dropout variants on Camelyon16 (ImageNet).
---
**Reference**
[1] Li et al. (2023). Task‑specific fine‑tuning via variational information bottleneck for weakly‑supervised pathology WSI classification.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal and the additional results. I have carefully read it.
- To **A1**: The authors seem to misunderstand my question. I understand and totally agree with the authors' explanation for "Sub-optimal". My concern is that the authors *assert* the training scheme is sub-optimal *without any explanation to readers*. In other words, the clarification on why the two-stage training scheme is sub-optimal seems missing. By the way, I'd like to correct the authors that UNI could obtain > 80% Acc on EBRAINS (see Fig. 2.f on page 4 in the UNI paper).
- To **A2**: The case is that the experiments conducted in this paper leverage the established theoretical tools or findings from the published papers, right? I don't think this case belongs to theoretical analysis. Generally, theoretical analysis is presented in the form of Theorem, Proposition, or Corollary. Moreover, theoretical results typically give some rules that could be applied to a class of cases. I don't think one empirical experiment on only one dataset can lead to theoretical findings.
- To **A3**: I strongly encourage the authors to check CONCH and UNI papers. These two foundation models are not trained on TCGA or Camelyon16, and there is no data leakage concern on them. So, common experimental settings, *e.g.*, using foundation models as the patch feature extractor, should be adopted. This could make the experimental results more convincing.
- To **A4**: My concern is that the performance of the proposed method relies on the fine-tuning of K. If I choose to use the proposed method for MIL on one specific dataset, I would be concerned about whether I need to spend much time carefully adjusting K to achieve good results. If the dataset is changed, should I carefully redo the adjustment? For standard Dropout, common settings (e.g., a large p like p=0.9 or p=0.85) often lead to good performance. To be specific, the setting of p is not so sensitive because although standard Dropout drops 10% or 15% of neurons, the remaining 90% or 85% of neurons can adjust themselves automatically to obtain good performance.
- To **A5**: The same issue, namely, there is no data leakage in UNI or CONCH. So, common experimental settings should be adopted. By the way, UNI features could obtain >80% Acc on the EBRAINS dataset when using an ABMIL baseline. I am curious about why there are so large differences (~15%) between the results reported by UNI (Chen et al., Nature Medicine, 2024) and this paper.
Given the above, my major concerns remain as follows:
- **Experimental settings** are not aligned with the common settings in WSI classification, which raises concerns about the convincingness of the experimental results.
- **Inherent limitations of the proposed method**, e.g., being sensitive to the setting of K and relying on careful fine-tuning, could largely limit the impact of this work.
- **Some theoretical claims lack convincing justifications**. In a nutshell, I don't think theoretical claims can be made from empirical experiments on only one dataset. Convincing justifications are needed to support the authors' theoretical claims.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our response. We address the reviewer’s concerns below:
---
## 1. Sub-optimal.
**Frankly speaking, I believe these two questions aren’t much different**. As mentioned in [1], the feature extractor was not tuned based on the labels, which resulted in suboptimal features. **We will include further clarification in the introduction section in a future revision.**
---
## 2. Inconsistency
We did indeed use UNI as the patch feature extractor. The inconsistencies in experimental results can be attributed to two factors:
1. **Preprocessing Differences:** The method used for patch extraction can affect performance. This is evident when comparing DTFD-MIL and TransMIL: although both use ResNet-50 pretrained on ImageNet, DTFD-MIL employs OTSU's thresholding method, and TransMIL shows a performance drop. **Our raw WSI patch preprocessing might differ from that of the UNI paper.**
---
2. **Number of Classes in MIL methods:** Adjusting `arg.number_classs` can lead to significant performance improvements in most MIL methods. **However, since EBRAINS is a multi-class problem, and to avoid violating the binary MIL assumption [see Attention-based Deep Multiple Instance Learning, Problem Formulation]**, we have formulated it as a one-vs-rest binary classification problem:
$$
y_{i,c} =
\begin{cases}
0, & \text{iff } \sum_{n=1}^{N} y_{i,c}^n = 0, \quad y_{i,c}^n \in \{0, 1\} \\
1, & \text{otherwise,}
\end{cases}
$$
where $y_{i,c}^n = 1$ denotes an instance with a significant contribution to class $c \in \{1,\dots,C\}$. The final bag-level prediction $y_i$ for a bag is computed as the class with the highest probability, $y_i = \underset{c}{\mathrm{argmax}}\, y_{i,c}$, which is consistent with the one-vs-rest scheme.
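The one-vs-rest bag labeling described in this point can be sketched numerically. The helper `bag_labels` below is hypothetical, not code from the paper:

```python
import numpy as np

def bag_labels(instance_labels):
    """One-vs-rest bag labels from per-instance class indicators.

    instance_labels: (N, C) binary array, where entry (n, c) = 1 if
    instance n contributes to class c. The bag is positive for class c
    iff at least one instance is positive for c (the standard MIL
    assumption applied per class). Hypothetical helper, not from the paper.
    """
    return (instance_labels.sum(axis=0) > 0).astype(int)

# a bag of 6 instances over C=4 classes, positive for classes 0 and 2
y_inst = np.zeros((6, 4), dtype=int)
y_inst[1, 0] = 1
y_inst[4, 2] = 1
y_bag = bag_labels(y_inst)  # -> [1, 0, 1, 0]
```

The final prediction would then take the argmax over the per-class probabilities, matching the one-vs-rest scheme in the formulation above.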
---
## 3. Sensitivity of Top-K
**We kindly ask the reviewer to refer to our ablation study (see Figure 5 (a) and (b))**. Our results show that as long as the top-k remains within a reasonable range (between 3 and 30), performance consistently improves. This range is both reasonable and robust, as performance only degrades if the majority of instances are dropped. **Moreover, adjusting the top-k parameter is essential for different datasets/tasks, just as one would tune the learning rate, optimizer, or other hyperparameters when switching datasets.**
---
## 4. Theoretical Guarantees
Although our investigation initially started with a single dataset, we have validated the effectiveness of MIL dropout on three downstream datasets and with different feature extractors (including the MIL benchmark and two WSI datasets). This is further supported by the newly added **survival prediction experiment** in the rebuttal. **In other words, these results demonstrate that the MIL dropout phenomenon occurs across various tasks and contributes to improved performance.**
---
**We appreciate the reviewer's suggestion and will carefully study the referenced UNI paper**. However, we would like to emphasize that our work is not centered on the feature extractor, but rather on introducing a component that enhances the performance of MIL.
--- | null | null | null | null | null | null |